
How Generative AI Changes Strategy

HBR EDITOR IN CHIEF ADI IGNATIUS: Chris, you’ve been a tech company CEO. You’re strategy head at Microsoft. You’ve seen a lot of technologies over the years. There are some that land and we think, “Okay, this is truly transformative, disruptive. This is an inflection point.” In your mind, is that where we are now? Or do we need to wait and see what the impact of generative AI will be on business and on everything?

MICROSOFT EXECUTIVE VICE PRESIDENT CHRIS YOUNG: We’re at a place where we see this as truly transformative. I might actually draw a contrast to one of the trends that many people were very excited about in the tech landscape just a little bit over a year ago, which was Web3 and the promise of blockchains, and everything that would happen. But there was definitely a segment of the world that was saying, “Well, interesting, but not sure what the practical applications are here, not sure it’s going to change everything.” No one’s saying that about AI. I think immediately, many of us recognize that this is going to be completely game changing, not only for the tech landscape, but for humanity.

ADI IGNATIUS: Welcome to How Generative AI Changes Everything, a special series of the HBR IdeaCast. There are many shorthand definitions of strategy. One that I like is from the author Gary Hamel, who says that strategy is about making the future happen, not just reacting to it. Well, this sure feels like a time when it’s possible to make the future happen. Generative AI has stormed into the consciousness of practically every business leader, and it’s opening up new strategic paths and new ways to create new businesses. This week, How Generative AI Changes Strategy. I’m Adi Ignatius, editor in chief of Harvard Business Review and your host for this episode. In this special series, we’re talking to experts to find out how this new technology changes workforce productivity and innovation, and we’re asking how leaders should adopt generative AI in their organizations and what it means for their strategy. Later on in this episode, you’re going to hear from Andy Wu, a professor at Harvard Business School. I ask him to talk through the tradeoffs that large and small companies alike will be making in this fast-changing market. But right now, I’m talking to Chris Young. He’s the head of strategy at Microsoft, which has partnered with OpenAI, and is rolling out generative AI in its search product, office productivity suite, and cloud computing services. Chris, thank you for being on the show.

CHRIS YOUNG: Great to be here, Adi.

ADI IGNATIUS: I want to jump right in. Do you feel that there’s an urgency here as a provider, that you need to be first mover to win in this market?

CHRIS YOUNG: You don’t necessarily have to be the first mover, but there are significant advantages in being first mover. And I do think that’s one of the reasons why there’s so much velocity in the startup community. Certainly, that’s one of the things that we are sensitive to in our own business, is making sure that we move quickly because the market is moving quickly. I will tell you the cycles in and around AI are faster than we’ve ever seen in the tech landscape, companies being built, changes every week. Large companies are moving faster, more than we’ve ever seen. And I think it’s because everybody sees the opportunity here and the promise for change.

ADI IGNATIUS: That makes sense. But then of course, you get the backlash, which is, “Let’s slow down. Let’s make sure that what this technology creates is safe, not biased.” You hear all the complaints. How do you balance the sense of wanting to innovate, but also not wanting to be reckless with a technology that’s powerful, when we’re not even sure of everything it can do?

CHRIS YOUNG: Well, we’re very committed to safe and responsible AI. That’s a core pillar of everything we’re doing, certainly at Microsoft. And that’s something I’m a personal believer in. And I will tell you that in some ways, while the cycles are moving quickly today, they’ve been many years in the making. There’s a lot of work that’s gone in behind the scenes to build the right foundational principles around AI, how we deliver it, the systems around which we bring AI to the market. Microsoft, our partner, OpenAI, and others have been very grounded in those sets of principles and practices for a while now. And so that’s allowed us to move quickly as the market really shifted after ChatGPT came out in the fall.

ADI IGNATIUS: One of the words in the air now is regulation. Is this a moment where regulation would be welcome in the industry, or is it something, rather, that you brace yourself for as inevitable?

CHRIS YOUNG: We welcome it. It’s going to be an important part of this. AI is such a powerful technology, and these foundational models really are quite powerful. So, we at Microsoft are very committed to participating with government in shaping the future around this. And so you’ve heard a lot of that in the media very recently. We think it’s an important part of the evolution of AI in our society.

ADI IGNATIUS: How quickly is this market moving more generally in terms of just people out there developing products and services and doing incredible things?

CHRIS YOUNG: I sit in San Francisco. In startup world, I can tell you in the fall, we were tracking about 80 companies that were working on AI. Today, we’re tracking 8,000. It’s literally a Cambrian explosion of organizations that are working on this. Now, some of those 8,000, many of them were probably doing something else as a startup. And now, they’re pivoting to making AI a bigger part of their offer. But the way I think about it is any company that was a SaaS company before has now got to find a way to make AI a part of what they’re doing because in many ways, in the fullness of time, AI is going to change the nature of how software and humans interact with one another. Again, I think that’ll be a very different way in which we expect services to happen. Today, we use software to do things. We use software to bring experiences to us, to augment our capability. With AI, it’s going to abstract a lot of that interface with the software, and it’s going to force us to think differently around, What does a software experience or a SaaS experience look like for a customer? I think that’s going to change a lot of businesses that exist today, and it’s also going to bring new businesses that we haven’t yet thought about.

ADI IGNATIUS: With any technology, we talk about haves and have-nots, and the gap between the two. With this technology, do you imagine this will widen the gap or narrow it, again, between the technological haves and have-nots?

CHRIS YOUNG: Well, one of the big productivity opportunities with AI is writing code. It’s technology development. And traditionally, small companies or small organizations, they don’t generally have the same access to great developers or lots of developers. And big companies have been able to make those investments. But with the capabilities that AI can bring to develop code, the promise of that is it can be really profound. Think about bringing this capability to small community-based organizations where someone who’s not technical now has the ability to create software, to create workflow, to build tooling that allows that organization to achieve its mission more efficiently and more effectively. Whereas before, you would’ve had to hire consultants and spend money that you didn’t have in order to technology-enable that organization. So, if done well, this allows so many more organizations and people to be able to get the promise and the benefit of what technology can do for all of us.

ADI IGNATIUS: So, that argument is that it’s really democratizing access to the highest level of technology.

CHRIS YOUNG: That’s how I feel about it.

ADI IGNATIUS: Imagine for a second somebody who’s heading a relatively small firm, who is aware of all this chatter and these early developments with the technology but not quite sure how to get in. What would the head of strategy for Microsoft tell the head of strategy for a smaller firm about how to get into this realm?

CHRIS YOUNG: The first thing I would do is start using it and get familiar with it, then think about, What can I do to improve productivity across different functions in my organization? What can I do then to apply it to workflows in my organization because there will be value there? And then what new experiences can I create? What new business models might be available to me? And again, I think that’s a process that anyone can go through in a big company as well as a small company.

ADI IGNATIUS: To what extent does generative AI change the nature of strategy, how we approach it?

CHRIS YOUNG: Well, the first thing I think it does to strategy is it speeds up the cycles. So, anyone who’s thinking, “Hey, I’m going to do an annual strategy process, and that’s going to be sufficient.” It’s no longer sufficient, certainly not in the tech landscape. You just got to be faster than that. You got to be more iterative than that. I also think that it’s going to help us frame trade-offs and decisions in a more efficient way. For example, one of the things that AI does for us is it brings access to information that you would normally have to send humans out to go gather a lot of information, maybe do a lot of interviewing. And then finally, you synthesize it down into, Hey, how do we think about these trade-offs or these decisions that we want to make? This is just going to speed up all those cycles. You’re going to be able to get at those conversations a lot more quickly than we’ve been able to as we’ve gone through these normal strategy processes. We’ll be able to ask ourselves some different questions because of what’s possible. And then there’s the whole: What are the new things we can go do, we can go create because AI is now available to us? And that’s a big strategy question for literally everyone, whether you’re a business, a nonprofit, or even a government agency.

ADI IGNATIUS: You’ve been talking today mostly about applications that are available right now. Throw this forward. What does the AI-powered workplace look like in, say, five or ten years?

CHRIS YOUNG: I think the AI-powered workplace is going to change the nature of work for pretty much everyone because at minimum, you’re going to have a set of co-pilots that allow people who have certain job functions today to do their jobs more efficiently, more effectively, to produce better output. I think the customer experience for everyone is going to be completely transformed. That’s one of the things, honestly, I’m super excited about from a business perspective, is the transformation of the customer experience. So often, you call a company, you’ve got a problem, and it takes a lot of energy and effort, particularly if it’s a thorny problem, to get it fixed because people are searching through different systems. They might hand you off to three or four different reps, and then you got to re-explain your issue. In a world where you have co-pilots that are AI-enabled, I think you could make it far more efficient and pleasant for the user who’s calling in, the customer who’s calling in, who just wants to get whatever experience it is or product it is, or issue it is handled quickly, painlessly so they can move on with their lives. I think that’s going to be completely transformed. For the user, it’s going to allow you to get on with your life. That’s going to reshape some things inside of organizations, but it’s going to, I think, also open up a lot of different opportunities. There’s so much that we still have to go do in our society. For example, we’ve got to electrify vehicles. We’ve got to make supply chains more efficient. We need to educate our students in ways where we’ve got shortages of teachers. We need to provide healthcare in a much more broad based fashion at a cheaper cost basis. And I think AI has potential to change things in a way that’s good for humanity, is something I get very excited about when I think about the future.

ADI IGNATIUS: Chris Young, I really want to thank you. You’re on the front lines in developing these technologies, and I want to thank you for joining us today.

CHRIS YOUNG: Thanks for having me.

ADI IGNATIUS: Coming up after the break, I ask HBS professor Andy Wu about navigating this emerging landscape. Should you build your own generative AI, buy it, something in between? It depends! He breaks down these strategic choices. Stay with us.

ADI IGNATIUS: Welcome back to How Generative AI Changes Strategy. I’m Adi Ignatius. Joining me now to discuss how to navigate this new strategic landscape is Andy Wu, a professor at Harvard Business School who specializes in technology in business. Andy, thank you for joining me.

HBS PROFESSOR ANDY WU: Adi, thank you for having me.

ADI IGNATIUS: Let’s start by talking money. I think a lot of us have been enjoying playing with some of these generative AI tools for free or at a nominal price tag. I get the sense, though, that a generative AI request costs a lot more, maybe ten times more than a commercial web search. I guess my question for you is, is that true? And what is the basic cost structure here?

ANDY WU: Right. I think thinking about the economics is really important now. And Adi, you’re absolutely right about the high cost of inference in terms of how we input and output information from these large language models or any other generative AI model. It is more expensive on a variable cost basis than anything we’ve ever done in computing. Relative to a simple Google search, doing a generative AI type output like ChatGPT could be anywhere from ten to 100 times or more expensive than just a basic Google search. And that’s something that companies are going to have to reckon with.
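Wu’s cost comparison can be sketched with a back-of-envelope calculation. All figures below are illustrative assumptions, not actual vendor rates; the point is only that per-token billing puts LLM inference one to two orders of magnitude above a conventional search on a variable-cost basis.

```python
# Back-of-envelope cost comparison. All figures are illustrative
# assumptions, not actual vendor rates.

SEARCH_COST = 0.002       # assumed infra cost of one web search, USD
LLM_PRICE_PER_1K = 0.06   # assumed LLM billing rate per 1,000 tokens, USD

def llm_cost(prompt_tokens: int, output_tokens: int,
             price_per_1k: float = LLM_PRICE_PER_1K) -> float:
    """Cost of one LLM query when billed per token, prompt plus output."""
    return (prompt_tokens + output_tokens) / 1000 * price_per_1k

# A 500-token question with a 500-token answer:
query = llm_cost(500, 500)  # 0.06 USD
print(f"LLM query: ${query:.3f}, search: ${SEARCH_COST:.3f}")
print(f"ratio: {query / SEARCH_COST:.0f}x")  # ~30x, inside Wu's 10-100x range
```

Because the model must process every token, the variable cost scales with conversation length, which is the dynamic Wu says companies will have to reckon with.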

ADI IGNATIUS: And shall we assume that those costs will come down the way costs sometimes come down?

ANDY WU: Costs will almost certainly come down. But at the same time, we have to remember that we are in an arms race right now of increasingly complicated and large models. So, even as costs come down, the models are going to get more complicated. That’ll raise costs again. So, this is always going to be more expensive on a variable cost basis than how we’ve done computing in the past.

ADI IGNATIUS: All right. Let’s say you’re a large corporation, or even a mid-size company. You see the potential for gen AI in your company. You want to get in the game, but what do you need to know about what you bring to the table to start thinking through these decisions and whether this technology would have value for you?

ANDY WU: There are three levels of increasing sophistication in how a company can get involved and really bring AI into their organization or their product. The simplest way of doing it is basically looking for the right applications that you can use in the market. Of course, ChatGPT has made its way into a lot of companies already, and that’s not going to require a lot of deep thinking. But when we get to the next level of, perhaps, integrating a third-party application programming interface into your workflow, into your product, that’s where we’ve really got to think more about, What are the ways that a custom integrated AI tool for your organization might be valuable? And so for instance, consider the example of two retailers. Think about the department store, Macy’s, or the supermarket, Kroger’s. They might consider the possibility of using an API, an application programming interface, from OpenAI or elsewhere, and then integrating that into their application so that their customers can have a chat interface to learn about new products. Now, there’s going to be a lot of potential for custom modifications at that level to tailor that AI for your particular customer base or organization. And so in the example I just gave, imagine if a customer typed the word “dressing” into the chatbot. For Macy’s and for Kroger’s, they want the word “dressing” to deliver completely different responses. And so you’d need to modify the input through, say, prompt engineering: Macy’s would want to append the word “clothing” in front of the word “dressing,” and Kroger’s might want to append the word “salad” in front of the word “dressing,” to make sure the chat is giving the right kind of response. One level even deeper than the API is that many companies, I think, are going to need to look at the serious possibility of putting together their own model. And that doesn’t mean starting from scratch.
It means taking an open source model that’s out there that Facebook or others have already committed hundreds of millions of dollars to developing, and then using your own proprietary data to fine tune that model for your use case. Bloomberg has already trained a model of their own. And what I want managers to know is that the barrier to entry for doing this is actually pretty attainable. The numbers I’m hearing from the marketplace right now, it could cost you a million dollars or less. I’ve seen some people fine tune models for under a thousand dollars. And you can have a custom model that is based on your data for your organization. And that’s going to depend on whether or not, A, you have the proprietary data, and B, whether or not a custom model for your specific vertical is going to dominate a more general model.
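The “dressing” disambiguation Wu describes can be sketched in a few lines. This is a hypothetical illustration: the retailer names, the context table, and the function are stand-ins for whatever prompt-engineering layer would sit in front of a real model API.

```python
# Minimal sketch of retailer-specific prompt engineering: prepend a
# domain hint to the customer's query before it reaches the model,
# so an ambiguous word resolves to the right product category.
# The retailer keys and hint words are illustrative assumptions.

RETAILER_CONTEXT = {
    "macys": "clothing",   # department store: "dressing" means apparel
    "krogers": "salad",    # supermarket: "dressing" means a condiment
}

def engineer_prompt(retailer: str, user_query: str) -> str:
    """Return the query with the retailer's domain hint prepended."""
    hint = RETAILER_CONTEXT.get(retailer, "")
    return f"{hint} {user_query}".strip()

print(engineer_prompt("macys", "dressing"))    # clothing dressing
print(engineer_prompt("krogers", "dressing"))  # salad dressing
```

In practice this layer would sit between the chat front end and the API call, and could grow into richer system prompts or retrieval of product data, but the core idea is exactly this kind of input rewriting.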

ADI IGNATIUS: And then are you suggesting that companies could build these themselves? Or how do you think through the buy-versus-build decision here, whether you develop in-house or partner with somebody else that has this know-how?

ANDY WU: Yeah. Well, fortunately, there’s a lot of resources now that you can have your own model, but have other people help you put together that model. For example, Amazon has something called SageMaker where, let’s say, you have your own data available. You can just give it to Amazon, and they will fine tune the model for you. And then boom, you’ve got your nice little custom model already. At the same time, of course, the larger your organization and the more specific tweaks that you’re going to need to the model, then you’re going to want to consider the possibility of bringing a lot more of this in-house. Of course, right now, the talent in AI is very expensive, and so that’s something we’ll have to run into.

ADI IGNATIUS: How do you even know who to partner with? I mean, suddenly there’s a whole, I guess, generative AI industry that’s springing up. So, you know, if you want to partner, who do you partner with? How do you trust the expertise, figure out who’s who and who’s a valuable partner?

ANDY WU: So there are three factors I would consider here. The first is your trust of that partner in terms of their ability to handle your secure data, whether you’re going to give them all your data at once to build a model, or you’re going to have your employees typing potentially confidential material into an application. And I think the process for evaluating that trust and safety and security is the same process enterprise CTOs have already been using for two decades. The second part of this is simply price, in that, in the longer term, I think a lot of the more general technologies we’re going to be using are going to be fairly competitive in the market. And so in the end, a lot of this is going to get driven down to just the variable cost of the cloud computing or computing technology. And so I think price is actually surprisingly important here. And even across the big models that we’ve seen, like from Meta or from OpenAI, for most purposes the difference in quality isn’t that significant. So in most cases, I would pick the one that was cheaper, especially on a variable cost basis. And then the third thing, and this is particularly important in today’s environment, is that you really have to think about the stability of your partner and their financial viability. A decent amount of the AI companies we’re talking about, the most well-funded ones, raised a lot of their money before the current downturn in the market. And many of them are going to be struggling with their cash situation this year. A certain number of them will not make it into next year. And so, I would want a partner, a provider, who you think will make it into the coming years.

ADI IGNATIUS: When you talk about this, there are going to be a lot of, I guess, new job categories, new specializations. I mean, even when you say, “train the AI on the content,” what does that mean in layman’s terms? What is the job of training an AI bot on a company’s exclusive content or data?

ANDY WU: Well, at least in today’s world, and especially in the next couple of years, there is an increasing number of tools that assist developers in doing the fine-tuning process. So, I think in general, any sort of backend data engineer or software engineer will increasingly be comfortable doing this kind of thing, particularly as we update undergraduate and graduate education around this kind of technology. But what it does entail is, I would say, some hard factors and some soft factors. In terms of hard factors, there are technical skills about how to run the model and, in particular, how to run the model efficiently when you’re training it. I’ve heard stories of companies in Silicon Valley where some engineer made a very human mistake of typing a bracket instead of a colon or something. And then in the process of running the training, they literally just lit $500,000 of electricity and hardware on fire because they started the process with the typo. And now, we’ve got a completely wasted training cycle. Now, I think the soft factors here are also important when you’re fine-tuning, and this is something I was really surprised to see engineers doing. But a lot of this training process is based on feel and intuition. And so what happens is that they will take data, and then introduce it to the model to help reweight the parameters of an existing, say, open source model. And there isn’t an obvious point where you want to stop an iteration process. And so what they’re doing is they’re feeling it out and eyeballing it. There are some mathematical tests, but they’re also just eyeballing the results that are coming out, and then making a judgment call of, okay, have we trained this model enough? Do we have enough data to do what we want?
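The stopping decision Wu describes, part mathematical test, part eyeballing, can be illustrated with a simple early-stopping rule on validation loss. The loss values and thresholds here are made up; in practice, a rule like this is combined with human judgment about the model’s actual outputs.

```python
# Illustrative early-stopping rule: stop fine-tuning once validation
# loss has not meaningfully improved for `patience` evaluations in a
# row. The loss sequence and thresholds are invented for the example.

def early_stop_step(val_losses, patience=2, min_delta=0.01):
    """Return the index of the evaluation at which training stops:
    the first step after `patience` consecutive evaluations with no
    improvement greater than `min_delta` over the best loss so far.
    Returns the last index if the criterion is never triggered."""
    best = float("inf")
    stale = 0
    for step, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                return step
    return len(val_losses) - 1

# Loss drops quickly, then plateaus and ticks back up:
losses = [2.3, 1.8, 1.5, 1.42, 1.41, 1.45, 1.44]
print(early_stop_step(losses))  # stops at step 5, after two stale evals
```

The `min_delta` and `patience` values are exactly the kind of judgment calls Wu is pointing at: the math flags a plateau, but someone still has to decide what counts as “improved enough.”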

ADI IGNATIUS: Interesting. When you think about the org chart of the future, you know, what does AI, generative AI do to middle managers?

ANDY WU: I think middle managers are here to stay. And let me tell you why. If we think about this new job category of prompt engineers, or people who specialize in interacting with the AI, what are they doing? They’re asking good questions. They’re giving detailed instructions. What does a middle manager do? They’re a prompter of humans. And so in that sense, the skillset of a middle manager, I think, is more and more important. And what I think is different here, as we contrast managers with, say, individual or independent contributors, is that it’s actually the middle managers who are coming out ahead, in that they’re the ones who have the skillset to work with the AI, as opposed to someone who is, say, writing a contract or writing code as an independent contributor.

ADI IGNATIUS: So, it’s sort of all about the prompt. Is that a burgeoning industry now, chief prompt officer or something like that?

ANDY WU: I think in the short term, it’s going to be quite important, and we’re still early in how that’s exactly going to be organized. But I do imagine that a significant number of managers in companies, in addition to managing humans, are going to have to be managing an AI. And in that case, those people, I think, will also have to develop the skillset of working with the AI and building that intuition over time. And I imagine the training for that would be a combination, kind of how we teach business and management of humans today: a mix of a good HBR article plus years of experience on the job. Right?

ADI IGNATIUS: All right. Let’s say you’re a technology provider. How would you be thinking about these markets right now?

ANDY WU: Oh, this is a very tough one to look at right now as far as being a technology provider, but I think the upside is that there are a lot of places in the value chain here that you can compete in as a technology provider. You can compete in supplying data, or you can compete in helping other people build models. There’s also a burgeoning industry of going back to private data centers that I think we’ll need more of. And of course, at the application layer, I think we will see a whole different category of applications that are going to be AI first. And that’s the area where I think we’re going to have a whole new generation of entrepreneurs who are going to be quite disruptive to the older generation. But what I do want to caution people about here is the open questions about the fundamental profitability of being just a model company. Isn’t it interesting that Microsoft has essentially outsourced the model? And that Google and Facebook have historically, actually, open sourced the model? And Facebook today is very aggressive about open sourcing that model. So what we think of as the core AI technology, these large companies either aren’t keeping it internal or they’re handing it out for free. And what does that tell us about where the profitability is? It may not actually be in that part of the technology. If we go back to the internet and look at what were the most important technologies for the rise of the internet, I would put the three technologies as telecommunications, like AT&T and Verizon; second, TCP/IP; and third, HTML. How much money did those people make on the internet? Approximately nothing. And that’s what we have to be careful about here: I think there are going to be portions of the value chain for AI that will not capture a lot of the profitability.

ADI IGNATIUS: If you had to guess right now, who’s going to make the big money in this? Who stands to be the big winner?

ANDY WU: I think that the providers of the hardware technology used for this kind of technology are going to come out very strong. There’s nobody in the world I’d prefer to be right now than Jensen Huang at Nvidia. And on the other end of the stack, I do think that entrepreneurs building AI-first applications will be in a great position, particularly as they’re using and thinking about AI and reimagining how we traditionally think about applications. In terms of industry verticals where I think there’s the biggest growth opportunity, the two areas I’m most excited about right now are video gaming and media or social media. And I think a lot of the conversation we’re having today is focused on AI as a productivity tool. I think the better way to look at it is actually AI as enhancing consumption. And so what we are talking about here with generative AI is we, now, have the ability to provide infinite, personalized content to any person to entertain them until the end of time. And so I think that’s an area that is going to be very, very exciting.

ADI IGNATIUS: This is a super-fast-moving market. Presumably, the rules of classic strategies still apply. But I’d love to know, does this feel different from new technologies that you’ve studied before?

ANDY WU: Really, I like to think of this as a new type of computing platform. And so when I teach about technology, we teach students about the past in terms of the rise of operating systems and web browsers, and in terms of current-age platform technologies like the metaverse, virtual reality, telecommunications enhancement through 5G, as well as a variety of other computing interfaces like voice. These are all platforms. This is a new kind of platform that allows us to do a much more human-like form of computing. And I would say that many observers in Silicon Valley, I think, were surprised by this type of ChatGPT model being the thing that really resonated with people. And that part, I’m still surprised by, but I think it speaks to the broader mission of artificial general intelligence in terms of reaching the singularity and sort of replicating a human. That really does speak to people, quite literally.

ADI IGNATIUS: All right. What’s your basic message to senior leaders who want to lead their organizations into the future with a generative AI solution?

ANDY WU: For leaders that are trying to lead their organizations into the future, my core message would be you need to chill out and panic at the same time. And so when you’re thinking about making investments in AI today, I want you to do it with a sense of urgency, but I don’t want you to play for the next two years. You’re playing for the next ten years. Right now, it’s a difficult time to make decisions around AI because, at least my bet is, we’re almost certainly in the middle of a hype cycle around the technology. And so in that sense, it’s not really about the current phase of the technology, but without a doubt in the next ten years, advancements in computing and AI will continue to be transformative. And they will have transformative effects around the same logic that you and I have talked about today in that the same assumptions about the importance of data, the same assumptions about fixed costs and variable costs, all that will be the same. And the more you can start getting your organizations ready now, then you’ll be ready for the next transformation. Right now, there’s a very specific set of technology that AI is based on. That will almost certainly be different in five to ten years, and I want you to be ready for the next one. The broader competitive part, though, that’s a little more frightening is that, in fact, other companies, your competitors, are almost certainly going through the same exercise. And so really, we’re on a treadmill here. And then if you’re looking at AI right now, you’ve really got to take some action because your competitors are also going to be doing the same kind of thing.

ADI IGNATIUS: Where can companies go wrong? Are you seeing misunderstanding or misconceptions that can push executives in the wrong direction as they try to figure out how to respond to this technological opportunity?

ANDY WU: There’s two directions I would think about in terms of mistakes, and these are actually competing mistakes in that, generally, you make one or the other. And so the first mistake is being suspicious about the technology and suspicious about, particularly, privacy of data and restricting your employees from experimenting in using it. And in particular, over the next couple years, we’re going to see a large battery of new applications that are AI-based coming up. And the only way you’re going to be able to figure out which of those applications is good and that you should diffuse to the organization is by letting some frontline employees play around with that new technology. But that does entail the risk of letting your employees submit some data that’s private to the company to those applications and into those models. So, there’s a little bit of intellectual property risk there, perhaps a lot. And the other risk here is the total opposite of that, is that companies need to really think about how they’re protecting intellectual property, particularly on the open internet. We’re in kind of a pickle right now for intellectual property in that we’ve lived through an era where for, say, text based information, that there’s a distinction between copyrighted and non-copyrighted text. And what is different now is that it’s not just about copyrighted or not. It’s also about whether or not that information is available publicly or privately. For example, a New York Times article would be copyrighted, but public in that Google can index it through its search process and people can find it online. That is the data that is going to be used to train the next generation of models. 
And that's the data that I think companies like the New York Times really need to think about blocking off from the open internet, so that they retain the ability to use it, sell it, or train their own models with it, as opposed to letting everybody else train their models with it.

ADI IGNATIUS: That's such a great point. I think for people who generate content, it feels like the generative AI bots have already scraped our information without our permission, and that's part of this great aggregation of language and data that's being used to generate these amazing new things. It seems like an IP question that, at best, is a gray area and, at worst, is actionable.

ANDY WU: Right. It's definitely a gray area right now. There are a number of ongoing lawsuits, and a lot of legal scholars have been debating exactly what we should do here. Some have argued that we need more stringent protections to prevent people from training AI on someone else's copyrighted data. If we don't make any change, then we're in the tough situation that companies will probably have to pull their data off the open internet, or make it harder for the general public to access, in which case our notion of an open internet is going to be much more challenged. The internet could be much less open than we would have imagined in the past.

ADI IGNATIUS: With any of these big technological developments, there’s sort of a split between techno optimists and techno pessimists, and they’re usually the same arguments: “This is going to improve efficiency and let us do new things, and free up humans to do more creative things.” The pessimists tend to be, “Well, yeah, one or two humans will be able to do creative things, but all the other humans will lose their jobs.” Where do you come down in terms of optimism versus pessimism in terms of this technology?

ANDY WU: Adi, that's a super tough question, but I guess I would probably come down on the pessimistic side right now in terms of the broader impacts. Again, I'm not of the opinion that there's any real way to stop the current process that we're undergoing in society. But the pessimism, from my perspective, comes from what we've seen already with the digitization of society over the last 20 years. On the productivity and employment side, automation has already had a tremendous impact on at least the American workforce, in terms of lost jobs in Middle America, in manufacturing, and in the retail sector. The broad transformation from brick-and-mortar to e-commerce has also cost a lot of jobs in local communities. But I'm actually more concerned about the transformation on the consumption side, how we spend our time outside of work, and I think that's where it's most dangerous. The analogy here would be to look at the last 20 years, at how much time our young people spend playing video games and on social media. And now we have the ability to generate, at low cost, unlimited video games and unlimited social media. In that world, the thing I would worry about is not necessarily whether your kids or my kids, or anyone's kids, can find a job. It's really whether they would ever want to work a job. If you can, for a very low cost, just sit there and live in a digital world of unlimited, perfectly personalized content, what's the point of doing anything else? And then we have to reckon with whether that's a world we want to live in.

ADI IGNATIUS: Andy Wu, I want to thank you for joining us. Thank you for your insights.

ANDY WU: Thank you for having me.

ADI IGNATIUS: That's Andy Wu, a professor at Harvard Business School. Before that, I spoke with Chris Young, the head of strategy at Microsoft. This is the last episode in our series, How Generative AI Changes Everything. To listen to the other episodes, on the impact on productivity, creativity, and organizational culture, you can find them in the HBR IdeaCast feed. And for more on this topic, check out HBR's latest big idea on how to implement this new technology responsibly. This episode was produced by Curt Nickisch. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Special thanks to Maureen Hoch. Thank you for listening to How Generative AI Changes Everything, a special series of the HBR IdeaCast. I'm Adi Ignatius.
