Jensen Huang’s Wisdom

Nvidia’s co-founder and CEO was interviewed recently, and there was plenty to learn from what he shared.

I listen to, or read the transcripts of, podcasts regularly. One of my favourite podcast episodes this year was Jensen Huang’s appearance earlier this month in an episode of the Acquired FM podcast hosted by Ben Gilbert and David Rosenthal. Huang is the co-founder and CEO of Nvidia, a chip designer with US$32.7 billion in trailing revenue that’s at the epicenter of the AI revolution today. During his 1.5-hour interview with Gilbert and Rosenthal, Huang shared many pieces of wisdom – the passages below are my favourites.

On how he sped up Nvidia’s chip development process by simulating the future

Jensen: We also made the decision to use this technology called emulation. There was a company called ICOS. On the day that I called them, they were just shutting the company down because they had no customers. I said, hey, look. I’ll buy what you have in inventory. No promises are necessary.

The reason why we needed that emulator is because, if you figure out how much money we had, if we taped out a chip, got it back from the fab, and started working on our software, then by the time we found all the bugs through doing the software and taped out the chip again, we would’ve been out of business already.

David: And your competitors would’ve caught up.

Jensen: Well, not to mention we would’ve been out of business.

David: Who cares?

Jensen: Exactly. If you’re going to be out of business anyway, that plan obviously wasn’t the plan. The plan that companies normally go through—build a chip, write the software, fix the bugs, tape out a new chip, so on and so forth—that method wasn’t going to work. The question is, if we only had six months and you get to tape out just one time, then obviously you’re going to tape out a perfect chip.

I remember having a conversation with our leaders and they said, but Jensen, how do you know it’s going to be perfect? I said, I know it’s going to be perfect, because if it’s not, we’ll be out of business. So let’s make it perfect. We get one shot.

We essentially virtually prototyped the chip by buying this emulator. Dwight and the software team wrote our software, the entire stack, ran it on this emulator, and just sat in the lab waiting for Windows to paint.

David: It was like 60 seconds for a frame or something like that.

Jensen: Oh, easily. I actually think that it was an hour per frame, something like that. We would just sit there and watch it paint. On the day that we decided to tape out, I assumed that the chip was perfect. Everything that we could have tested, we tested in advance, and told everybody this is it. We’re going to tape out the chip. It’s going to be perfect.

Well, if you’re going to tape out a chip and you know it’s perfect, then what else would you do? That’s actually a good question. If you knew that you hit enter, you tape out a chip, and you knew it was going to be perfect, then what else would you do? Well, the answer, obviously, go to production.

Ben: And marketing blitz. And developer relations.

Jensen: Kick everything off because you got a perfect chip. We got in our head that we have a perfect chip.

David: How much of this was you and how much of this was your co-founders, the rest of the company, the board? Was everybody telling you you were crazy?

Jensen: No. Everybody was clear we had no shot. Not doing it would be crazy.

David: Otherwise, you might as well go home.

Jensen: Yeah, you’re going to be out of business anyway, so anything aside from that is crazy. It seemed like a fairly logical thing. Quite frankly, right now as I’m describing it, you’re probably thinking yeah, it’s pretty sensible.

David: Well, it worked.

Jensen: Yeah, so we taped that out and went directly to production.

Ben: So is the lesson for founders out there when you have conviction on something like the RIVA 128 or CUDA, go bet the company on it. This keeps working for you. It seems like your lesson learned from this is yes, keep pushing all the chips in because so far it’s worked every time. How do you think about that?

Jensen: No, no. When you push your chips in I know it’s going to work. Notice we assumed that we taped out a perfect chip. The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out. We developed the entire software stack. We ran QA on all the drivers and all the software. We ran all the games we had. We ran every VGA application we had.

When you push your chips in, what you’re really doing is, when you bet the farm you’re saying, I’m going to take everything in the future, all the risky things, and I pull them in in advance. That is probably the lesson. To this day, everything that we can prefetch, everything in the future that we can simulate today, we prefetch it.

On Nvidia’s corporate culture and architecture and why it works

Ben: We have some questions we want to ask you. Some are cultural about Nvidia, but others are generalizable to company-building broadly. The first one that we wanted to ask is that we’ve heard that you have 40+ direct reports, and that this org chart works a lot differently than a traditional company org chart.

Do you think there’s something special about Nvidia that makes you able to have so many direct reports, not worry about coddling or focusing on career growth of your executives, and you’re like, no, you’re just here to do your fricking best work on the most important thing in the world. Now go. (a) Is that correct? and (b) is there something special about Nvidia that enables that?

Jensen: I don’t think it’s something special in Nvidia. I think that we had the courage to build a system like this. Nvidia’s not built like a military. It’s not built like the armed forces, where you have generals and colonels. We’re not set up like that. We’re not set up in a command and control and information distribution system from the top down.

We’re really built much more like a computing stack. The lowest layer is our architecture, then there’s our chip, then there’s our software, and on top of it there are all these different modules. Each one of these layers of modules are people.

The architecture of the company (to me) is a computer with a computing stack, with people managing different parts of the system. Who reports to whom, your title, is not related to where you are in the stack. Whoever happens to be the best at running that module, that function, on that layer is in charge. That person is the pilot in command. That’s one characteristic.

David: Have you always thought about the company this way, even from the earliest days?

Jensen: Yeah, pretty much. The reason for that is because your organization should be the architecture of the machinery of building the product. That’s what a company is. And yet, everybody’s company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I’m saying?

How you make fried chicken versus how you flip burgers versus how you make Chinese fried rice is different. Why would the machinery, why would the process be exactly the same?

It’s not sensible to me that if you look at the org charts of most companies, it all looks like this. Then you have one group that’s for a business, and you have another for another business, you have another for another business, and they’re all supposedly autonomous.

None of that stuff makes any sense to me. It just depends on what is it that we’re trying to build and what is the architecture of the company that best suits to go build it? That’s number one.

In terms of information systems and how you enable collaboration, we’re wired up like a neural network. The way that we say this is that there’s a phrase in the company called ‘mission is the boss.’ We figure out what the mission is, and we go wire up the best skills, the best teams, and the best resources to achieve that mission. It cuts across the entire organization in a way that doesn’t make any sense, but it looks a little bit like a neural network.

David: And when you say mission, do you mean Nvidia’s mission is…

Jensen: Build Hopper.

David: Okay, so it’s not like further accelerated computing? It’s like we’re shipping DGX Cloud.

Jensen: No. Build Hopper. Somebody else’s mission is to build a system for Hopper. Somebody has built CUDA for Hopper. Somebody’s job is to build cuDNN for CUDA for Hopper. Somebody’s job is the mission. Your mission is to do something.

Ben: What are the trade-offs associated with that versus the traditional structure?

Jensen: The downside is the pressure on the leaders is fairly high. The reason for that is because in a command and control system, the person who you report to has more power than you. The reason why they have more power than you is because they’re closer to the source of information than you are.

In our company, the information is disseminated fairly quickly to a lot of different people. It’s usually at a team level. For example, just now I was in our robotics meeting. We’re talking about certain things and we’re making some decisions.

There are new college grads in the room. There are three vice-presidents in the room, there are two e-staff in the room. At the moment that we decided together, we reasoned through some stuff, we made a decision, everybody heard it exactly the same time. Nobody has more power than anybody else. Does that make sense? The new college grad learned at exactly the same time as the e-staff.

The executive staff, the leaders that work for me, and myself, you earned the right to have your job based on your ability to reason through problems and help other people succeed. It’s not because you have some privileged information that I knew the answer was 3.7, and only I knew. Everybody knew.

On the right way to learn from business books

Jensen: In the last 30 years I’ve read my fair share of business books. As in everything you read, you’re supposed to first of all enjoy it, be inspired by it, but not to adopt it. That’s not the whole point of these books. The whole point of these books is to share their experiences.

You’re supposed to ask, what does it mean to me in my world, and what does it mean to me in the context of what I’m going through? What does this mean to me and the environment that I’m in? What does this mean to me in what I’m trying to achieve? What does this mean to Nvidia and the age of our company and the capability of our company?

You’re supposed to ask yourself, what does it mean to you? From that point, being informed by all these different things that we’re learning, we’re supposed to come up with our own strategies.

What I just described is how I go about everything. You’re supposed to be inspired and learn from everybody else. The education’s free. When somebody talks about a new product, you’re supposed to go listen to it. You’re not supposed to ignore it. You’re supposed to go learn from it.

It could be a competitor, it could be an adjacent industry, it could be nothing to do with us. The more we learn from what’s happening out in the world, the better. But then, you’re supposed to come back and ask yourself, what does this mean to us?

David: You don’t just want to imitate them.

Jensen: That’s right.

On the job of the CEO in a company

Jensen: That’s right. You want to pave the way to future opportunities. You can’t wait until the opportunity is sitting in front of you for you to reach out for it, so you have to anticipate.

Our job as CEO is to look around corners and to anticipate where opportunities will be someday. Even if I’m not exactly sure what and when, how do I position the company to be near it, to be just standing under the tree, so we can make a diving catch when the apple falls. You guys know what I’m saying? But you’ve got to be close enough to do the diving catch.

On seeing the future of computing and AI before others did

Ben: Speaking of the speed of light—David’s begging me to go here—you totally saw that InfiniBand would be way more useful way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company. Why did you see that when no one else saw that?

Jensen: There were several reasons for that. First, if you want to be a data center company, building the processing chip isn’t the way to do it. What distinguishes a data center from a desktop computer or a cell phone is not the processor in it.

A desktop computer and a data center use the same CPUs, the same GPUs, apparently. Very close. It’s not the processing chip that defines it, but the networking of it, the infrastructure of it. It’s how the computing is distributed, how security is provided, how networking is done, and so on and so forth. Those characteristics are associated with Mellanox, not Nvidia.

The day that I concluded that really Nvidia wants to build computers of the future, and computers of the future are going to be data centers, embodied in data centers, I realized that if we want to be a data center–oriented company, then we really need to get into networking. That was one.

The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one training job is orchestrated across millions of processors.

It’s the inverse of hyperscale, almost. The way that you design a hyperscale computer with off-the-shelf commodity ethernet, which is just fine for Hadoop, it’s just fine for search queries, it’s just fine for all of those things—

Ben: But not when you’re sharding a model across.

Jensen: Not when you’re sharding a model across, right. That observation says that the type of networking you want to do is not exactly ethernet. The way that we do networking for supercomputing is really quite ideal.

The combination of those two ideas convinced me that Mellanox is absolutely the right company, because they’re the world’s leading high-performance networking company. We worked with them in so many different areas in high performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3200 people there now, and it was one of the best strategic decisions I’ve ever made.

David: When we were researching, particularly part three of our Nvidia series, we talked to a lot of people. Many people told us the Mellanox acquisition is one of the best, if not the best, of all time by any technology company.

Jensen: I think so, too. It’s so disconnected from the work that we normally do, it was surprising to everybody.

Ben: But framed this way, you were standing near where the action was, so you could figure out as soon as that apple becomes available to purchase, like, oh, LLMs are about to blow up, I’m going to need that. Everyone’s going to need that. I think I know that before anyone else does.

Jensen: You want to position yourself near opportunities. You don’t have to be that perfect. You want to position yourself near the tree. Even if you don’t catch the apple before it hits the ground, so long as you’re the first one to pick it up. You want to position yourself close to the opportunities.

That’s kind of a lot of my work, is positioning the company near opportunities, and the company having the skills to monetize each one of the steps along the way so that we can be sustainable.

On why zero-billion dollar markets are better than $10 billion markets

David: I’ve heard you or others in Nvidia (I think) use the phrase zero billion dollar—

Jensen: That’s exactly right. It’s our way of saying there’s no market yet, but we believe there will be one. Usually when you’re positioned there, everybody’s trying to figure out why you are here. We first got into automotive because we believe that in the future, the car is going to be largely software. If it’s going to be largely software, a really incredible computer is necessary.

When we positioned ourselves there, I still remember one of the CTOs told me, you know what? Cars cannot tolerate the blue screen of death. I said, I don’t think anybody can tolerate that, but that doesn’t change the fact that someday every car will be a software-defined car. I think 15 years later we’re largely right.

Oftentimes there’s non-consumption, and we like to navigate our company there. By doing that, by the time that the market emerges, it’s very likely there aren’t that many competitors shaped that way.

We were early in PC gaming, and today Nvidia’s very large in PC gaming. We reimagined what a design workstation would be like. Today, just about every workstation on the planet uses Nvidia’s technology. We reimagined how supercomputing ought to be done and who should benefit from it, deciding that we would democratize it. And look today, Nvidia in accelerated computing is quite large.

We reimagined how software would be done, and today it’s called machine learning, and how computing would be done, which we call AI. We reimagined these things, trying to do that about a decade in advance. We spent about a decade in zero billion dollar markets, and today I spend a lot of time on Omniverse. Omniverse is a classic example of a zero billion dollar business.

Ben: There are like 40 customers now? Something like that?

David: Amazon, BMW.

Jensen: Yeah, I know. It’s cool.

On protecting a company’s moat (or competitive advantage)

Jensen: Oftentimes, if you created the market, you ended up having what people describe as moats, because if you build your product right and it’s enabled an entire ecosystem around you to help serve that end market, you’ve essentially created a platform.

Sometimes it’s a product-based platform. Sometimes it’s a service-based platform. Sometimes it’s a technology-based platform. But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers and customers who are built around you. That network is essentially your moat.

I don’t love thinking about it in the context of a moat. The reason for that is because you’re now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. That network is about enabling other people to enjoy the success of the final market. That you’re not the only company that enjoys it, but you’re enjoying it with a whole bunch of other people.

On the importance of luck in a company’s success

David: Is it fair to say, though, maybe on the luck side of the equation, thinking back to 1997, that that was the moment where consumers tipped to really, really valuing 3D graphical performance in games?

Jensen: Oh yeah. For example, luck. Let’s talk about luck. If Carmack had decided to use acceleration, because remember, Doom was completely software-rendered.

The Nvidia philosophy was that although general-purpose computing is a fabulous thing and it’s going to enable software and IT and everything, we felt that there were applications that wouldn’t be possible or it would be costly if it wasn’t accelerated. It should be accelerated. 3D graphics was one of them, but it wasn’t the only one. It just happens to be the first one and a really great one.

I still remember the first time we met John. He was quite emphatic about using CPUs and his software renderer was really good. Quite frankly, if you look at Doom, the performance of Doom was really hard to achieve even with accelerators at the time. If you didn’t have to do bilinear filtering, it did a pretty good job.

David: The problem with Doom, though, was you needed Carmack to program it.

Jensen: Exactly. It was a genius piece of code, but nonetheless, software renderers did a really good job. If he hadn’t decided to go to OpenGL and accelerate for Quake, frankly what would be the killer app that put us here? Carmack and Sweeney, between Unreal and Quake, created the first two killer applications for consumer 3D, so I owe them a great deal.

On the importance of having an ecosystem of 3rd-party developers surrounding your company

David: I want to come back real quick to you told these stories and you’re like, well, I don’t know what founders can take from that. I actually do think if you look at all the big tech companies today, perhaps with the exception of Google, they did all start—and understanding this now about you—by addressing developers, planning to build a platform, and tools for developers.

All of them—Apple, not Amazon. […] That’s how AWS started. I think that actually is a lesson to your point of, that won’t guarantee success by any means, but that’ll get you hanging around a tree if the apple falls.

Jensen: As many good ideas as we have, we don’t have all the world’s good ideas, and the benefit of having developers is you get to see a lot of good ideas.

On keeping AI safe, and how AI can change the world for the better

Ben: I want to think about the future a little bit. I’m sure you spend a lot of time on this being on the cutting edge of AI.

We’re moving into an era where the productivity that software can accomplish when a person is using software can massively amplify the impact and the value that they’re creating, which has to be amazing for humanity in the long run. In the short term, it’s going to be inevitably bumpy as we figure out what that means.

What do you think some of the solutions are as AI gets more and more powerful and better at accelerating productivity for all the displaced jobs that are going to come from it?

Jensen: First of all, we have to keep AI safe. There are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there’s a whole field of AI safety. We’ve dedicated ourselves to functional and active safety, and all kinds of different areas of safety. When to apply a human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where, increasingly, a human doesn’t have to be in the loop, but a human is largely in the loop?

In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention.

You’ve seen some of the work that we’ve done: instead of scraping the Internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying artificial intelligence, generative AI.

In the area of large language models and, in the future, increasingly greater-agency AI, clearly the answer is, for as long as it’s sensible—and I think it’s going to be sensible for a long time—human in the loop. The ability for an AI to self-learn, improve, and change out in the wild in a digital form should be avoided. We should collect data. We should curate the data. We should train the model. We should test the model, validate the model before we release it in the wild again. So the human is in the loop.

There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane, two-pilot system, then air traffic control, redundancy and diversity, and all of the basic philosophies of designing safe systems apply as well in self-driving cars, and so on and so forth. I think there are a lot of models of creating safe AI, and I think we need to apply them.

With respect to automation, my feeling is that—and we’ll see—it is more likely that AI is going to create more jobs in the near term. The question is what’s the definition of near term? And the reason for that is the first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people because they want to expand into more areas.

So the question is, if you think about a company and say, okay, if we improve the productivity, then we need fewer people. Well, that’s because the company has no more ideas. But that’s not true for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.

So long as we believe that there are more areas to expand into (more ideas in drugs and drug discovery, more ideas in transportation, in retail, in entertainment, in technology), the prosperity of the industry, which comes from improved productivity, results in hiring more people to pursue more ideas.

Now you go back in history. We can fairly say that today’s industry is larger than the world’s industry a thousand years ago. The reason for that is because obviously, humans have a lot of ideas. I think that there are plenty of ideas yet for prosperity and plenty of ideas that can be begotten by productivity improvements, but my sense is that it’s likely to generate jobs.

Now obviously, net generation of jobs doesn’t guarantee that any one human doesn’t get fired. That’s obviously true. It’s more likely that someone will lose a job to someone else, some other human that uses an AI. Not likely to an AI, but to some other human that uses an AI.

I think the first thing that everybody should do is learn how to use AI, so that they can augment their own productivity. Every company should augment their own productivity to be more productive, so that they can have more prosperity, hire more people.

I think jobs will change. My guess is that we’ll actually have higher employment, we’ll create more jobs. I think industries will be more productive. Many of the industries that are currently suffering from a lack of labor are likely to use AI to get themselves back on their feet and return to growth and prosperity. I see it a little bit differently, but I do think that jobs will be affected, and I’d encourage everybody just to learn AI.

David: This is appropriate. There’s a version of something we talk about a lot on Acquired that we call the Moritz corollary to Moore’s law, after Mike Moritz from Sequoia.

Jensen: Sequoia was the first investor in our company.

David: Of course, yeah. The great story behind it is that when Mike was taking over for Don Valentine with Doug, he was sitting and looking at Sequoia’s returns. He was looking at fund three or four (I think it was maybe four) that had Cisco in it. He was like, how are we ever going to top that? Don’s going to have us beat. We’re never going to beat that.

He thought about it and he realized that, well, as compute gets cheaper, it can access more areas of the economy, and it can get adopted more widely, so the markets that we can address should get bigger. Your argument is basically AI will do the same thing. The cycle will continue.

Jensen: Exactly. I just gave you exactly the same example that in fact, productivity doesn’t result in us doing less. Productivity usually results in us doing more. Everything we do will be easier, but we’ll end up doing more. Because we have infinite ambition. The world has infinite ambition. If a company is more profitable, they tend to hire more people to do more.

On the importance of prioritising your daily activities

David: What is something that you believe today that 40-year-old Jensen would’ve pushed back on and said, no, I disagree.

Jensen: There’s plenty of time. If you prioritize yourself properly and you make sure that you don’t let Outlook be the controller of your time, there’s plenty of time.

David: Plenty of time in the day? Plenty of time to achieve this thing?

Jensen: To do anything. Just don’t do everything. Prioritize your life. Make sacrifices. Don’t let Outlook control what you do every day.

Notice I was late to our meeting, and the reason for that is, by the time I looked up, oh my gosh, Ben and David are waiting.

David: We have time.

Jensen: Exactly.

David: Didn’t stop this from being your day job.

Jensen: No, but you have to prioritize your time really carefully, and don’t let Outlook determine that.

On what is the really important thing in a business plan: The problem you want to solve

Jensen: I didn’t know how to write a business plan.

Ben: Which it turns out is not actually important.

Jensen: No. It turns out that making a financial forecast that nobody knows will be right or wrong is not that important. But there are important things that a business plan probably could have teased out. I think the art of writing a business plan ought to be much, much shorter.

It forces you to condense: what is the true problem you’re trying to solve? What is the unmet need that you believe will emerge? And what is it that you’re going to do that is sufficiently hard, that when everybody else finds out it is a good idea, they’re not going to swarm it and make you obsolete? It has to be sufficiently hard to do.

There are a whole bunch of other skills that are involved in just product positioning, pricing, go to market and all that stuff. But those are skills, and you can learn those things easily. The stuff that is really, really hard is the essence of what I described.

I did that okay, but I had no idea how to write the business plan. I was fortunate that Wilf Corrigan was so pleased with me and the work that I did at LSI Logic that he called up Don Valentine and told Don, invest in this kid. He’s going to come your way. I was set up for success from that moment, and it got us off the ground.

On entrepreneurs’ superpower

David: Well, and that being our final question for you. It’s 2023, the 30-year anniversary of the founding of Nvidia. If you were magically 30 years old again today in 2023, and you were going to Denny’s with your two best friends who are the two smartest people you know, and you’re talking about starting a company, what are you talking about starting?

Jensen: I wouldn’t do it. I know. The reason for that is really quite simple. Ignoring the company that we would start, first of all, I’m not exactly sure. The reason why I wouldn’t do it, and it goes back to why it’s so hard, is that building a company and building Nvidia turned out to have been a million times harder than I expected it to be, than any of us expected it to be.

At that time, if we realized the pain and suffering, just how vulnerable you’re going to feel, and the challenges that you’re going to endure, the embarrassment and the shame, and the list of all the things that go wrong, I don’t think anybody would start a company. Nobody in their right mind would do it.

I think that that’s the superpower of an entrepreneur. They don’t know how hard it is, and they only ask themselves how hard can it be? To this day, I trick my brain into thinking, how hard can it be? Because you have to.

On the importance of self-belief

David: I know how meaningful that is in any company, but for you, I feel like the Nvidia journey is particularly amplified on these dimensions. You went through two, if not three, 80%-plus drawdowns in the public markets, and to have investors who’ve stuck with you from day one through that, must be just so much support.

Jensen: It is incredible. You hate that any of that stuff happened. Most of it is out of your control, but 80% fall, it’s an extraordinary thing no matter how you look at it.

I forget exactly, but we traded down to about $2–$3 billion in market value for a while because of the decision we made in going into CUDA and all that work. Your belief system has to be really, really strong. You have to really, really believe it and really, really want it.

Otherwise, it’s just too much to endure because everybody’s questioning you. Employees aren’t questioning you, but employees have questions. People outside are questioning you, and it’s a little embarrassing.

It’s like when your stock price gets hit, it’s embarrassing no matter how you think about it. It’s hard to explain. There are no good answers to any of that stuff. The CEOs are humans and companies are built of humans. These challenges are hard to endure.

On how technology transforms and grows economic opportunities

Jensen: This is the extraordinary thing about technology right now. Technology is a tool and it’s only so large. What’s unique about our current circumstance today is that we’re in the manufacturing of intelligence. We’re in the manufacturing-of-work world. That’s AI. The world of tasks doing work—productive, generative AI work, generative intelligent work—that market size is enormous. It’s measured in trillions.

One way to think about that is if you built a chip for a car, how many cars are there and how many chips would they consume? That’s one way to think about that. However, if you build a system that, whenever needed, assisted in the driving of the car, what’s the value of an autonomous chauffeur every now and then?

Obviously, the problem becomes much larger, the opportunity becomes larger. What would it be like if we were to magically conjure up a chauffeur for everybody who has a car, and how big is that market? Obviously, that’s a much, much larger market.

What we discovered, what Nvidia has discovered, and what some others have discovered, is that by separating yourself from being a chip company and building on top of the chip, you’re now an AI company, and the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future, because what you produce is something very different. That’s the way to think about how large your opportunity can be, how large you can be. It has everything to do with the size of the opportunity.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q3 2023

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the third quarter of 2023.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings release and conference call – for the third quarter of 2023 – happened just last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: Consumer spending and the overall economic environment are solid, but there are substantial risks on the horizon.

Shown below are quotes from JPMorgan’s management team that I picked up from the call.


1. Consumer spending is stable, but consumers are now spending their cash buffers down to pre-pandemic levels

Consumer spend growth has now reverted to pre-pandemic trends with nominal spend per customer stable and relatively flat year-on-year. Cash buffers continue to normalize to pre-pandemic levels with lower income groups normalizing faster.

2. Auto loan originations and auto loan growth were strong

And in Auto, originations were $10.2 billion, up 36% year-on-year as we saw competitors pull back and we gained market share…

…In Auto, we’ve also seen pretty robust loan growth recently, both as a function of, sort of, slightly more competitive pricing on our side as the industry was a little bit slow to raise rates. And so we lost some share previously, and that’s come back now. And generally, the supply chain situation is better, so that’s been supportive. As we look forward there, it should be a little bit more muted.

3. Businesses have a healthy appetite for funding from capital markets…

In terms of the outlook, we’re encouraged by the level of capital markets activity in September, and we have a healthy pipeline going into the fourth quarter.

4. …although loan demand from businesses appears to be relatively muted

And I think generally in Wholesale, the loan growth story is going to be driven just by the economic environment. So depending on what you believe about soft landing, mild recession, no landing, we have slightly lower or slightly higher loan growth. But in any case, I would expect it to be relatively muted.

5. Loan losses (a.k.a. the net charge-off rate) for credit cards are improving, with the prior expectation for the 2023 Card net charge-off rate at 2.6% compared to the current expectation of 2.5%…

On credit, we now expect the 2023 Card net charge-off rate to be approximately 2.5%, mostly driven by denominator effects due to recent balance growth.
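To make the “denominator effect” concrete, here is a quick sketch with made-up numbers — only the 2.6% and 2.5% rates come from the call; the dollar figures are invented for illustration:

```python
# Hypothetical illustration of the "denominator effect": holding dollar
# charge-offs constant, growing loan balances lower the charge-off *rate*.
charge_offs = 5.0      # billions of dollars written off (invented figure)
old_balances = 192.0   # 5 / 192 is roughly 2.6%
new_balances = 200.0   # 5 / 200 is exactly 2.5%
print(f"{charge_offs / old_balances:.1%} -> {charge_offs / new_balances:.1%}")
```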

6. …and loan growth in credit cards is still robust, although it has tracked down somewhat

So we were seeing very robust loan growth in Card, and that’s coming from both spending growth and the normalization of revolving balances. As we look forward, we’re still optimistic about that, but it will probably be a little bit more muted than it has been during this normalization period.

7. The near-term outlook for the US economy has improved

I think our U.S. economists had their central case outlook to include a very mild recession with, I think, 2 quarters of negative 0.5% of GDP growth in the fourth quarter and first quarter of this year. And that then got revised out early this quarter to now have sort of modest growth, I think around 1% for a few quarters into 2024.

8. There is no weakness from either consumers or businesses in meeting debt obligations

And I think your other question was, where am I seeing softness in credit? And I think the answer to that is actually nowhere, roughly, or certainly nowhere that’s not expected. Meaning we continue to see the normalization story play out in consumer more or less exactly as expected. And then, of course, we are seeing a trickle of charge-offs coming through the office space. You see that in the charge-off number of the Commercial Bank. But the numbers are very small and more or less just the realization of the allowance that we’ve already built there.

9. Demand for housing loans is constrained

And of course, Home Lending remains fairly constrained both by rates and market conditions.

10. Overall economic picture looks solid, but there are reasons for caution – in fact, JPMorgan’s CEO, Jamie Dimon, thinks the world may be in the most dangerous environment seen in decades 

And of course, the overall economic picture, at least currently, looks solid. The sort of immaculate disinflation trade is actually happening. So those are all reasons to be a little bit optimistic in the near term, but it’s tempered with quite a bit of caution…

…However, persistently tight labor markets as well as extremely high government debt levels with the largest peacetime fiscal deficits ever are increasing the risks that inflation remains elevated and that interest rates rise further from here. Additionally, we still do not know the longer-term consequences of quantitative tightening, which reduces liquidity in the system at a time when market-making capabilities are increasingly limited by regulations. Furthermore, the war in Ukraine compounded by last week’s attacks on Israel may have far-reaching impacts on energy and food markets, global trade, and geopolitical relationships. This may be the most dangerous time the world has seen in decades. 



Crises and Stocks

Mankind’s desire for progress is ultimately what fuels the global economy and financial markets.

The past few years may seem especially tumultuous because of the crises that have occurred. 

For example, in 2020, there was the COVID pandemic and oil prices turned negative for the first time in recorded history. In 2021, inflation in the USA rose to a level last seen in the early 1980s. In 2022, Russia invaded Ukraine. This year, there were the high-profile collapses of Silicon Valley Bank and First Republic Bank in the USA, and Credit Suisse in Europe; and just a few days ago, Israel was attacked by Hamas and Hezbollah militants.

But without downplaying the human tragedies, it’s worth noting that crises are common. Here’s a (partial!) list of major crises in every year stretching back to 1990 that I’ve borrowed and added to (the additions are in square brackets) from an old Morgan Housel article for The Motley Fool:

[2023 (so far): Collapse of Silicon Valley Bank and First Republic Bank in the USA; firesale of Credit Suisse to UBS; Israel gets attacked by Hamas and Hezbollah militants

2022: Russia invades Ukraine

2021: Inflation in the USA rises to a level not seen since the early 1980s

2020: COVID pandemic; oil prices turn negative for first time in history 

2019: Australia bush fires; US president impeachment; first sign of COVID

2018: US-China trade war

2017: Bank of England hikes interest rates for first time in 10 years; UK inflation rises to five-year high

2016: Brexit; Italy banking system crises

2015: Euro currency crashes against the Swiss franc; Greece defaults on loan to European Central Bank

2014: Oil prices collapse

2013: Cyprus bank bailouts; US government shuts down; Thai uprising

2012: Speculation of Greek exit from Eurozone; Hurricane Sandy]

“2011: Japan earthquake; Middle East uprising.

2010: European debt crisis; BP oil spill; flash crash.

2009: Global economy nears collapse.

2008: Oil spikes; Wall Street bailouts; Madoff scandal.

2007: Iraq war surge; beginning of financial crisis.

2006: North Korea tests nuclear weapon; Mumbai train bombings; Israel-Lebanon conflict.

2005: Hurricane Katrina; London terrorist attacks.

2004: Tsunami hits South Asia; Madrid train bombings.

2003: Iraq war; SARS panic.

2002: Post 9/11 fear; recession; WorldCom bankrupt; Bali bombings.  

2001: 9/11 terrorist attacks; Afghanistan war; Enron bankrupt; Anthrax attacks.  

2000: Dot-com bubble pops; presidential election snafu; USS Cole bombed.  

1999: Y2K panic; NATO bombing of Yugoslavia.

1998: Russia defaults on debt; LTCM hedge fund meltdown; Clinton impeachment; Iraq bombing. 

1997: Asian financial crisis.

1996: U.S. government shuts down; Olympic park bombing.

1995: U.S. government shuts down; Oklahoma City bombing; Kobe earthquake; Barings Bank collapse.

1994: Rwandan genocide; Mexican peso crisis; Northridge quake strikes Los Angeles; Orange County defaults.

1993: World Trade Center bombing.

1992: Los Angeles riots; Hurricane Andrew.

1991: Real estate downturn; Soviet Union breaks up.

1990: Persian Gulf war; oil spike; recession.”

Yet through it all, the MSCI World Index, a good proxy for global stocks, is up by more than 400% in price alone (in US dollar terms) from January 1990 to 9 October this year, as shown in the chart below. 

Source: MSCI

To me, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. There are more than 8.0 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but I have faith that humanity can fix these problems. 

The trailing price-to-earnings (P/E) ratio of the MSCI World Index was roughly the same at the start and end points of the chart shown above. This means that the index’s rise over time was predominantly the result of the underlying earnings growth of its constituent companies. This is a testament to how human ingenuity always finds a way and to how stocks do reflect this over the long run.
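As a rough sanity check of the arithmetic, here is a short sketch assuming the figures above: a 400%+ price gain (i.e. at least a 5x ending value) over the roughly 33.75 years from January 1990 to October 2023:

```python
# Back-of-envelope check of the MSCI World figures quoted above.
years = 2023.75 - 1990.0   # Jan 1990 to early Oct 2023, ~33.75 years
ending_multiple = 5.0      # a 400% price gain means a 5x ending value
cagr = ending_multiple ** (1 / years) - 1
print(f"Annualised price return: {cagr:.1%}")   # ~4.9% a year

# With the P/E ratio roughly unchanged, price = P/E x EPS implies that
# nearly all of that price growth had to come from EPS growth.
```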



7 Investing Mistakes to Avoid 

Investing is a negative art. It’s more important to avoid mistakes than it is to find ways to win.

From what I see, most investors are often on the lookout for ways to win in the stock market. But that may be the wrong focus, as economist Erik Falkenstein writes:

“In expert tennis, 80% of the points are won, while in amateur tennis, 80% are lost. The same is true for wrestling, chess, and investing: Beginners should focus on avoiding mistakes, experts on making great moves.”

In keeping with the spirit of Falkenstein’s thinking, here are some big investing blunders to avoid.

1. Not realising how common volatility is even with the stock market’s biggest long-term winners

From 1971 to 1980, the American retailer Walmart produced breath-taking business growth. Table 1 below shows the near 30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not help with Walmart’s short-term return.

Based on the earliest data I could find, Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the US stock market, represented by the S&P 500, was down by ‘only’ 40%. 

Table 1; Source: Walmart annual reports

But by the end of 1979, Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972. Still, the 2x-plus increase in Walmart’s stock price was far below the huge increase in earnings per share the company generated.

This is where the passage of time helped – as more years passed, the weighing machine clicked into gear (I’m borrowing from Ben Graham’s brilliant analogy of the stock market being a voting machine in the short run but a weighing machine in the long run). At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times where it was in late-August 1972. 
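For readers who want to verify the annualised figure, the arithmetic is straightforward. This is a sketch using the approximate split-adjusted prices quoted above; a starting price a little below US$0.04 nudges the result up toward the low-30s:

```python
# Compound annual growth rate implied by the Walmart prices quoted above.
start_price = 0.04   # late-August 1972 (the article says "less than US$0.04")
end_price = 3.70     # end of 1989
years = 1990.0 - (1972 + 8 / 12)   # roughly 17.3 years
cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Annualised return: {cagr:.0%}")   # ~30% with these inputs
```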

Volatility is a common thing in the stock market. It does not necessarily mean that anything is broken.

2. Mixing investing with economics

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. 

Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 

Figure 1; Source: Duncan Lamont
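The 13.3% figure is easy to verify from the two endpoints quoted above:

```python
# Checking the annualised GDP growth rate from the article's endpoints.
gdp_1992 = 427e9   # US$427 billion
gdp_2022 = 18e12   # US$18 trillion
years = 2022 - 1992
cagr = (gdp_2022 / gdp_1992) ** (1 / years) - 1
print(f"China GDP growth, 1992-2022: {cagr:.1%}")   # ~13.3% a year
```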

Why have the stock prices of Chinese companies behaved the way they did? It turns out that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

Economic trends and investing results can at times be worlds apart. The gap exists because there can be a huge difference between a company’s business performance and the macroeconomic trend – and what ultimately matters to a company’s stock price is its business performance.

3. Anchoring on past stock prices

A 2014 study by JP Morgan showed that 40% of all stocks in the Russell 3000 index in the US from 1980 to 2014 suffered a permanent decline of 70% or more from their peak values.

There are stocks that fall hard – and then stay there. Thinking that a stock will return to a particular price just because it had once been there can be a terrible mistake to make. 

4. Thinking a stock is cheap based on superficial valuation metrics

My friend Chin Hui Leong from The Smart Investor suffered through this mistake and has graciously shared his experience for the sake of letting others learn. In an April 2020 article, he wrote:

“The other company I bought in May 2009, American Oriental Bioengineering, has shrunk to such a tiny figure, making it a total loss…

…In contrast, American Oriental Bioengineering’s revenue fell from around $300 million in 2009 to about US$120 million by 2013. The company also recorded a huge loss of US$91 million in 2013…

…Case in point: when I bought American Oriental Bioengineering, the stock was only trading at seven times its earnings. And yet, the low valuation did not yield a good outcome in the end.”

Superficial valuation metrics can’t really tell us if a stock’s a bargain or not. Ultimately, it’s the business which matters.

5. Not investing due to fears of a recession

Many investors I’ve spoken to prefer to hold off investing in stocks if they fear a recession is around the corner, and jump back in only when the coast is clear. This is a mistake.

According to data from Michael Batnick, the Director of Research at Ritholtz Wealth Management, a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and done nothing. But if you invested the same dollar in US stocks at the start of 1980 and expertly side-stepped the ensuing recessions to perfection, you would have less than $32 at the same endpoint.

Said another way, history’s verdict is that avoiding recessions flawlessly would cause serious harm to your investment returns.
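Expressed as annualised returns (a sketch assuming Batnick’s endpoints of $78 and $32 over the 39 years from the start of 1980 to the end of 2018):

```python
# The gap between holding through recessions and side-stepping them,
# restated as annualised returns from the figures quoted above.
years = 39
buy_and_hold = 78 ** (1 / years) - 1       # ~11.8% a year
recession_dodger = 32 ** (1 / years) - 1   # ~9.3% a year
print(f"Buy and hold: {buy_and_hold:.1%}")
print(f"Perfectly dodging recessions: {recession_dodger:.1%}")
```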

6. Following big investors blindly

Morgan Housel is currently a partner with the venture capital firm Collaborative Fund. Prior to this, he was a writer for The Motley Fool for many years. Here’s what Housel wrote in a 2014 article for the Fool (emphasis is mine):

“I made my worst investment seven years ago.

The housing market was crumbling, and a smart value investor I idolized began purchasing shares in a small, battered specialty lender. I didn’t know anything about the company, but I followed him anyway, buying shares myself. It became my largest holding — which was unfortunate when the company went bankrupt less than a year later.

Only later did I learn the full story. As part of his investment, the guru I followed also controlled a large portion of the company’s debt and preferred stock, purchased at special terms that effectively gave him control over its assets when it went out of business. The company’s stock also made up one-fifth the weighting in his portfolio that it did in mine. I lost everything. He made a decent investment.”

We may never be able to know what a famous investor’s true motives are for making any particular investment. And for that reason, it’s important to never follow anyone blindly into the stock market.

7. Not recognising how powerful simple, common-sense financial advice can be

Robert Weinberg is an expert on cancer research from the Massachusetts Institute of Technology. In the documentary The Emperor of All Maladies, Weinberg said (emphases are mine):

“If you don’t get cancer, you’re not going to die from it. That’s a simple truth that we [doctors and medical researchers] sometimes overlook because it’s intellectually not very stimulating and exciting.

Persuading somebody to quit smoking is a psychological exercise. It has nothing to do with molecules and genes and cells, and so people like me are essentially uninterested in it — in spite of the fact that stopping people from smoking will have vastly more effect on cancer mortality than anything I could hope to do in my own lifetime.”

I think Weinberg’s lesson can be analogised to investing. Ben Carlson is the Director of Institutional Asset Management at Ritholtz Wealth Management. In a 2017 blog post, Carlson compared the long-term returns of US college endowment funds against a simple portfolio he called the Bogle Model.

The Bogle Model was named after the late index fund legend John Bogle. It consisted of three simple, low-cost Vanguard funds that track US stocks, stocks outside of the US, and bonds. In the Bogle Model, the funds were held in these weightings: 40% for the US stocks fund, 20% for the international stocks fund, and 40% for the bonds fund. Meanwhile, the college endowment funds were dizzyingly complex, as Carlson describes:

“These funds are invested in venture capital, private equity, infrastructure, private real estate, timber, the best hedge funds money can buy; they have access to the best stock and bond fund managers; they use leverage; they invest in complicated derivatives; they use the biggest and most connected consultants…”

Over the 10 years ended 30 June 2016, the Bogle Model produced an annual return of 6.0%. But even the college endowment funds that belonged to the top decile in terms of returns only produced an annual gain of 5.4% on average. The simple Bogle Model had bested nearly all the fancy-pants college endowment funds in the US.
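That 0.6-percentage-point annual gap compounds meaningfully over the decade in question:

```python
# Cumulative growth of $1 over the 10 years, from the returns quoted above.
bogle_model = 1.060 ** 10             # ~1.79x
top_decile_endowments = 1.054 ** 10   # ~1.69x
print(f"Bogle Model: {bogle_model:.2f}x of starting value")
print(f"Top-decile endowments: {top_decile_endowments:.2f}x of starting value")
```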

Simple advice can be very useful and powerful for many investors. But it’s sometimes ignored because it’s too simple, despite how effective it can be. Don’t make this mistake.



More Of The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI. In it, I shared commentary from the second-quarter 2023 earnings conference calls of the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2023’s second quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe is using its rich datasets to create foundation models in areas where the company has expertise; Firefly has generated >2 billion images in 6 months 

Our rich datasets enable us to create foundation models in categories where we have deep domain expertise. In the 6 months since launch, Firefly has captivated people around the world who have generated over 2 billion images.

Adobe will allow users to create custom AI models using their proprietary data as well as offer Firefly APIs so that users can embed Firefly into their workflows

Adobe will empower customers to create custom models using proprietary assets to generate branded content and offer access to Firefly APIs so customers can embed the power of Firefly into their own content creation and automation workflows.
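As a purely illustrative sketch of what embedding a text-to-image call into a content workflow could look like (the endpoint, parameters, and response shape below are invented placeholders, not Adobe’s actual Firefly API):

```python
# Hypothetical sketch only: a generic text-to-image call inside a content
# workflow. The URL and fields are placeholders, not Adobe's real API.
import requests

def generate_branded_image(prompt: str, api_key: str) -> bytes:
    response = requests.post(
        "https://api.example.com/v1/images/generate",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "size": "1024x1024"},  # assumed parameters
        timeout=60,
    )
    response.raise_for_status()
    return response.content  # image bytes, in this hypothetical response shape
```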

Adobe is monetising its generative AI features through generative credits; the generative credits have limits to them, but the limits are set in a way where users can really try out Adobe’s generative AI functions and build the use of generative AI into a habit

We announced subscription offerings, including new generative AI credits with the goal of enabling broad access and user adoption. Generative credits are tokens that enable customers to turn text-based prompts into images, vectors and text effects, with other content types to follow. Free and trial plans include a small number of monthly fast generative credits that will expose a broad base of prospects to the power of Adobe’s generative AI, expanding our top of funnel. Paid Firefly, Express and Creative Cloud plans will include a further allocation of fast generative credits. After the planned specific number of generative credits is reached, users will have an opportunity to buy additional fast generative credits subscription packs…

…First of all, it was a very thoughtful, deliberate decision to go with the generative credit model. And the limits, as you can imagine, were very, very considered in terms of how we set them. The limits are, of course, fairly low for free users. The goal there is to give them a flavor of it and then help them convert. And for paid users, especially for people in our Single Apps and All Apps plans, one of the things we really intended to do is try and drive real proliferation of the usage. We didn't want there to be generation anxiety, to put it that way. We wanted them to use the product. We wanted the Generative Fill and Generative Expand. We wanted the vector creation. We want to build the habits of using it. And then what will happen over time as we introduce 3D, as we introduce video and design and vectors, and as we introduce these Acrobat capabilities that Shantanu was talking about, the generative credits that are used in any given month continue to go up because they're getting more value out of it. And so that's the key thing. We want people to just start using it very actively right now and build those habits.
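
Adobe hasn't published the mechanics of its metering, but the model described above (a small free allocation, a larger paid allocation, then top-up packs) can be sketched roughly as follows. Every plan name and number here is hypothetical, purely for illustration:

```python
# Hypothetical sketch of generative-credit metering. None of these plan names
# or numbers come from Adobe; they only illustrate the tiered-allocation idea.
from dataclasses import dataclass

MONTHLY_FAST_CREDITS = {"free": 25, "express": 250, "all_apps": 1_000}  # made up

@dataclass
class User:
    plan: str
    fast_credits_used: int = 0

def generate(user: User, prompt: str) -> str:
    """Serve a generation if the user has fast credits left; otherwise
    offer a top-up pack instead of blocking the user outright."""
    if user.fast_credits_used >= MONTHLY_FAST_CREDITS[user.plan]:
        return "offer: buy a fast generative credit pack"
    user.fast_credits_used += 1
    return f"generated image for: {prompt}"

user = User(plan="free")
print(generate(user, "neon city skyline"))
```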

Brands around the world are using Adobe’s generative AI – through products such as Adobe GenStudio – to create personalised customer experiences at scale; management sees Adobe GenStudio as a huge new opportunity; Adobe itself is using GenStudio for marketing its own products successfully and it’s using its own success as a selling point

Brands around the globe are working with Adobe to accelerate personalization at scale through generative AI. With the announcement of Adobe GenStudio, we are revolutionizing the entire content supply chain by simplifying the creation-to-activation process with generative AI capabilities and intelligent automation. Marketers and creative teams will now be able to create and modify commercially safe content to increase the scale and speed at which experiences are delivered…

…Shantanu and David already talked about the Adobe GenStudio, and we’re really excited about that. This is a unique opportunity, as you said, for enterprises to really create personalized content and drive efficiencies as well through automation and efficiency. And when you look at the entire chain of what enterprises go through from content creation, production workflow and then activation through DX through all the apps we have on our platform, we have the unique opportunity to do that. We already have deployed it within Adobe for our own Photoshop campaign, and we’re working with a number of agencies and customers to do that. So this is a big net new opportunity for us with Adobe GenStudio…

…And if I could actually just add one quick thing at the GenStudio work that Anil team has been doing, we’ve actually been using that within the Digital Media business already to release some of the campaigns that we’ve released this quarter. So it’s one of these things that it’s great to see the impact it’s having on our business and that becomes a selling point for other businesses, too.

Inferencing costs for generative AI are expensive, but Adobe’s management is still confident of producing really strong margins for FY2023

[Question] We’ve been told generative AI is really expensive to run. The inference and training costs are really high. 

[Answer] Our customers have generated over 2 billion images. And I know it’s not lost on people, all this was done while we’re delivering strong margins. But when we take a step back and think about these technologies, we have investments from a COGS standpoint, inferencing, content; from an R&D standpoint, training, creating foundation models. And David alluded to it in his prepared comments, the image model for Firefly family of models is out, but we’re going to bring other media types to market as well so we’re making substantive investments. When I go back to the framing of my prepared comments, we really have a fundamental operating philosophy that’s been alive at the company for a long time: growth and profitability. We’re going to prioritize, we’re going to innovate and we’re going to execute with rigor…

…As we think about going — the profile going forward, what I’ll come back to is when we initially set fiscal 2023 targets, implicit in those targets was a 44.5% operating margin. If you think about how we just guided Q4… implicit in that guide is an operating margin of around 45.5%.

So as you think about us leading this industry, leading the inflection that’s unfolding in front of us, that mid-40s number, we think, is the right ballpark to think about the margin structure of the company as we continue to drive this technology and leadership. 

Adobe’s management thinks about generative AI’s impact on the company’s growth through two lenses: (1) acquiring new users, and (2) growing the spend of existing customers; for growing the spend of existing customers, Adobe has recently increased the pricing of its products

Yes, Shantanu said that we look at the business implications of this through those two lenses: new user adoption, first and foremost; and then sort of opportunity to continue to grow the existing book of business. On the new user side, we’ve said this for years: our focus continues to be on proliferation. We believe that there — we have a massive number of users in front of us. We continue to have our primary focus being net user adds and subscribers. And so the goal here in proliferation is to get the right value to the right audience at the right price…

…The second thing is going to be on the book of business. And here, we’re — basically, the pricing changes, just as a reminder, they have a rolling impact. 

Adobe’s management took a differentiated approach with Firefly when building the company’s generative AI capabilities, with a focus on using licensed content for training where Adobe has the rights to use the content 

So from the very beginning of Firefly, we took a very different approach to how we were doing generative. We started by looking at and working off the Adobe Stock base, which are contents that are licensed and very clearly we have the rights to use. And we looked at other repositories of content where they didn’t have any restrictions on usage, and we’ve pulled that in. So everything that we’ve trained on has gone through some form of moderation and has been cleared by our own legal teams for use in training. And what that means is that the content that we generate is, by definition, content that isn’t then stepping on anyone else’s brand and/or leveraging content that wasn’t intended to be used in this way. So that’s the foundation of what we’ve done.

Adobe is sharing the economic spoils with the creators of the content it has been training its generative AI models on

We’ve been working with our Stock contributors. We’ve announced, and in fact, yesterday, we had our first payout of contributions to contributors that have been participating and adding stock for the AI training. And we’re able to leverage that base very effectively so that if we see that we need additional training content, we can put a call to action, call for content, out to them, and they’re able to bring content to Adobe in a fully licensed way. So for example, earlier this quarter, we decided that we needed 1 million new images of crowd scenes. And so we put a call to action out. We were able to gather that content in. But it’s fully licensed and fully moderated in terms of what comes in. So as a result, all of the content we generate is safe for commercial use.

Adobe’s management is seeing that enterprise customers place a lot of importance on working with generated AI content that is commercially safe

The second thing is that because of that, we’re able to go to market and also indemnify customers in terms of how they’re actually leveraging that content and using it for content that’s being generated. And so enterprise customers find that to be very important as we bring that in not just in the context of Firefly stand-alone but we integrated into our Creative Cloud applications and Express applications as well. 

Adobe’s management has been very focused on generating fair (in population diversity, for example) and safe content in generative AI and they think this is a good business decision

We’ve been very focused on fair generation. So we look intentionally for diversity of people that are generated, and we’re looking to make sure that the content we generate doesn’t create or cause any harm. And all of those things are really good business decisions and differentiate us from others. 

One of the ways Adobe’s management thinks generative AI could be useful in PDFs is for companies to be able to have conversations with their own company-wide knowledge base that is stored in PDFs – Adobe is already enabling this through APIs

Some of the things that people really want to know is how can I have a conversational interface with the PDF that I have, not just the PDF that I have opened right now but the PDF that are all across my folder, then across my entire enterprise knowledge management system, and then across the entire universe. So much like we are doing in Creative, where you can start to upload your images to get — train your own models within an enterprise, well, it is often [ hard-pressed ]. The number of customers who want to talk to us now that we’ve sort of designed this to be commercially safe and say, “Hey, how do we create our own model,” whether you’re a Coke or whether you’re a Nike, think of them as having that. I think in the document space, the same interest will happen, which is we have all our knowledge within an enterprise associated with PDFs, “Adobe, help me understand how your AI can start to deliver services like that.” So I think that’s the way you should also look at the PDF opportunity that exists, just more people taking advantage of the trillions of PDFs that are out there in the world and being able to do things…

… So part of what we are also doing with PDFs is the fact that you can have all of this now accessible through APIs. It’s not just the context of the PDF, the semantic understanding of that to do specific workflows, we’re starting to enable all of that as well. 
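
Adobe hasn't detailed how its document AI works, but the "converse with all your PDFs" idea described above is, in generic form, retrieval-augmented generation: embed the text of every PDF, retrieve the passages most relevant to a question, and hand them to a language model as context. A minimal sketch, not Adobe's API (the toy embed function stands in for a real embedding model):

```python
# Generic retrieval-augmented sketch of "talking to your PDFs" -- not Adobe's
# API. Embed every passage, rank by similarity to the question, then ask an
# LLM to answer using only the best passages as context.
import math

def embed(text: str) -> list[float]:
    """Toy letter-frequency embedding; a real system would call an embedding model."""
    counts = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1.0
    return counts

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def answer(question: str, pdf_passages: list[str], llm) -> str:
    q = embed(question)
    ranked = sorted(pdf_passages, key=lambda p: cosine(embed(p), q), reverse=True)
    context = "\n\n".join(ranked[:5])  # keep the 5 most relevant passages
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```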

When it comes to generative AI products, Adobe’s goal for enterprises and partners is to provide (1) API access, (2) ability to train their own models, and (3) core workflows that gel well with Adobe’s existing products; management is thinking about extending the same metering concepts as Adobe’s generative credits to API calls too

Our goal right now, for enterprises and third-parties that we work with, is to provide a few things. The first is this ability, obviously, to have API access to everything that we are building in, so that they can build it into their workflows and their automation stack. The second thing is to give them the ability to extend or train their own models as well. So if — as we mentioned earlier, our core model, foundation model is a very clean model. It generates great content and you can rely on it commercially. We want our customers and partners to be able to extend that model with content that is relevant to them so that Firefly is able to generate content in their brand or in their style. So we’ll give them the ability to train their own model as well. And then last, but certainly not least, we’ll give them some core workflows that will work with our existing products, whether it’s Express or whether it’s Creative Cloud or GenStudio as well, so that they can then integrate everything they’re doing onto our core platform.

And then from a monetization perspective, you can imagine the metering concepts that we have for generative credits extending to API calls as well. And of course, those will all be custom negotiated deals with partners and enterprises.

Adobe is its own biggest user of the AI products it has developed for customers – management thinks this is a big change for Adobe because the extent of usage internally of its AI products is huge, and it has helped improve the quality of the company’s AI products

So I think the pace of innovation internally of what we have done is actually truly amazing. I mean relative to a lot of the companies that are out there and the fact that we’ve gone from talking about this to very, very quickly, making it commercially available, I don’t want to take for granted the amount of work that went into that. I think internally, it is really galvanized because we are our own biggest user of these technologies. What we are doing associated with the campaigns and the GenStudio that we are using, as David alluded to it, our Photoshop Everyone Can Campaign or the Acrobat’s Got It campaign or how we will be further delivering campaigns for Express as well as for Firefly, all of this is built on this technology. And we use Express every day, much like we use Acrobat every day. So I think it’s really enabled us to say are we really embracing all of this technology within the company. And that’s been a big change because I think the Creative products, we’ve certainly had phenomenal usage within the company, but the extent to which the 30,000 employees can now use our combined offering, that is very, very different internally

DocuSign (NASDAQ: DOCU)

DocuSign has a new AI-powered feature named Liveness Detection for ID verification, which has reduced the time needed for document signings by 60%

Liveness Detection technology leverages AI-powered biometric checks to prevent identity spoofing, which results in more accurate verification without the signee being present. ID Verification is already helping our customers. Our data shows that it has reduced time to sign by about 60%.

DocuSign is already monetising AI features directly

Today, we’re already monetizing AI directly through our CLM+ product and indirectly through its use in our products such as search. 

DocuSign is partnering with AI Labs to build products in closer collaboration with customers

Our next step on that journey is with AI Labs. With AI Labs, we are co-innovating with our customers. We provide a sandbox where customers can share a select subset of agreements and try new features we're testing. Our customers get early access to developing technology and we receive early feedback that we will incorporate into our products. By working with our customers in the development phase, we're further reinforcing the trusted position we've earned over the last 20 years.

DocuSign’s management is excited about how AI – especially generative AI – can help the company across the entire agreement workflow

We think AI will impact practically all of our products at every step of the agreement workflow. So I don't know that there's just one call out. But maybe to call out a couple that I'm most interested in, I certainly think that the broader, should we say, agreement analytics category is poised to be completely revamped with generative AI.

DocuSign has been an early investor in AI but had been held back by fundamental technology until the introduction of generative AI

We were an early investor in that category. We saw that coming together with CLM 4 or 5 years ago and made a couple of strategic investments and have been a leader in that space, but have been held back by fundamental technology. And I think now with generative AI, we can do a substantially better job more seamlessly, lighter weight, with less professional services. And so I'm very excited to think about how it transforms the CLM category and enables us to deliver more intelligent agreements. I think you mentioned IDV [ID Verification]. I agree 100%. Fundamentally, that entire category is AI-enabled. The upload and ingestion of your ID, recognition of it, and then that Liveness Detection where we're detecting who you are and that you are present and matching that to the ID, that would simply not be possible without today's AI technology and really just dramatically reshapes the ability to trade off risk and convenience. So I think that's a good one.

MongoDB (NASDAQ: MDB)

There are 3 important things to focus on when migrating off a relational database, and MongoDB’s management thinks that generative AI can help with one of them (the rewriting of the application code)

So with regards to Gen AI, I mean, we do see opportunities. Essentially, when you migrate off a relational database using Relational Migrator, there are really 3 things you have to focus on. One is mapping the schema from the old relational database to the MongoDB platform, moving the data appropriately, and then also rewriting some, if not all, of the application code. Historically, that last component has been the most manually intensive part of the migration. Obviously, with the advance of code generation tools, there are opportunities to automate the rewriting of the application code. I think we're still in the very early days. You'll see us continue to add new functionality to Relational Migrator to help again reduce the switching costs of doing so. And that's obviously an area that we're going to focus on.
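
The schema-mapping step mentioned above can be pictured with a toy example (my own illustration, not actual Relational Migrator output): two normalized relational tables collapse into a single MongoDB-style document, with the child rows embedded.

```python
# Toy illustration of relational-to-document schema mapping. A JOIN between
# an orders table and an order_items table becomes one embedded document.
order_row = {"order_id": 1, "customer": "Acme", "placed_on": "2023-06-30"}
item_rows = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
]

order_document = {
    "_id": order_row["order_id"],
    "customer": order_row["customer"],
    "placed_on": order_row["placed_on"],
    # Child rows become an embedded array; the JOIN disappears.
    "items": [
        {"sku": r["sku"], "qty": r["qty"]}
        for r in item_rows
        if r["order_id"] == order_row["order_id"]
    ],
}
print(order_document)
```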

MongoDB introduced Atlas Vector Search, its vector database which allows developers to build AI applications, and it is seeing significant interest; management hopes to bring Atlas Vector Search to general availability (GA) sometime next year, but some customers are already deploying it in production

We also announced Atlas Vector Search, which enables developers to store, index and query vector embeddings, instead of having to bolt on vector search functionality separately, adding yet another point solution and creating a more fragmented developer experience. Developers can aggregate and process the vectorized data they need to build AI applications while also using MongoDB to aggregate and process data and metadata. We are seeing significant interest in our vector search offering from large and sophisticated enterprise customers even though it's still only in preview. As one example, a large global management consulting firm is using Atlas Vector Search for an internal research application that allows consultants to semantically search over 1.5 million expert interview transcripts…

…Obviously, Vector is still in public preview. So we hope to have a GA sometime next year, but we’re really excited about the early and high interest from enterprises. And obviously, some customers are already deploying it in production, even though it’s a public preview product.
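
For a flavour of what that consulting-firm use case looks like in practice, here is a minimal sketch using MongoDB's `$vectorSearch` aggregation stage (the stage was still evolving while the feature was in preview, and the connection string, collection, index name, and embed function below are all hypothetical):

```python
# Minimal semantic-search sketch against Atlas Vector Search. The connection
# string, collection, index name, and embed() function are hypothetical.
from pymongo import MongoClient

def embed(text: str) -> list[float]:
    """Stand-in for a call to any embedding model."""
    raise NotImplementedError

client = MongoClient("mongodb+srv://...")  # Atlas connection string elided
transcripts = client["research"]["interview_transcripts"]

query_vector = embed("pricing power in enterprise software")

results = transcripts.aggregate([
    {"$vectorSearch": {
        "index": "transcript_embeddings",  # hypothetical vector index
        "path": "embedding",               # field holding each document's vector
        "queryVector": query_vector,
        "numCandidates": 200,              # breadth of the approximate search
        "limit": 10,
    }},
    # Vectors, text, and metadata live in the same documents, so the results
    # need no second system to resolve.
    {"$project": {"expert": 1, "excerpt": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc["score"], doc["excerpt"][:80])
```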

MongoDB’s management believes that AI will lead developers to write more software and these software will be exceptionally demanding and will thus require high-performance databases

Over time, AI functionality will make developers more productive through the use of code generation and code assist tools that enable them to build more applications faster. Developers will also be able to enrich applications with compelling AI experiences by enabling integration with either proprietary or open source large language models to deliver more impact. Now, instead of data being used only by data scientists who drive insights, data can be used by developers to build smarter applications that truly transform a business. These AI applications will be exceptionally demanding, requiring a truly modern operational data platform like MongoDB.

MongoDB’s management believes MongoDB has a bright future in the world of AI because (1) the company’s document database is highly versatile, (2) AI applications need a high-performant, scalable database and (3) AI applications have the same requirements for transactional guarantees, security, privacy etc as other applications

In fact, we believe MongoDB has an even stronger competitive advantage in the world of AI. First, the document model's inherent flexibility and versatility render it a natural fit for AI applications. Developers can easily manage and process various data types all in one place. Second, AI applications require high performance, parallel computations and the ability to scale data processing on an ever-growing base of data. MongoDB supports this with features like sharding and auto-scaling. Lastly, it is important to remember AI applications have the same demands as any other type of application: transactional guarantees, security and privacy requirements, text search, in-app analytics and more. Our developer data platform gives developers a unified solution to build smarter AI applications.

AI startups as well as industrial equipment suppliers are using MongoDB for their AI needs 

We are seeing these applications developed across a wide variety of customer types and use cases. For example, Observe.ai is an AI start-up that leverages a 40-billion-parameter LLM to provide customers with intelligence and coaching that maximize performance of their frontline support and sales teams. Observe.ai processes and runs models on millions of support touch points daily to generate insights for their customers. Most of this rich, unstructured data is stored in MongoDB. Observe.ai chose to build on MongoDB because we enable them to quickly innovate, scale to handle large and unpredictable workloads, and meet the security requirements of their largest enterprise customers. On the other end of the spectrum is one of the leading industrial equipment suppliers in North America. This company relies on Atlas and Atlas Device Sync to deploy AI models at the edge, on their field teams' mobile devices, to better manage and predict inventory in areas with poor physical network connectivity. They chose MongoDB because of our ability to efficiently handle large quantities of distributed data and to seamlessly integrate between the network edge and their back-end systems.

MongoDB’s management sees customers saying that they prefer being able to have one platform handle all their data use-cases (AI included) rather than stitching point solutions together

People want to use one compelling, unified developer experience to address a wide variety of use cases, of which AI is just one. And we're definitely hearing from customers that being able to do that on one platform versus bolting on a bunch of point solutions is far more preferable. And so we're excited about the opportunity there.

MongoDB is working with Google on a number of AI projects

On the other thing on partners, I do want to say that we're seeing a lot of work and activity with our partner channel on the AI front as well. We're working with Google in the AI start-up program, and there's a lot of excitement. Google had their Next conference this week. We're also working with Google to help train Codey, their code generation tool, to help people accelerate the development of AI and other applications. And we're seeing a lot of interest in our own AI innovators program. We've had lots of customers apply for that program. So we're super excited about the interest that we're generating.

MongoDB’s management thinks there’s a lot of hype around AI in the short term, but also thinks that AI is going to have a huge impact in the long-term, with nearly every application having some AI functionality embedded within over the next 3-5 years

I firmly believe that we, as an industry, tend to overestimate the impact of a new technology in the short term and underestimate the impact in the long term. So as you may know, there's a lot of hype in the market right now, in the industry, around AI, and some of the early-stage companies in the space have valuations through the roof. In some cases, it's hard to see how people can make money because the risk reward doesn't seem to be sized appropriately. So there's a lot of hype in the space. But I do think that AI will be a big impact for the industry and for us long term. I believe that almost every application, both new and existing, will have some AI functionality embedded into the application over the next 3 to 5 years.

MongoDB’s management thinks that vector search (the key distinguishing feature of vector databases) is just a feature and not a product, and it will eventually be built into every database as a feature

Vector Search is really a reverse index. So it's like an index that's built into all databases. I believe, over time, vector search functionality will be built into all databases or data platforms in the future. There are some point products that are focused solely on vector search. But essentially, it's a point product that still needs to be used with other technologies like MongoDB to store the metadata and the data, to be able to process and analyze all that information. So developers have spoken loudly that having a unified and elegant developer experience is a key differentiator. It removes friction in how they work. It's much easier to build and innovate on one platform versus learning and supporting multiple technologies. And so my strong belief is that, ultimately, vector search will be embedded in many platforms and our differentiation will be — like it always has been — a very compelling and elegant developer experience.
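
The "feature, not product" point is easier to see from how little machinery a brute-force version needs. A toy sketch of my own (real databases use approximate-nearest-neighbour indexes such as HNSW rather than a linear scan):

```python
# Toy brute-force vector "index": store (vector, id) pairs and return the k
# nearest by cosine similarity. Small enough to live inside a general-purpose
# database as a feature rather than stand alone as a product.
import math

class NaiveVectorIndex:
    def __init__(self):
        self.entries = []  # (vector, document_id) pairs

    def add(self, vector, doc_id):
        self.entries.append((vector, doc_id))

    def search(self, query, k=3):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norms if norms else 0.0
        return sorted(self.entries, key=lambda e: cos(e[0], query), reverse=True)[:k]

index = NaiveVectorIndex()
index.add([0.9, 0.1], "doc_about_dogs")
index.add([0.1, 0.9], "doc_about_databases")
print(index.search([0.2, 0.8], k=1))  # -> the databases document
```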

MongoDB’s management thinks that having vector search as a feature in a database does not help companies to save costs, but instead, improves the overall developer experience

Question: I know that we’re talking about the developers and how they — they’re voting here because they want the data in a unified platform, a unified database that preserves all that metadata, right? But I would think there’s probably also a benefit to having it all in a single platform as well just because you’re lowering the TCO [total cost of ownership] for your customers as well, right? 

Answer: Vectors are really a mathematical representation of different types of data, so there is not a ton of data, unlike application search, where there are profound benefits to storing everything on one platform versus having an operational database and a search database and some glue to keep the data in sync. That's not as much the case with vectors because you're talking about storing essentially an elegant index. And so it's more about the user experience and the development workflow that really matters. And what we believe is that offering the same taxonomy, in the same way they know how to use MongoDB, to also enable vector search functionality is a much more compelling differentiation than a developer having to bolt on a separate vector solution and having to provision, configure and manage that solution along with all the other things they have to do.

MongoDB’s management believes developers will become more important in organisations than data scientists because generative AI will position AI in front of software

Some of the use cases are really interesting, but the fact is that we’re really well positioned because what generative AI does is really instantiate AI in front of — in software, which means developers play a bigger role rather than data scientists, and that’s where you’ll really see the business impact. And I think that impact will be large over the next 3 to 5 years.

Okta (NASDAQ: OKTA)

Okta has been using AI for years and management believes that AI will be transformative for the identity market

AI is a paradigm shift in technology that offers transformative opportunities for identity, from stronger security and faster application development to better user experiences and more productive employees. Okta has been utilizing AI for years with machine learning models for spotting attack patterns and defending customers against threats, and we'll have more exciting AI news to share at Oktane.

Okta’s management believes that every company must have an AI strategy, which will lead to more identities to be protected; a great example is how OpenAI is using Okta; Okta’s relationship with OpenAI started a few years ago and OpenAI is now a big customer, accounting for a significant chunk of the US$100m in TCV (total contract value) Okta had with its top 25 transactions in the quarter

Just like how every company has to be a technology company, I believe every company must have an AI strategy. More companies will be founded on AI, more applications will be developed with AI and more identities will need to be protected with a modern identity solution like Okta. A great example of this is how Okta's Customer Identity Cloud is being utilized for the massive number of daily log-ins and authentications by OpenAI, which expanded its partnership with Okta again in Q2…

…So OpenAI is super interesting. OpenAI is a Customer Identity Cloud customer, so when you log in to ChatGPT, you log in through Okta. And it's interesting because a developer inside of OpenAI 3 years ago picked our Customer Identity Cloud from the website because it had a great developer experience, and started using it. And at the time, it was the log-in for their APIs, and then ChatGPT took off. And now, as you mentioned, we've had really pretty sizable transactions with them over the last couple of quarters. And so it's a great testament to our strategy on Customer Identity, having something that appeals to developers.

And you saw they did something pretty interesting — and so this is really a B2C app, right, ChatGPT — but now they recently launched their enterprise offering, and they want to connect ChatGPT to enterprises. Okta is really good at this, too, because our Customer Identity Cloud connects our customers to consumers, but also connects our customers to workforces. So then you have to start supporting things like Single Sign-On and SAML and OpenID and authorization. And so OpenAI continues to get the benefits of being able to focus on what they want to focus on, which is obviously their models and the LLMs and the capabilities, and we can focus on the identity plumbing that wires it together.

So the transaction was — it was one of the top 25 transactions I mentioned. The total TCV of those transactions this quarter was $100 million. It was one of those top 25 transactions, but I haven't done the math on how much of the $100 million it was. But it was on the larger side this quarter.
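
The "identity plumbing" described above is, for the most part, standard OpenID Connect. As a rough sketch of how an app hands login off to a hosted identity provider (the tenant domain, client ID, and redirect URI here are all hypothetical):

```python
# Sketch of the standard OpenID Connect authorization-code redirect that a
# hosted customer-identity product sits behind. The tenant domain, client ID,
# and redirect URI are hypothetical.
import secrets
from urllib.parse import urlencode

params = {
    "client_id": "my_app_client_id",                   # issued by the provider
    "response_type": "code",                           # authorization-code flow
    "scope": "openid profile email",                   # standard OIDC scopes
    "redirect_uri": "https://app.example.com/callback",
    "state": secrets.token_urlsafe(16),                # CSRF protection
}
login_url = "https://example-tenant.example-idp.com/authorize?" + urlencode(params)
# The app redirects the browser to login_url; the provider handles passwords,
# MFA, and SSO/SAML federation, then calls back with a one-time code that the
# app exchanges for tokens.
print(login_url)
```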

Okta’s management thinks that identity is a key building block in a number of digital trends, including AI

It’s always a good reminder that identity is a key building block for Zero Trust security, digital transformation, cloud adoption projects and now AI. These trends will continue in any macroeconomic environment as organizations look for ways to become more efficient while strengthening their security posture.

Salesforce (NYSE: CRM)

Salesforce is driving an AI transformation to become the #1 AI CRM (customer relationship management)

And last quarter, we told you we're now driving our AI transformation. We're pioneering AI for both our customers and ourselves, leading the industry through this incredible new innovation cycle, and I couldn't be happier with Srini and David and the entire product and technology team for the incredible velocity of AI products that were released to customers this quarter and the huge impact that they're making in the market, showing how Salesforce is transforming from being not only the #1 CRM, but the #1 AI CRM, and I just express my sincere gratitude to our entire [ TNP ] team.

Salesforce’s management will continue to invest in AI

We’re in a new AI era, a new innovation cycle that we will continue to invest into as we have over the last decade. As a result, we expect nonlinear quarterly margins in the back half of this year, driven by investment timing, specifically in AI-focused R&D.

Salesforce’s management believes the world is at the dawn of an AI revolution that will spark a new tech buying cycle and investment cycle

AI, data, CRM, trust, let me tell you, we are at the dawn of an AI revolution. And as I've said, it's a new innovation cycle which is sparking a new tech buying cycle over the coming years. It's also a new tech investment cycle…

…And when we talk about growth, I think it’s going to start with AI. I think that AI is about to really ignite a buying revolution. I think we’ve already started to see that with our customers and even some of these new companies like OpenAI. And we certainly see that in our customers’ base as well. 

Salesforce has been investing in many AI startups through its $500 million generative AI fund

We've been involved in the earliest rounds of many of the top AI start-ups. Many of you have seen that; we are in there very early…

… Now through our $500 million generative AI fund, we're seeing the development of ethical AI with amazing companies like Anthropic, [ Cohere ], Hugging Face and some others…

Salesforce has been working on AI early on

But I'll tell you, this company has pioneered AI, and not just in predictive; a lot of you have followed the development and growth of Einstein. But also, you've seen that we've published some of the first papers on prompt engineering in the beginnings of generative AI, and we took our deep learning roots and we really demonstrated the potential for generative AI, and now to see so many of these companies become so successful.

Every CEO Salesforce’s leadership has met thinks that AI is essential to improving their businesses

So every CEO I've met with this year across every industry believes that AI is essential to improving both their top and bottom line, but especially their productivity. AI is just augmenting what we can do every single day…

…I think many of our customers and ultimately, all of them believe they can grow their businesses by becoming more connected to their customers than ever before through AI and at the same time, reduce cost, increase productivity, drive efficiency and exceed customer expectations through AI. 

All management teams in Salesforce are using Einstein AI to improve their decision-making

Every single management team that we have here at Salesforce every week, we’re using our Einstein AI to do exactly the same thing. We go back, we’re trying to augment ourselves using Einstein. So what we’ll say is, and we’ve been doing this now and super impressive, we’ll say, okay, Brian, what do you think our number is and we’ll say, okay, that’s very nice, Brian. But Einstein, what do you really think the number is? And then Einstein will say, I think Brian is sandbagging and then the meeting continues. 

Salesforce’s management thinks that every company will undergo an AI transformation with the customer at the centre, and this is why Salesforce is well positioned for the future

The reality is every company will undergo an AI transformation with the customer at the center, because every AI transformation begins and ends with the customer, and that’s why Salesforce is really well positioned with the future.

Salesforce has been investing a lot in Einstein AI, and Einstein is democratising generative AI for users of Salesforce’s products; Salesforce’s management thinks that the real value Salesforce brings to the world is the ability to help users utilise AI in a low code or no code way 

And with this incredible technology, Einstein, that we've invested so much in and grown and integrated into our core technology base, we're democratizing generative AI, making it very easy for our customers to implement in every job, in every business, in every industry. And I will just say that in the last few months, we've injected a new layer of generative AI assistance across all of the Customer 360. And you can see it with our salespeople who are now using our Sales Cloud GPT, which has been incredible, what we've released this quarter to all of our customers and here inside Salesforce. And then when we see that, they all say to themselves, you know what, in this new world, everyone can now be an Einstein.

But democratizing generative AI at scale for the biggest brands in the world requires more than just these large language models and deep learning algorithms, and we all know that, because a lot of our customers have tried: they go and pull something off Hugging Face (it is an amazing company; we just invested in their new round), grab a model, put some data in it, and nothing happens. And then they don't understand, and they call us and say, “Hey, what's happening here? I thought that this AI was so amazing.” And it's like, well, it takes a lot to actually get this intelligence to occur. And that's the value that Salesforce is bringing: we're really able to help our customers achieve this kind of technological superiority right out of the box, just using our products in a low code, no code way. It's really just democratization of generative AI at scale. And that is really what we're trying to achieve: at the heart of every one of these AI transformations is our intelligent, integrated and incredible Salesforce platform, and we're going to show all of that at Dreamforce

Salesforce is seeing strong customer momentum on Einstein generative AI (a customer – PenFed – used Einstein-powered chatbots to significantly improve their customer service)

We're also seeing strong customer momentum on Einstein generative AI. PenFed is a great example of how AI plus data plus CRM plus trust is driving growth for our customers. PenFed is one of the largest credit unions in the U.S., growing at the rate of the next 9 credit unions combined. They're already using Financial Services Cloud, Experience Cloud and MuleSoft, with our Einstein-powered chatbots handling 40,000 customer service sessions per month. In fact, today, PenFed resolves 20% of their cases on first contact with Einstein-powered chatbots, resulting in a 223% increase in chatbot activity in the past year with incredible ROI. In Q2, PenFed expanded with Data Cloud to unify all the customer data from its nearly 3 million members and increased their use of Einstein to roll out a generative AI assistant for every single one of their service agents.

Salesforce’s management thinks that customers who want to achieve success with AI needs to have their data in order

But what you can see with Data Cloud is that customers must get their data together if they want to achieve success with AI. This is the critical first step for every single customer. And we’re going to see that this AI revolution is really a data revolution. 

Salesforce takes the issue of trust very seriously in its AI work; Salesforce has built a unique trust layer within Einstein that allows customers to maintain data privacy, security, and more

Everything Einstein does is also delivered with trust, and especially ethics, at the center, and I especially want to call out the incredible work of our office of ethical and humane use, which is pioneering the use of ethics in technology. If you didn't read their incredible article in HBR this quarter, it was awesome. And they are doing incredible work, really saying that it's not just about AI, it's not just about data, but it's also about trust and ethics. And that's why we developed this Einstein trust layer. This is completely unique in the industry. It enables our customers to maintain their data privacy, security, residency and compliance goals.

Salesforce has seen customers from diverse industries (such as Heathrow Airport and Schneider Electric) find success using Salesforce’s AI tools

Heathrow is a great example of the transformative power of AI, data, CRM and trust, and the power of a single source of truth. They have 70 million passengers who pass through their terminal annually; I'm sure many of you have been one of those passengers, as I have. Heathrow is operating at a tremendous scale, managing the entire airport experience with Service Cloud, Marketing Cloud and Commerce Cloud. But now Heathrow has added Data Cloud, also giving them a single source of truth for every customer interaction and setting them up to pioneer the AI revolution. And with Einstein, Heathrow's service agents now have AI-assisted generative replies for service inquiries, case deflection and writing case summaries, with all the relevant data and business context coming from Data Cloud…

…Schneider Electric has been using Customer 360 for over a decade, enhancing customer engagement, service and efficiency. With Einstein, Schneider has refined demand generation, reduced close times by 30%. And through Salesforce Flow, they’ve automated order fulfillment. And with Service Cloud, they’re handling over 8 million support interactions annually, much of it done on our self-service offering. In Q2, Schneider selected Marketing Cloud to further personalize the customer experience.

Salesforce’s management thinks the company is only near the beginning of the AI evolution and there are four major steps on how the evolution will happen

And let me just say, we’re at the beginning of quite a ballgame here and we’re really looking at the evolution of artificial intelligence in a broad way, and you’re really going to see it take place over 4 major zones.

And the first major zone is what’s played out in the last decade, which has been predictive. That’s been amazing. That’s why Salesforce will deliver about [ 1 trillion ] transactions on Einstein this week. It’s incredible. 

These are mostly predictive transactions, but we’re moving rapidly into the second zone that we all know is generative AI and these GPT products, which we’ve now released to our customers. We’re very excited about the speed of our engineering organization and technology organization, our product organization and their ability to deliver customer value with generative AI. We have tremendous AI expertise led by an incredible AI research team. And this idea that we’re kind of now in a generative zone means that’s zone #2.

But as you’re going to see at Dreamforce, zone #3 is opening up with autonomous and with agent-based systems as well. This will be another level of growth and another level of innovation that we haven’t really seen unfold yet from a lot of companies, and that’s an area that we are excited to do a lot of innovation and growth and to help our customers in all those areas.

And then we’re eventually going to move into [ AGI ] and that will be the fourth area. And I think as we move through these 4 zones, CRM will become more important to our customers than ever before. Because you’re going to be able to get more automation, more intelligence, more productivity, more capabilities, more augmentation of your employees, as I mentioned.

Salesforce can use AI to help its customers in areas such as call summaries, account overviews, responding to its customers’ customers, and more

And you're right, we're going to see a wide variety of capabilities, exactly like you said, whether it's the call summaries and account overviews and deal insights and inside summaries and in-product assistance or mobile work briefings. I mean, when I look at things like service, we see the amount of case deflection we can do and productivity enhancements with our service teams, not just in replies and answers, but also in summaries and summarization. We've seen how that works with generative and how important that is in knowledge generation and auto-responding conversations, and then we're going to have the ability for our customers to — with our product.

Salesforce has its own AI models, but Salesforce has an open system – it’s allowing customers to choose any models they wish

We have an open system. We're not dictating that they have to use any one of these AI systems. We have an ecosystem. Of course, we have our own models and our own technology that we have given to our customers, but we're also investing in all of these companies, and we plan to be able to offer them as opportunities for those customers as well, and they'll be able to deliver all kinds of things. And you'll see that, whether it's going to end up being contract digitization and cost generation or survey generators or all kinds of campaign assistance.

Slack is going to be an important component of Salesforce’s AI-related work; management sees Slack as an easy-to-use interface for Salesforce’s AI systems

Slack has become incredible for these AI companies; every AI company that we've met with is a Slack company. All of them make their agents available for Slack first. We saw that, for example, with Anthropic, where Claude really appeared first, and [ Claude 2 ], first in Slack.

And Anthropic, as a company uses Slack internally and they have a — they take their technology and develop news digest every day and newsletters and they do incredible things with Slack — Slack is just a treasure trove of information for artificial intelligence, and you’ll see us deliver all kinds of new capabilities in Slack along these lines.

And we're working, as I've mentioned, to get Slack to wake up and become more aware, and also for Slack to be able to do all of the things that I just mentioned. One of the most exciting things I think you're going to see at Dreamforce is Slack very much as a vision for the front end of all of our core products. We're going to show you an incredible new capability that we call Slack Sales Elevate, which is our core Sales Cloud system running right inside Slack.

That's going to be amazing, and we're also going to see how we're going to release and deliver all of our core services in Salesforce through Slack. This is very important for our company: to deliver Slack very much as a tremendous, easy-to-use interface on the core Salesforce, but also on all these AI systems. So all of that is that next generation of artificial intelligence capability, and I'm really excited to show all of that to you at Dreamforce, as well as Data Cloud.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, and Salesforce. Holdings are subject to change at any time.

Mind The Gap

A favourable macroeconomic trend does not necessarily mean a company’s business – and hence stock – will do well.

There's a gap in the investing world that I think all investors should be wary of. It's a gap that can be a mile (or kilometre – depending on which measurement system you prefer) wide. It's the gap between a favourable macroeconomic trend and a company's stock price movement.

Suppose you could go back in time to 31 January 2006, when gold was trading at US$569 per ounce. You have an accurate crystal ball and you know the price of gold would more than triple to reach US$1,900 per ounce over the next five years. Would you have wanted to invest in Newmont Corporation, one of the largest gold producing companies in the world, on 31 January 2006? If you said yes, you would have made a small loss on your Newmont investment, according to O’Higgins Asset Management. 

Newmont’s experience of having its stock price not perform well even in the face of a highly favourable macroeconomic trend (the tripling in the price of gold) is not an isolated incident. It can be seen even in an entire country’s stock market.

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 

Figure 1; Source: Duncan Lamont
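
As a quick arithmetic check on that growth rate, computed only from the figures above:

```python
# Compound annual growth rate implied by US$427 billion (1992) growing to
# US$18 trillion (2022) over 30 years.
start, end, years = 427e9, 18e12, 30
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~13.3%
```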

Why have the stock prices of Newmont and Chinese companies behaved the way they did? I think the reason can be traced to some sage wisdom that the great Peter Lynch once shared in a 1994 lecture (link leads to a video; see the 14:20 min mark):

“This is very magic: it's a very magic number, easy to remember. Coca-Cola is earning 30 times per share what they did 32 years ago; the stock has gone up 30 fold. Bethlehem Steel is earning less than they did 30 years ago – the stock is half its price 30 years ago.”

It turns out that Newmont’s net income attributable to shareholders was US$1.15 billion in 2006; in 2011, it was US$972 million, a noticeable decline. As for China’s stocks, Figure 2 below shows that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

There can be a massive gap between a favourable macroeconomic trend and a company's stock price movement. The gap exists because there can be a huge difference between a company's business performance and the trend – and what ultimately matters to a company's stock price is its business performance. Always mind the gap when you're thinking about investing in a company simply because it's enjoying some favourable macroeconomic trend.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

The Worst (Best) Time To Invest Feels The Best (Worst)

Stocks can go on to deliver great gains even when the economy is in shambles; stocks can also go on to crumble when the economy is booming.

The world of investing is full of paradoxes. In a recent article, I described the example of stability itself being destabilising. Another paradox is that the worst time to invest can feel the best, and vice versa. 

This paradox can be aptly illustrated by the State of the Union Address, a speech that the President of the USA delivers near the start of every year. It’s a report on how the country fared in the year that passed and what lies ahead. It’s also a barometer for the sentiment of US citizens on the country’s social, political, and economic future.

Here is part of the speech for one particular year:

“We are fortunate to be alive at this moment in history. Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats. Never before have we had such a blessed opportunity — and, therefore, such a profound obligation — to build the more perfect union of our founders’ dreams.

We begin the [year] with over 20 million new jobs; the fastest economic growth in more than 30 years; the lowest unemployment rates in 30 years; the lowest poverty rates in 20 years; the lowest African-American and Hispanic unemployment rates on record; the first back-to-back budget surpluses in 42 years. And next month, America will achieve the longest period of economic growth in our entire history.

My fellow Americans, the state of our union is the strongest it has ever been.”

In short, American citizens were feeling fabulous about their country. There was nothing much to worry about and the economy was buzzing. In another particular year, the then-president commented:

“One in 10 Americans still cannot find work. Many businesses have shuttered. Home values have declined. Small towns and rural communities have been hit especially hard. And for those who’d already known poverty, life has become that much harder. This recession has also compounded the burdens that America’s families have been dealing with for decades — the burden of working harder and longer for less; of being unable to save enough to retire or help kids with college.”

This time, Americans were suffering, and there were major problems in the country’s economy.

The first speech was delivered in January 2000 by Bill Clinton. What happened next: The S&P 500 – a widely followed barometer for the US stock market – peaked around the middle of 2000 and eventually declined by nearly 50% at its bottom near the end of 2002. Meanwhile, the second speech was from Barack Obama and took place in January 2010, when the US was just starting to recover from the Great Financial Crisis. It turned out that the next recession took more than 10 years to arrive (in February 2020, after COVID-19 emerged) and the S&P 500 has increased by nearly 450% – or 14% annually – since the speech, as shown in Figure 1.

Figure 1; Source: Yahoo Finance; S&P 500 (including dividends) from January 2010 to September 2023

It's not always the case that a crumbling economy equates to fantastic future returns in stocks. But what I've shown is the important idea that the best time to invest could actually feel like the worst, while the worst time to invest could feel like the best time to do so. Bear this in mind, for it could come in handy the next time a deep recession hits.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

One Of The Largest Disconnects Between Fundamentals & Price I’ve Ever Seen

VinFast Auto has a mammoth market capitalisation but the same may not be said for its business fundamentals.

VinFast Auto (NASDAQ: VFS) became a public-listed entity in the US stock market on 15 August this year through a SPAC (Special Purpose Acquisition Company) merger. I think it is also a company with one of the largest disconnects between fundamentals and price that I've ever seen. I'll lay out what I know, and you can judge my thinking.

Founded in 2017, VinFast manufactures EVs (electric vehicles), e-scooters, and e-buses. The company started producing e-scooters in 2018, ICE cars in 2019 (the production of internal combustion engine vehicles was phased out in late-2022), and e-buses in 2020. Its first EV product line consists of a range of SUVs (sport utility vehicles) which it began manufacturing in December 2021. VinFast’s manufacturing facility – which has 1,400 robots and is highly automated – is located in Hai Phong, Vietnam and has an annual production capacity of 300,000 EVs. Through June 2023, VinFast has delivered 105,000 vehicles – most of which are ICE vehicles – and 182,000 e-scooters. 

Vietnam is VinFast’s headquarters and the company’s primary market at the moment. As of 30 June 2023, VinFast had sold around 18,700 EVs, mostly in Vietnam, since inception; the deliveries of the 182,000 e-scooters since the company’s founding all happened in the same country too. The company has ambitions beyond Vietnam and has set its sights on the USA, Canada, France, Germany, and the Netherlands as its initial international markets. VinFast commenced US deliveries of EVs in March this year while it expects to start delivering EVs into Europe in the second half of 2023. The company has recorded around 26,000 reservations for its EVs globally as of 30 June 2023.

Controlling nearly all of VinFast’s shares currently (99.7%) is Pham Nhat Vuong, the founder and majority shareholder of Vingroup, a Vietnam-based conglomerate. Vingroup has a major economic presence in Vietnam – the company and all of its listed subsidiaries collectively accounted for 1.1% of Vietnam’s GDP in 2022 and they have a combined market capitalisation of US$21.0 billion (note that this does not include the value of VinFast) as of 30 June 2023.   

In the two weeks since VinFast’s listing, the company’s stock price closed at a high of US$82, on 28 August 2023. This gave VinFast a staggering US$190 billion market capitalisation based on an outstanding share count of 2.307 billion (as of 14 August 2023). At the market-close on 29 August 2023, VinFast’s share price was US$46. Though a painful 44% fall from the previous day’s closing, the US$46 stock price still gives VinFast a massive market capitalisation of US$107 billion, which easily makes it one of the top five largest auto manufacturers in the world by market capitalisation. But behind VinFast’s market size are the following fundamentals:

  • 2022 numbers (I would have used trailing numbers, but they’re not readily available): Revenue of US$633.8 million, an operating loss of US$1.8 billion, and an operating cash outflow of US$1.5 billion
  • As I already mentioned, VinFast (1) had 26,000 reservations for its EVs globally as of 30 June 2023, and (2) delivered 105,000 vehicles – most of which were ICE vehicles – and 182,000 e-scooters from its founding through June 2023.

For perspective, here are the equivalent numbers for Tesla, the largest auto manufacturer in the world by market capitalisation (US$816 billion on 29 August 2023), and a company whose valuation ratios are often said by stock market participants to be rich:

  • Trailing numbers: Revenue of US$94.0 billion, operating income of US$12.7 billion, and operating cash inflow of US$14.0 billion
  • Trailing deliveries of 1.638 million vehicles worldwide.
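
(To put the disconnect in numbers, here’s a quick sketch in Python using only the figures quoted above. This is simple arithmetic, not a valuation model.)

market_cap_vinfast = 46 * 2.307e9   # US$46 x 2.307b shares ≈ US$106 billion

# Trailing price-to-sales ratios from the figures above
ps_vinfast = 107e9 / 633.8e6        # ≈ 169x, using 2022 revenue
ps_tesla = 816e9 / 94.0e9           # ≈ 8.7x, using trailing revenue

print(market_cap_vinfast, ps_vinfast, ps_tesla)

On these numbers, VinFast was priced at roughly 169 times its 2022 revenue, while Tesla – whose valuation ratios are often called rich – traded at under 9 times trailing revenue.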

So given all the above, what do you think about my statement above, that VinFast is “a company with one of the largest disconnects between fundamentals and price that I’ve ever seen”?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Tesla. Holdings are subject to change at any time. 

The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the second quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a once-in-a-generation platform shift (a similar comment was also made in the company’s 2023 first-quarter earnings call)

I think AI is basically like a once-in-a-generation platform shift, probably bigger than the shift to mobile, probably more akin to something like the Internet as far as what it can do for new businesses and new business opportunities. And I think that it is a huge opportunity for us to really be in the leading edge of innovation.

Airbnb is already using a fair amount of AI in its product but there’s not much generative AI at the moment; management also believes that AI can continue to help Airbnb lower its fixed cost base

I mean, remember that we actually use a fair amount of AI right now on the product, like we do it for our party prevention technology, a lot of our matching technologies. A lot of the underlying technologies we have is actually AI-driven. It’s not so much gen AI, which is such a huge kind of future opportunity. I think we’ll see more leverage in our fixed cost base, so needing fewer people to do more work overall. And so I think that, that’s going to help both on our fixed costs and some of our variable costs. So you’ll see us being able to automate more customer service contacts, et cetera, over time…

…So customer — the strength of Airbnb is that we’re one-of-a-kind. We have 7 million active listings, more than 7 million listings, and everyone is unique and that is really special. But the problem with Airbnb is it’s one-of-a-kind, and sometimes you don’t know what you’re going to get. And so I think that if we can continue to increase reliability and then if there’s something that goes unexpected, if customer service can quickly fix, remediate the issue, then I think there will be a tipping point where many people that don’t consider Airbnb and only stay in hotels would consider Airbnb. And to give you a little more color about this customer service before I go to the future, there are so many more types of issues that could arise staying in Airbnb than a hotel. First of all, when you call a hotel, they’re usually one property and they’re aware of every room. We’re in nearly every country in the world. Often a guest or host will call us, and they will even potentially speak a different language than the person on the other side — the guest and the host.

There are nearly 70 different policies that you could be adjudicating. Many of these are 100 pages long. So imagine a customer service agent trying to quickly deal with an issue between 2 people from 2 different countries in a neighborhood that the agent may never even have heard of. What AI can do, and we’re running a pilot with GPT-4, is AI can read all of our policies. No human can ever quickly read all those policies. It can read the case history of both guests and hosts. It could summarize the case issue, and it could even recommend what the ruling should be based on our policies. And that can then write a macro that the customer service agent can basically adopt and amend. If we get all this right, it’s going to do 2 things. In the near term, it’s going to actually make customer service a lot more effective because agents will actually be able to handle a lot more tickets — and you may never even have to talk to an agent — but it will also make the service more reliable, which will unlock more growth.
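
(As a thought experiment, here’s a minimal sketch of what such a pipeline could look like. Airbnb has not published its implementation, so the function, prompts, and inputs below are purely illustrative assumptions; the call uses OpenAI’s 2023-era chat completions API.)

import openai

def suggest_resolution(policies: str, case_history: str) -> str:
    # Hypothetical sketch: summarise a support case, recommend a ruling
    # grounded in the supplied policy text, and draft a reply (a "macro")
    # that a human agent can adopt and amend.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a customer-support assistant. Justify every "
                        "recommendation using only the policies provided."},
            {"role": "user",
             "content": f"Policies:\n{policies}\n\n"
                        f"Case history:\n{case_history}\n\n"
                        "Summarise the issue, recommend a ruling citing the "
                        "relevant policy, and draft a reply for the agent."},
        ],
    )
    return response["choices"][0]["message"]["content"]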

Airbnb’s management believes that they can build a breakthrough multi-modal AI interface to learn more about Airbnb’s users and provide a lot of personalisation (a.k.a. an AI concierge)

If you were to go to ChatGPT right now and you ask it a question and I were to go to ChatGPT and ask it a question, we’re going to get mostly the same answer. And the reason why is it doesn’t know who you are and it doesn’t know who I am. So it does really good with like immutable truths, like how far is the earth to the moon or something like that. And — there’s no conditional answers to that. But it turns out in life, there’s a whole bunch of questions, and travel is one of these areas where the answer isn’t right for everyone. Where should I travel? Where should I stay? Who should I go with? What should I bring? Every one of these questions depends on who you are…

… And we can design, I think, a breakthrough interface for AI. I do not think that the AI interface is chat. Chat, I do not think is the right interface because we want an interface that’s multimodal. It’s text, it’s image and it’s video and you can — it’s much faster than typing to be able to see what you want. So we think there’s a whole new interface. And also, I think it’s really important that we provide a lot of personalization, that we learn more about you, that you’re not just an anonymous customer. And that’s partly why we’re investing more and more in account profiles, personalization, really understanding the guests. We want to know more about every guest in Airbnb than any travel company knows about their customer in the world. And if we do that, we can provide much more personalized service and that our app can almost be like an AI concierge that can match to the local experiences, local homes, local places all over the world.

Airbnb’s management is not interested in building foundational AI models – they are only keen on building the interface (a similar comment was also made in the company’s 2023 first-quarter earnings call)

And so we’re not going to be building like large research labs to develop these large language models. Those are like infrastructure projects, building bridges. But we’re going to build the applications on top of the bridges, like the car. And I think Airbnb is best-in-class at designing interfaces. I think you’ve seen that over the last few years.

Airbnb’s management believes that the companies that will best succeed in AI are the most product-led companies

And I think the last thing I’ll just say about AI is I think the companies that will best succeed in AI, well, think of it this way: which companies best adopted mobile? Which companies best adopted the Internet? It was the companies that were most innovative, the most product-led. And I think we are very much a product-led, design-led, technology-led company, and we always want to be on the frontier of new tech. So we’re working on that, and I think you’ll see some exciting things in the years to come.

Alphabet (NASDAQ: GOOG)

Alphabet is making AI helpful for everyone in four important ways

 At I/O, we shared how we are making AI helpful for everyone in 4 important ways: first, improving knowledge and learning…

…Second, we are helping people use AI to boost their creativity and productivity…

…Third, we are making it easier for others to innovate using AI…

…Finally, we are making sure we develop and deploy AI technology responsibly so that everyone can benefit.

2023 is the seventh year of Alphabet being an AI-first company and it knows how to incorporate AI into its products

This is our seventh year as an AI-first company, and we intuitively know how to incorporate AI into our products.

Nearly 80% of Alphabet’s advertisers use at least one AI-powered Search ads product 

In fact, today, nearly 80% of advertisers already use at least one AI-powered Search ads product.

Alphabet is using AI to help advertisers create campaigns and ads more easily in Google Ads and also help advertisers better understand their campaigns

Advertisers tell us they’re looking for a more assistive experience to get set up with us faster. So at GML, we launched a new conversational experience in Google Ads powered by an LLM tuned specifically from ads data to make campaign construction easier than ever. Advertisers also tell us they want help creating high-quality ads that work in an instant. So we’re rolling out a revamped asset creation flow in Performance Max that helps customers adapt and scale their most successful creative concepts in a few clicks. And there’s even more with PMax. We launched new asset insights and new search term insights that improve campaign performance understanding, and new customer life cycle goals that let advertisers optimize for new and existing customers while maximizing sales. We’ve long said it’s all about reaching the right customer with the right creative at the right time.

So later this year, Automatically Created Assets, which are already generating headlines and descriptions for search ads, will start using generative AI to create assets that are even more relevant to customer queries. Broad match also got updates. AI-based keyword prioritization ensures the right keyword, bid, budget, creative and landing page is chosen when there are multiple overlapping keywords eligible. And then to make it easier for advertisers to optimize visual storytelling and drive consideration in the mid funnel, we’re launching 2 new AI-powered ad solutions, Demand Gen and Video View campaigns, and both will include Shorts inventory. 

Alphabet’s management thinks the integration of LLMs (large language models) and generative AI make Alphabet’s core Search product even better

Large language models make them even more helpful: models like PaLM 2 and soon Gemini, which we are building to be multimodal. These advances provide an opportunity to reimagine many of our products, including our most important product, Search. We are in a period of incredible innovation for Search, which has continuously evolved over the years. This quarter saw our next major evolution with the launch of the Search Generative Experience, or SGE, which uses the power of generative AI to make Search even more natural and intuitive. User feedback has been very positive so far. It can better answer the queries people come to us with today, while also unlocking entirely new types of questions that Search can answer. For example, we found that generative AI can connect the dots for people as they explore a topic or project, helping them weigh multiple factors and personal preferences before making a purchase or booking a trip. We see this new experience as another jumping off point for exploring the web, enabling users to go deeper to learn about a topic.

Alphabet’s management thinks the company has done even better in integrating generative AI into search than they thought it would be at this point in time

Look, on the Search Generative Experience, we definitely wanted to make sure we’re thinking deeply from first principles, while it’s exciting new technology, we’ve constantly been bringing in AI innovations into Search for the past few years, and this is the next step in that journey. But it is a big change so we thought about from first principles. It really gives us a chance to now not always be constrained in the way Search was working before, allowed us to think outside the box. And I see that play out in experience. So I would say we are ahead of where I thought we’d be at this point in time. The feedback has been very positive. We’ve just improved our efficiency pretty dramatically since the product launch. The latency has improved significantly. We are keeping a very high bar, and — but I would say we are ahead on all the metrics in terms of how we look at it internally.

Alphabet’s management believes that even with the introduction of generative AI (Search Generative Experience) in the company’s core Search product, advertising will still continue to play a critical role in the company’s business model and the monetisation of Search will not be harmed

Ads will continue to play an important role in this new search experience. Many of these new queries are inherently commercial in nature. We have more than 20 years of experience serving ads relevant to users’ commercial queries, and SGE enhances our ability to do this even better. We are testing and evolving placements and formats and giving advertisers tools to take advantage of generative AI…

…Users have commercial needs, and they are looking for choices, and there are merchants and advertisers looking to provide those choices. So those fundamentals are true in SGE as well. And we have a number of experiments in flight, including ads, and we are pleased with the early results we are seeing. And so we will continue to evolve the experience, but I’m comfortable at what we are seeing, and we have a lot of experience working through these transitions, and we’ll bring all those learnings here as well.

Alphabet’s management believes that Google Cloud is a leading platform for training and running inference of generative AI models with more than 70% of generative AI unicorns using Google Cloud

Our AI-optimized infrastructure is a leading platform for training and serving generative AI models. More than 70% of gen AI unicorns are Google Cloud customers, including Cohere, Jasper, Typeface and many more. 

Google Cloud uses both Nvidia chips as well as Google’s own TPUs (this combination helps customers get 2x better price performance than competitors)

We provide the widest choice of AI supercomputer options with Google TPUs and advanced NVIDIA GPUs, and recently launched new A3 AI supercomputers powered by NVIDIA’s H100. This enables customers like AppLovin to achieve nearly 2x better price performance than industry alternatives. 

Alphabet is seeing customers using Google Cloud’s AI capabilities for online travelling, retail marketing, anti-money laundering, drug discovery, and more

Among them, Priceline is improving trip planning capabilities. Carrefour is creating full marketing campaigns in a matter of minutes. And Capgemini is building hundreds of use cases to streamline time-consuming business processes. Our new Anti-Money Laundering AI helps banks like HSBC identify financial crime risk. And our new AI-powered target and lead identification suite is being applied at Cerevel to help enable drug discovery…

… I mentioned Duet AI earlier. Instacart is using it to improve customer service workflows. And companies like Xtend are scaling sales outreach and optimizing customer service.

Alphabet’s management thinks that open-source AI models will be important in the ecosystem, and Google Cloud will be offering not just first-party AI models, but also third-party and open source models

So similarly, you would see with AI, we will embrace — we will offer not just our first-party models, we’ll offer third-party models, including open source models. I think open source has a critical role to play in this ecosystem. Google contributes; we are one of the largest contributors — if you look at Hugging Face, in terms of the contribution there, and when you look at projects like Android, Chromium, Kubernetes and so on. So we’ll embrace that and we’ll stay at the cutting edge of technology, and I think that will serve us well for the long term.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks generative AI is going to be transformative, but it’s still very early days in the adoption and success of generative AI, and consumer applications is only one opportunity in the area

It’s important to remember that we’re in the very early days of the adoption and success of generative AI, and that consumer applications is only one layer of the opportunity…

… I think it’s going to be transformative, and I think it’s going to transform virtually every customer experience that we know. But I think it’s really early. I think most companies are still figuring out how they want to approach it…

…What I would say is that we have had a very significant amount of business in AWS driven by machine learning and AI for several years. And you’ve seen that largely in the form of compute as customers have been doing a lot of machine learning training and then running their models in production on top of AWS and our compute instances. But you’ve also seen it in the form of the 20-plus machine learning services that we’ve had out there for a few years. I think when you’re talking about the big potential explosion in generative AI, which everybody is excited about, including us, I think we’re in the very early stages there. We’re a few steps into a marathon in my opinion.

Amazon’s management sees LLMs (large language models) in generative AI as having three key layers and Amazon is participating heavily in all three: The first layer is the compute layer; the second would be LLMs-as-a-service; and the third would be the applications that run on top of LLMs, with ChatGPT being an example

We think of large language models in generative AI as having 3 key layers, all of which are very large in our opinion and all of which AWS is investing heavily in. At the lowest layer is the compute required to train foundational models and do inference or make predictions…

…We think of the middle layer as being large language models as a service…

…Then that top layer is where a lot of the publicity and attention have focused, and these are the actual applications that run on top of these large language models. As I mentioned, ChatGPT is an example. 

Amazon has AI compute instances that are powered by Nvidia H100 GPUs, but the supply of Nvidia chips is scarce, so management built Amazon’s own training (Trainium) and inference (Inferentia) chips and they are an appealing price performant option

Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there’s only been one viable option in the market for everybody and supply has been scarce. That, along with the chip expertise we’ve built over the last several years, prompted us to start working several years ago on our own custom AI chips for training called Trainium and inference called Inferentia that are on their second versions already and are a very appealing price performance option for customers building and running large language models.

Amazon’s management optimistic that a lot of LLM training and inference will be running on Trainium and Inferentia in the future

We’re optimistic that a lot of large language model training and inference will be run on AWS’ Trainium and Inferentia chips in the future.

Amazon’s management believes that most companies that want to work with AI do not want to build foundational LLMs themselves as it is time consuming and expensive, and companies only want to customize the LLMs with their own data in a secure way (this view was also mentioned in Amazon’s 2023 first-quarter earnings call) 

Stepping back for a second, to develop these large language models, it takes billions of dollars and multiple years to develop. Most companies tell us that they don’t want to consume that resource building themselves. Rather, they want access to those large language models, want to customize them with their own data without leaking their proprietary data into the general model, have all the security, privacy and platform features in AWS work with this new enhanced model and then have it all wrapped in a managed service. 

AWS has a LLM-as-a-service called Bedrock that provides access to LLMs from Amazon and multiple startups; large companies are already using Bedrock to build generative AI applications; Bedrock allows customers to create conversation AI agents 

This is what our service Bedrock does and offers customers all of these aforementioned capabilities with not just one large language model but with access to models from multiple leading large language model companies like Anthropic, Stability AI, AI21 Labs, Cohere and Amazon’s own developed large language models called Titan. Customers, including Bridgewater Associates, Coda, Lonely Planet, Omnicom, 3M, Ryanair, Showpad and Travelers are using Amazon Bedrock to create generative AI applications. And we just recently announced new capabilities for Bedrock, including new models from Cohere, Anthropic’s Claude 2 and Stability AI’s Stable Diffusion XL 1.0 as well as agents for Amazon Bedrock that allow customers to create conversational agents to deliver personalized up-to-date answers based on their proprietary data and to execute actions.
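
(For a sense of what this LLMs-as-a-service layer looks like in practice, here’s a minimal sketch of calling a hosted model through Bedrock with boto3. The model ID and request body follow the schema Anthropic’s Claude 2 used on Bedrock at the time; treat the details as assumptions rather than a reference implementation.)

import json
import boto3

# Bedrock exposes hosted models from multiple vendors behind one runtime API.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarise our refund policy in one paragraph.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})
response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])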

Amazon’s management believes that AWS is democratizing access to generative AI and is making it easier for companies to work with multiple LLMs

If you think about these first 2 layers I’ve talked about, what we’re doing is democratizing access to generative AI, lowering the cost of training and running models, enabling access to large language model of choice instead of there only being one option.

Amazon’s management sees coding companions as a compelling early example of a generative AI application and Amazon has CodeWhisperer, which is off to a very strong start

We believe one of the early compelling generative AI applications is a coding companion. It’s why we built Amazon CodeWhisperer, an AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code. It’s off to a very strong start and changes the game with respect to developer productivity.

Every team at Amazon is building generative AI applications, but management believes that most of these applications will be built by other companies, although these applications will be built on AWS

Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience. But while we will build a number of these applications ourselves, most will be built by other companies, and we’re optimistic that the largest number of these will be built on AWS… 

…Coupled with providing customers with unmatched choices at these 3 layers of the generative AI stack as well as Bedrock’s enterprise-grade security that’s required for enterprises to feel comfortable putting generative AI applications into production, we think AWS is poised to be customers’ long-term partner of choice in generative AI…

…On the AI question, what I would tell you, every single one of our businesses inside of Amazon, every single one has multiple generative AI initiatives going right now. And they range from things that help us be more cost effective and streamlined in how we run operations in various businesses to the absolute heart of every customer experience in which we offer. And so it’s true in our stores business. It’s true in our AWS business. It’s true in our advertising business. It’s true in all our devices, and you can just imagine what we’re working on with respect to Alexa there. It’s true in our entertainment businesses, every single one. It is going to be at the heart of what we do. It’s a significant investment and focus for us.

Amazon’s management believes that (1) data is the core of AI, and companies want to bring generative AI models to data, not the other way around and (2) AWS has a data advantage

Remember, the core of AI is data. People want to bring generative AI models to the data, not the other way around. AWS not only has the broadest array of storage, database, analytics and data management services for customers, it also has more customers and data stored than anybody else.

Amazon’s management is of the view that in the realm of generative AI as well as cloud computing in general, the more demand there is, the more capex Amazon needs to spend to invest in data centers for long-term monetisation; management wants the challenge of having more capex to spend on because that will mean that AWS customers are successful with building generative AI on top of AWS

And so it’s — like in AWS, in general, one of the interesting things in AWS, and this has been true from the very earliest days, which is the more demand that you have, the more capital you need to spend because you invest in data centers and hardware upfront and then you monetize that over a long period of time. So I would like to have the challenge of having to spend a lot more in capital in generative AI because it will mean that customers are having success and they’re having success on top of our services.

Apple (NASDAQ: AAPL)

Apple has been doing research on AI for years and has built these technologies as integral features of its products; management intends for Apple to continue investing in AI in the years ahead

If you take a step back, we view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build. And so if you think about WWDC in June, we announced some features that will be coming in iOS 17 this fall, like Personal Voice and Live Voicemail. Previously, we had announced lifesaving features like fall detection and crash detection and ECG. None of these features that I just mentioned and many, many more would be possible without AI and machine learning. And so it’s absolutely critical to us.

And of course, we’ve been doing research across a wide range of AI technologies, including generative AI for years. We’re going to continue investing and innovating and responsibly advancing our products with these technologies with the goal of enriching people’s lives. And so that’s what it’s all about for us.

ASML (NASDAQ: ASML)

ASML’s management believes that AI has strengthened the long-term megatrends powering the growth of the semiconductor industry

Beyond 2024, it’s really the solid belief we have in the megatrends that are not going to go away. You can even argue that some of these megatrends, when you think about AI, are even more important than we thought, let’s say at the end of last year. But it’s not only AI, it’s also the energy transition, it’s the electrification of mobility, it’s the industrial Internet of Things. It’s everything that’s driven by sensors and actuators. So, effectively, we see very strong growth across the entire semiconductor space, whether it’s mature or whether it’s advanced. Because of these megatrends we have still a very strong confidence in what we said at the end of last year, that by 2025 – depending on what market scenario you are choosing, higher or lower – we will have between €30 billion and €40 billion of sales, with gross margin between 54% and 56%, by that 2025 timeframe. And if you extend that then to 2030, we are still very confident that by that time, also dependent on a lower or higher market scenario, sales will be anywhere between €44 billion and €60 billion with gross margin between 56% and 60%. So, we have short-term cycles. This is what the industry is all about. But we have very strong confidence, even stronger confidence, in what the longer-term future is going to bring for this company.

ASML’s management thinks the world is at the beginning of an AI high-power compute wave, but AI will not be a huge driver of the company’s growth in 2024

But I think we’re at the beginning of this, you could say, AI high-power compute wave. So yes, you’ll probably see some of that in 2024. But you have to remember that we have some capacity there, which is called the current underutilization. So yes, we will see some of that, but that will be taken up, the particular demand, by the installed base. Now — and that will further accelerate. I’m pretty sure. But that will definitely mean that, that will be, you could say, the shift to customer by 2025. So I don’t see that or don’t particularly expect that, that will be a big driver for additional shipments in 2024, given the utilization situation that we see today.

Arista Networks (NYSE: ANET)

Arista Networks’ management is seeing AI workloads drive an upgrade from 400 gigabit networking ports to 800 gigabit ports

As we surpassed 75 million cumulative cloud networking ports, we are experiencing 3 refresh cycles with our customers, 100 gigabit migration in the enterprises, 200 and 400 gigabit migration in the cloud and 400 going to 800 gigabits for AI workloads…

…We had the same discussion when the world went to 400 gig: are we switching from 100 to 400? The reality was that customers continued to buy both 100 and 400 for different use cases. [ 51T ] and 800 gig especially are being pulled by AI clusters; the AI teams are very anxious to get their hands on it, move the data as quickly as possible and reduce their job completion times. So you’ll see early traction there.

At least one of Arista Networks’ major cloud computing customers is shifting capital expenditure from other cloud computing areas to AI-related areas

During the past couple of years, we have enjoyed significant increase in cloud CapEx to support our Cloud Titan customers for their ever-growing needs, tech refresh and expanded offerings. Each customer brings a different business and mix of AI networking and classic cloud networking for their compute and storage clusters. One specific Cloud Titan customer has signaled a slowdown in CapEx from previously elevated levels. Therefore, we expect near-term Cloud Titan demand to moderate with spend favoring their AI investments. 

Arista Networks is a founding member of a consortium that is promoting the use of Ethernet for networking needs in AI data centres

Arista is a proud founding member of the Ultra Ethernet Consortium that is on a mission to build open, multivendor AI networking at scale based on proven Ethernet and IP.

Arista Networks’ management thinks AI networking will be an extension of cloud networking in the future

In the decade ahead, AI networking will become an extension of cloud networking to form a cohesive and seamless front-end and back-end network.

Arista Networks’ management thinks that Ethernet – and not InfiniBand – is the right networking technology when it comes to the training of large language models (LLMs) because they involve a massive amount of data; but in the short run, management thinks InfiniBand will be more widely adopted

Today, I would say, in the back end of the network, there are basically 3 classes of networks. One is very, very small networks that are within a server, where customers use PCIe, CXL, or proprietary NVIDIA-specific technologies like NVLink that Arista does not participate in. Then there are medium clusters – think generative AI, mostly inference – where they may well get built on Ethernet. For the extremely large clusters with large language training models, especially with the advent of ChatGPT 3 and 4, you’re talking about not just billions of parameters, but an aggregate of trillions of parameters. And this is where Ethernet will shine. But today, the only technology that is available to customers is InfiniBand. So obviously, InfiniBand, with 10, 15 years of familiarity in an HPC environment, is often being bundled with the GPU. But the right long-term technology is Ethernet, which is why I’m so proud of what the Ultra Ethernet Consortium and a number of vendors are doing to make that happen. So near term, there’s going to be a lot of InfiniBand and Arista will be watching that outside in…

…And what is their network foundation. In some cases, where they just need to go quick and fast, as I explained before, it would not be uncommon to just bundle their GPUs with an existing technology like InfiniBand. But where they’re really rolling out into 2025, they’re doing more trials and pilots with us to see what the performance is, to see what the drop is, to see how many they can connect, what’s the latency, what’s the better entropy, what’s the efficiency, et cetera. That’s where we are today.

Arista Networks’ management thinks that neither Ethernet nor Infiniband were purpose-built for AI

But longer term, Arista will be participating in an Ethernet [ AI ] network. And neither technology, I want to say, was perfectly designed for AI; InfiniBand was more focused on HPC and Ethernet was more focused on general purpose networking. So I think the work we are doing with the UEC to improve Ethernet for AI is very important.

Arista Networks’ management thinks that there’s a 10-year AI-driven growth opportunity for Ethernet networking technology

I think the way to look at our AI opportunity is it’s 10 years ahead of us. And we’ll have early customers in the cloud with very large data sets, trialing our Ethernet now. And then we will have more cloud customers, not only Titans, but other high-end Tier 2 cloud providers and enterprises with large data sets that would also trial us over time. So in 2025, we expect to have a large list of customers, of which Cloud Titans will still end up being some of the biggest but not the only ones.

Datadog (NASDAQ: DDOG)

Datadog has introduced functionalities related to generative AI and LLMs (large language models) on its platform that include (1) the ability for software teams to monitor the performance of their AI models, (2) an incident management copilot, and (3) new integrations across AI stacks including GPU infrastructure providers, vector databases, and more

To kick off our keynote, we launched our first innovations for generative AI and large language models. We showcased our LLM observability product, enabling ML engineers to safely deploy and manage models in production. This includes the model catalog, a centralized place to view and manage every model in every stage of our customers’ development pipelines, and analysis and insights on model performance, which allow ML engineers to identify and address performance and quality issues with the models themselves and to identify model drift – the performance degradation that happens over time as models interact with real-world data. We also introduced Bits AI. Bits understands natural language and provides insights from across the Datadog platform as well as from our customers’ collaboration and documentation tools. Among its many features, Bits AI can act as an incident management copilot, identifying and suggesting next steps, generating synthetic tests and triggering workflows to automatically remediate critical issues. And we announced 15 new integrations across the next-generation AI stack, from GPU infrastructure providers to vector databases, model vendors and orchestration frameworks.
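
(To make “model drift” concrete, here’s a toy illustration of one common drift statistic, the population stability index, which compares a feature’s training-time distribution against live traffic. This is a generic sketch, not how Datadog’s product is implemented.)

import numpy as np

def population_stability_index(expected, actual, bins=10):
    # Bucket the training-time distribution into quantile bins, then
    # measure how much the live distribution has shifted across them.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e, _ = np.histogram(expected, cuts)
    a, _ = np.histogram(actual, cuts)
    e = np.clip(e / e.sum(), 1e-6, None)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)   # a feature as seen during training
live = rng.normal(0.3, 1.0, 5000)    # the same feature in production, shifted
print(population_stability_index(train, live))  # noticeably > 0; 0.2+ is a common alarm level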

Management is seeing Datadog get early traction with AI customers

And although it’s early days for everyone in this space, we are getting traction with AI customers. And in Q2, our next-gen AI customers contributed about 2% of ARR.

Datadog’s AI customers are those that are selling LLM services or companies that are built on differentiated AI technology

So you can see it as the customers that are either selling AI themselves (that would be LLM vendors and the like) or customers whose whole business is built on differentiated AI technology. And we’ve been fairly selective in terms of who we put in that category, because companies everywhere are very eager to say that they are differentiated in AI today.

Datadog expanded a deal with one of the world’s largest tech companies that is seeing massive adoption of its new generative AI product and was using homegrown tools for tracking and observability, but those were slowing it down

Next, we signed a 7-figure expansion with 1 of the world’s largest tech companies. This customer is seeing massive adoption of its new generative AI product and needs to scale its GPU fleet to meet increasing demand for its workloads. Its homegrown tools were slowing it down and putting critical product launches at risk. With Datadog, this team is able to programmatically manage new environments as they come online, track and alert on their service level objectives and get real-time visibility into their GPUs.

Etsy (NASDAQ: ETSY)

Etsy is using machine learning (ML) models to better predict how humans would perceive the quality of a product

Our product teams are helping buyers more easily navigate the breadth and depth of our sellers’ inventory, leveraging the latest AI advances to improve our discovery and inspiration experiences while surfacing the very best of Etsy. These latest technologies, combined with training and guidance from our own talented team, is making the superhuman possible in terms of organizing and curating at scale, which I believe can unlock an enormous amount of growth in the years to come. One great example. Over the past quarter, we’ve more than doubled the size of our best of Etsy library, which is curated by expert merchandisers based on the visual appeal, uniqueness and apparent craftsmanship of an item. We’re now using this library to train our ML models to better predict the quality of items as perceived by humans. We’re seeing encouraging results from our first iterations on these models, and I’m optimistic that this work will have a material impact, helping us to surface the best of Etsy in every search.
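
(As an illustration of the approach described above: curated picks become positive training labels for a quality model. The sketch below is hypothetical – Etsy hasn’t published its models – and the random arrays stand in for real listing features such as image or text embeddings.)

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # stand-in for listing embeddings
y = rng.integers(0, 2, size=1000)  # 1 = listing in the curated library

# Train a simple classifier on curator-approved examples, then use its
# probabilities as a "perceived quality" score when ranking search results.
model = LogisticRegression(max_iter=1000).fit(X, y)
quality_scores = model.predict_proba(X)[:, 1]
print(quality_scores[:5])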

Etsy’s use of ML has helped it to dramatically reduce the time it takes to resolve customer issues

Specific to our trust and safety work, advances in ML capabilities have enabled our enforcement models to detect an increasing number of policy violations, which, combined with human know-how, is starting to have a meaningful impact on the buyer and seller experience. Since Etsy Purchase Protection was launched about a year ago, we’ve reduced the issue resolution time for cases by approximately 85%, dramatically streamlining the service experience on the rare occasion that something goes wrong, demonstrating to buyers and sellers that we have their backs in these key moments. 

Etsy’s management wants to use AI to improve every touch point a customer has with Etsy

Of course, much of the focus was on the myriad ways we can continue to harness AI and ML technologies in almost every customer touch point, with the potential to further transform buyer-facing experiences like enhancing search and recommendations, seller tools like streamlining the listing process and assisting with answering customer queries, improving fraud detection and trust and safety models, et cetera. The opportunities are nearly endless.

Etsy has a small ML team and the company is streamlining its machine learning (ML) workflow so that it’s easy for any Etsy software engineer to deploy their own ML models without asking for help from the ML team

But all of this innovation also takes time and effort and relies on our relatively small but mighty team of ML experts, talent that is obviously in high demand. Historically, all new ML models have been created by this team of highly specialized data scientists. And the full process of creating a new model, from cleaning and organizing the data to training and testing the model, then putting it into production, could take as long as 4 months. That’s why we kicked off a major initiative over a year ago we call Democratizing ML with the goal to streamline and automate much of this work so that virtually any Etsy engineer can deploy their own ML models in a matter of days instead of months. And I’m thrilled to report that we’re starting to see the first prototypes from this effort come live now. For example, if you’re on the Etsy team working on buyer recommendations, you can now use a drag-and-drop modeling tool to create a brand-new recommendations module without needing our ML team to build that model for you. 

Etsy’s management is currently testing ML technology developed by other companies to drive its own efficiency

We’ve also been leveraging the investments that other companies, many of them existing partners, have already made in machine learning. And so we’re doing a lot of beta testing and experimentation with other companies. And at the moment, that is coming at a very low cost to us. We would imagine that at some point, there will be some kind of license fee arrangement. But we are — typically, we do not invest in anything unless we see a high ROI for that investment.

Etsy’s management believes that generative AI can be good for the company’s search experience for consumers, but consumer-adoption and the AI-integration will take time

For buyers, the idea that the search experience can become more conversational, I think, can be a very big deal for Etsy, and maybe more for Etsy than for most people. I talked 2 earnings calls ago now about how you don’t walk into a store and shout, “Dress blue, linen,” to a sales agent. You actually have a conversation with them that has more context. And I think that’s especially important in a place like Etsy, where we’ve got 115 million listings to choose from and no catalog. So the idea that it can be conversational, I think, can give a lot of context and really help. And I think a lot of the technology behind that is becoming a solved problem. What’s going to take longer is the consumer adoption curve. What do customers expect when they enter something into a search bar? And how do they get used to interacting with chatbots? And what’s the UI look like? And that’s something that I think we’re going to need to — we’re testing a lot right now. What do people expect? How do they like to interact with things? And in my experience now, having a few decades of consumer technology leadership, the consumer adoption curve is often the long pole in the tent, but I think over time, can yield really big gains for us.

Fiverr (NYSE: FVRR)

Fiverr Neo is a new matching service from Fiverr that provides better matching for search queries using data, AI, and conversational search

Essentially, what we’ve done with Fiverr Neo is to tackle head-on the largest challenge every market has, which is matching. Now being able to produce a great match is far more than just doing search. And search by definition is very limited because customers provide 3 or 4 words. And based on that, you need to understand their intent, their need and everything surrounding that need. And providing good matching for us is really about not just pairing a business with a professional or with an agency, but actually being able to produce a product and end result where the 2 parties to that transaction are very happy that they worked together. To do this perfect match, you need a lot of information because that allows you to create a very, very precise type of match. And what we’ve developed with Fiverr Neo, using the latest technologies alongside the deep data tech that we’ve developed over the years, the tens of millions of transactions that we’ve already processed and the learnings from that, is a product that can have a human-like discussion, where our technology deeply understands and can have a conversation that would guide the customer to define their exact needs.

Fiverr’s management is seeing high interest for AI services on the company’s marketplace

So on the AI services, pretty much the same as last quarter, meaning we’ve launched tens of categories around AI. The interest is very high. It’s very healthy. And we continue to invest in it. So basically introducing more and more categories that have to do with AI in general and Gen AI in particular. And our customers love it. They use it and we’re happy with what we’re seeing on that front.

Mastercard (NYSE: MA)

Mastercard’s management sees AI as a foundational technology for the company and the technology has been very useful for the company’s fraud-detection solutions, where Mastercard has helped at least 9 UK banks stop payment scams before funds leave a victim’s account

We recently launched our Consumer Fraud Risk solution, which leverages our latest AI capabilities and the unique network view of real-time payments I just mentioned to help banks predict and prevent payment scams. AI is a foundational technology used across our business and has been a game changer in helping identify such fraud patterns. We’ve partnered with 9 U.K. banks, including Barclays, Lloyds Bank, Halifax, Bank of Scotland, NatWest, Monzo and TSB, to stop scam payments before funds leave a victim’s account. TSB, one of the first banks to adopt the solution, indicated that it has already dramatically increased its fraud detection since deploying the capability.

Meta Platforms (NASDAQ: META)

Meta’s management currently does not have a clear handle on how much AI-related capital expenditure is needed – it will depend on how fast Meta’s AI products grow

The other major budget point that we’re working through is what the right level of AI CapEx is to support our road map. Since we don’t know how quickly our new AI products will grow, we may not have a clear handle on this until later in the year…

…There’s also another component, which is the next-generation AI efforts that we’ve talked about around advanced research and gen AI, and that’s a place where we’re already standing up training clusters and inference capacity. But we don’t know exactly what we’ll need in 2024 since we don’t have any at-scale deployments yet of consumer business-facing features. And the scale of the adoption of those products is ultimately going to inform how much capacity we need.

Meta’s management is seeing the company’s investments in AI infrastructure paying off in the following ways: (1) Increase in engagement and monetisation of Reels; and (2) an increase in monetisation of automated advertising products

Investments that we’ve made over the years in AI, including the billions of dollars we’ve spent on AI infrastructure, are clearly paying off across our ranking and recommendation systems and improving engagement and monetization. AI-recommended content from accounts you don’t follow is now the fastest-growing category of content on Facebook’s Feed. Now since introducing these recommendations, they’ve driven a 7% increase in overall time spent on the platform. This improves the experience because you can now discover things that you might not have otherwise followed or come across.

Reels is a key part of this discovery engine. And Reels plays exceed 200 billion per day across Facebook and Instagram. We’re seeing good progress on Reels monetization as well, with the annual revenue run rate across our apps now exceeding $10 billion, up from $3 billion last fall.

Beyond Reels, AI is driving results across our monetization tools through our automated ads products, which we call Meta Advantage. Almost all our advertisers are using at least one of our AI-driven products. We’ve also deployed Meta Lattice, a new model architecture that learns to predict an ad’s performance across a variety of data sets and optimization goals. And we introduced AI Sandbox, a testing playground for generative AI-powered tools like automatic text variation, background generation and image outcropping.

Meta’s management believes the company is building leading foundational AI models, including Llama2, which is open-sourced; worth noting that Llama2 comes with a clause that large enterprises that sell Llama2 need to have a commercial agreement with Meta

Beyond the recommendations and ranking systems across our products, we’re also building leading foundation models to support a new generation of AI products. We’ve partnered with Microsoft to open source Llama 2, the latest version of our large language model and to make it available for both research and commercial use…

…in addition to making this open through the open source license, we did include a term that, for the largest companies, specifically ones that are going to have public cloud offerings, they don’t just get a free license to use this. They’ll need to come and make a business arrangement with us. And our intent there is we want everyone to be using this. We want this to be open. But if you’re someone like Microsoft or Amazon or Google, and you’re going to basically be reselling these services, that’s something that we think we should get some portion of the revenue for. So those are the deals that we intend to be making, and we’ve started doing that a little bit. I don’t think that, that’s going to be a large amount of revenue in the near term. But over the long term, hopefully, that can be something.

Meta’s management believes that open-sourcing allows Meta to benefit from (a) innovations that come from everywhere, in areas such as safety and efficiency, and (b) being able to attract potential employees

We have a long history of open sourcing our infrastructure and AI work from PyTorch, which is the leading machine learning framework, to models like Segment Anything, ImageBind and DINO to basic infrastructure as part of the Open Compute Project. And we found that open-sourcing our work allows the industry, including us, to benefit from innovations that come from everywhere. And these are often improvements in safety and security, since open source software is more scrutinized and more people can find and identify fixes for issues. The improvements also often come in the form of efficiency gains, which should hopefully allow us and others to run these models with less infrastructure investment going forward…

…One of the things that we’ve seen is that when you release these projects publicly and in open source, there tend to be a few categories of innovations that the community makes. So on the one hand, I think it’s just good to get the community standardized on the work that we’re doing. That helps with recruiting because a lot of the best people want to come and work at the place that is building the things that everyone else uses. It makes sense that people are used to these tools from wherever else they’re working. They can come here and build here.

Meta is building new products itself using Llama and Llama2 will underpin a lot of new Meta products

So I’m really looking forward to seeing the improvements that the community makes to Llama 2. We are also building a number of new products ourselves using Llama that will work across our services…

…We wanted to get the Llama 2 model out now. That’s going to be — that’s going to underpin a lot of the new things that we’re building. And now we’re nailing down a bunch of these additional products, and this is going to be stuff that we’re working on for years.

Meta partnered with Microsoft to open-source Llama2 because Meta does not have a public cloud offering

We partnered with Microsoft specifically because we don’t have a public cloud offering. So this isn’t about us getting into that. It’s actually the opposite. We want to work with them because they have that and others have that, and that was the thing that we aren’t planning on building out.

Meta’s management thinks that AI can be integrated into Meta’s products in the following ways: help people connect, express themselves, create content, and get digital assistance in a better way (see also Point 30)

But you can imagine lots of ways that AI can help people connect and express themselves in our apps: creative tools that make it easier and more fun to share content, agents that act as assistants, coaches that can help you interact with businesses and creators, and more. And these new products will improve everything that we do across both mobile apps and the metaverse, helping people create worlds and the avatars and objects that inhabit them as well.

Meta’s management expects the company to spend more on AI infrastructure in 2024 compared to 2023

We’re still working on our ’24 CapEx plans. We haven’t yet finalized that, and we’ll be working on that through the course of this year. But I mentioned that we expect that CapEx in ’24 will be higher than in ’23. We expect both data center spend to grow in ’24 as we ramp up construction on sites with the new data center architecture that we announced late last year. And then we certainly also expect to invest more in servers in 2024 for both AI workloads to support all of the AI work that we’ve talked about across the core AI ranking, recommendation work, along with the next-gen AI efforts. And then, of course, also our non-AI workloads, as we refresh some of our servers and add capacity just to support continued growth across the site.

There are three categories of products that Meta’s management plans to build with generative AI: (1) Building ads, (2) improving developer efficiency, and (3) building AI agents, especially for businesses so that businesses can interact with humans effectively (right now, human-to-business interaction is still very labour intensive)

I think that there are 3 basic categories of products or technologies that we’re planning on building with generative AI. One is around different kinds of agents, which I’ll talk about in a second. Two is just kind of generative AI-powered features.

So some of the canonical examples of that are things like in advertising, helping advertisers basically run ads without needing to supply as much creative or, say, if they have an image but it doesn’t fit the format, be able to fill in the image for them. So I talked about that a little bit upfront in my comments. But there’s stuff like that across every app. And then the third category of things, I’d say, are broadly focused on productivity and efficiency internally. So everything from helping engineers write code faster to helping people internally understand the overall knowledge base at the company and things like that. So there’s a lot to do on each of those zones.

For AI agents, specifically, I guess what I’d say is, and one of the things that’s different about how we think about this compared to some others in the industry is we don’t think that there’s going to be one single AI that people interact with, just because there are all these different entities on a day-to-day basis that people come across, whether they’re different creators or different businesses or different apps or things that you use. So I think that there are going to be a handful of things that are just sort of focused on helping people connect around expression and creativity and facilitating connections. I think there are going to be a handful of experiences around helping people connect to the creators who they care about and helping creators foster their communities.

And then the one that I think is going to have the fastest direct business loop is going to be around helping people interact with businesses. And you can imagine a world on this, where, over time, every business has an AI agent that basically people can message and interact with. And it’s going to take some time to get there, right? I mean, this is going to be a long road to build that out. But I think that that’s going to improve a lot of the interactions that people have with businesses, and if that does work, it should alleviate one of the biggest issues that we’re currently having around messaging monetization: for a person to interact with a business, it’s quite human labor-intensive for someone to be on the other side of that interaction, which is one of the reasons why we’ve seen this take off in some countries where the cost of labor is relatively low. But you can imagine in a world where every business has an AI agent, the kind of success that we’re seeing in Thailand or Vietnam with business messaging could spread everywhere. And I think that’s quite exciting.

Meta’s management believes that there will be both open and closed AI models in the ecosystem

I do think that there will continue to be both open and closed AI models. I think there are a bunch of reasons for this. There are obviously a lot of companies that their business model is to build a model and then sell access to it. So for them, making it open would undermine their business model. That is not our business model. We want to have the — like we view the model that we’re building as sort of the foundation for building products. So if by sharing it, we can improve the quality of the model and improve the quality of the team that we have that is working on that, that’s a win for our business of basically building better products. So I think you’ll see both of those models…

…But for our business model, at least, since we’re not selling access to this stuff, it’s a lot easier for us to share this with the community because it just makes our products better and other people’s…

…And it’s not just going to be like 1 thing is what everyone uses. I think different businesses will use different things for different reasons.

Meta’s management is aware that AI models could be dangerous if they become too powerful, but does not think the models are anywhere close to this point yet; he also thinks there are people who are genuinely concerned about AI safety, and AI companies who are trying to be opportunistic

There are a number of people who are out there saying that once the AI models get past a certain level of capability, it can become dangerous for them to be in the hands of everyone openly. What I think is pretty clear is that we’re not at that point today. I think there’s consensus generally among people who are working on this in the industry and policy folks that we’re not at that point today. And it’s not exactly clear at what point you reach that. So I think there are people who are making that argument in good faith, who are actually concerned about the safety risk. But I think that there are probably some businesses that are out there making that argument because they want it to be more closed, because that’s their business, so I think we need to be wary of that.

Microsoft (NASDAQ: MSFT)

11,000 organisations are already using Azure OpenAI services, with nearly 100 new customers added each day during the quarter

We have great momentum across Azure OpenAI Service. More than 11,000 organizations across industries, including IKEA, Volvo Group, Zurich Insurance, as well as digital natives like FlipKart, Humane, Kahoot, Miro, Typeface, use the service. That’s nearly 100 new customers added every day this quarter…

…We’re also partnering broadly to scale this next generation of AI to more customers. Snowflake, for example, will increase its Azure spend as it builds new integrations with Azure OpenAI.

Microsoft’s management believes that every AI app has to start with data

Every AI app starts with data, and having a comprehensive data and analytics platform is more important than ever. Our intelligent data platform brings together operational databases, analytics and governance so organizations can spend more time creating value and less time integrating their data estate. 

Microsoft’s management believes that software developers see Azure AI Studio as the tool of choice for AI software development

Now on to developers. New Azure AI Studio is becoming the tool of choice for AI development in this new era, helping organizations ground, fine-tune, evaluate and deploy models, and do so responsibly. VS Code and GitHub Copilot are category-leading products when it comes to how developers code every day. Nearly 90% of GitHub Copilot sign-ups are self-service, indicating strong organic interest and pull-through. More than 27,000 organizations, up 2x quarter-over-quarter, have chosen GitHub Copilot for Business to increase the productivity of their developers, including Airbnb, Dell and Scandinavian Airlines.

Microsoft is using AI for low-code, no-code software development tools to help domain experts automate workflows, create apps etc

We’re also applying AI across the low-code, no-code tool chain to help domain experts automate workflows, create apps and web pages, build virtual agents, or analyze data using just natural language. Copilot in Power BI combines the power of large language models with an organization’s data to generate insights faster, and Copilot in Power Pages makes it easier to create secure low-code business websites. One of our tools that’s really taken off is Copilot in Power Virtual Agents, which is delivering one of the biggest benefits of this new area of AI, helping customer service agents be significantly more productive. HP and Virgin Money, for example, have both built custom chatbots with Copilot and Power Virtual Agents that were trained to answer complex customer inquiries. All-up, more than 63,000 organizations have used AI-powered capabilities in Power Platform, up 75% quarter-over-quarter.

The feedback Microsoft’s management has received for Microsoft 365 Copilot is that it is a gamechanger for productivity

4 months ago, we introduced a new pillar of customer value with Microsoft 365 Copilot. We are now rolling out Microsoft 365 Copilot to 600 paid customers through our early access program, and feedback from organizations like Emirates NBD, General Motors, Goodyear and Lumen is that it’s a game changer for employee productivity.

Microsoft’s management believes that revenue growth from the company’s AI services will be gradual

At a total company level, revenue growth from our Commercial business will continue to be driven by the Microsoft Cloud and will again outpace the growth from our Consumer business. Even with strong demand and a leadership position, growth from our AI services will be gradual as Azure AI scales and our copilots reach general availability dates. So for FY ’24, the impact will be weighted towards H2.

Microsoft’s management believes that AI will accelerate the growth of overall technology spending

We do think about what’s the long-term TAM here, right? I mean this is — you’ve heard me talk about this as a percentage of GDP, what’s going to be tech spend? If you believe that, let’s say, the 5% of GDP is going to go to 10% of GDP, maybe that gets accelerated because of the AI wave…

…And of course, I think one of the things that people often, I think, overlook is, and Satya mentioned it briefly when you go back to the pull on Azure, I think in many ways, lots of these AI products pull along Azure because it’s not just the AI solution services that you need to build an app. And so it’s less about Microsoft 365 pulling it along or any one Copilot. It’s that when you’re building these, it requires data and it requires the AI services. So you’ll see them pull both core Azure and AI Azure along with them. 

Microsoft’s management believes that companies need their own data in the cloud in order to utilise AI efficiently

Yes, absolutely. I think having your data, in particular, in the cloud is sort of key to how you can take advantage of essentially these new AI reasoning engines to complement, I’ll call it, your databases because these AI engines are not databases, but they can reason over your data and to help you then get more insights, more completions, more predictions, more summaries, and what have you.

Nvidia (NASDAQ: NVDA)

Nvidia is enjoying incredible demand for its AI chips

Data Center Compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI.

Nvidia is seeing tremendous demand for accelerated computing

There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking, has been built up over the past decade.

Nvidia is seeing strong demand for AI from consumer internet companies as well as enterprises

Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. 

Nvidia’s management believes that virtually every industry can benefit from AI

Virtually every industry can benefit from generative AI. For example, AI Copilots, such as those just announced by Microsoft, can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI Copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers.

Nvidia’s management is seeing some of the earliest applications of generative AI in companies in marketing, media, and entertainment 

We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world’s largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts while incorporating responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, using NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.

Nvidia’s management believes that Infiniband is a much better networking solution for AI compared to Ethernet

Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of [indiscernible] for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners…

…We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant; some 10%, 15%, 20% higher throughput on a $1 billion infrastructure translates to enormous savings. Basically, the networking is free. And so if you have a single-application infrastructure, if you will, where it’s largely dedicated to large language models or large AI systems, InfiniBand is really, really a terrific choice.
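
Huang’s “networking is free” line is really a back-of-the-envelope calculation. Here’s one version of it in Python; the $1 billion cluster and the 10%–20% throughput edge come from his remarks, while the assumption that networking is roughly 10% of the cluster’s cost is mine.

```python
# Back-of-the-envelope on Huang's "networking is basically free" claim.
cluster_cost = 1_000_000_000   # $1B AI infrastructure, per the quote
network_share = 0.10           # assumption: networking is ~10% of cluster cost
throughput_gain = 0.15         # midpoint of Huang's 10-20% InfiniBand edge

network_cost = cluster_cost * network_share
# Extra effective compute delivered by the faster network, valued at cluster cost:
extra_value = cluster_cost * throughput_gain

print(f"Network cost:              ${network_cost / 1e6:,.0f}M")
print(f"Value of extra throughput: ${extra_value / 1e6:,.0f}M")
# With these assumptions, the throughput gain (~$150M) exceeds what the network
# itself costs (~$100M) -- the sense in which "the networking is free".
```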

Nvidia’s management thinks that general purpose computing is too costly and slow, and that the world will shift to accelerated computing, driven by the demand for generative AI; this shift from general purpose computing to accelerated computing contains massive economic opportunity

It has been recognized for some time now that brute-forcing general purpose computing at scale is no longer the best way to go forward. It’s too energy costly, it’s too expensive, and the performance of the applications is too slow.

And finally, the world has a new way of doing it. It’s called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money, an order of magnitude in cost and an order of magnitude in energy, and the throughput is higher. And that’s what the industry is really responding to.

Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It’s ready to be deployed. And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers…

…The world has something along the lines of about $1 trillion worth of data centers installed in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We’re seeing 2 simultaneous platform shifts at the same time. One is accelerated computing. And the reason for that is because it’s the most cost-effective, most energy-effective and the most performant way of doing computing now. So what you’re seeing is that, all of a sudden, generative AI, enabled by accelerated computing, came along. And this incredible application now gives everyone 2 reasons to transition, to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It’s about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year. You’re seeing data centers around the world taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. This is a long-term industry transition, and we’re seeing these 2 platform shifts happening at the same time.
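
Those two figures imply a rough timescale for the transition. A minimal sketch, using only the numbers in the quote:

```python
# Huang's figures: ~$1 trillion of installed data centers, ~$250B of yearly
# data-center capital spend. If so, the installed base turns over in ~4 years.
installed_base = 1.0e12
annual_capex = 0.25e12

print(f"Implied refresh cycle: {installed_base / annual_capex:.0f} years")
# Even if only part of each year's spend pivots to accelerated computing, the
# transition plays out over years -- consistent with Huang's "long-term
# industry transition" framing rather than a one-quarter spike.
```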

PayPal (NASDAQ: PYPL)

PayPal’s management believes the use of AI will allow the company to operate faster at lower cost

Our initial experiences with AI and continuing advances in our processes, infrastructure and product quality enable us to see a future where we do things better, faster and cheaper.

PayPal’s management believes that the use of AI has accelerated the company’s product innovation and improved developers’ productivity

As we discussed in our June investor meeting, we are meaningfully accelerating new product innovations into the market, scaling our A/B testing and significantly improving our time to market. We are now consistently delivering against our road map on schedule. This is the result of significant investments in our platform infrastructure and tools, an enhanced set of measurements and performance indicators, hiring new talent and early successes using AI in our software development process…

There’s no question that AI is going to impact every single company and every function just as it will inside of PayPal. And we’ve been experimenting with a couple of hundred of our developers using tools from Google, Microsoft, and Amazon. And we are seeing 20% to 40% increases in engineering productivity.

PayPal’s management believes that companies with unique, large data sets will have an advantage when using AI technologies; management sees PayPal as one of these companies

We believe that only those companies with unique and scaled data sets will be able to fully utilize the power of AI to drive actionable insights and differentiated value propositions for their customers…

…We capture 100% of the data flows, which really is feeding our AI engines. It’s fueling what will be our next-generation checkout. And most importantly, it’s fueling kind of our ability to have best-in-class auth rates in the industry and the lowest loss rates in the industry. 

Shopify (NASDAQ: SHOP)

Shopify’s management believes that entrepreneurship is entering an era where AI will become the most powerful sidekick for business creation

We are quickly positioning ourselves to build on the momentum we are seeing across our business, making purposeful changes that support our core focus on commerce and unlock what we believe is a new era of data-driven entrepreneurship and growth, an era where AI becomes the most powerful sidekick for business creation.

Shopify recently introduced Shopify Magic, a suite of AI features that is integrated across Shopify’s products and workflows, and will soon launch Sidekick, an AI-powered chat interface commerce assistant; Shopify Magic is designed specifically for commerce, unlike other generative AI products

We recognize the immense potential of AI to transform the consumer landscape and commerce more broadly. And we are committed to harnessing its power to help our merchants succeed. We believe AI is making the impossible possible, giving everyone superpowers to be more productive, more creative, and more successful than ever before. So, of course, we are building that directly into Shopify. In our Editions last week, we unveiled Shopify Magic, our suite of free AI-enabled features that are integrated across Shopify’s products and workflows, everything from inbox to online store builder and app store to merchandising to unlock creativity and increased productivity.

One of the most exciting products we will be launching soon in early access is our new AI-enabled commerce assistant, Sidekick. Powered by Shopify Magic, Sidekick is a new chat interface packed with advanced AI capabilities purposely built for commerce. Merchants will now have a commerce expert in their corner who is deeply competent, incredibly intelligent, and always available. With Sidekick, no matter your expertise or skillset, it allows entrepreneurs to use everyday language to have conversations that jump-start the creative process, tackle time-consuming tasks, and make smarter business decisions. By harnessing a deep understanding of systems and available data, Sidekick integrates seamlessly with the Shopify admin, enhancing and streamlining merchant operations. While we’re at the very early stages, the power of Sidekick is already incredible, and it’s developing fast…

…I mean, unlike other generative AI products, Shopify Magic is specifically designed for commerce. And it’s not just embedded in one place, it’s embedded throughout the entire product. So, for example, the ability to generate blog posts instantaneously or write incredibly high-converting product descriptions or create highly contextualized content for your business. That is where we feel like AI really can play a big role here in making merchants’ lives better…

…And with Sidekick, you can do these incredible things like you can analyze sales and you can ideate on store design or you can even give instructions on how to run promotions.

Shopify’s management does not seem keen to raise the pricing of its services to account for the added value from new AI features such as Magic and Sidekick 

So, certainly there are opportunities for us to continuously review our pricing and figure out where the right pricing is. And we will continue to do that. But in terms of, you know, features like Magic and Sidekick, which we’re really excited about, remember, when our merchants do better, Shopify does better. That’s the business model. And so, the more that they can sell, the faster they can grow, the more we can share in that upside. But the other part that we talked about in the prepared remarks that’s just worthwhile mentioning again is that product attach rate. The fact that we’re still above 3%, which is really high, means that as we introduce new products, new merchant solutions, whether it’s payment solutions, shipping, things like Audiences, Collabs, Collective, more of our merchants are taking more of our solutions.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

Management sees AI as a positive for TSMC and the company did see an increase in AI-related demand, but it was not enough to offset declines elsewhere

Moving into third quarter 2023, while we have recently observed an increase in AI-related demand, it is not enough to offset the overall cyclicality of our business…

… The recent increase in AI-related demand is directionally positive for TSMC. Generative AI requires higher computing power and interconnected bandwidth, which drives increasing semiconductor content. Whether using CPUs, GPUs or AI accelerator and related ASIC for AI and machine learning, the commonality is that it requires use of leading-edge technology and a strong foundry design ecosystem. These are all TSMC’s strengths…

…Of course, we have a model, basically. The short-term frenzy about the AI demand definitely cannot extrapolate for the long term. And neither can we predict the near future, meaning next year, how the sudden demand will continue or will flatten out. However, our model is based on the data center structure. We assume a certain percentage of the data center processor are AI processors, and based on that, we calculate the AI processor demand. And this model is yet to be fitted to the practical data later on. But in general, I think the — our trend of a big portion of data center processor will be AI processor is a sure thing. And will it cannibalize the data center processors? In the short term, when the CapEx of the cloud service providers are fixed, yes, it will. It is. But as for the long term, when their data service — when the cloud service is having the generative AI service revenue, I think they will increase the CapEx. That should be consistent with the long-term AI processor demand. And I mean the CapEx will increase because of the generative AI services…

…But again, let me emphasize that those kind of applications in the AI, be it CPUs, GPUs or AI accelerator or ASIC, they all need leading-edge technologies. And they all have one symptom: they are using the very large die size, which is TSMC’s strength. 

AI server processors currently account for just 6% of TSMC’s total revenue but are expected to grow at 50% annually over the next 5 years to become a low-teens percentage of TSMC’s total revenue

Today, server AI processor demand, which we define as CPUs, GPUs and AI accelerators that are performing training and inference functions accounts for approximately 6% of TSMC’s total revenue. We forecasted this to grow at close to 50% CAGR in the next 5 years and increase to low teens percent of our revenue.
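
TSMC’s two figures also imply something about the company’s overall growth. Here’s a short Python sketch of the arithmetic; the 13% “low teens” midpoint and the resulting implied company-wide CAGR are my own extrapolations from the rounded guidance, not company numbers.

```python
# Sanity-checking the guidance: AI processors at ~6% of revenue today, growing
# at ~50% a year for 5 years, ending at a "low teens" share of revenue.
ai_share_now = 0.06
ai_cagr = 0.50
years = 5
target_share = 0.13  # assumed midpoint of "low teens"

ai_multiple = (1 + ai_cagr) ** years            # AI revenue grows ~7.6x
share_multiple = target_share / ai_share_now    # its share rises ~2.2x
total_multiple = ai_multiple / share_multiple   # so total revenue must grow ~3.5x
total_cagr = total_multiple ** (1 / years) - 1

print(f"AI revenue multiple over {years} years: {ai_multiple:.1f}x")
print(f"Implied total-revenue CAGR: {total_cagr:.0%}")
# With these rounded inputs the implied company-wide CAGR is ~29% -- a reminder
# that "close to 50%" and "low teens" are directional, not precise, figures.
```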

AI has reinforced the view of TSMC’s management that there will be healthy long-term growth in the semiconductor industry in general, and TSMC’s business in particular

The insatiable need for energy-efficient computation is starting from data centers and we expect it will proliferate to edge and end devices over time, which will drive further long-term opportunities. We have already embedded a certain assumption for AI demand into our long-term CapEx and growth forecast. Our HPC platform is expected to be the main engine and the largest incremental contributor to TSMC’s long-term growth in the next several years. While the quantification of the total addressable opportunity is still ongoing, generative AI and large language model only reinforce the already strong conviction we have in the structural megatrend to drive TSMC’s long-term growth, and we will closely monitor the development for further potential upside.

TSMC currently can’t fulfil all the demand for certain AI chips because of the lack of product capacity, but the company is expanding capacity

For the AI, right now, we see very strong demand, yes. For the front-end part, we don’t have any problem to support. But for the back end, the advanced packaging side, especially for the CoWoS, we do have some very tight capacity and find it very hard to fulfill 100% of what customers need. So we are working with customers for the short term to help them to fulfill the demand, but we are increasing our capacity as quickly as possible. And we expect this tightness to be somewhat relieved next year, probably towards the end of next year. But in between, we’re still working closely with our customers to support their growth…

… I will not give you the exact number, but let me give you a rough idea: probably 2x of the capacity will be added…

… I think the second question is about the pricing of the — on the CoWoS. As I answered in the earlier question, we are increasing the capacity in an as-soon-as-possible manner. Of course, that includes additional cost. So in fact, we are working with our customers. And the most important thing for them right now is supply assurance, a supply that meets their demand. So we are working with them. We do everything possible to increase the capacity. And of course, at the same time, we share our value.

It appears that TSMC is selling AI chips for a few hundred dollars apiece while its customers then go on to sell their chips for hundreds of thousands of dollars – but TSMC’s management is ok with that

Well, Charles, I used to joke with my customers, saying that I’m selling them a chip for a few hundred dollars, and then they sell it back to me for USD 200,000. But let me say that we are happy to see customers doing very well. And if customers do well, TSMC does well. And of course, we work with them and we sell our value to them. And fundamentally, we want to say that we are able to address and capture a major portion of the market in terms of a semiconductor component in AI. Did I answer your question?

Tencent (NASDAQ: TCEHY)

Tencent is testing its own foundational model for generative AI, and Tencent Cloud will be facilitating the deployment of open-source models by other companies; the development progress of Tencent’s own foundational model is good

In generative AI, we are internally testing our own proprietary foundation model in different use cases and are providing Tencent Cloud Model-as-a-Service solutions to facilitate efficient deployment of open-source foundation models in multiple industry verticals…

…And in terms of just the development, I would say, there are multiple initiatives going on at the company. The first one, obviously, is building our own proprietary foundation model, and that is actually progressing very well. The training is actually on track and making very good progress…

… And in terms of additional efforts, we are also on the cloud side, providing MaaS solution for enterprises, right? So basically providing a marketplace so that different enterprise clients can choose different types of open source large models for them to customize for their own use with their own data. And we have a whole set of technology infrastructure as well as tools to help them to make the choice as well as to do the training and do the deployment. And we believe this is going to be a pretty high value added and high margin product for the enterprise clients. 

Tencent’s management thinks that AI is a multiplier for many of the company’s businesses

AI is — really, the more we look at it, the more excited we are about it as a growth multiplier across our many businesses. It would serve to enhance the efficiency and quality of our user-to-user services and, at the same time, facilitate improvements in our ad targeting, data targeting and also the cost-efficient production of a lot of our content. So there are really multiple ways through which we can benefit from the continued development of generative AI.

Tencent’s management believes that the company’s MaaS for AI will first benefit large enterprises, but that it will subsequently also benefit companies of different sizes (although the smaller companies will benefit from using trained models via API versus training their own models)

In terms of the AI and Model-as-a-Service solution, we — besides — we think a lot of the industries will actually benefit from it, right? Initially, it would definitely be with larger companies…

…I think over time, as the industry become more mature, obviously, the medium-sized and smaller sized enterprises will probably benefit. But I don’t think they will be benefiting from using — training their own model, right? But then they would probably be benefiting from using the already trained models directly through APIs. So I think that’s sort of the way the industry will probably evolve over time. 

Tencent’s management believes that the company’s MaaS will provide a revenue stream that is recurring and high margin

I think, obviously, the revenue model is still evolving, but I would say, theoretically, what you talked about the high margin and high recurring revenue is going to be true because we are adding more value to the customers. And once the customers start using these services, right, it will be built into their interaction with their customers, which will be much more sticky than if it’s in their back-end systems. So I think that would probably be true. 

An important change Tencent has made to improve its advertising technology stack when using machine learning is to shift from CPUs (central processing units) to GPUs (graphics processing units)

If you look at the key changes or key things that we have done with respect to machine learning on ad platform, I think the traditional challenge for us is that we have many different platforms. We have many different types of inventories. We have a very large coverage of user base and with a lot of data, right? And all these things make it actually very complicated for us to target customers based on just a rule-based or CPU-based targeting system, which was actually what we had been deploying. And a key change is that we have deployed a lot of GPUs, moving from CPUs to GPUs, and we have built a very large neural network to basically accept all these different complexities and be able to come up with the optimal solution. And as a result, our ad targeting becomes much more effective, much higher speed and more accurate in terms of targeting. And as a result, right now, it actually provides a very strong boost to our targeting ability and also the ROI that we can deliver through our ad systems. And as James talked about, this is sort of the early stage of this deployment and continuous improvement of our technology, and I think this trend will continue.

Tesla (NASDAQ: TSLA)

Tesla’s management believes that (1) the company’s Autopilot service has a data-advantage, as AI models become a lot more powerful with more data, and (2) self-driving will be safer than human driving

And I mean, there are times where we see, basically in a neural net, that at a million training examples, it barely works; at 2 million, it slightly works; at 3 million, it’s like, “Wow, okay, we’re seeing something.” But then you get to like 10 million training examples, it’s like — it becomes incredible. So there’s just no substitute for a massive amount of data. And obviously, Tesla has more vehicles on the road that are collecting this data than all other companies combined by, I think, maybe even an order of magnitude. So I think we might have 90% of all — a very big number…

So today, over 300 million miles have been driven using FSD Beta. That 300 million-mile number is going to seem small very quickly. It will soon be billions of miles, tens of billions of miles. And the FSD will go from being as good as a human to then being vastly better than a human. We see a clear path to full self-driving being 10x safer than the average human driver. 

Tesla’s management sees the Dojo training computer as a means to reduce the cost of neural net training and expects to spend more than US$1 billion on Dojo-related R&D through 2024

Our Dojo training computer is designed to significantly reduce the cost of neural net training. It is designed to — it’s somewhat optimized for the kind of training that we need, which is video training. So we just see that the need for neural net training, again, talking about a quasi-infinite number of things, is just enormous. So I think having — we expect to use both NVIDIA and Dojo, to be clear. But there’s — we just see a demand for really advanced training resources. And we think we may reach in-house neural net training capability of 100 [ exoblocks ] by the end of next year…

…I think we will be spending something north of $1 billion over the next year on — through the end of next year, it’s well over $1 billion in Dojo. And yes, so I mean we’ve got a truly staggering amount of video data to do training on.

Around 5-6 Optimus bots – Tesla’s autonomous robots – have been made so far; Tesla’s management realised that it’s hard to find actuators that work well, and so Tesla had to design and manufacture its own actuators; the first Optimus with Tesla actuators should be made around November

Yes, I think we’re around 5 or 6 bots. I think there’s a — we were at 10, I guess. It depends on how many are working and what phase. But it’s sort of — yes, there’s more every month…  

…We found that there are actually no suppliers that can produce the actuators. There are no off-the-shelf actuators that work well for a humanoid robot at any price…

…So we’ve actually had to design our own actuators to integrate the motor, the power electronics, the controller, the sensors. And really, every one of them is custom designed. And then, of course, we’ll be using the same inference hardware as the car. But we, in designing these actuators, are designing them for volume production, so that they’re not just lighter, tighter and more capable than any other actuators that exist in the world, but also actually manufacturable. So we should be able to make them in volume. The first Optimus that will have all of the Tesla-designed actuators, sort of production-candidate actuators, integrated and walking should be around November-ish. And then we’ll start ramping up after that.

Tesla is buying Nvidia chips as fast as Nvidia will deliver it – and Tesla’s management thinks that if Nvidia can deliver more chips, Tesla would not even need Dojo, but Nvidia can’t

But like I said, we’re also — we have some — we’re using a lot of NVIDIA hardware. We’ll continue to use — we’ll actually take NVIDIA hardware as fast as NVIDIA will deliver it to us. Tremendous respect for Jensen and NVIDIA. They’ve done an incredible job. And frankly, I don’t know, if they could deliver us enough GPUs, we might not need Dojo. But they can’t. They’ve got so many customers. They’ve been kind enough to, nonetheless, prioritize some of our GPU orders.

Elon Musk explained that his timing projections for the actualisation of full self-driving have been too optimistic in the past because progress with each new version of FSD starts fast and then flattens out logarithmically – he still expects Tesla’s full self-driving service to be better than human driving by the end of this year, although he admits he may be wrong yet again

Well, obviously, as people have sort of made fun of me, and perhaps quite fairly have made fun of me, my predictions about achieving full self-driving have been optimistic in the past. The reason I’ve been optimistic, what it tends to look like is we’ll make rapid progress with a new version of FSD, but then it will curve over logarithmically. So at first, a logarithmic curve looks like a sort of fairly straight upward line, diagonal and up. And so if you extrapolate that, then you have a great thing. But then because it’s actually logarithmic, it curves over, and then there have been a series of stacked logarithmic curves. Now I know I’m the boy who cried FSD, but man, I think we’ll be better than human by the end of this year. That’s not to say we’re approved by regulators. And I’m saying that that would be in the U.S. because we’ve got to focus on one market first. But I think we’ll be better than human by the end of this year. I’ve been wrong in the past, I may be wrong this time.

The Trade Desk (NASDAQ: TTD)

The use of AI is helping Trade Desk to surface advertising value for its customers

Of course, there are many other aspects of Kokai that we unveiled on [ 06/06 ], some of which are live and many of which we will be launching in the next few months. These indexes and other innovations, especially around the application of AI across our platform [ are helping us ] surface value more intuitively to advertisers. We are revamping our UX so that the campaign setup and optimization experience is even more intuitive with data next to decisions at every step. And we’re making it easier than ever for thousands of data, inventory, measurement and media partners to integrate with us. 

Trade Desk is using different AI models for specific applications instead of using one model for all purposes

You’ll recall that we launched AI in our platform in 2018, before it was trendy. And we’ve since been distributing that AI across the platform in a variety of different ways and different deep learning models, so that we’re using it for very specific applications rather than trying to create one algo to rule them all, if you will, which is something we are, in a very disciplined way, trying to avoid. So we can create checks and balances in the way that the [ tech ] works, and we can make certain that AI is always providing improvements by essentially having A/B testing and better auditability.

Visa (NYSE: V)

Visa is piloting a new AI-powered fraud capability for instant payments

First, our partnership with Pay.UK, the account-to-account payments operator in the U.K., was recently announced. We will be piloting our new fraud capability, RTP Prevent, which is uniquely built for instant payments with deep learning AI models. Using RTP Prevent, we can provide a risk score in real time so banks can decide whether to approve or reject the transaction on an RTP network. This is a great example of building and deploying entirely new solutions under our network of networks strategy…

…So first of all, what we’ve done is we’ve built a real-time risk score. We’ve built it uniquely for instant payments, where there’s often unique cases of fraud in terms of how they work. We built it using deep learning AI models. And what it does is it enables banks to be able to decide whether to approve or reject the transaction in real time, which is a capability that most banks or most real-time payments networks around the world have been very hungry for. It’s a score from 1 to 99. It comes with an instant real-time code that explains the score. And what it does is it leverages our proprietary data that kind of we have used to enhance our own risk algorithms as well as the data that we see on a lot of our payment platforms, including Visa Direct. And one of the benefits of us bringing that to market is it integrates with the bank’s existing fraud and risk tools. Because we’re often providing these types of risk scores to banks and they’re ingesting them from us, it directly integrates into their fraud and risk tools, so the real-time information, their systems know how to use it. It can be automated into their decisioning algorithms and those types of things.
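
For a feel of how a bank might plug such a score into its decisioning, here is a minimal, entirely hypothetical sketch. Visa has not published this interface; the transcript only tells us the score runs from 1 to 99 and arrives with a real-time reason code, so every field name and threshold below is invented for illustration.

```python
# Hypothetical bank-side handling of a real-time risk score like RTP Prevent.
# Field names, thresholds, and reason codes are all my own assumptions.

from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: int        # 1 (lowest risk) to 99 (highest risk), per the transcript
    reason_code: str  # real-time code explaining the score (hypothetical format)

def decide(assessment: RiskAssessment, reject_at: int = 80) -> str:
    """Approve, reject, or route a transaction to manual review.

    The reject threshold and the 20-point manual-review band are illustrative;
    a real bank would tune these inside its existing fraud and risk tools.
    """
    if assessment.score >= reject_at:
        return f"REJECT (score={assessment.score}, reason={assessment.reason_code})"
    if assessment.score >= reject_at - 20:
        return f"REVIEW (score={assessment.score}, reason={assessment.reason_code})"
    return "APPROVE"

print(decide(RiskAssessment(score=12, reason_code="RC-LOW")))   # APPROVE
print(decide(RiskAssessment(score=91, reason_code="RC-MULE")))  # REJECT
```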

Wix (NASDAQ: WIX)

Wix has worked with AI for nearly a decade and management believes AI will be a key driver of Wix’s product strategy in the future

This quarter, we also continued to innovate and introduce new AI-driven tools in our pipeline. As mentioned last quarter, we have leveraged AI technology for nearly a decade, which has played a key role in driving user success for both Self Creators and Partners. By harnessing a variety of deep learning models trained on the incredible amount of data from the hundreds of millions of Wix sites, we’ve built out an impressive suite of AI and genAI products with the purpose of making the website building experience on Wix frictionless. As AI continues to evolve, we remain on the forefront of innovation with a number of AI and gen-AI driven products in our near-term pipeline, including AI Site Generator and AI Assistant for your business. AI is a key driver of our product and growth strategy for both Self Creators and Partners, and I’m excited for what is still to come.

The introduction of generative AI products and features is improving the key performance indicators (KPIs) of Wix’s business

In regards to your question of whether we see any tangible evidence that GenAI is actually improving business performance: yes, we do. I’m not going to disclose all the details, but I’m just going to say that the things we released in the first part of this year and late last year are already showing improvement in business KPIs. So it makes us very optimistic. And of course, the more we put those kinds of technology in front of more users, we expect that factor to grow. But if you think about it, right, the core value that Wix brings is reducing the friction when you try to build a website. And when you use technology that can do tremendously well in improving that core value, then, of course, we expect the results to be significant.

Wix’s management believes that having generative AI technology alone is not sufficient for building a website

So the ones that we’ve seen until now are essentially doing the following, right? They take a template and they generate the text for the template and that — then they save that as a website. Essentially, they’re using ChatGPT to write text and then just put it inside of a template.

When we started, we did that. We’re now doing it with ChatGPT; we’ve been doing it since last November, I think. And with ADI, we did it, of course, with less sophisticated algorithms. But even then, we didn’t just inject text into a template. We actually created layouts around the text, which is the other way around, right? And that creates a huge difference in what we generate because when you fill text into a template, you are creating essentially artificial text that will fit the design. While in most cases, if you think about building a business, you do it the other way around: you create your marketing messages and then you create a design, right, to fit that. And visually, it creates a massive difference in the effectiveness of those websites. So that is the first difference.

The other difference is that if you think about it, since probably 1998, you could write text in a word document and then save it as HTML, okay? So now you just build the website and you have the text and you have a very, very basic website. Of course, you cannot run your business on top of that because it doesn’t have everything you need to run a business. It doesn’t have analytics. It doesn’t have a contact form. It doesn’t have e-commerce. It doesn’t have transactions. All of those are the platform that makes it into a real business. And this is something that most of the tools — all the tools I have seen so far are lacking, right? They just build the page, which you could do in ’98, with word and just save it as HTML. So that’s another huge difference, right?

And the last part is the question of how do you edit. And this is a very important thing. A website is not something that you could edit once and you just publish it and you never go back. You constantly have things to do. You change products, you change services, you change addresses, you add things, you remove things. You need to add content, so Google will like you, and this is very, very important for finding your business in Google. And there’s a lot of other things, right? So you need to be able to edit the content.

Now when it comes to editing content, you don’t want to regenerate the website, okay, which [indiscernible] you see in all of those things that fill a template, because it’s not only about filling a template, it’s now about editing the content. And this is the thing that we spent so much money on doing, right: baking in the technology, the e-commerce, and then the ability to go in and point at something and edit it or move it and drag it. So those are the things that created Wix, and those are, I think, still our differentiators.

Even if you generated a template with ChatGPT and it looks great, and by some magic it actually fits the marketing value that you want to put in your website, editing it is not going to be possible with the current technology they use. And then, even more than that, the applications on top of it that you really need for your business don’t exist.

Zoom Video Communications (NASDAQ: ZM)

There are promising signs that Zoom’s AI-related products are gaining traction with customers

Let me also thank Valmont Industries. Valmont came onboard as a Zoom customer a little over a year ago with Meetings and Phone and quickly became a major platform adopter, including Zoom One and Zoom Contact Center. And in Q2, with the goal of utilizing AI to better serve their employees, they added Zoom Virtual Agent due to its accuracy of intent understanding, ability to route issues to the correct agent, ease of use and quality of analytics…

But we’re really excited about the vision that we can take for them not only around, obviously, the existing platform but what’s also coming from an AI perspective. And I think our customers are finding that very attractive, as you’ve heard from the customers that Eric talked about seeing a lot of momentum of customers that were originally Meetings customers really moving either into Zoom One or adding on Zoom Phone and considering Contact Center as well.

Zoom’s management believes that the company has a differentiated AI strategy

And also our strategy is very differentiated, right? First of all, we have a federated AI approach. And also the way we look at those AI features is how to help a customer improve productivity; that’s very important, right? And because the customers already like us, not like some others, right, who give you so-called free services and then charge for the AI features. That’s not our case, right? We really care about the customer value and also add more and more innovations.

Zoom’s management believes that AI integrations in the company’s products will be a key differentiator

And in terms of AI, unlike other vendors, right, who have already had Contact Center solutions for a long, long time, you have to look at how flexible their architecture is, right, and how to add AI to all those existing Contact Center solutions. We already realized the importance of AI, right? That’s why we have a very flexible architecture. Not only do we build organic AI features, but we also acquired Solvvy and the Virtual Agent, and so on and so forth. Organic growth and also acquisitions have certainly helped us a lot in terms of product innovation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Datadog, Etsy, Fiverr, Mastercard, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.

When Genius Failed (temporarily)*

Not even a business and investing genius can save us from short-term pain.

The late Henry Singleton was a bona fide polymathic genius. He had a PhD in electrical engineering and could play chess just below the grandmaster level. In the realm of business, Warren Buffett once said that Singleton “has the best operating and capital deployment record in American business… if one took the 100 top business school graduates and made a composite of their triumphs, their record would not be as good.”

Singleton co-founded Teledyne in 1960 and stepped down as chairman in 1990. Teledyne started life as an electronics company and through numerous acquisitions engineered by Singleton, morphed into an industrials and insurance conglomerate. According to The Outsiders, a book on eight idiosyncratic CEOs who generated tremendous long-term returns for their shareholders, Teledyne produced a 20.4% annual return from 1963 to 1990, far ahead of the S&P 500’s 8.0% return. Distant Force, a hard-to-obtain memoir on Singleton, mentioned that a Teledyne shareholder who invested in 1966 “was rewarded with an annual return of 17.9 percent over 25 years, or a return of 53 times his invested capital.” In contrast, the S&P 500’s return was just 6.7 times in the same time frame. 

Beyond the excellent long-term results, I also found another noteworthy aspect about Singleton’s record: It is likely that shareholders who invested in Teledyne in 1963 or 1966 would subsequently have thought, for many years, that Singleton’s genius had failed them. I’m unable to find precise historical stock price data for Teledyne during Singleton’s tenure. But based on what I could gather from Distant Force, Teledyne’s stock price sank by more than 80% from 1967 to 1974. That’s a huge and demoralising decline for shareholders after holding on for seven years, and was significantly worse than the 11% fall in the S&P 500 in that period. But even an investor who bought Teledyne shares in 1967 would still have earned an annualised return of 12% by 1990, outstripping the S&P 500’s comparable annualised gain of 10%. And of course, an investor who bought Teledyne in 1963 or 1966 would have earned an even better return, as mentioned earlier. 
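
The arithmetic behind that 1967 example is worth spelling out, because it shows just how violent the round trip was. A quick sketch (the dates, the 80% drawdown, and the 12% full-period return are from the sources above; the decomposition is my own):

```python
# An 80%+ drawdown is consistent with a 12% annualised return -- provided the
# recovery from the trough is steep enough.

start, trough, end = 1967, 1974, 1990
full_cagr = 0.12
drawdown = 0.80

full_multiple = (1 + full_cagr) ** (end - start)    # ~13.6x over 23 years
trough_multiple = 1 - drawdown                      # $1 becomes $0.20 by 1974
recovery_multiple = full_multiple / trough_multiple # ~68x needed from the bottom
recovery_cagr = recovery_multiple ** (1 / (end - trough)) - 1

print(f"$1 in {start} -> ${full_multiple:.1f} by {end}")
print(f"Required gain from the {trough} trough: {recovery_multiple:.0f}x "
      f"(~{recovery_cagr:.0%} a year for {end - trough} years)")
```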

Just like how Buffett’s Berkshire Hathaway had seen a stomach-churning short-term decline in its stock price enroute to superb long-term gains driven by outstanding business growth, shareholders of Teledyne also had to contend with the same. I don’t have historical financial data on Teledyne from primary sources. But for the 1963-1989 time frame, based on data from Distant Force, it appears that the compound annual growth rates (CAGRs) for the conglomerate’s revenue, net income, and earnings per share were 19.8%, 25.3%, and 20.5%, respectively; the self-same CAGRs for the 1966-1989 time frame were 12.1%, 14.3%, and 16.0%. These numbers roughly match Teledyne’s returns cited by The Outsiders and Distant Force, once again demonstrating a crucial trait about the stock market I’ve mentioned in many earlier articles in this blog (see here and here for example): What ultimately drives a stock’s price over the long run is its business performance.

Not every long-term winner in the stock market will bring its shareholders through an agonising fall mid-way. A notable example is the Canada-based Constellation Software, which is well-known in the investment community for being a serial acquirer of vertical market software businesses. The company’s stock price has risen by nearly 15,000% from its May 2006 IPO to the end of June 2023, but it has never seen a peak-to-trough decline of more than 30%. This said, it’s common to see companies suffer significant drawdowns in their stock prices while on their way to producing superb long-term returns. An unfortunate reality confronting investors who are focused on the long-term business destinations of the companies they’re invested in is that while the end point has the potential to be incredibly well-rewarding, the journey can also be blisteringly painful.

*The title of this section is a pun on one of my favourite books on finance, titled When Genius Failed. In the book, author Roger Lowenstein detailed how the hedge fund, Long-Term Capital Management (LTCM), produced breath-taking returns in a few short years only to then give it all back in the blink of an eye. $1 invested in LTCM at its inception in February 1994 would have turned into $4 by April 1998, before collapsing to just $0.30 by September in the same year; the fund had to be rescued via a bail-out orchestrated by the Federal Reserve Bank of New York. Within LTCM’s ranks were some of the sharpest minds in finance, including Nobel laureate economists, Robert Merton and Myron Scholes. Warren Buffett once said that LTCM “probably have as high an average IQ as any 16 people working together in one business in the country…[there was] an incredible amount of intellect in that room.” LTCM’s main trading strategy was arbitrage – taking advantage of price differentials between similar financial securities that are trading at different prices. The LTCM team believed that the price differentials between similar instruments would eventually converge and they set up complex trades involving derivatives to take advantage of that convergence. Because of the minute nature of the price differentials, LTCM had to take on enormous leverage in order to make substantial profits from its arbitrage trading activities. According to Roger Lowenstein’s account, leverage ratios of 20-to-1 to 30-to-1 were common. At its peak, LTCM was levered 100-to-1 – in other words, the hedge fund was borrowing $100 for every dollar of asset it had. Compounding the problem, LTCM’s partners, after enjoying startling success in the fund’s early days, started making directional bets in the financial markets, a different – and arguably riskier – activity from their initial focus on arbitrage. The story of LTCM’s downfall is a reminder of how hubris and leverage can combine into a toxic cocktail of financial destruction.
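
To see why 100-to-1 leverage left no margin for error, consider the simple arithmetic below; the leverage ratios come from Lowenstein’s account, and the rest is just division:

```python
# At L-to-1 leverage, equity of $1 controls $L of assets, so a decline of
# 1/L in the assets erases the equity entirely.

for leverage in (20, 30, 100):
    equity = 1.0
    assets = equity * leverage
    wipeout_move = equity / assets   # fractional asset decline that erases equity
    print(f"{leverage:>3}-to-1 leverage: a {wipeout_move:.1%} drop in assets "
          f"erases 100% of equity")
```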


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.