The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 and early 2023 with the public introductions of DALL-E 2 and ChatGPT. Both are software products from OpenAI that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the second quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a once-in-a-generation platform shift (a similar comment was also made in the company’s 2023 first-quarter earnings call)

I think AI is basically like a once-in-a-generation platform shift, probably bigger than the shift to mobile, probably more akin to something like the Internet as far as what it can do for new businesses and new business opportunities. And I think that it is a huge opportunity for us to really be in the leading edge of innovation.

Airbnb is already using a fair amount of AI in its product but there’s not much generative AI at the moment; management also believes that AI can continue to help Airbnb lower its fixed cost base

I mean, remember that we actually use a fair amount of AI right now on the product, like we do it for our party prevention technology, a lot of our matching technologies. A lot of the underlying technologies we have is actually AI-driven. It’s not so much gen AI, which is such a huge kind of future opportunity. I think we’ll see more leverage in our fixed cost base, so needing fewer people to do more work overall. And so I think that, that’s going to help both on our fixed costs and some of our variable costs. So you’ll see us being able to automate more customer service contacts, et cetera, over time…

…So customer — the strength of Airbnb is that we’re one-of-a-kind. We have 7 million active listings, more than 7 million listings, and everyone is unique and that is really special. But the problem with Airbnb is it’s one-of-a-kind, and sometimes you don’t know what you’re going to get. And so I think that if we can continue to increase reliability and then if there’s something that goes unexpected, if customer service can quickly fix, remediate the issue, then I think there will be a tipping point where many people that don’t consider Airbnb and they only stay in hotels would consider Airbnb. And to give you a little more color about this customer service before I go to the future, there are so many more types of issues that could arise staying in Airbnb than a hotel. First of all, when you call a hotel, they’re usually one property and they’re aware of every room. We’re in nearly every country in the world. Often a guest or host will call us, and they will even potentially speak a different language than the person on the other side, the host — the guest and host.

There are nearly 70 different policies that you could be adjudicating. Many of these are 100 pages long. So imagine a customer service agent trying to quickly deal with an issue between 2 people from 2 different countries in a neighborhood that the agent may never even have heard of. What AI can do, and we’re running a pilot with GPT-4, is AI can read all of our policies. No human can ever quickly read all those policies. It can read the case history of both guests and hosts. It could summarize the case issue, and it could even recommend what the ruling should be based on our policies. And that can then write a macro that the customer service agent can basically adopt and amend. If we get all this right, it’s going to do 2 things. In the near term, it’s going to actually make customer service a lot more effective because agents will actually be able to handle a lot more tickets and maybe, for some tickets, you’ll never even have to talk to an agent, but also the service to be more reliable, which will unlock more growth.
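
To make the workflow Airbnb’s management describes a little more concrete, here’s a minimal sketch, in Python, of how an LLM could summarise a case against a policy and draft a recommended ruling. This is purely my illustration, not Airbnb’s actual system: it assumes the OpenAI Python SDK, and the policy text, case history, and prompt wording are hypothetical placeholders.

    # Illustrative sketch only: summarise a support case against a policy with GPT-4.
    # Assumes the OpenAI Python SDK ("pip install openai") and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    policy_text = "..."   # hypothetical placeholder: the relevant policy document
    case_history = "..."  # hypothetical placeholder: the guest/host case history

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a customer service assistant. Summarise the case, "
                        "cite the relevant policy, and recommend a ruling."},
            {"role": "user",
             "content": f"Policy:\n{policy_text}\n\nCase history:\n{case_history}"},
        ],
    )
    print(response.choices[0].message.content)  # a draft the human agent can adopt and amend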

Airbnb’s management believes that it can build a breakthrough multi-modal AI interface to learn more about Airbnb’s users and provide a lot of personalisation (a.k.a. an AI concierge)

If you were to go to ChatGPT right now and you ask it a question and I were to go to ChatGPT and ask it a question, we’re going to get mostly the same answer. And the reason why is it doesn’t know who you are and it doesn’t know who I am. So it does really good with like immutable truths, like how far is the earth to the moon or something like that. And — there’s no conditional answers to that. But it turns out in life, there’s a whole bunch of questions, and travel is one of these areas where the answer isn’t right for everyone. Where should I travel? Where should I stay? Who should I go with? What should I bring? Every one of these questions depends on who you are…

… And we can design, I think, a breakthrough interface for AI. I do not think that the AI interface is chat. Chat, I do not think is the right interface because we want an interface that’s multimodal. It’s text, it’s image and it’s video and you can — it’s much faster than typing to be able to see what you want. So we think there’s a whole new interface. And also, I think it’s really important that we provide a lot of personalization, that we learn more about you, that you’re not just an anonymous customer. And that’s partly why we’re investing more and more in account profiles, personalization, really understanding the guests. We want to know more about every guest in Airbnb than any travel company knows about their customer in the world. And if we do that, we can provide much more personalized service and that our app can almost be like an AI concierge that can match to the local experiences, local homes, local places all over the world.

Airbnb’s management is not interested in building foundational AI models – they are only keen on building the interface (a similar comment was also made in the company’s 2023 first-quarter earnings call)

And so we’re not going to be building like large research labs to develop these large language models. Those are like infrastructure projects, building bridges. But we’re going to build the applications on top of the bridges, like the car. And I think Airbnb is best-in-class at designing interfaces. I think you’ve seen that over the last few years.

Airbnb’s management believes that the companies that will best succeed in AI are the most product-led companies

And I think the last thing I’ll just say about AI is I think the companies that will best succeed in AI, well, think of it this way, which company’s best adopted in mobile? Which company is best adopted in the Internet? It was the companies that were most innovative, the most product-led. And I think we are very much a product-led, design-led, technology-led company, and we always want to be on the frontier of new tech. So we’re working on that, and I think you’ll see some exciting things in the years to come.

Alphabet (NASDAQ: GOOG)

Alphabet is making AI helpful for everyone in four important ways

 At I/O, we shared how we are making AI helpful for everyone in 4 important ways: first, improving knowledge and learning…

…Second, we are helping people use AI to boost their creativity and productivity…

…Third, we are making it easier for others to innovate using AI…

…Finally, we are making sure we develop and deploy AI technology responsibly so that everyone can benefit.

2023 is the seventh year of Alphabet being an AI-first company and it knows how to incorporate AI into its products

This is our seventh year as an AI-first company, and we intuitively know how to incorporate AI into our products.

Nearly 80% of Alphabet’s advertisers use at least one AI-powered Search ads product 

In fact, today, nearly 80% of advertisers already use at least one AI-powered Search ads product.

Alphabet is using AI to help advertisers create campaigns and ads more easily in Google Ads and also help advertisers better understand their campaigns

Advertisers tell us they’re looking for a more assistive experience to get set up with us faster. So at GML, we launched a new conversational experience in Google Ads powered by a LLM tuned specifically from ads data to make campaign construction easier than ever. Advertisers also tell us they want help creating high-quality ads that work in an instant. So we’re rolling out a revamped asset creation flow in Performance Max that helps customers adapt and scale their most successful creative concepts in a few clicks. And there’s even more with PMax. We launched new asset insights and new search term insights that improve campaign performance understanding and new customer life cycle goals that let advertisers optimize for new and existing customers while maximizing sales. We’ve long said it’s all about reaching the right customer with the right creative at the right time.

So later this year, Automatically Created Assets, which are already generating headlines and descriptions for search ads, will start using generative AI to create assets that are even more relevant to customer queries. Broad match also got updates. AI-based keyword prioritization ensures the right keyword, bid, budget, creative and landing page is chosen when there are multiple overlapping keywords eligible. And then to make it easier for advertisers to optimize visual storytelling and drive consideration in the mid funnel, we’re launching 2 new AI-powered ad solutions, Demand Gen and Video View campaigns, and both will include Shorts inventory. 

Alphabet’s management thinks the integration of LLMs (large language models) and generative AI make Alphabet’s core Search product even better

Large language models make them even more helpful: models like PaLM 2 and, soon, Gemini, which we are building to be multimodal. These advances provide an opportunity to reimagine many of our products, including our most important product, Search. We are in a period of incredible innovation for Search, which has continuously evolved over the years. This quarter saw our next major evolution with the launch of the Search Generative Experience, or SGE, which uses the power of generative AI to make Search even more natural and intuitive. User feedback has been very positive so far. It can better answer the queries people come to us with today, while also unlocking entirely new types of questions that Search can answer. For example, we found that generative AI can connect the dots for people as they explore a topic or project, helping them weigh multiple factors and personal preferences before making a purchase or booking a trip. We see this new experience as another jumping off point for exploring the web, enabling users to go deeper to learn about a topic.

Alphabet’s management thinks the company has done even better at integrating generative AI into Search than it expected to by this point in time

Look, on the Search Generative Experience, we definitely wanted to make sure we’re thinking deeply from first principles, while it’s exciting new technology, we’ve constantly been bringing in AI innovations into Search for the past few years, and this is the next step in that journey. But it is a big change so we thought about from first principles. It really gives us a chance to now not always be constrained in the way Search was working before, allowed us to think outside the box. And I see that play out in experience. So I would say we are ahead of where I thought we’d be at this point in time. The feedback has been very positive. We’ve just improved our efficiency pretty dramatically since the product launch. The latency has improved significantly. We are keeping a very high bar, and — but I would say we are ahead on all the metrics in terms of how we look at it internally.

Alphabet’s management believes that even with the introduction of generative AI (Search Generative Experience) in the company’s core Search product, advertising will still continue to play a critical role in the company’s business model and the monetisation of Search will not be harmed

Ads will continue to play an important role in this new search experience. Many of these new queries are inherently commercial in nature. We have more than 20 years of experience serving ads relevant to users’ commercial queries, and SGE enhances our ability to do this even better. We are testing and evolving placements and formats and giving advertisers tools to take advantage of generative AI…

…Users have commercial needs, and they are looking for choices, and there are merchants and advertisers looking to provide those choices. So those fundamentals are true in SGE as well. And we have a number of experiments in flight, including ads, and we are pleased with the early results we are seeing. And so we will continue to evolve the experience, but I’m comfortable at what we are seeing, and we have a lot of experience working through these transitions, and we’ll bring all those learnings here as well.

Alphabet’s management believes that Google Cloud is a leading platform for training and running inference of generative AI models with more than 70% of generative AI unicorns using Google Cloud

Our AI-optimized infrastructure is a leading platform for training and serving generative AI models. More than 70% of gen AI unicorns are Google Cloud customers, including Cohere, Jasper, Typeface and many more. 

Google Cloud uses both Nvidia chips as well as Google’s own TPUs (this combination helps customers get 2x better price performance than competitors)

We provide the widest choice of AI supercomputer options with Google TPUs and advanced NVIDIA GPUs, and recently launched new A3 AI supercomputers powered by NVIDIA’s H100. This enables customers like AppLovin to achieve nearly 2x better price performance than industry alternatives. 

Alphabet is seeing customers using Google Cloud’s AI capabilities for online travelling, retail marketing, anti-money laundering, drug discovery, and more

Among them, Priceline is improving trip planning capabilities. Carrefour is creating full marketing campaigns in a matter of minutes. And Capgemini is building hundreds of use cases to streamline time-consuming business processes. Our new Anti-Money Laundering AI helps banks like HSBC identify financial crime risk. And our new AI-powered target and lead identification suite is being applied at Cerevel to help enable drug discovery…

… I mentioned Duet AI earlier. Instacart is using it to improve customer service workflows. And companies like Xtend are scaling sales outreach and optimizing customer service.

Alphabet’s management thinks that open-source AI models will be important in the ecosystem, and Google Cloud will offer not just first-party AI models, but also third-party and open-source models

So similarly, you would see with AI, we will embrace — we will offer not just our first-party models, we’ll offer third-party models, including open source models. I think open source has a critical role to play in this ecosystem. Google contributes, we are one of the largest contributors to — if you look at Hugging Face and in terms of the contribution there, when you look at projects like Android, Chromium and so on, Kubernetes and so on. So we’ll embrace that and we’ll stay at the cutting edge of technology, and I think that will serve us well for the long term.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks generative AI is going to be transformative, but it’s still very early days in the adoption and success of generative AI, and consumer applications are only one part of the opportunity

It’s important to remember that we’re in the very early days of the adoption and success of generative AI, and that consumer applications is only one layer of the opportunity…

… I think it’s going to be transformative, and I think it’s going to transform virtually every customer experience that we know. But I think it’s really early. I think most companies are still figuring out how they want to approach it…

…What I would say is that we have had a very significant amount of business in AWS driven by machine learning and AI for several years. And you’ve seen that largely in the form of compute as customers have been doing a lot of machine learning training and then running their models in production on top of AWS and our compute instances. But you’ve also seen it in the form of the 20-plus machine learning services that we’ve had out there for a few years. I think when you’re talking about the big potential explosion in generative AI, which everybody is excited about, including us, I think we’re in the very early stages there. We’re a few steps into a marathon in my opinion.

Amazon’s management sees LLMs (large language models) in generative AI as having three key layers and Amazon is participating heavily in all three: The first layer is the compute layer; the second would be LLMs-as-a-service; and the third would be the applications that run on top of LLMs, with ChatGPT being an example

We think of large language models in generative AI as having 3 key layers, all of which are very large in our opinion and all of which AWS is investing heavily in. At the lowest layer is the compute required to train foundational models and do inference or make predictions…

…We think of the middle layer as being large language models as a service…

…Then that top layer is where a lot of the publicity and attention have focused, and these are the actual applications that run on top of these large language models. As I mentioned, ChatGPT is an example. 

Amazon has AI compute instances that are powered by Nvidia H100 GPUs, but the supply of Nvidia chips is scarce, so management built Amazon’s own training (Trainium) and inference (Inferentia) chips, which are an appealing, price-performant option

Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there’s only been one viable option in the market for everybody and supply has been scarce. That, along with the chip expertise we’ve built over the last several years, prompted us to start working several years ago on our own custom AI chips for training called Trainium and inference called Inferentia that are on their second versions already and are a very appealing price performance option for customers building and running large language models.
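
As a rough sketch of what building for these chips looks like, here’s how a PyTorch model might be compiled for Inferentia with the AWS Neuron SDK’s torch-neuronx package. The model and inputs below are toy placeholders of mine; treat this as an illustrative sketch rather than a definitive recipe.

    # Illustrative sketch only: compile a PyTorch model for AWS Inferentia with the
    # Neuron SDK (assumes an inf2 instance with the torch-neuronx package installed).
    import torch
    import torch_neuronx

    model = torch.nn.Sequential(       # toy placeholder model
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    ).eval()

    example_input = torch.rand(1, 128)

    # Trace/compile for the Neuron runtime, then save the artifact for serving.
    neuron_model = torch_neuronx.trace(model, example_input)
    torch.jit.save(neuron_model, "model_neuron.pt")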

Amazon’s management is optimistic that a lot of LLM training and inference will be running on Trainium and Inferentia in the future

We’re optimistic that a lot of large language model training and inference will be run on AWS’ Trainium and Inferentia chips in the future.

Amazon’s management believes that most companies that want to work with AI do not want to build foundational LLMs themselves as it is time consuming and expensive, and companies only want to customize the LLMs with their own data in a secure way (this view was also mentioned in Amazon’s 2023 first-quarter earnings call) 

Stepping back for a second, to develop these large language models, it takes billions of dollars and multiple years to develop. Most companies tell us that they don’t want to consume that resource building themselves. Rather, they want access to those large language models, want to customize them with their own data without leaking their proprietary data into the general model, have all the security, privacy and platform features in AWS work with this new enhanced model and then have it all wrapped in a managed service. 

AWS has an LLM-as-a-service called Bedrock that provides access to LLMs from Amazon and multiple startups; large companies are already using Bedrock to build generative AI applications; Bedrock allows customers to create conversational AI agents

This is what our service Bedrock does and offers customers all of these aforementioned capabilities with not just one large language model but with access to models from multiple leading large language model companies like Anthropic, Stability AI, AI21 Labs, Cohere and Amazon’s own developed large language models called Titan. Customers, including Bridgewater Associates, Coda, Lonely Planet, Omnicom, 3M, Ryanair, Showpad and Travelers are using Amazon Bedrock to create generative AI applications. And we just recently announced new capabilities from Bedrock, including new models from Cohere, Anthropic’s Claude 2 and Stability AI’s Stable Diffusion XL 1.0 as well as agents for Amazon Bedrock that allow customers to create conversational agents to deliver personalized up-to-date answers based on their proprietary data and to execute actions.
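
For a flavour of what “LLM-as-a-service” means in practice, here’s a minimal sketch of calling one of Bedrock’s models (Anthropic’s Claude 2) through boto3. The region, prompt, and parameters are placeholders of mine, and it assumes model access has already been granted in the AWS account.

    # Illustrative sketch only: invoke Anthropic's Claude 2 through Amazon Bedrock.
    # Assumes a recent boto3 and that model access is enabled in the AWS account.
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "prompt": "\n\nHuman: Suggest three taglines for a travel newsletter.\n\nAssistant:",
        "max_tokens_to_sample": 200,
    })

    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    print(json.loads(response["body"].read())["completion"])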

Amazon’s management believes that AWS is democratizing access to generative AI and is making it easier for companies to work with multiple LLMs

If you think about these first 2 layers I’ve talked about, what we’re doing is democratizing access to generative AI, lowering the cost of training and running models, enabling access to large language model of choice instead of there only being one option.

Amazon’s management sees coding companions as a compelling early example of a generative AI application and Amazon has CodeWhisperer, which is off to a very strong start

We believe one of the early compelling generative AI applications is a coding companion. It’s why we built Amazon CodeWhisperer, an AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code. It’s off to a very strong start and changes the game with respect to developer productivity.

Every team in Amazon is building generative AI applications, but management believes that most of these applications will be built by other companies, although these applications will be built on AWS

Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience. But while we will build a number of these applications ourselves, most will be built by other companies, and we’re optimistic that the largest number of these will be built on AWS… 

…Coupled with providing customers with unmatched choices at these 3 layers of the generative AI stack as well as Bedrock’s enterprise-grade security that’s required for enterprises to feel comfortable putting generative AI applications into production, we think AWS is poised to be customers’ long-term partner of choice in generative AI…

…On the AI question, what I would tell you, every single one of our businesses inside of Amazon, every single one has multiple generative AI initiatives going right now. And they range from things that help us be more cost effective and streamlined in how we run operations in various businesses to the absolute heart of every customer experience in which we offer. And so it’s true in our stores business. It’s true in our AWS business. It’s true in our advertising business. It’s true in all our devices, and you can just imagine what we’re working on with respect to Alexa there. It’s true in our entertainment businesses, every single one. It is going to be at the heart of what we do. It’s a significant investment and focus for us.

Amazon’s management believes that (1) data is the core of AI, and companies want to bring generative AI models to data, not the other way around and (2) AWS has a data advantage

Remember, the core of AI is data. People want to bring generative AI models to the data, not the other way around. AWS not only has the broadest array of storage, database, analytics and data management services for customers, it also has more customers and data stored than anybody else.

Amazon’s management is of the view that in the realm of generative AI as well as cloud computing in general, the more demand there is, the more capex Amazon needs to spend to invest in data centers for long-term monetisation; management wants the challenge of having more capex to spend on because that will mean that AWS customers are successful with building generative AI on top of AWS

And so it’s — like in AWS, in general, one of the interesting things in AWS, and this has been true from the very earliest days, which is the more demand that you have, the more capital you need to spend because you invest in data centers and hardware upfront and then you monetize that over a long period of time. So I would like to have the challenge of having to spend a lot more in capital in generative AI because it will mean that customers are having success and they’re having success on top of our services.

Apple (NASDAQ: AAPL)

Apple has been doing research on AI for years and has built these technologies as integral features of its products; management intends for Apple to continue investing in AI in the years ahead

If you take a step back, we view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build. And so if you think about WWDC in June, we announced some features that will be coming in iOS 17 this fall, like Personal Voice and Live Voicemail. Previously, we had announced lifesaving features like fall detection and crash detection and ECG. None of these features that I just mentioned and many, many more would be possible without AI and machine learning. And so it’s absolutely critical to us.

And of course, we’ve been doing research across a wide range of AI technologies, including generative AI for years. We’re going to continue investing and innovating and responsibly advancing our products with these technologies with the goal of enriching people’s lives. And so that’s what it’s all about for us.

ASML (NASDAQ: ASML)

ASML’s management believes that AI has strengthened the long-term megatrends powering the growth of the semiconductor industry

Beyond 2024, it’s really the solid belief we have in the megatrends that are not going to go away. You can even argue that some of these megatrends, when you think about AI, are even more important than we thought, let’s say at the end of last year. But it’s not only AI, it’s also the energy transition, it’s the electrification of mobility, it’s industrial Internet Of Things. It’s everything that’s driven by sensors and actuators. So, effectively, we see very strong growth across the entire semiconductor space. Whether it’s mature or whether it’s advanced. Because of these megatrends we have still a very strong confidence in what we said at the end of last year, that by 2025 – depending on what market scenario you are choosing, higher or lower – we will have between €30 billion and €40 billion of sales, and gross margin by that 2025 timeframe between 54% and 56%. And if you extend that then to 2030, we are still very confident that by that time, also dependent on a lower or higher market scenario, sales will be anywhere between €44 billion and €60 billion with gross margin between 56% and 60%. So, we have short-term cycles. This is what the industry is all about. But we have very strong confidence, even stronger confidence, in what the longer-term future is going to bring for this company.

ASML’s management thinks the world is at the beginning of an AI high-power compute wave, but AI will not be a huge driver of the company’s growth in 2024

But I think we’re at the beginning of this, you could say, AI high-power compute wave. So yes, you’ll probably see some of that in 2024. But you have to remember that we have some capacity there, which is called the current underutilization. So yes, we will see some of that, but that will be taken up, the particular demand, by the installed base. Now — and that will further accelerate. I’m pretty sure. But that will definitely mean that, that will be, you could say, the shift to customer by 2025. So I don’t see that or don’t particularly expect that, that will be a big driver for additional shipments in 2024, given the utilization situation that we see today.

Arista Networks (NYSE: ANET)

Arista Networks’ management is seeing AI workloads drive an upgrade from 400 gigabit networking ports to 800 gigabit ports

As we surpassed 75 million cumulative cloud networking ports, we are experiencing 3 refresh cycles with our customers, 100 gigabit migration in the enterprises, 200 and 400 gigabit migration in the cloud and 400 going to 800 gigabits for AI workloads…

…We had the same discussion when the world went to 400 gig. Are we switching from 100 to 400? The reality was the customers continue to buy both 100 and 400 for different use cases. [ 51T ] and 800 gig especially are being pulled by AI clusters, the AI teams, they’re very anxious to get their hands on it, move the data as quickly as possible and reduce their job completion times. So you’ll see early traction there.

At least one of Arista Networks’ major cloud computing customers is shifting capital expenditure from other cloud computing areas to AI-related areas

During the past couple of years, we have enjoyed significant increase in cloud CapEx to support our Cloud Titan customers for their ever-growing needs, tech refresh and expanded offerings. Each customer brings a different business and mix of AI networking and classic cloud networking for their compute and storage clusters. One specific Cloud Titan customer has signaled a slowdown in CapEx from previously elevated levels. Therefore, we expect near-term Cloud Titan demand to moderate with spend favoring their AI investments. 

Arista Networks is a founding member of a consortium that is promoting the use of Ethernet for networking needs in AI data centres

Arista is a proud founding member of the Ultra Ethernet Consortium that is on a mission to build open, multivendor AI networking at scale based on proven Ethernet and IP.

Arista Networks’ management thinks AI networking will be an extension of cloud networking in the future

In the decade ahead, AI networking will become an extension of cloud networking to form a cohesive and seamless front-end and back-end network.

Arista Networks’ management thinks that Ethernet – and not InfiniBand – is the right networking technology when it comes to the training of large language models (LLMs) because they involve a massive amount of data; but in the short run, management thinks InfiniBand will be more widely adopted

Today, I would say, in the back end of the network, there are basically 3 classes of networks. One is very, very small networks that are within a server where customers use PCIe, CXL, there is proprietary NVIDIA-specific technologies like NVLink that Arista does not participate. Then there’s more medium clusters, you can think generative AI, mostly inference where they may well get built on Ethernet. For the extremely large clusters with large language training models, especially with the advent of ChatGPT 3 and 4, you’re talking about not just billions of parameters, but an aggregate of trillions of parameters. And this is where Ethernet will shine. But today, the only technology that is available to customers is InfiniBand. So obviously, InfiniBand with 10, 15 years of similarity in an HPC environment is often being bundled with the GPU. But the right long-term technology is Ethernet, which is why I’m so proud of what the Ultra Ethernet Consortium and a number of vendors are doing to make that happen. So near term, there’s going to be a lot of InfiniBand and Arista will be watching that outside in…

…And what is their network foundation. In some cases, where they just need to go quick and fast, as I explained before, it would not be uncommon to just bundle their GPUs with an existing technology like InfiniBand. But where they’re really rolling out into 2025, they’re doing more trials and pilots with us to see what the performance is, to see what the drop is, to see how many they can connect, what’s the latency, what’s the better entropy, what’s the efficiency, et cetera. That’s where we are today.

Arista Networks’ management thinks that neither Ethernet nor Infiniband were purpose-built for AI

But longer term, Arista will be participating in an Ethernet [ AI ] network. And neither technology, I want to say, were perfectly designed for AI, InfiniBand was more focused on HPC and Ethernet was more focused on general purpose networking. So I think the work we are doing with the UEC to improve Ethernet for AI is very important.

Arista Networks’ management thinks that there’s a 10-year AI-driven growth opportunity for Ethernet networking technology

I think the way to look at our AI opportunity is it’s 10 years ahead of us. And we’ll have early customers in the cloud with very large data sets, trialing our Ethernet now. And then we will have more cloud customers, not only Titans, but other high-end Tier 2 cloud providers and enterprises with large data sets that would also trial us over time. So in 2025, we expect to have a large list of customers, of which Cloud Titans will still end up being some of the biggest but not the only ones.

Datadog (NASDAQ: DDOG)

Datadog has introduced functionalities related to generative AI and LLMs (large language models) on its platform that include (1) the ability for software teams to monitor the performance of their AI models, (2) an incident management copilot, and (3) new integrations across AI stacks including GPU infrastructure providers, vector databases, and more

To kick off our keynote, we launched our first innovations for generative AI and large language models. We showcased our LLM observability product, enabling ML engineers to safely deploy and manage models in production. This includes the model catalog, a centralized place to view and manage every model in every state of our customers’ development pipelines; analysis and insights on model performance, which allow ML engineers to identify and address performance and quality issues with the models themselves and help identify model drift, the performance degradation that happens over time as models interact with real-world data. We also introduced Bits AI. Bits understands natural language and provides insights from across the Datadog platform as well as from our customers’ collaboration and documentation tools. Among its many features, Bits AI can act as an incident management copilot, identifying and suggesting fixes, generating synthetic tests and triggering workflows to automatically remediate critical issues. And we announced 15 new integrations across the next-generation AI stack, from GPU infrastructure providers to vector databases, model vendors and orchestration frameworks.
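
Datadog’s LLM observability product is new, but the underlying idea of tracking model health as time-series data can be sketched with Datadog’s long-standing custom-metrics API. The metric names and tags below are hypothetical stand-ins of mine, not the schema of the new product.

    # Illustrative sketch only: report hypothetical LLM health signals as Datadog
    # custom metrics. Assumes the "datadog" package and DATADOG_API_KEY/DATADOG_APP_KEY.
    import time
    from datadog import initialize, api

    initialize()  # reads the API/app keys from the environment

    now = int(time.time())
    api.Metric.send(metric="llm.response_latency_ms",  # hypothetical metric name
                    points=[(now, 412)],
                    tags=["model:my-llm-v3", "env:prod"])
    api.Metric.send(metric="llm.drift_score",          # hypothetical drift measure
                    points=[(now, 0.07)],
                    tags=["model:my-llm-v3", "env:prod"])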

Management is seeing Datadog get early traction with AI customers

And although it’s early days for everyone in this space, we are getting traction with AI customers. And in Q2, our next-gen AI customers contributed about 2% of ARR.

Datadog’s AI customers are those that are selling LLM services or companies that are built on differentiated AI technology

So it’s — you can see it as the customers that are either selling AI themselves, so that would be LLM vendors and the like, or customers whose whole business is built on differentiated AI technology. And we’ve been fairly selective in terms of who we put in that category, because companies everywhere are very eager to say that they are differentiated by AI today.

Datadog expanded a deal with one of the world’s largest tech companies that is seeing massive adoption of its new generative AI product and was using homegrown tools for tracking and observability, but those were slowing it down

Next, we signed a 7-figure expansion with 1 of the world’s largest tech companies. This customer is seeing massive adoption of its new generative AI product and needs to scale their GPU fleet to meet increasing demand for the workload. Their homegrown tools were slowing them down and putting critical product launches at risk. With Datadog, this team is able to programmatically manage new environments as they come online, track and alert on their service level objectives and provide real-time visibility for GPUs.

Etsy (NASDAQ: ETSY)

Etsy is using machine learning (ML) models to better predict how humans would perceive the quality of a product

Our product teams are helping buyers more easily navigate the breadth and depth of our sellers’ inventory, leveraging the latest AI advances to improve our discovery and inspiration experiences while surfacing the very best of Etsy. These latest technologies, combined with training and guidance from our own talented team, is making the superhuman possible in terms of organizing and curating at scale, which I believe can unlock an enormous amount of growth in the years to come. One great example. Over the past quarter, we’ve more than doubled the size of our best of Etsy library, which is curated by expert merchandisers based on the visual appeal, uniqueness and apparent craftsmanship of an item. We’re now using this library to train our ML models to better predict the quality of items as perceived by humans. We’re seeing encouraging results from our first iterations on these models, and I’m optimistic that this work will have a material impact, helping us to surface the best of Etsy in every search.
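
As a toy illustration of the approach Etsy describes, here’s a sketch that trains a classifier on human-curated labels over listing-embedding features using scikit-learn. The random data stands in for real listing features; this is my simplification, not Etsy’s model.

    # Illustrative sketch only: learn human-perceived "quality" from curated labels.
    # Real features would come from listing images and text; random data stands in.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 64))        # stand-in listing embeddings
    y = rng.integers(0, 2, size=5000)      # 1 = curated into the "best of" library

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Probability of "best of" membership can serve as a quality signal in ranking.
    quality_scores = model.predict_proba(X_test)[:, 1]
    print(quality_scores[:5])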

Etsy’s use of ML has helped it to dramatically reduce the time it takes to resolve customer issues

Specific to our trust and safety work, advances in ML capabilities have enabled our enforcement models to detect an increasing number of policy violations, which, combined with human know-how, is starting to have a meaningful impact on the buyer and seller experience. Since Etsy Purchase Protection was launched about a year ago, we’ve reduced the issue resolution time for cases by approximately 85%, dramatically streamlining the service experience on the rare occasion that something goes wrong, demonstrating to buyers and sellers that we have their backs in these key moments. 

Etsy’s management wants to use AI to improve every touch point a customer has with Etsy

Of course, much of the focus was on the myriad ways we can continue to harness AI and ML technologies in almost every customer touch point, with the potential to further transform buyer-facing experiences like enhancing search and recommendations, seller tools like streamlining the listing process and assisting with answering customer queries, improving fraud detection and trust and safety models, et cetera. The opportunities are nearly endless.

Etsy has a small ML team and the company is streamlining its machine learning (ML) workflow so that it’s easy for any Etsy software engineer to deploy their own ML models without asking for help from the ML team

But all of this innovation also takes time and effort and relies on our relatively small but mighty team of ML experts, talent that is obviously in high demand. Historically, all new ML models have been created by this team of highly specialized data scientists. And the full process of creating a new model, from cleaning and organizing the data to training and testing the model, then putting it into production, could take as long as 4 months. That’s why we kicked off a major initiative over a year ago we call Democratizing ML with the goal to streamline and automate much of this work so that virtually any Etsy engineer can deploy their own ML models in a matter of days instead of months. And I’m thrilled to report that we’re starting to see the first prototypes from this effort come live now. For example, if you’re on the Etsy team working on buyer recommendations, you can now use a drag-and-drop modeling tool to create a brand-new recommendations module without needing our ML team to build that model for you. 

Etsy’s management is currently testing ML technology developed by other companies to drive its own efficiency

We’ve also been leveraging the investments that other companies, many of them existing partners, have already made in machine learning. And so we’re doing a lot of beta testing and experimentation with other companies. And at the moment, that is coming at a very low cost to us. We would imagine that at some point, there will be some kind of license fee arrangement. But we are — typically, we do not invest in anything unless we see a high ROI for that investment.

Etsy’s management believes that generative AI can be good for the company’s search experience for consumers, but consumer-adoption and the AI-integration will take time

For buyers, the idea that the search experience can become more conversational, I think, can be a very big deal for Etsy, and maybe more for Etsy than for most people. I talked, 2 earnings calls ago now, about how you don’t walk into a store and shout, “Dress blue, linen,” to a sales agent. You actually have a conversation with them that has more context. And I think that’s especially important in a place like Etsy, where we’ve got 115 million listings to choose from and no catalog. So the idea that it can be conversational, I think, can give a lot of context and really help. And I think a lot of the technology behind that is becoming a solved problem. What’s going to be longer is the consumer adoption curve. What do customers expect when they enter something into a search bar? And how do they get used to interacting with chatbots? And what’s the UI look like? And that’s something that I think we’re going to need to — we’re testing a lot right now. What do people expect? How do they like to interact with things? And in my experience now, having a few decades of consumer technology leadership, the consumer adoption curve is often the long pole in the tent, but I think over time, can yield really big gains for us.

Fiverr (NYSE: FVRR)

Fiverr Neo is a new matching service from Fiverr that provides better matching for search queries using data, AI, and conversational search

Essentially, what we’ve done with Fiverr Neo is to tackle head-on the largest challenge every marketplace has, which is matching. Now being able to produce a great match is far more than just doing search. And search by definition is very limited because customers provide 3 or 4 words. And based on that, you need to understand their intent, their need and everything surrounding that need. And providing good matching for us is really about not just pairing a business with a professional or with an agency, but actually being able to produce a product and end result where the 2 parties to that transaction are very happy that they worked together. To do this perfect match, you need a lot of information because that allows you to create a very, very precise type of match. And what we’ve developed with Fiverr Neo using the latest technologies alongside our deep data tech that we’ve developed over the years and the tens of millions of transactions that we’ve already processed and the learnings from that is a product that can have a human-like discussion where our technology deeply understands and can have a conversation that would guide the customer to define their exact needs.
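
One plausible building block for this kind of conversational matching is to have an LLM turn a free-text brief into a structured requirements object that a matching engine can consume. Here’s a minimal sketch assuming the OpenAI Python SDK; the fields and prompt are hypothetical, and this is not how Fiverr Neo actually works under the hood.

    # Illustrative sketch only: turn a free-text buyer brief into structured
    # requirements. Assumes the OpenAI Python SDK; the JSON fields are hypothetical.
    import json
    from openai import OpenAI

    client = OpenAI()
    brief = "I need a logo for my coffee shop, something warm, within two weeks."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Reply with only a JSON object with keys: "
                        "service, style, deadline_days, budget."},
            {"role": "user", "content": brief},
        ],
    )
    requirements = json.loads(response.choices[0].message.content)
    print(requirements)  # e.g. feed this into a matching engine over seller profiles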

Fiverr’s management is seeing high interest for AI services on the company’s marketplace

So on the AI services, pretty much the same as last quarter, meaning we’ve launched tens of categories around AI. The interest is very high. It’s very healthy. And we continue to invest in it. So basically introducing more and more categories that have to do with AI in general and Gen AI in particular. And our customers love it. They use it and we’re happy with what we’re seeing on that front.

Mastercard (NYSE: MA)

Mastercard’s management sees AI as a foundational technology for the company and the technology has been very useful for the company’s fraud-detection solutions, where Mastercard has helped at least 9 UK banks stop payment scams before funds leave a victim’s account

We recently launched our Consumer Fraud Risk solution, which leverages our latest AI capabilities and the unique network view of real-time payments I just mentioned to help banks predict and prevent payment scams. AI is a foundational technology used across our business and has been a game changer in helping identify such fraud patterns. We’ve partnered with 9 U.K. banks, including Barclays, Lloyds Bank, Halifax, Bank of Scotland, NatWest, Monzo and TSB to stop scam payments before funds leave a victim’s account. TSB, one of the first banks to adopt the solution, indicated that it has already dramatically increased its fraud detection since deploying the capability.
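
Mastercard hasn’t disclosed how its models work, but as a generic illustration of scoring a real-time payment for anomaly before funds move, here’s a sketch using scikit-learn’s IsolationForest on made-up transaction features. This is my own toy example, not Mastercard’s system.

    # Illustrative sketch only: flag an anomalous real-time payment before settlement.
    # The features (amount, hour of day, payee age in days) are made up for the example.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    history = rng.normal(loc=[50, 14, 400], scale=[30, 5, 200], size=(10000, 3))

    detector = IsolationForest(contamination=0.01, random_state=1).fit(history)

    incoming = np.array([[4900, 3, 2]])      # large amount, 3 a.m., brand-new payee
    if detector.predict(incoming)[0] == -1:  # -1 marks the point as anomalous
        print("Hold payment for review")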

Meta Platforms (NASDAQ: META)

Meta’s management currently does not have a clear handle on how much AI-related capital expenditure is needed – it will depend on how fast Meta’s AI products grow

The other major budget point that we’re working through is what the right level of AI CapEx is to support our road map. Since we don’t know how quickly our new AI products will grow, we may not have a clear handle on this until later in the year…

…There’s also another component, which is the next-generation AI efforts that we’ve talked about around advanced research and gen AI, and that’s a place where we’re already standing up training clusters and inference capacity. But we don’t know exactly what we’ll need in 2024 since we don’t have any at-scale deployments yet of consumer business-facing features. And the scale of the adoption of those products is ultimately going to inform how much capacity we need.

Meta’s management is seeing the company’s investments in AI infrastructure paying off in the following ways: (1) Increase in engagement and monetisation of Reels; and (2) an increase in monetisation of automated advertising products

Investments that we’ve made over the years in AI, including the billions of dollars we’ve spent on AI infrastructure, are clearly paying off across our ranking and recommendation systems and improving engagement and monetization. AI-recommended content from accounts you don’t follow is now the fastest-growing category of content on Facebook’s Feed. Now since introducing these recommendations, they’ve driven a 7% increase in overall time spent on the platform. This improves the experience because you can now discover things that you might not have otherwise followed or come across.

Reels is a key part of this discovery engine. And Reels plays exceed 200 billion per day across Facebook and Instagram. We’re seeing good progress on Reels monetization as well, with the annual revenue run rate across our apps now exceeding $10 billion, up from $3 billion last fall.

Beyond Reels, AI is driving results across our monetization tools through our automated ads products, which we call Meta Advantage. Almost all our advertisers are using at least one of our AI-driven products. We’ve also deployed Meta Lattice, a new model architecture that learns to predict an ad’s performance across a variety of data sets and optimization goals. And we introduced AI Sandbox, a testing playground for generative AI-powered tools like automatic text variation, background generation and image outcropping.

Meta’s management believes the company is building leading foundational AI models, including Llama2, which is open-sourced; worth noting that Llama2 comes with a clause that large enterprises that sell Llama2 need to have a commercial agreement with Meta

Beyond the recommendations and ranking systems across our products, we’re also building leading foundation models to support a new generation of AI products. We’ve partnered with Microsoft to open source Llama 2, the latest version of our large language model and to make it available for both research and commercial use…

…in addition to making this open through the open source license, we did include a term that for the largest companies, specifically ones that are going to have public cloud offerings, that they don’t just get a free license to use this. They’ll need to come and make a business arrangement with us. And our intent there is we want everyone to be using this. We want this to be open. But if you’re someone like Microsoft or Amazon or Google, and you’re going to basically be reselling these services, that’s something that we think we should get some portion of the revenue for. So those are the deals that we intend to be making, and we’ve started doing that a little bit. I don’t think that, that’s going to be a large amount of revenue in the near term. But over the long term, hopefully, that can be something.
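
Because Llama 2 is available for both research and commercial use, a common way to try it is through Hugging Face’s transformers library after accepting Meta’s licence on the model’s gated repo. Here’s a minimal sketch with the 7B chat variant; it assumes the accelerate package and a sufficiently large GPU.

    # Sketch: run Llama-2-7b-chat via transformers. Assumes licence acceptance on the
    # gated Hugging Face repo, plus the accelerate package and enough GPU memory.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer("What is open-source software?", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))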

Meta’s management believes that open-sourcing allows Meta to benefit from (a) innovations that come from everywhere, in areas such as safety and efficiency, and (b) being able to attract potential employees

We have a long history of open sourcing our infrastructure and AI work from PyTorch, which is the leading machine learning framework, to models like Segment Anything, ImageBind and DINO to basic infrastructure as part of the Open Compute Project. And we found that open-sourcing our work allows the industry, including us, to benefit from innovations that come from everywhere. And these are often improvements in safety and security, since open source software is more scrutinized and more people can find and identify fixes for issues. The improvements also often come in the form of efficiency gains, which should hopefully allow us and others to run these models with less infrastructure investment going forward…

…One of the things that we’ve seen is that when you release these projects publicly and in open source, there tend to be a few categories of innovations that the community makes. So on the one hand, I think it’s just good to get the community standardized on the work that we’re doing. That helps with recruiting because a lot of the best people want to come and work at the place that is building the things that everyone else uses. It makes sense that people are used to these tools from wherever else they’re working. They can come here and build here.

Meta is building new products itself using Llama and Llama2 will underpin a lot of new Meta products

So I’m really looking forward to seeing the improvements that the community makes to Llama 2. We are also building a number of new products ourselves using Llama that will work across our services…

…We wanted to get the Llama 2 model out now. That’s going to be — that’s going to underpin a lot of the new things that we’re building. And now we’re nailing down a bunch of these additional products, and this is going to be stuff that we’re working on for years.

Meta partnered with Microsoft to open-source Llama2 because Meta does not have a public cloud offering

We partnered with Microsoft specifically because we don’t have a public cloud offering. So this isn’t about us getting into that. It’s actually the opposite. We want to work with them because they have that and others have that, and that was the thing that we aren’t planning on building out.

Meta’s management thinks that AI can be integrated into Meta’s products in the following ways: Help people connect, express themselves, create content, and get digital assistance in a better way (see also Point 30)

But you can imagine lots of ways that AI can help people connect and express themselves in our apps, creative tools that make it easier and more fun to share content, agents that act as assistants, coaches that can help you interact with businesses and creators and more. And these new products will improve everything that we do across both mobile apps and the metaverse, helping people create worlds and the avatars and objects that inhabit them as well.

Meta’s management expects the company to spend more on AI infrastructure in 2024 compared to 2023

We’re still working on our ’24 CapEx plans. We haven’t yet finalized that, and we’ll be working on that through the course of this year. But I mentioned that we expect that CapEx in ’24 will be higher than in ’23. We expect both data center spend to grow in ’24 as we ramp up construction on sites with the new data center architecture that we announced late last year. And then we certainly also expect to invest more in servers in 2024 for both AI workloads to support all of the AI work that we’ve talked about across the core AI ranking, recommendation work, along with the next-gen AI efforts. And then, of course, also our non-AI workloads, as we refresh some of our servers and add capacity just to support continued growth across the site.

There are three categories of products that Meta’s management plans to build with generative AI: (1) Building ads, (2) improving developer efficiency, and (3) building AI agents, especially for businesses so that businesses can interact with humans effectively (right now, human-to-business interaction is still very labour intensive)

I think that there are 3 basic categories of products or technologies that we’re planning on building with generative AI. One are around different kinds of agents, which I’ll talk about in a second. Two are just kind of generative AI-powered features.

So some of the canonical examples of that are things like in advertising, helping advertisers basically run ads without needing to supply as much creative or, say, if they have an image but it doesn’t fit the format, be able to fill in the image for them. So I talked about that a little bit upfront in my comments. But there’s stuff like that across every app. And then the third category of things, I’d say, are broadly focused on productivity and efficiency internally. So everything from helping engineers write code faster to helping people internally understand the overall knowledge base at the company and things like that. So there’s a lot to do on each of those zones.

For AI agents, specifically, I guess what I’d say is, and one of the things that’s different about how we think about this compared to some others in the industry is we don’t think that there’s going to be one single AI that people interact with, just because there are all these different entities on a day-to-day basis that people come across, whether they’re different creators or different businesses or different apps or things that you use. So I think that there are going to be a handful of things that are just sort of focused on helping people connect around expression and creativity and facilitating connections. I think there are going to be a handful of experiences around helping people connect to the creators who they care about and helping creators foster their communities.

And then the one that I think is going to have the fastest direct business loop is going to be around helping people interact with businesses. And you can imagine a world on this, where, over time, every business has an AI agent that basically people can message and interact with. And it’s going to take some time to get there, right? I mean, this is going to be a long road to build that out. But I think that, that’s going to improve a lot of the interactions that people have with businesses, as well as if that does work, it should alleviate one of the biggest issues that we’re currently having around messaging monetization is that in order to — for a person to interact with a business. It’s quite human labor-intensive for a person to be on the other side of that interaction, which is one of the reasons why we’ve seen this take off in some countries where the cost of labor is relatively low. But you can imagine in a world where every business has an AI agent, that we can see the kind of success that we’re seeing in Thailand or Vietnam with business messaging could kind of spread everywhere. And I think that’s quite exciting.

Meta’s management believes that there will be both open and closed AI models in the ecosystem

I do think that there will continue to be both open and closed AI models. I think there are a bunch of reasons for this. There are obviously a lot of companies that their business model is to build a model and then sell access to it. So for them, making it open would undermine their business model. That is not our business model. We want to have the — like we view the model that we’re building as sort of the foundation for building products. So if by sharing it, we can improve the quality of the model and improve the quality of the team that we have that is working on that, that’s a win for our business of basically building better products. So I think you’ll see both of those models…

…But for our business model, at least, since we’re not selling access to this stuff, it’s a lot easier for us to share this with the community because it just makes our products better and other people’s…

…And it’s not just going to be like 1 thing is what everyone uses. I think different businesses will use different things for different reasons.

Meta’s management is aware that AI models could be dangerous if they become too powerful, but does not think the models are anywhere close to this point yet; he also thinks there are people who are genuinely concerned about AI safety, and AI companies who are trying to be opportunistic

There are a number of people who are out there saying that once the AI models get past a certain level of capability, it can become dangerous for them to be just in the hands of everyone openly. What I think is pretty clear is that we’re not at that point today. I think that there’s consensus generally among people who are working on this in the industry and policy folks that we’re not at that point today. And it’s not exactly clear at what point you reach that. So I think there are people who are kind of making that argument in good faith, who are actually concerned about the safety risk. But I think that there are probably some businesses that are out there making that argument because they want it to be more closed, because that’s their business, so I think we need to be wary of that.

Microsoft (NASDAQ: MSFT)

11,000 organisations are already using Azure OpenAI services, with nearly 100 new customers added each day during the quarter

We have great momentum across Azure OpenAI Service. More than 11,000 organizations across industries, including IKEA, Volvo Group, Zurich Insurance, as well as digital natives like FlipKart, Humane, Kahoot, Miro, Typeface, use the service. That’s nearly 100 new customers added every day this quarter…

…We’re also partnering broadly to scale this next generation of AI to more customers. Snowflake, for example, will increase its Azure spend as it builds new integrations with Azure OpenAI.

Microsoft’s management believes that every AI app has to start with data

Every AI app starts with data, and having a comprehensive data and analytics platform is more important than ever. Our intelligent data platform brings together operational databases, analytics and governance so organizations can spend more time creating value and less time integrating their data estate. 

Microsoft’s management believes that software developers see Azure AI Studio as the tool of choice for AI software development

Now on to developers. New Azure AI Studio is becoming the tool of choice for AI development in this new era, helping organizations ground, fine-tune, evaluate and deploy models, and do so responsibly. VS Code and GitHub Copilot are category-leading products when it comes to how developers code every day. Nearly 90% of GitHub Copilot sign-ups are self-service, indicating strong organic interest and pull-through. More than 27,000 organizations, up 2x quarter-over-quarter, have chosen GitHub Copilot for Business to increase the productivity of their developers, including Airbnb, Dell and Scandinavian Airlines.

Microsoft is using AI for low-code, no-code software development tools to help domain experts automate workflows, create apps etc

We’re also applying AI across our low-code, no-code tool chain to help domain experts automate workflows, create apps and web pages, build virtual agents, or analyze data using just natural language. Copilot in Power BI combines the power of large language models with an organization’s data to generate insights faster, and Copilot in Power Pages makes it easier to create secure low-code business websites. One of our tools that’s really taken off is Copilot in Power Virtual Agents, which is delivering one of the biggest benefits of this new era of AI, helping customer service agents be significantly more productive. HP and Virgin Money, for example, have both built custom chatbots with Copilot in Power Virtual Agents that were trained to answer complex customer inquiries. All-up, more than 63,000 organizations have used AI-powered capabilities in Power Platform, up 75% quarter-over-quarter.

The feedback Microsoft’s management has received for Microsoft 365 Copilot is that it is a gamechanger for productivity

4 months ago, we introduced a new pillar of customer value with Microsoft 365 Copilot. We are now rolling out Microsoft 365 Copilot to 600 paid customers through our early access program, and feedback from organizations like Emirates NBD, General Motors, Goodyear and Lumen is that it’s a game changer for employee productivity.

Microsoft’s management believes that revenue growth from the company’s AI services will be gradual

At a total company level, revenue growth from our Commercial business will continue to be driven by the Microsoft Cloud and will again outpace the growth from our Consumer business. Even with strong demand and a leadership position, growth from our AI services will be gradual as Azure AI scales and our copilots reach general availability dates. So for FY ’24, the impact will be weighted towards H2.

Microsoft’s management believes that AI will accelerate the growth of overall technology spending

We do think about what’s the long-term TAM here, right? I mean this is — you’ve heard me talk about this as a percentage of GDP, what’s going to be tech spend? If you believe that, let’s say, the 5% of GDP is going to go to 10% of GDP, maybe that gets accelerated because of the AI wave…

…And of course, I think one of the things that people often overlook is, and Satya mentioned it briefly, the pull on Azure: in many ways, lots of these AI products pull along Azure because it’s not just the AI solution services that you need to build an app. And so it’s less about Microsoft 365 pulling it along or any one Copilot. It’s that when you’re building these, it requires data and it requires the AI services. So you’ll see them pull both core Azure and AI Azure along with them.

Microsoft’s management believes that companies need their own data in the cloud in order to utilise AI efficiently

Yes, absolutely. I think having your data, in particular, in the cloud is sort of key to how you can take advantage of essentially these new AI reasoning engines to complement, I’ll call it, your databases because these AI engines are not databases, but they can reason over your data and to help you then get more insights, more completions, more predictions, more summaries, and what have you.

Nvidia (NASDAQ: NVDA)

Nvidia is enjoying incredible demand for its AI chips

Data Center Compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI.

Nvidia is seeing tremendous demand for accelerated computing

There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking has been built up over the past decade.

Nvidia is seeing strong demand for AI from consumer internet companies as well as enterprises

Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. 

Nvidia’s management believes that virtually every industry can benefit from AI

Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred-billion-dollar market opportunities for our customers.

Nvidia’s management is seeing some of the earliest applications of generative AI in companies in marketing, media, and entertainment 

We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world’s largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts using responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, powered by NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.

Nvidia’s management believes that Infiniband is a much better networking solution for AI compared to Ethernet

Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of [indiscernible] for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners…

…We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant; some 10%, 15%, 20% higher throughput for a $1 billion infrastructure translates to enormous savings. Basically, the networking is free. And so if you have a single application, if you will, infrastructure where it’s largely dedicated to large language models or large AI systems, InfiniBand is really, really a terrific choice.
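
To see why Huang can claim the networking is effectively free, here is the back-of-the-envelope arithmetic. A minimal sketch, assuming the 20% throughput gain from the quote and a networking share of cluster cost (roughly 10%) that is my assumption, not Nvidia’s figure:

```python
# Illustrative arithmetic behind "the networking is free" (assumed numbers).
infrastructure_cost = 1_000_000_000   # a $1 billion AI cluster
throughput_gain = 0.20                # InfiniBand vs. Ethernet, per the quote

# Extra effective compute unlocked by the faster network
extra_compute_value = infrastructure_cost * throughput_gain   # $200M

# Hypothetical networking share of total cluster cost (my assumption)
assumed_network_cost = infrastructure_cost * 0.10             # $100M

print(f"gain ~${extra_compute_value/1e6:.0f}M vs network cost ~${assumed_network_cost/1e6:.0f}M")
```

On those assumed numbers, the throughput gained is worth more than the network itself costs, which is the sense in which the networking pays for itself.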

Nvidia’s management thinks that general purpose computing is too costly and slow, and that the world will shift to accelerated computing, driven by the demand for generative AI; this shift from general purpose computing to accelerated computing contains massive economic opportunity

It has been recognized for some time now that brute-forcing general purpose computing at scale is no longer the best way to go forward. It’s too energy costly, it’s too expensive, and the performance of the applications is too slow.

And finally, the world has a new way of doing it. It’s called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing can be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money, an order of magnitude in cost and an order of magnitude in energy, and the throughput is higher. And that’s what the industry is really responding to.

Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It’s ready to be deployed. And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers…

…The world has something along the lines of about $1 trillion worth of data centers installed in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We’re seeing 2 simultaneous platform shifts at the same time. One is accelerated computing. And the reason for that is because it’s the most cost-effective, most energy-effective and the most performant way of doing computing now. And then, enabled by accelerated computing, generative AI came along. And this incredible application now gives everyone 2 reasons to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It’s about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year. You’re seeing the data centers around the world taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. This is a long-term industry transition, and we’re seeing these 2 platform shifts happening at the same time.

PayPal (NASDAQ: PYPL)

PayPal’s management believes the use of AI will allow the company to operate faster at lower cost

Our initial experiences with AI and continuing advances in our processes, infrastructure and product quality enable us to see a future where we do things better, faster and cheaper.

PayPal’s management believes that the use of AI has accelerated the company’s product innovation and improved developers’ productivity

As we discussed in our June investor meeting, we are meaningfully accelerating new product innovations into the market, scaling our A/B testing and significantly improving our time to market. We are now consistently delivering against our road map on schedule. This is the result of significant investments in our platform infrastructure and tools and enhanced set of measurements and performance indicators, hiring new talent and early successes using AI in our software development process…

There’s no question that AI is going to impact every single company and every function just as it will inside of PayPal. And we’ve been experimenting with a couple of hundred of our developers using tools from both Google, Microsoft as well as Amazon. And we are seeing 20% to 40% increases in engineering productivity. 

PayPal’s management believes that companies with unique, large data sets will have an advantage when using AI technologies; management sees PayPal as one of these companies

We believe that only those companies with unique and scaled data sets will be able to fully utilize the power of AI to drive actionable insights and differentiated value propositions for their customers…

…We capture 100% of the data flows, which really is feeding our AI engines. It’s fueling what will be our next-generation checkout. And most importantly, it’s fueling kind of our ability to have best-in-class auth rates in the industry and the lowest loss rates in the industry. 

Shopify (NASDAQ: SHOP)

Shopify’s management believes that entrepreneurship is entering an era where AI will become the most powerful sidekick for business creation

We are quickly positioning ourselves to build on the momentum we are seeing across our business, making purposeful changes that support our core focus on commerce and unlock what we believe is a new era of data-driven entrepreneurship and growth, an era where AI becomes the most powerful sidekick for business creation.

Shopify recently introduced Shopify Magic, a suite of AI features that is integrated across Shopify’s products and workflows, and will soon launch Sidekick, an AI-powered chat interface commerce assistant; Shopify Magic is designed specifically for commerce, unlike other generative AI products

We recognize the immense potential of AI to transform the consumer landscape and commerce more broadly. And we are committed to harnessing its power to help our merchants succeed. We believe AI is making the impossible possible, giving everyone superpowers to be more productive, more creative, and more successful than ever before. So, of course, we are building that directly into Shopify. In our Editions last week, we unveiled Shopify Magic, our suite of free AI-enabled features that are integrated across Shopify’s products and workflows, everything from Inbox to online store builder and app store to merchandising, to unlock creativity and increase productivity.

One of the most exciting products we will be launching soon in early access is our new AI-enabled commerce assistant, Sidekick. Powered by Shopify Magic, Sidekick is a new chat interface packed with advanced AI capabilities purposely built for commerce. Merchants will now have a commerce expert in their corner who is deeply competent, incredibly intelligent, and always available. With Sidekick, no matter your expertise or skillset, it allows entrepreneurs to use everyday language to have conversations that jump-start the creative process, tackle time-consuming tasks, and make smarter business decisions. By harnessing a deep understanding of systems and available data, Sidekick integrates seamlessly with the Shopify admin, enhancing and streamlining merchant operations. While we’re at the very early stages, the power of Sidekick is already incredible, and it’s developing fast…

…I mean, unlike other generative AI products, Shopify Magic is specifically designed for commerce. And it’s not just embedded in one place, it’s embedded throughout the entire product. So, for example, the ability to generate blog posts instantaneously or write incredible, high-converting product descriptions or create highly contextualized content for your business. That is where we feel like AI really can play a big role here in making merchants’ lives better…

…And with Sidekick, you can do these incredible things: you can analyze sales, you can ideate on store design, or you can even give instructions on how to run promotions.

Shopify’s management does not seem keen to raise the pricing of its services to account for the added value from new AI features such as Magic and Sidekick 

So, certainly there are opportunities for us to continuously review our pricing and figure out where the right pricing is. And we will continue to do that. But in terms of, you know, features like Magic and Sidekick, which we’re really excited about, remember, when our merchants do better, Shopify does better. That’s the business model. And so, the more that they can sell, the faster they can grow, the more we can share in that upside. But the other part that we talked about in the prepared remarks that’s just worthwhile mentioning again is that product attach rate. The fact that we’re still above 3%, which is really high, means that as we introduce new products, new merchant solutions, whether it’s payment solutions, shipping, things like Audiences, Collabs, or Collective, more of our merchants are taking more of our solutions.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

Management sees AI as a positive for TSMC and the company did see an increase in AI-related demand, but it was not enough to offset declines elsewhere

Moving into third quarter 2023, while we have recently observed an increase in AI-related demand, it is not enough to offset the overall cyclicality of our business…

… The recent increase in AI-related demand is directionally positive for TSMC. Generative AI requires higher computing power and interconnected bandwidth, which drives increasing semiconductor content. Whether using CPUs, GPUs or AI accelerator and related ASIC for AI and machine learning, the commonality is that it requires use of leading-edge technology and a strong foundry design ecosystem. These are all TSMC’s strengths…

…Of course, we have a model, basically. The short-term frenzy about AI demand definitely cannot be extrapolated for the long term. And neither can we predict the near future, meaning next year, whether the sudden demand will continue or will flatten out. However, our model is based on the data center structure. We assume a certain percentage of data center processors are AI processors, and based on that, we calculate the AI processor demand. And this model is yet to be fitted to the practical data later on. But in general, I think the trend of a big portion of data center processors being AI processors is a sure thing. And will it cannibalize the data center processors? In the short term, when the CapEx of the cloud service providers is fixed, yes, it will. It is. But in the long term, when the cloud service providers are generating revenue from generative AI services, I think they will increase their CapEx. That should be consistent with the long-term AI processor demand. And I mean the CapEx will increase because of the generative AI services…

…But again, let me emphasize that those kinds of AI applications, be it CPUs, GPUs, AI accelerators or ASICs, all need leading-edge technologies. And they all have one thing in common: they use very large die sizes, which is TSMC’s strength.

AI server processors currently account for just 6% of TSMC’s total revenue but are expected to grow at close to 50% annually over the next 5 years, reaching a low-teens percentage of TSMC’s total revenue

Today, server AI processor demand, which we define as CPUs, GPUs and AI accelerators that are performing training and inference functions accounts for approximately 6% of TSMC’s total revenue. We forecasted this to grow at close to 50% CAGR in the next 5 years and increase to low teens percent of our revenue.
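
The two stated figures also imply something about TSMC’s expectations for total revenue. A rough consistency check (my arithmetic, not TSMC’s; the 13% landing share is an assumption standing in for “low teens”):

```python
# Rough check of TSMC's AI-revenue forecast (my own arithmetic).
ai_share_today = 0.06                 # AI server processors: ~6% of revenue today
ai_growth = 1.50 ** 5                 # ~50% CAGR for 5 years -> ~7.6x

target_share = 0.13                   # assumed "low teens" share in 5 years
implied_total_growth = ai_share_today * ai_growth / target_share   # ~3.5x
implied_total_cagr = implied_total_growth ** (1 / 5) - 1           # ~29%/year

print(f"AI revenue: ~{ai_growth:.1f}x; implied total-revenue CAGR: {implied_total_cagr:.0%}")
```

Taken literally, the forecast implies very strong total-revenue growth alongside AI; since both figures are loose management guidance, the point is the direction of travel, not the precision.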

AI has reinforced the view of TSMC’s management that there will be healthy long-term growth in the semiconductor industry in general, and TSMC’s business in particular

The insatiable need for energy-efficient computation is starting from data centers and we expect it will proliferate to edge and end devices over time, which will drive further long-term opportunities. We have already embedded a certain assumption for AI demand into our long-term CapEx and growth forecast. Our HPC platform is expected to be the main engine and the largest incremental contributor to TSMC’s long-term growth in the next several years. While the quantification of the total addressable opportunity is still ongoing, generative AI and large language models only reinforce the already-strong conviction we have in the structural megatrend driving TSMC’s long-term growth, and we will closely monitor the development for further potential upside.

TSMC currently can’t fulfil all the demand for certain AI chips because of a lack of advanced packaging capacity, but the company is expanding capacity

For AI, right now, we see very strong demand, yes. For the front-end part, we don’t have any problem supporting it. But for the back end, the advanced packaging side, especially for CoWoS, we do have some very tight capacity, and it’s very hard to fulfill 100% of what customers need. So we are working with customers for the short term to help them fulfill the demand, but we are increasing our capacity as quickly as possible. And we expect this tightness to be somewhat relieved next year, probably towards the end of next year. But in between, we’re still working closely with our customers to support their growth…

… I will not give you the exact number, but let me give you a rough one: probably 2x of the capacity will be added…

… I think the second question is about the pricing of the CoWoS. As I answered, we are increasing the capacity in as quick a manner as possible. Of course, that includes additional cost. So in fact, we are working with our customers. And the most important thing for them right now is supply assurance, supply to meet their demand. So we are working with them. We do everything possible to increase the capacity. And of course, at the same time, we share our value.

It appears that TSMC is selling AI chips for a few hundred dollars apiece while its customers then go on to sell systems built around those chips for a couple of hundred thousand dollars – but TSMC’s management is ok with that

Well, Charles, I used to make a joke on my customers say that I’m selling him a few hundred dollars per chip, and then he sold it back to me with USD 200,000. But let me say that we are happy to see customers doing very well. And if customers do well, TSMC does well. And of course, we work with them and we sell our value to them. And fundamentally, we want to say that we are able to address and capture a major portion of the market in terms of a semiconductor component in AI. Did I answer your question?

Tencent (NASDAQ: TCEHY)

Tencent is testing its own foundational model for generative AI, and Tencent Cloud will be facilitating the deployment of open-source models by other companies; the development progress of Tencent’s own foundational model is good

In generative AI, we are internally testing our own proprietary foundation model in different use cases and are providing Tencent Cloud Model-as-a-Service solutions to facilitate efficient deployment of open-source foundation models in multiple industry verticals…

…And in terms of just the development, I would say, there are multi initiatives that’s going on at the company. The first one, obviously, is building our own proprietary foundation model, and that is actually progressing very well. The training is actually on track and making very good progress…

… And in terms of additional efforts, we are also on the cloud side, providing MaaS solution for enterprises, right? So basically providing a marketplace so that different enterprise clients can choose different types of open source large models for them to customize for their own use with their own data. And we have a whole set of technology infrastructure as well as tools to help them to make the choice as well as to do the training and do the deployment. And we believe this is going to be a pretty high value added and high margin product for the enterprise clients. 

Tencent’s management thinks that AI is a multiplier for many of the company’s businesses

AI is — really, the more we look at it, the more excited we are about it as a growth multiplier across our many businesses. It would serve to enhance efficiency and the quality of our user-to-user services, and at the same time, it would facilitate improvement in our ad targeting and data targeting and also cost-efficient production of a lot of our content. So there are really multiple ways through which we can benefit from the continued development of generative AI.

Tencent’s management believes that the company’s MaaS for AI will first benefit large enterprises, but that it will subsequently also benefit companies of different sizes (although the smaller companies will benefit from using trained models via API versus training their own models)

In terms of the AI and Model-as-a-Service solution, we think a lot of industries will actually benefit from it, right? Initially, it would definitely be the larger companies…

…I think over time, as the industry become more mature, obviously, the medium-sized and smaller sized enterprises will probably benefit. But I don’t think they will be benefiting from using — training their own model, right? But then they would probably be benefiting from using the already trained models directly through APIs. So I think that’s sort of the way the industry will probably evolve over time. 

Tencent’s management believes that the company’s MaaS will provide a revenue stream that is recurring and high margin

I think, obviously, the revenue model is still evolving, but I would say, theoretically, what you talked about the high margin and high recurring revenue is going to be true because we are adding more value to the customers. And once the customers start using these services, right, it will be built into their interaction with their customers, which will be much more sticky than if it’s in their back-end systems. So I think that would probably be true. 

An important change Tencent has made to improve its advertising technology stack when using machine learning is to shift from CPUs (central processing units) to GPUs (graphics processing units)

If you look at the key changes or key things that we have done with respect to machine learning on the ad platform, I think the traditional challenge for us is that we have many different platforms. We have many different types of inventories. We have a very large coverage of user base and with a lot of data, right? And all these things make it actually very complicated for us to target customers based on just a rule-based or CPU-based targeting system, which was actually what we had been deploying. And a key change is that we have deployed a lot of GPUs, so moving from CPUs to GPUs, and we have built a very large neural network to basically accept all these different complexities and be able to come up with the optimal solution. And as a result, our ad targeting becomes much more effective, much higher speed and more accurate in terms of targeting. And as a result, right now, it actually provides a very strong boost to our targeting ability and also the ROI that we can deliver through our ad systems. And as James talked about, this is the early stage of this deployment and continuous improvement of our technology, and I think this trend will continue.
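
To make the shift Tencent describes concrete, here is a toy contrast between a hand-written targeting rule and a tiny learned scorer. This is purely my own sketch: Tencent’s production models are vastly larger, GPU-trained, and ingest far richer signals, and all the names and numbers below are invented for illustration.

```python
import numpy as np

def rule_based_score(user: dict) -> float:
    # Hand-written rules: simple, but brittle across many platforms and inventories
    return 1.0 if user["age"] < 30 and user["likes_games"] else 0.1

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # a tiny stand-in for a large
W2, b2 = rng.normal(size=8), 0.0                # neural ad-ranking network

def neural_score(features: np.ndarray) -> float:
    # A learned model ingests many raw signals and outputs a click propensity
    hidden = np.maximum(W1 @ features + b1, 0.0)             # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(W2 @ hidden + b2))))  # sigmoid probability

user_features = np.array([0.4, 1.0, 0.2, 0.7])  # e.g. age bucket, interest signals
print(rule_based_score({"age": 25, "likes_games": True}))  # 1.0
print(neural_score(user_features))                          # learned probability
```

The learned scorer’s weights are trained (at great compute cost, hence the GPUs) rather than hand-maintained, which is what lets it absorb the complexity that rule systems cannot.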

Tesla (NASDAQ: TSLA)

Tesla’s management believes that (1) the company’s Autopilot service has a data-advantage, as AI models become a lot more powerful with more data, and (2) self-driving will be safer than human driving

And I mean, there are times where we see, basically in a neural net: at a million training examples, it barely works; at 2 million, it slightly works; at 3 million, it’s like, “Wow, okay, we’re seeing something.” But then you get to like 10 million training examples, and it becomes incredible. So there’s just no substitute for a massive amount of data. And obviously, Tesla has more vehicles on the road that are collecting this data than all other companies combined, by, I think, maybe even an order of magnitude. So I think we might have 90% of all — a very big number…

So today, over 300 million miles have been driven using FSD Beta. That 300 million-mile number is going to seem small very quickly. It will soon be billions of miles, tens of billions of miles. And the FSD will go from being as good as a human to then being vastly better than a human. We see a clear path to full self-driving being 10x safer than the average human driver. 

Tesla’s management sees the Dojo training computer as a means to reduce the cost of neural net training and expects to spend more than US$1 billion on Dojo-related R&D through 2024

Our Dojo training computer is designed to significantly reduce the cost of neural net training. It is designed to — it’s somewhat optimized for the kind of training that we need, which is video training. So we just see that the need for neural net training, which is again a quasi-infinite thing, is just enormous. So I think having — we expect to use both NVIDIA and Dojo, to be clear. But there’s — we just see a demand for really advanced training resources. And we think we may reach in-house neural net training capability of 100 [ exoblocks ] by the end of next year…

…I think we will be spending something north of $1 billion over the next year on — through the end of next year, it’s well over $1 billion in Dojo. And yes, so I mean we’ve got a truly staggering amount of video data to do training on.

Around 5-6 Optimus bots – Tesla’s autonomous robots – have been made so far; Tesla’s management realised that it’s hard to find actuators that work well, and so Tesla had to design and manufacture its own actuators; the first Optimus with Tesla actuators should be made around November

Yes, I think we’re around 5 or 6 bots. I think there’s a — we were at 10, I guess. It depends on how many are working and what phase. But it’s sort of — yes, there’s more every month…  

…We found that there are actually no suppliers that can produce the actuators. There are no off-the-shelf actuators that work well for a humanoid robot at any price…

…So we’ve actually had to design our own actuators to integrate the motor, the power electronics, the controller, the sensors. And really, every one of them is custom designed. And then, of course, we’ll be using the same inference hardware as the car. But we, in designing these actuators, are designing them for volume production, so that they’re not just lighter, tighter and more capable than any other actuators that exist in the world, but also actually manufacturable. So we should be able to make them in volume. The first Optimus that will have all of the Tesla-designed, sort of production-candidate actuators integrated and walking should be around November-ish. And then we’ll start ramping up after that.

Tesla is buying Nvidia chips as fast as Nvidia will deliver them – and Tesla’s management thinks that if Nvidia could deliver more chips, Tesla might not even need Dojo, but Nvidia can’t

But like I said, we’re also — we have some — we’re using a lot of NVIDIA hardware. We’ll continue to use — we’ll actually take NVIDIA hardware as fast as NVIDIA will deliver it to us. Tremendous respect for Jensen and NVIDIA. They’ve done an incredible job. And frankly, I don’t know, if they could deliver us enough GPUs, we might not need Dojo. But they can’t. They’ve got so many customers. They’ve been kind enough to, nonetheless, prioritize some of our GPU orders.

Elon Musk explained that his timing projections for the actualisation of full self-driving have been too optimistic in the past because the next challenge is always many times harder than the last – he still expects Tesla’s full self-driving service to be better than human driving by the end of this year, although he admits he may be wrong yet again

Well, obviously, as people have sort of made fun of me, and perhaps quite fairly have made fun of me, my predictions about achieving full self-driving have been optimistic in the past. The reason I’ve been optimistic, what it tends to look like is we’ll make rapid progress with a new version of FSD, but then it will curve over logarithmically. So at first, logarithmic curve looks like this sort of fairly straight upward line, diagonal and up. And so if you extrapolate that, then you have a great thing. But then because it’s actually logarithmic, it curves over, and then there have been a series of stacked logarithmic curves. Now I know I’m the boy who cried FSD, but man, I think we’ll be better than human by the end of this year. That’s not to say we’re approved by regulators. And I’m saying then that, that would be in the U.S. because we’ve got to focus on one market first. But I think we’ll be better than human by the end of this year. I’ve been wrong in the past, I may be wrong this time.

The Trade Desk (NASDAQ: TTD)

The use of AI is helping Trade Desk to surface advertising value for its customers

Of course, there are many other aspects of Kokai that we unveiled on [ 06/06 ], some of which are live and many of which we will be launching in the next few months. These indexes and other innovations, especially around the application of AI across our platform [ are helping us ] surface value more intuitively to advertisers. We are revamping our UX so that the campaign setup and optimization experience is even more intuitive with data next to decisions at every step. And we’re making it easier than ever for thousands of data, inventory, measurement and media partners to integrate with us. 

Trade Desk is using different AI models for specific applications instead of using one model for all purposes

You’ll recall that we launched AI in our platform in 2018, before it was trendy. And we have since been distributing that AI across the platform in a variety of different ways and different deep learning models, so that we’re using it for very specific applications rather than trying to create one algo to rule them all, if you will, which is something we, in a very disciplined way, are trying to avoid. So we can create checks and balances in the way that the [ tech ] works, and we can make certain that AI is always providing improvements by essentially having A/B testing and better auditability.

Visa (NYSE: V)

Visa is piloting a new AI-powered fraud capability for instant payments

First, our partnership with Pay.UK, the account-to-account payments operator in the U.K., was recently announced. We will be piloting our new fraud capability, RTP Prevent, which is uniquely built for instant payments with deep learning AI models. Using RTP Prevent, we can provide a risk score in real time so banks can decide whether to approve or reject the transaction on an RTP network. This is a great example of building and deploying entirely new solutions under our network-of-networks strategy…

…So first of all, what we’ve done is we’ve built a real-time risk score. We’ve built it uniquely for instant payments, where there are often unique cases of fraud in terms of how they work. We built it using deep learning AI models. And what it does is it enables banks to decide whether to approve or reject the transaction in real time, which is a capability that most banks on most real-time payments networks around the world have been very hungry for. It’s a score from 1 to 99. It comes with an instant real-time code that explains the score. And what it does is it leverages the proprietary data that we have used to enhance our own risk algorithms as well as the data that we see on a lot of our payment platforms, including Visa Direct. And one of the benefits of us bringing that to market is it integrates with the bank’s existing fraud and risk tools. Because we’re often providing these types of risk scores to banks and they’re ingesting them from us, it directly integrates into their fraud and risk tools, so the real-time information, their systems know how to use it. It can be automated into their decisioning algorithms and those types of things.
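
As described, RTP Prevent returns a 1-to-99 risk score plus a reason code that plugs into a bank’s existing decisioning. Here is a hypothetical sketch of the consuming side; the function shape, thresholds, and the “V07” reason code are my assumptions for illustration, not Visa’s actual API.

```python
# Hypothetical sketch of how a bank on an RTP network might consume a
# real-time risk score (assumed fields and thresholds, not Visa's API).

def decide(amount: float, risk_score: int, reason_code: str) -> str:
    assert 1 <= risk_score <= 99, "score is described as 1 to 99"
    review_threshold = 60 if amount < 10_000 else 40   # stricter for large sums
    if risk_score >= 90:                # very likely fraud: reject in real time
        return f"REJECT ({reason_code})"
    if risk_score >= review_threshold:  # ambiguous: route to step-up verification
        return f"REVIEW ({reason_code})"
    return "APPROVE"                    # low risk: let the instant payment through

print(decide(amount=250.0, risk_score=72, reason_code="V07"))  # REVIEW (V07)
```

The point of the real-time reason code is exactly this kind of automation: the bank’s systems can branch on the score and explanation without a human in the loop.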

Wix (NASDAQ: WIX)

Wix has worked with AI for nearly a decade and management believes AI will be a key driver of Wix’s product strategy in the future

This quarter, we also continued to innovate and introduce new AI-driven tools in our pipeline. As mentioned last quarter, we have leveraged AI technology for nearly a decade, which has played a key role in driving user success for both Self Creators and Partners. By harnessing a variety of deep learning models trained on the incredible amount of data from the hundreds of millions of Wix sites, we’ve built out an impressive suite of AI and genAI products with the purpose of making the website building experience on Wix frictionless. As AI continues to evolve, we remain at the forefront of innovation with a number of AI and gen-AI driven products in our near-term pipeline, including AI Site Generator and AI Assistant for your business. AI is a key driver of our product and growth strategy for both Self Creators and Partners, and I’m excited for what is still to come.

The introduction of generative AI products and features is improving the key performance indicators (KPIs) of Wix’s business

In regards to your question, if we see any tangible evidence that GenAI is actually improving business performance, then yes, we do. I’m not going to disclose all the details, but I’m just going to say that the things we released in the first part of this year and late last year are already showing improvement in business KPIs. So it makes us very optimistic. And of course, the more we put those kinds of technology in front of more users, we expect that factor to grow. But if you think about it, right, the core value that Wix brings is reducing the friction when you try to build a website. And when you use that technology, that can do tremendously well in order to improve that core value. And then, of course, we expect the results to be significant.

Wix’s management believes that having generative AI technology alone is not sufficient for building a website

So the ones that we’ve seen until now are essentially doing the following, right? They take a template and they generate the text for the template and that — then they save that as a website. Essentially, they’re using ChatGPT to write text and then just put it inside of a template.

When we started, we did that. We have been doing it with ChatGPT since, I think, last November. And with ADI, we did it, of course, with less sophisticated algorithms. But even then, we didn’t just inject text into a template. We actually created layouts around the text, which is the other way around, right? And that creates a huge difference in what we generate, because when you fill text into a template, you are creating essentially artificial text that will fit the design. While in most cases, if you think about building a business, you do it the other way around: you create your marketing messages and then you create a design, right, to fit that. And visually, it creates a massive difference in those websites. So that is the first difference.

The other difference is that, if you think about it, since probably 1998, you could write text in a Word document and then save it as HTML, okay? So now you just build the website and you have the text and you have a very, very basic website. Of course, you cannot run your business on top of that because it doesn’t have everything you need to run a business. It doesn’t have analytics. It doesn’t have a contact form. It doesn’t have e-commerce. It doesn’t have transactions. All of those are the platform that makes it into a real business. And this is something that all the tools I have seen so far are lacking, right? They just build the page, which you could do in ’98 with Word and just save it as HTML. So that’s another huge difference, right?

And the last part is the question of how do you edit. And this is a very important thing. A website is not something that you could edit once and you just publish it and you never go back. You constantly have things to do. You change products, you change services, you change addresses, you add things, you remove things. You need to add content, so Google will like you, and this is very, very important for finding your business in Google. And there’s a lot of other things, right? So you need to be able to edit the content.

Now when it comes to editing content, you don’t want to regenerate the website, okay, which [indiscernible] you see in all of those things that fill a template, because it’s not only about filling a template, it’s now about editing the content. And this is the thing that we spent so much money on doing, right: to bake in the technology, the e-commerce, and then the ability to go in and point at something and edit it or move it and drag it. So those are the things that created Wix, and those are, I think, still our differentiators.

Even if you generated a template with ChatGPT and it looks great, and by some magic it actually fits the marketing value that you want to put in your website, editing it is not going to be possible with the current technology they use. And even more than that, the ability to have all of the applications on top of it that you really need to run a business doesn’t exist.

Zoom Video Communications (NASDAQ: ZM)

There are promising signs that Zoom’s AI-related products are gaining traction with customers

Let me also thank Valmont Industries. Valmont came onboard as a Zoom customer a little over a year ago with Meetings and Phone and quickly became a major platform adopter, including Zoom One and Zoom Contact Center. And in Q2, with the goal of utilizing AI to better serve their employees, they added Zoom Virtual Agent due to its accuracy of intent understanding, ability to route issues to the correct agent, ease of use and quality of analytics…

But we’re really excited about the vision that we can take for them not only around, obviously, the existing platform but what’s also coming from an AI perspective. And I think our customers are finding that very attractive, as you’ve heard from the customers that Eric talked about seeing a lot of momentum of customers that were originally Meetings customers really moving either into Zoom One or adding on Zoom Phone and considering Contact Center as well.

Zoom’s management believes that the company has a differentiated AI strategy

And also our strategy is very differentiated, right? First of all, we have a federated AI approach. And also the way we look at those AI features is how to help customers improve productivity, that’s very important, right? And the customers already like us. Not like some others, right, who give you so-called free services and then price your AI features separately. That’s not our case, right? We really care about customer value and also add more and more innovations.

Zoom’s management believes that AI integrations in the company’s products will be a key differentiator

And in terms of AI, unlike other vendors, right, who have had Contact Center solutions for a long, long time, when you look at their AI architecture and how flexible it is, right, adding AI to all those existing leaders’ Contact Center products is hard. We already realized the importance of AI, right? That’s why we have a very flexible architecture. Not only do we build organic AI features, but we also acquired Solvvy and also built the Virtual Agent, and so on and so forth. Organic growth and also acquisition certainly help us a lot in terms of product innovation.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Datadog, Etsy, Fiverr, Mastercard, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.

Thoughts on Artificial Intelligence

Artificial intelligence has the potential to reshape the world.

The way Jeremy and I see it, artificial intelligence (AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are known as generative AI products – they are software that use AI to generate art and text, respectively (and often at astounding quality), hence the term “generative”. Since then, developments in AI have progressed at a breathtaking pace. One striking observation I’ve found with AI is the much higher level of enthusiasm that company-leaders have for the technology compared to the two other recent “hot things”, namely, blockchain/cryptocurrencies and the metaverse. Put another way, AI could be a real game changer for societies and economies.

I thought it would be useful to write down some of my current thoughts on AI and its potential impact. Putting pen to paper (or fingers to the keyboard) helps me make sense of what’s in my mind. Do note that my thoughts are fragile because the field of AI is developing rapidly and there are many unknowns at the moment. In no order of merit:

  • While companies such as OpenAI and Alphabet have released generative AI products, they have yet to release open-source versions of their foundational AI models that power the products. Meta Platforms, meanwhile, has been open sourcing its foundational AI models in earnest. During Meta’s latest earnings conference call in April this year, management explained that open sourcing allows Meta to benefit from improvements to its foundational models that are made by software developers, outside of Meta, all over the world. Around the same time, there was a purportedly leaked document from an Alphabet employee that discussed the advantages in the development of AI that Meta has over both Alphabet and OpenAI by virtue of it open sourcing its foundational models. There’s a tug-of-war now between what’s better – proprietary or open-sourced foundational AI models – but it remains to be seen which will prevail or if there will even be a clear winner. 
  • During Amazon’s latest earnings conference call (in April 2023), the company’s management team shared their observation that most companies that want to utilise AI have no interest in building their own foundational AI models because it takes tremendous amounts of time and capital. Instead, they merely want to customise foundational models with their own proprietary data. On the other hand, Tencent’s leaders commented in the company’s May 2023 earnings conference call that they see a proliferation of foundational AI models from both established companies as well as startups. I’m watching to find out which point of view is closer to the truth. I also want to point out that the frenzy to develop foundational AI models may be specific to China. Rui Ma, an astute observer of and writer on China’s technology sector, mentioned in a recent tweet that “everyone in China is building their own foundational model.” Meanwhile, the management of online travel platform Airbnb (which is based in the US, works deeply with technology, and is clearly a large company) shared in May 2023 that they have no interest in building foundational AI models – they’re only interested in designing the interface and tuning the models. 
  • A database is a platform to store data. Each piece of software requires a database to store, organize, and process data. The database has a direct impact on the software’s performance, scalability, flexibility, and reliability, so its selection is a highly strategic decision for companies. In the 1970s, relational databases were first developed and they used a programming language known as Structured Query Language (SQL). Relational databases store and organise data points that are related to one another in table form (picture an Excel spreadsheet) and were useful from the 1980s to the late 1990s. But because they were used to store structured data, they began to lose relevance with the rise of the internet. Relational databases were too rigid for the internet era and were not built to support the volume, velocity, and variety of data of that age. This is where non-relational databases – also known as NoSQL, which stands for either “non SQL” or “not only SQL” – come into play. NoSQL databases are not constrained to relational databases’ tabular format of data storage and can work with unstructured data such as audio, video, and photos. As a result, they are more flexible and better suited for the internet age. AI appears to require different database architectures. The management of MongoDB, a company that specialises in NoSQL databases, talked about the need for a vector database to store the training results of large language models during the company’s June 2023 earnings conference call. Simply put, a vector database stores data in a way that allows users to easily find data, say, an image (or text), that is related to a given image (or text) – this feature is very useful for generative AI products (see the toy vector-search sketch after this list). This said, MongoDB’s management also commented in the same earnings conference call that NoSQL databases will still be very useful in the AI era. I’m aware that MongoDB’s management could be biased, but I do agree with their point of view. Vector databases appear to be well-suited (to my untrained technical eye!) for a narrow AI-related use case, whereas NoSQL databases are useful in much broader ways. Moreover, AI is likely to increase the volume of software developed – not just AI software – and all of it needs modern databases. MongoDB’s management also explained in a separate June 2023 conference that a typical generative AI workflow will include both vector databases and other kinds of databases (during the conference, management also revealed MongoDB’s own vector database service). I’m keeping a keen eye on how the landscape of database architectures evolves over time as AI technologies develop.
  • Keeping up with the theme of new architectures, the AI age could also usher in a new architecture for data centres. This new architecture is named accelerated computing by Nvidia. In the traditional architecture of data centres, CPUs (central processing units) are the main source of computing power. In accelerated computing, the entire data centre – consisting of GPUs (graphics processing units), CPUs, DPUs (data processing units), data switches, networking hardware, and more – provides the computing power. Put another way, instead of thinking about the chip as the computer, the data centre becomes the computer under the accelerated computing framework. During Nvidia’s May 2023 earnings conference call, management shared that the company had been working on accelerated computing for many years but it was the introduction of generative AI – with its massive computing requirements – that “triggered a killer app” for this new data centre architecture. The economic opportunity could be immense. Nvidia’s management estimated that US$1 trillion of data centre infrastructure was installed over the last four years and nearly all of it was based on the traditional CPU-focused architecture. But as generative AI gains importance in society, data centre infrastructure would need to shift heavily towards the accelerated computing variety, according to Nvidia’s management.
  • And keeping with the theme of something new, AI could also bring about novel and better consumer experiences. Airbnb’s co-founder and CEO, Brian Chesky, laid out a tantalising view on this potential future during the company’s latest May 2023 earnings conference call. Chesky mentioned that search queries in the travel context are matching questions and the answers depend on who the questioner is and what his/her preferences are. With the help of AI, Airbnb could build “the ultimate AI concierge that could understand you,” thereby providing a highly personalised travel experience. Meanwhile, in a recent interview with Wired, Microsoft’s CEO Satya Nadella shared his dream that “every one of Earth’s 8 billion people can have an AI tutor, an AI doctor, a programmer, maybe a consultant!” 
  • Embedded AI is the concept of AI software that is built into a device itself. This device can be a robot. And if robots with embedded AI can be mass-produced, the economic implications could be tremendous, beyond the impact that AI could have as just software. Tesla is perhaps the most high profile company in the world today that is developing robots with embedded AI. The company’s goal for the Tesla Bot (also known as Optimus) is for it to be “a general purpose, bi-pedal, autonomous humanoid robot capable of performing unsafe, repetitive or boring tasks.” There are other important companies that are working on embedded AI. For example, earlier this year, Nvidia acquired OmniML, a startup whose software shrinks AI models, making it easier for the models to be run on devices rather than on the cloud.
  • Currently, humans are behind the content trained on by the foundational AI models underpinning the likes of ChatGPT and other generative AI products. But according to a recently-published paper from UK and Canadian researchers titled The Curse of Recursion: Training on Generated Data Makes Models Forget, the quality of foundational AI models degrades significantly as the proportion of content they are trained on shifts toward an AI-generated corpus. This could be a serious problem in the future if there’s an explosion in the volume of generative AI content, which seems likely; for context, Adobe’s management shared in mid-June this year that the company’s generative AI feature, Firefly, had already powered 500 million content-generations since its launch in March 2023. The degradation, termed “model collapse” by the researchers (see the toy simulation after this list), happens because content created by humans is a more accurate reflection of the world, since it contains improbable data. Even after training on man-made data, AI models tend to generate content that understates the improbable data. If subsequent AI models train primarily on AI-generated content, the end result is that the improbable data become even less represented. The researchers describe model collapse as “a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time.” Model collapse could have serious societal consequences; one of the researchers, Ilia Shumailov, told VentureBeat that “there are many other aspects that will lead to more serious implications, such as discrimination based on gender, ethnicity or other sensitive attributes.” Ross Anderson, another author of the paper, wrote in a blog post that with model collapse, advantages could accrue to companies that “control access to human interfaces at scale” or that have already trained AI models by scraping the web when human-generated content was still overwhelmingly dominant.
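
To make the vector-database idea concrete, here is a toy vector search in Python. This is my own illustration of the concept, not MongoDB’s API: items are embedded as numeric vectors, and retrieval means finding the vectors nearest a query embedding by cosine similarity, which is the core operation a vector database optimizes at scale.

```python
import numpy as np

# Toy corpus: each item is paired with a (made-up) embedding vector.
docs = {
    "a photo of a beach at sunset": np.array([0.9, 0.1, 0.3]),
    "quarterly earnings report":    np.array([0.1, 0.9, 0.2]),
    "tropical island vacation":     np.array([0.8, 0.2, 0.4]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means the vectors point the same way
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.88, 0.12, 0.31])  # imagined embedding of "seaside holiday"
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # -> "a photo of a beach at sunset"
```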
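
And here is a toy simulation of model collapse, a drastic simplification of the paper’s setup with numbers of my own choosing: each “generation” fits a Gaussian to the previous generation’s output, and because rare tail events are under-sampled, the fitted spread shrinks generation after generation.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the human-generated corpus

for generation in range(6):
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation}: sigma = {sigma:.3f}")   # spread shrinks each round
    # The next "model" trains only on the current model's output, and
    # sampling under-represents the tails (improbable events are rare):
    samples = rng.normal(mu, sigma, size=10_000)
    data = samples[np.abs(samples - mu) < 2.5 * sigma]  # tails fall away
```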

There’s one other fragile thought I have about AI that I think is more important than what I’ve shared above, and it is related to the concept of emergence. Emergence is a natural phenomenon where sophisticated outcomes spontaneously “emerge” from the interactions of agents in a system, even when these agents were not instructed to produce these outcomes. The following passages from the book, Complexity: The Emerging Science at the Edge of Order and Chaos by Mitch Waldrop, help shed some light on emergence:

“These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry. Thus, molecules would form cells, neurons would form brains, species would form ecosystems, consumers and corporations would form economies, and so on. At each level, new emergent structures would form and engage in new emergent behaviors. Complexity, in other words, was really a science of emergence… 

…Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems – on and on. Indeed, thought Holland, that’s what this business of “emergence” was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at…

…Arthur was fascinated by the thing. Reynolds had billed the program as an attempt to capture the essence of flocking behavior in birds, or herding behavior in sheep, or schooling behavior in fish. And as far as Arthur could tell, he had succeeded beautifully. Reynolds’ basic idea was to place a large collection of autonomous, birdlike agents—“boids”—into an onscreen environment full of walls and obstacles. Each boid followed three simple rules of behavior: 

1. It tried to maintain a minimum distance from other objects in the environment, including other boids.

2. It tried to match velocities with boids in its neighborhood.

3. It tried to move toward the perceived center of mass of boids in its neighborhood.

What was striking about these rules was that none of them said, “Form a flock.” Quite the opposite: the rules were entirely local, referring only to what an individual boid could see and do in its own vicinity. If a flock was going to form at all, it would have to do so from the bottom up, as an emergent phenomenon. And yet flocks did form, every time. Reynolds could start his simulation with boids scattered around the computer screen completely at random, and they would spontaneously collect themselves into a flock that could fly around obstacles in a very fluid and natural manner. Sometimes the flock would even break into subflocks that flowed around both sides of an obstacle, rejoining on the other side as if the boids had planned it all along. In one of the runs, in fact, a boid accidentally hit a pole, fluttered around for a moment as though stunned and lost—then darted forward to rejoin the flock as it moved on.”

In my view, the concept of emergence is important in AI because at least some of the capabilities of ChatGPT seen today were not explicitly programmed for – they emerged. In his aforementioned interview with Wired, Satya Nadella said that “when we went from GPT 2.5 to 3, we all started seeing these emergent capabilities”; he was referring to the foundational AI models built by OpenAI. One of the key differences between GPT 2.5 and GPT 3 is that the former contains 1.5 billion parameters whereas the latter contains 175 billion, more than 100 times as many. The basic computational unit within an AI model is known as a node, and parameters are a measure of the strength of a connection between two nodes. The number of parameters can thus be loosely associated with the number of nodes, as well as the number of connections between nodes, in an AI model. With GPT 3’s much higher parameter count, the number of nodes and the number of connections (or interactions) between nodes in GPT 3 far exceed those of GPT 2.5. Nadella’s observation matches those of David Ha, an expert on AI whose most recent role was head of research at Stability AI. During a February 2023 podcast hosted by investor Jim O’Shaughnessy, Ha shared the following (emphasis is mine):

Then the interesting thing is, sure, you can train things on prediction or even things like translation. If you have paired English to French samples, you can do that. But what if you train a model to predict itself without any labels? So that’s really interesting because one of the limitations we have is labeling data is a daunting task and it requires a lot of thought, but self-labeling is free. Like anything on the internet, the label is itself, right? So what you can do is there’s two broad types of models that are popular now. There’s language models that generate sequences of data and there’s things like image models, Stable Diffusion you generate an image. These operate on a very similar principle, but for things like language model, you can have a large corpus of text on the internet. And the interesting thing here is all you need to do is train the model to simply predict what the next character is going to be or what the next word is going to be, predict the probability distribution of the next word.

And such a very simple objective as you scale the model, as you scale the size and the number of neurons, you get interesting emerging capabilities as well. So before, maybe back in 2015, ’16, when I was playing around with language models, you can feed it, auto Shakespeare, and it will blab out something that sounds like Shakespeare.

But in the next few years, once people scaled up the number of parameters from 5 million, to a hundred million, to a billion parameters, to a hundred billion parameters, this simple objective, you can now interact with the model. You can actually feed in, “This is what I’m going to say,” and the model takes that as an input as if it said that and predict the next character and give you some feedback on that. And I think this is very interesting, because this is an emergent phenomenon. We didn’t design the model to have these chat functions. It’s just like this capability has emerged from scale.

And the same for image side as well. I think for images, there are data sets that will map the description of that image to that image itself and text to image models can do things like go from a text input into some representation of that text input and its objective is to generate an image that encapsulates what the text prompt is. And once we have enough images, I remember when I started, everyone was just generating tiny images of 10 classes of cats, dogs, airplanes, cars, digits and so on. And they’re not very general. You can only generate so much.

But once you have a large enough data distribution, you can start generating novel things like for example, a Formula 1 race car that looks like a strawberry and it’ll do that. This understanding of concepts are emergent. So I think that’s what I want to get at. You start off with very simple statistical models, but as you increase the scale of the model and you keep the objectives quite simple, you get these emergent capabilities that were not planned but simply emerge from training on that objective.

Emergence occurred in AI models as their number of parameters (i.e. the number of interactions between nodes) grew. This is a crucial point because emergence requires a certain amount of complexity in the interactions between agents, which can only happen if there are large numbers of agents as well as interactions between them. It’s highly likely, in my view, that more emergent phenomena could develop as AI models become even more powerful over time via an increase in their parameters. It’s also difficult – perhaps impossible – to predict what these emergent phenomena could be, as specific emergent phenomena in any particular complex system are inherently unpredictable. So, any new emergent phenomena from AI that spring up in the future could be anywhere on the spectrum from wildly positive to destructive for society. Let’s see!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, Tencent, and Tesla. Holdings are subject to change at any time.

What Is The Monetary Cost of Stock-Based Compensation?

Confused by stock-based compensation? Here is how investors can account for SBC when calculating intrinsic value.

It is common today for companies to exclude stock-based compensation (SBC) when reporting “adjusted” earnings. 

In management’s eyes, SBC is not a real cost since it involves no cash outflow. But don’t let that fool you. SBC is a real expense for shareholders: it increases a company’s outstanding share count and reduces future dividends per share.

I’ve thought about SBC quite a bit in the last few months. One thing I noticed is that investors often do not properly account for it. There are a couple of different scenarios that I believe should lead to investors using different methods to account for SBC.

Scenario 1: Offsetting dilution with buybacks

The first scenario is when a company is both buying back shares and issuing shares to employees as SBC. The easiest and most appropriate way to account for SBC in this situation is by calculating how much the company spent to buy back the stock that vested in the year.

Take the credit card company Visa (NYSE: V) for example. In its FY2022 (fiscal year ended 30 September 2022), 2.2 million restricted stock units (RSUs) were vested and given to Visa employees. At the same time, Visa bought back 56 million shares at an average price of US$206 per share.  In other words, Visa managed to buy back all the shares that were vested, and more.

We can calculate the cash outlay that Visa spent to offset the dilution from the grants of RSUs by multiplying the number of grants by the average price it paid to buy back its shares. In Visa’s case, the true cost of the SBC was around US$453 million (2.2 million RSUs multiplied by average price of US$206).

We can then calculate how much free cash flow (FCF) was left over for shareholders by deducting US$453 million from Visa’s FCF, which was US$17.4 billion in FY2022.
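
To make the arithmetic concrete, here is a minimal Python sketch of the Scenario 1 calculation, using the Visa figures cited above (the variable names are my own):

```python
# Scenario 1: cost of offsetting SBC dilution with buybacks (Visa, FY2022).
rsus_vested = 2_200_000      # RSUs vested during the year
avg_buyback_price = 206      # average price paid per share bought back (US$)
fcf = 17_400_000_000         # FY2022 free cash flow, before the deduction (US$)

# Cash spent to buy back the shares that vested -- the true cost of the SBC.
sbc_cost = rsus_vested * avg_buyback_price
print(f"True cost of SBC: US${sbc_cost / 1e6:.0f} million")   # ~US$453 million

# FCF left over for shareholders after offsetting the dilution.
fcf_after_sbc = fcf - sbc_cost
print(f"FCF after offsetting dilution: US${fcf_after_sbc / 1e9:.2f} billion")
```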

Scenario 2: No buybacks!

On the other hand, when a company is not offsetting dilution with buybacks, it becomes trickier to account for SBC.

Under GAAP accounting, SBC is reported based on the company’s stock price at the time of the grant. But in my view, this is a severely flawed form of accounting. Firstly, unless the company is buying back shares, the stock price does not translate into the true cost of SBC. Second, even if the stock price was a true reflection of intrinsic value, the grants may have been made years ago and the underlying value of each share could have changed significantly since then. 

In my view, I think the best way to account for SBC is by calculating how SBC is going to impact future dividend payouts to shareholders. This is the true cost of SBC.

Let’s use Okta Inc (NASDAQ: OKTA) as an example. In Okta’s FY2023 (fiscal year ended 31 January 2023), 2.6 million RSUs were vested and the company had 161 million shares outstanding at the end of the year (after dilution). This means that the RSUs vested led to a 1.7% rate of dilution. Put another way, all future dividends per share for Okta will be reduced by around 1.7%. Although the company is not paying a dividend yet, RSUs vested should lead to a reduction in the intrinsic value per share by 1.7%.

More granularly, I did a simple dividend discount model. I made certain assumptions around free cash flow growth and future dividend payout ratios. Using those assumptions and a 12% discount rate, I found that Okta’s intrinsic value was around US$12.5 billion.

With an outstanding share count of 161 million, Okta’s stock was worth US$77.63 each. Before dilution, Okta had 158.4 million shares and each share was worth US$78.91. The cost of dilution was around US$1.28 per share, or US$201 million.
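
For readers who want to trace the numbers, here is a short Python sketch of the Scenario 2 arithmetic. The US$12.5 billion figure is the output of my dividend discount model above, and small differences from the in-text figures come from rounding the per-share values:

```python
# Scenario 2: cost of SBC when dilution is not offset by buybacks (Okta, FY2023).
intrinsic_value = 12_500_000_000   # from the dividend discount model above (US$)
shares_after = 161_000_000         # shares outstanding after dilution
shares_before = 158_400_000        # shares outstanding before dilution

value_after = intrinsic_value / shares_after    # ~US$77.6 per share
value_before = intrinsic_value / shares_before  # ~US$78.9 per share

# ~US$1.27 per share here; US$1.28 in the text, from the rounded per-share values.
cost_per_share = value_before - value_after
total_cost = cost_per_share * shares_before     # ~US$200 million in total
print(f"Cost of dilution: US${cost_per_share:.2f}/share, "
      f"US${total_cost / 1e6:.0f} million in total")
```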

Scenario 3: How about options?

In the two scenarios above, I only accounted for the RSU portion of the SBC. Both Okta and Visa also offer employees another form of SBC: Options.

Options give employees the ability to buy stock in a company in the future at a predetermined price. Unlike RSUs, the company receives cash when an option is exercised.

In this scenario, there is a cash inflow but an increase in share count. The best way to account for this is by calculating the drop in intrinsic value due to the dilution but offsetting it by the amount of cash the company receives.

For instance, Okta employees exercised 1.4 million options in FY2023 at a weighted average exercise price of US$11.92. Recall that we calculated the intrinsic value of Okta’s shares before dilution to be US$78.91. Given the same assumptions, the cost of these options was US$66.99 per option, for a total cost of US$93.7 million.
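
Again, a small sketch of the options arithmetic under the same assumptions (US$78.91 is the pre-dilution per-share value from the previous scenario):

```python
# Scenario 3: options -- dilution cost, offset by the cash received on exercise.
options_exercised = 1_400_000
exercise_price = 11.92        # weighted average exercise price (US$)
value_per_share = 78.91       # pre-dilution intrinsic value per share, from above

# Each exercised option hands out a share worth ~US$78.91 in exchange for
# US$11.92 in cash, so the net cost per option is the difference.
cost_per_option = value_per_share - exercise_price   # ~US$66.99
total_cost = cost_per_option * options_exercised     # ~US$93.8M (US$93.7M in text)
print(f"Cost per option: US${cost_per_option:.2f}, "
      f"total cost: US${total_cost / 1e6:.1f} million")
```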

Key takeaways

SBC can be tricky for investors to account for. Different scenarios demand different analysis methods. 

When a company is buying back shares, the amount spent on offsetting dilution is money that cannot be used for dividends. This is the cost to shareholders. On the other hand, when no buybacks are done, a company’s future dividends per share are reduced as the number of shares grows. 

Ultimately, the key thing to take note of is how SBC impacts a company’s future dividends per share. By sticking to this simple principle, we can deduce the best way to account for SBC.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Okta and Visa. Holdings are subject to change at any time.

An Investing Paradox: Stability Is Destabilising

The epicenters of past periods of economic stress happened in sectors that were strong and robust. Why is that the case?

One of my favourite frameworks for thinking about investing and the economy is the simple but profound concept of stability being destabilising. This comes from the ideas of the late economist Hyman Minsky.

When he was alive, Minsky wasn’t well known. His views on why an economy goes through boom-bust cycles only gained prominence after the 2008-2009 financial crisis. In essence, Minsky theorised that for an economy, stability itself is destabilising. I first learnt about him – and how his ideas can be extended to investing – years ago after coming across a Motley Fool article written by Morgan Housel. Here’s how Housel describes Minsky’s framework:

“Whether it’s stocks not crashing or the economy going a long time without a recession, stability makes people feel safe. And when people feel safe, they take more risk, like going into debt or buying more stocks.

It pretty much has to be this way. If there was no volatility, and we knew stocks went up 8% every year [the long-run average annual return for the U.S. stock market], the only rational response would be to pay more for them, until they were expensive enough to return less than 8%. It would be crazy for this not to happen, because no rational person would hold cash in the bank if they were guaranteed a higher return in stocks. If we had a 100% guarantee that stocks would return 8% a year, people would bid prices up until they returned the same amount as FDIC-insured savings accounts, which is about 0%.

But there are no guarantees—only the perception of guarantees. Bad stuff happens, and when stocks are priced for perfection, a mere sniff of bad news will send them plunging.”

In other words, great fundamentals in business (stability) can cause investors to take risky actions, such as pushing valuations toward the sky or using plenty of leverage. This plants the seeds for a future downturn to come (the creation of instability).

I recently came across a wonderful July 2010 blog post, titled A Batesian Mimicry Explanation of Business Cycles, from economist Eric Falkenstein that shared historical real-life examples of how instability was created in the economy because of stability. Here are the relevant passages from Falkenstein’s blog post (emphases are mine):

“…the housing bubble of 2008 was based on the idea that the borrower’s credit was irrelevant because the underlying collateral, nationwide, had never fallen significantly in nominal terms. This was undoubtedly true. The economics profession, based on what got published in top-tier journals, suggested that uneconomical racial discrimination in mortgage lending was rampant, lending criteria was excessively prudent (underwriting criteria explicitly do not note borrowers race, so presumably lenders were picking up correlated signals). Well-known economists Joe Stiglitz and Peter Orzag wrote a paper for Fannie Mae arguing the expected loss on its $2 trillion in mortgage guarantees of only $2 million dollars, 0.0001%. Moody’s did not consider it important to analyze the collateral within mortgage CDOs, as if the borrower or collateral characteristics were irrelevant. In short, lots of smart people thought housing was an area with extremely low risk.

Each major bust has its peculiar excesses centered on previously prudent and successful sectors. After the Panic of 1837, many American states defaulted quite to the surprise of European investors, who were mistakenly comforted by their strong performance in the Panic of 1819 (perhaps the first world-wide recession). The Panic of 1893 centered on railroads, which had for a half century experienced solid growth, and seemed tested by their performance in the short-lived Panic of 1873.”

It turns out that it was the “prudent and successful sectors” – the stable ones – that were the epicenters of the panics of old. It was their stability that led to investor excesses, exemplifying Minsky’s idea of how stability is destabilising.

The world of investing is full of paradoxes. Minsky’s valuable contribution to the world of economic and investment thinking is one such example.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Takeaways From Silicon Valley Bank’s Collapse

The collapse of Silicon Valley Bank, or SVB, is a great reminder for investors to always be prepared for the unexpected.

March 2023 was a tumultuous month in the world of finance. On 8 March, Silicon Valley Bank, the 16th largest bank in the USA with US$209 billion in assets at the end of 2022, reported that it would incur a US$1.8 billion loss after it sold some of its assets to meet deposit withdrawals. Just two days later, on 10 March, banking regulators seized control of the bank, marking its effective collapse. It turned out that Silicon Valley Bank, or SVB, had faced US$42 billion in deposit withdrawals, representing nearly a quarter of its deposit base at the end of 2022, in just one day on 9 March.

SVB had failed because of a classic bank run. At a simplified level, banking involves taking in deposits and distributing the capital as loans to borrowers. A bank’s assets (what it owns) are the loans it has doled out, and its liabilities (what it owes) are deposits from depositors. When depositors withdraw their deposits, a bank has to return cash to them. Often, depositors can withdraw their deposits at short notice, whereas a bank can’t easily convert its loans into ready cash quickly. So when a large group of depositors ask for their money back, it’s difficult for a bank to meet the withdrawals – that’s when a bank run happens.

When SVB was initially taken over by regulators, there was no guarantee that the bank’s depositors would be made whole. Official confirmation that the money of SVB’s depositors would be fully protected was only given a few days later. In the leadup to and in the aftermath of SVB’s fall, there was a palpable fear among stock market participants that a systemic bank run could happen within the US banking sector. The Invesco KBW Regional Banking ETF, an exchange-traded fund tracking the KBW Nasdaq Regional Banking Index, which comprises public-listed US regional banks and thrifts, fell by 21% in March 2023. The stock price of First Republic Bank, ranked 14th in America with US$212 billion in assets at the end of 2022, cratered by 89% in the same month. For context, the S&P 500 was up by 3.5%.

SVB was not the only US bank that failed in March 2023. Two other US banks, Silvergate Bank and Signature Bank, did too. There was also contagion beyond the USA. On 19 March, Credit Suisse, a Switzerland-based bank with CHF 531 billion in assets (around US$575 billion) at the end of 2022, was forced by its country’s regulators to agree to be acquired by its national peer, UBS, for just over US$3 billion; two days prior, on 17 March, Credit Suisse had a market capitalization of US$8.6 billion. Going back to the start of 2023, I don’t think it was in anyone’s predictions for the year that banks of significant size in the USA would fail (Signature Bank had US$110 billion in assets at the end of 2022) or that the 167 year-old Credit Suisse would be absorbed by another bank for a relative pittance. These are a sound reminder of a belief I have about investing: Bad scenarios inevitably happen from time to time, but I  just don’t know when. To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.

The SVB bank run is also an example of an important aspect of how I invest: Why I shun forecasts. SVB’s run was different from past bank runs. Jerome Powell, chair of the Federal Reserve, said in a 22 March speech (emphasis is mine):

“The speed of the run [on SVB], it’s very different from what we’ve seen in the past and it does kind of suggest that there’s a need for possible regulatory and supervisory changes just because supervision and regulation need to keep up with what’s happening in the world.”

There are suggestions from observers of financial markets that the run on SVB could happen at such breakneck speed – US$42 billion of deposits, which is nearly a quarter of the bank’s deposit base, withdrawn in one day – because of the existence of mobile devices and internet banking. I agree. Bank runs of old would have involved people physically waiting in line at bank branches to withdraw their money. Outflow of deposits would thus take a relatively longer time. Now it can happen in the time it takes to tap a smartphone. In 2014, author James Surowiecki reviewed Walter Friedman’s book on the folly of economic forecasting titled Fortune Tellers. In his review, Surowiecki wrote (emphasis is mine):

“The failure of forecasting is also due to the limits of learning from history. The models forecasters use are all built, to one degree or another, on the notion that historical patterns recur, and that the past can be a guide to the future. The problem is that some of the most economically consequential events are precisely those that haven’t happened before. Think of the oil crisis of the 1970s, or the fall of the Soviet Union, or, most important, China’s decision to embrace (in its way) capitalism and open itself to the West. Or think of the housing bubble. Many of the forecasting models that the banks relied on assumed that housing prices could never fall, on a national basis, as steeply as they did, because they had never fallen so steeply before. But of course they had also never risen so steeply before, which made the models effectively useless.”

There is great truth in something writer Kelly Hayes once said: “Everything feels unprecedented when you haven’t engaged with history.” SVB’s failure can easily feel epochal to some investors, since it was one of the largest banks in America when it fell. But it was actually just 15 years ago, in 2008, when the largest bank failure in the USA – a record that still holds – happened. The culprit, Washington Mutual, had US$307 billion in assets at the time. In fact, bank failures are not even a rare occurrence in the USA. From 2001 to the end of March 2023, there have been 563 such incidents. But Hayes’ wise quote misses an important fact about life: Things that have never happened before do happen. Such is the case when it came to the speed of SVB’s bank run. For context, Washington Mutual crumbled after a total of US$16.7 billion in deposits – less than 10% of its total deposit base – fled over 10 days.

I have also seen that unprecedented things do happen with alarming regularity. It was just three years ago, in April 2020, when the price of oil went negative for the first time in history. When investing, I have – and always will – keep this in mind. I also know that I am unable to predict what these unprecedented events could look like, but I am sure that they are bound to happen. To deal with these, I fall back to what I shared earlier:

“To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.”

I think such companies carry the following traits, which I have long been looking for in my investing activities:

  1. Revenues that are small in relation to a large and/or growing market, or revenues that are large in a fast-growing market 
  2. Strong balance sheets with minimal or reasonable levels of debt
  3. Management teams with integrity, capability, and an innovative mindset
  4. Revenue streams that are recurring in nature, either through contracts or customer-behaviour
  5. A proven ability to grow
  6. A high likelihood of generating a strong and growing stream of free cash flow in the future

These traits interplay with each other to produce companies I believe to be antifragile. I first came across the concept of antifragility – referring to something that strengthens when exposed to non-lethal stress – in Nassim Nicholas Taleb’s book, Antifragile. Antifragility is an important concept for the way I invest. As I mentioned earlier, I operate on the basis that bad things will happen from time to time – to economies, industries, and companies – but I just don’t know how and when. As such, I am keen to own shares in antifragile companies, the ones which can thrive during chaos. This is why the strength of a company’s balance sheet is an important investment criterion for me – having a strong balance sheet increases the chance that a company can survive or even thrive in rough seas. But a company’s antifragility goes beyond its financial numbers. It can also be found in how the company is run, which in turn stems from the mindset of its leader.

It’s crucial to learn from history, as Hayes’s quote suggests. But it’s also important to recognise that the future will not fully resemble the past. Forecasts tend to fail because there are limits to learning from history and this is why I shun forecasts. In a world where unprecedented things can and do happen, I am prepared for the unexpected.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

How Bad is Zoom’s Stock-Based Compensation?

On the surface, Zoom’s rising stock-based compensation looks bad. But looking under the hood, the situation is not as bad as it looks.

There seems to be a lot of concern surrounding Zoom’s rising stock-based compensation (SBC).

In its financial years 2021, 2022 and 2023, Zoom recorded SBC of US$275 million, US$477 million and US$1,285 million, respectively. FY2023 was perhaps the most worrying for investors as Zoom’s revenue essentially flat-lined while its SBC more than doubled.

But as mentioned in an earlier article, GAAP accounting is not very informative when it comes to SBC. When companies report SBC using GAAP accounting, they record the amount on the financial statements based on the share price at the time of the grant. A more informative way to look at SBC would be from the perspective of the actual number of shares given out during the year.

In FY2021, 2022 and 2023, Zoom issued 0.6 million, 1.8 million and 4 million restricted stock units (RSUs), respectively. From that point of view, it seems the dilution is not too bad. Zoom had 293 million shares outstanding as of 31 January 2023, so the 4 million RSUs issued resulted in only 1.4% more shares.

What about down the road?

The number of RSUs granted in FY2023 was 22.1 million, up from just 3.1 million a year before. The big jump in FY2023 was because the company decided to give a one-time boost to existing employees. 

However, this does not mean that Zoom’s dilution is going to be 22 million shares every year from now on. The FY2023 grant was probably a one-off that is unlikely to recur, and the RSUs granted will vest over a period of three to four years.

If we divide the extra RSUs given in FY2023 by their 4-year vesting schedule, we can assume that around 8 million RSUs will vest each year. This will result in an annual dilution rate of 2.7% based on Zoom’s 293 million shares outstanding as of 31 January 2023.

Bear in mind: Zoom guided for a weighted diluted share count of 308 million for FY2024. This diluted number includes 4.8 million unexercised options that were granted a number of years ago. Excluding these, the number of RSUs that vest will be around 10 million; I believe this is because of an accelerated vesting schedule this year.

Cashflow impact

Although SBC does not result in a cash outflow for companies, it does result in a larger outstanding share base and, consequently, lower free cash flow per share.

But Zoom can offset that by buying back its shares. At its current share price of US$69, Zoom can buy back 8 million of its shares for around US$550 million. Zoom generated US$1.5 billion in free cash flow in FY2023, excluding working capital changes. If it can sustain cash generation at this level, it can buy back all the stock it issues each year and still have around US$1 billion in annual free cash flow left over for shareholders.
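
A quick sanity check of that claim in Python, using the figures above (the 8-million-a-year vesting estimate is my own, from earlier in the article):

```python
# Can Zoom's free cash flow absorb the dilution from vesting RSUs?
share_price = 69                    # current share price (US$)
rsus_vesting_per_year = 8_000_000   # estimated annual RSU vesting, from above
fcf = 1_500_000_000                 # FY2023 FCF, ex working capital changes (US$)

buyback_cost = share_price * rsus_vesting_per_year   # ~US$550 million
fcf_left_over = fcf - buyback_cost                   # ~US$1 billion
print(f"Annual buyback cost: US${buyback_cost / 1e6:.0f} million")
print(f"FCF left for shareholders: US${fcf_left_over / 1e9:.2f} billion")
```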

We should also factor in the fact that, due to employee turnover, the RSU forfeiture rate at most companies is around 20% or more, which means my estimate of 8 million RSUs vesting per year for Zoom could be an overestimate. In addition, Zoom reduced its headcount by 15% in February this year, which should lead to more RSU forfeitures and hopefully fewer grants in the future.

Not as bad as it looks

GAAP accounting does not always give a complete picture of the financial health of a business. In my view, SBC is one of the most significant flaws of GAAP accounting and investors need to look into the financial notes to better grasp the true impact of SBC.

Zoom’s SBC numbers seem high. But when zooming in (pun intended), the SBC is not as bad as it looks. In addition, with share prices so low, it is easy for management to offset dilution with repurchases at very good prices. However, investors should continue to monitor share dilution over time to ensure that management is fair to shareholders.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Zoom. Holdings are subject to change at any time.

What Causes Stock Prices To Rise?

A company can be valued based on its future cash flows. Dividends, as cash flows to shareholders, should therefore drive stock valuations.

I recently wrote about why dividends are the ultimate driver of stock valuations. Legendary investor Warren Buffett once said: “Intrinsic value can be defined simply as the discounted value of cash that can be taken out of business during its remaining life.”

And dividends are ultimately the cash that is taken out from a business over time. As such, I consider the prospect of dividends as the true driver of stock valuations.

But what if a company will not pay out a dividend in my lifetime? 

Dividends in the future

Even though we may never receive a dividend from a stock, we should still be able to make a gain through stock price appreciation.

Let’s say a company will only start paying out $100 a share in dividends 100 years from now and that its dividend per share will remain stable from then. An investor who wants to earn a 10% return will be willing to pay $1000 a share at that time.

But it is unlikely that anyone reading this will be alive 100 years from now. That doesn’t mean we can’t still make money from this stock.

In Year 99, an investor who wants to make a 10% return will be willing to pay $909 a share as they can sell it to another investor for $1000 in Year 100. That’s a 10% gain.

Similarly, an investor, knowing this, will be willing to pay $826 in Year 98, knowing that another buyer will likely be willing to pay $909 to buy it from him in a year. And on and on it goes.

Coming back to the present, an investor who wants to make a 10% annual return should be willing to pay $0.07 a share. Even though this investor will likely never hold the shares for 100 years, in a well-oiled financial system, the investor should be able to sell the stock at a higher price over time.
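
The whole chain of reasoning is just repeated discounting, which a few lines of Python can reproduce (the $100 perpetuity and 10% required return are from the example above):

```python
# A $100/year perpetuity starting in Year 100, discounted at a 10% required return.
required_return = 0.10
value_in_year_100 = 100 / required_return   # $1,000 a share in Year 100

# Walk backwards one year at a time, as the Year 99 and Year 98 buyers do...
price = value_in_year_100
for year in (99, 98):
    price /= 1 + required_return
    print(f"Year {year} buyer pays ~${price:.0f}")    # $909, then $826

# ...or jump straight to the present in one step.
price_today = value_in_year_100 / (1 + required_return) ** 100
print(f"Price today: ~${price_today:.2f}")            # ~$0.07
```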

But be warned

In the above example, I assumed that the financial markets are working smoothly and investors’ required rate of return remained constant at 10%. I also assumed that the dividend trajectory of the company is known. But reality is seldom like this.

The required rate of return may change depending on the risk-free rate, impacting what people will pay for the stock at different periods of time. In addition, uncertainty about the business may also lead to stock price fluctuations. Furthermore, there may even be mispricings because of misinformation or simply irrational behaviour of buyers and sellers of the stock. All of these things can lead to wildly fluctuating stock prices.

So even if you do end up being correct on the future dividend per share of the company, the valuation trajectory you thought the company would follow may end up well off-course for long periods. The market may also demand different rates of return from you, leading to the market’s “intrinsic value” of the stock differing from yours.

The picture below is a sketch by me (sorry I’m not an artist) that illustrates what may happen:

The smooth line is what your “intrinsic value” of the company looks like over time. But the zig-zag line is what may actually happen.

Bottom line

To recap, capital gains can be made even if a company doesn’t pay a dividend during our lifetime. But we have to be wary that capital gains may not happen smoothly.

Even if shareholders are right about a stock’s future dividend profile, they must be able to hold the stock through volatile periods until the stock price eventually rises above, or at least reaches, their intrinsic value estimate in order to make their required rate of return.

You may also have noticed from the chart that occasionally stocks can go above your “intrinsic value” line (whatever rate of return you are using). If you bought in at these times, you are unlikely to make a return that meets your required rate.

To avoid this, we need to buy in at the right valuation and be patient enough to wait for market sentiment to converge to our intrinsic value over time to make a profit that meets our expectations. Patience and discipline are, hence, key to investment success. And of course, we also need to predict the dividend trajectory of the company somewhat accurately.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

How To Find The Intrinsic Value of a Stock At Different Points in Time

Intrinsic value is the sum of all future cash flows discounted to the present, but it also changes over the course of time.

A company’s intrinsic value is the value of the sum of future cash flows to the shareholder discounted to the present day. 

But the intrinsic value of a company is not static. It moves with time. The closer we get to the future cash flows, the more an investor should be willing to pay for the company.

In this article, I will run through (1) how to compute the intrinsic value of a company today, (2) how to plot the graph of the intrinsic value, and (3) what to do with intrinsic value charts.

How to calculate intrinsic value

Simply put, intrinsic value is the sum of all future cash flows discounted to the present. 

As shareholders of a company, the future cash flows are all future dividends plus the proceeds we can collect when we eventually sell our shares in the company.

To keep things simple, we should assume that we are holding a company to perpetuity or till the business closes down. This will ensure we are not beholden to market conditions that influence our future cash flows through a sale. We, hence, only need to concern ourselves with future dividends.

To calculate intrinsic value, we need to predict the amount of dividends we will collect and the timing of that dividend.

Once we figure that out, we can discount the dividends to the present day.

Let’s take a simple company that will pay $1 a share for 10 years before closing down. Upon closing, the company pays a $5 dividend on liquidation. Let’s assume we want a 10% return. The table below shows the dividend schedule, the value of each dividend when discounted to the present day and the total intrinsic value of the company now.

Year       Dividend    Net present value
Now        $0.00       $0.00
Year 1     $1.00       $0.91
Year 2     $1.00       $0.83
Year 3     $1.00       $0.75
Year 4     $1.00       $0.68
Year 5     $1.00       $0.62
Year 6     $1.00       $0.56
Year 7     $1.00       $0.51
Year 8     $1.00       $0.47
Year 9     $1.00       $0.42
Year 10    $6.00       $2.31
Sum        $15.00      $8.07

As you can see, we have calculated the net present value of each dividend based on how far in the future we will receive it. The equation for the net present value is: NPV = Dividend / (1 + 10%)^(years away).

The intrinsic value is the sum of the net present value of all the dividends. The company in this situation has an intrinsic value of $8.07.
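
The same calculation in Python, as a minimal sketch (the function name is my own); it also shows the value immediately after the first dividend is collected, which the next section walks through:

```python
# Intrinsic value as the sum of discounted dividends.
def intrinsic_value(dividends, rate=0.10):
    """Discount each dividend by (1 + rate)^(years away) and sum them up."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))

# $1 a share for 9 years, then $6 in year 10 ($1 dividend + $5 on liquidation).
dividends = [1.0] * 9 + [6.0]
print(f"Intrinsic value now: ${intrinsic_value(dividends):.2f}")   # ~$8.07

# Immediately after collecting the year-1 dividend, 9 payments remain.
print(f"After year 1: ${intrinsic_value(dividends[1:]):.2f}")      # ~$7.88
```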

Intrinsic value moves

In the above example, we have calculated the intrinsic value of the stock today. But the intrinsic value moves with time. In a year, we will have collected $1 in dividends, which will lower our intrinsic value. But at the same time, we will be closer to receiving subsequent dividends. 

The table below shows the intrinsic value immediately after collecting our first dividend in year 1.

Year       Dividend    Net present value
Now        $0.00       $0.00
Year 1     $1.00       $0.91
Year 2     $1.00       $0.83
Year 3     $1.00       $0.75
Year 4     $1.00       $0.68
Year 5     $1.00       $0.62
Year 6     $1.00       $0.56
Year 7     $1.00       $0.51
Year 8     $1.00       $0.47
Year 9     $6.00       $2.54
Sum        $14.00      $7.88

There are a few things to take note of.

First, the sum of the remaining dividends left to be paid has dropped to $14 (from $15) as we have already collected $1 worth of dividends.

Second, the intrinsic value has now dropped to $7.88. 

We see that there are two main effects of time.

It allowed us to collect our first dividend payment of $1, reducing future dividends. That has a net negative impact on the remaining intrinsic value of the stock. But we are also now closer to receiving future dividends. For instance, the big payout after year 10 previously is now just 9 years away.

The net effect is that the intrinsic value dropped to $7.88. We can repeat this exercise over and over, and plot the company’s intrinsic value over time.

Notice that while intrinsic value has dropped, investors still manage to get a rate of return of 10% due to the dividends collected.

When a stock doesn’t pay a dividend for years

Oftentimes a company may not pay a dividend for years. Think of Berkshire Hathaway, which has not paid a dividend in decades. 

The intrinsic value of Berkshire still moves with time as we get closer to the eventual dividend payment. In this scenario, the intrinsic value simply rises as we get closer to collecting dividends, with no offsetting reduction from dividends paid out yet.

Take, for example, a company that will not pay a dividend for 10 years, after which it begins to distribute a $1-per-share dividend for the next 10 years before closing down and paying $5 a share in liquidation value. 

Year       Dividend    Net present value
Now        $0.00       $0.00
Year 1     $0.00       $0.00
Year 2     $0.00       $0.00
Year 3     $0.00       $0.00
Year 4     $0.00       $0.00
Year 5     $0.00       $0.00
Year 6     $0.00       $0.00
Year 7     $0.00       $0.00
Year 8     $0.00       $0.00
Year 9     $0.00       $0.00
Year 10    $0.00       $0.00
Year 11    $1.00       $0.35
Year 12    $1.00       $0.32
Year 13    $1.00       $0.29
Year 14    $1.00       $0.26
Year 15    $1.00       $0.24
Year 16    $1.00       $0.22
Year 17    $1.00       $0.20
Year 18    $1.00       $0.18
Year 19    $1.00       $0.16
Year 20    $6.00       $0.89
Sum        $15.00      $3.11

The intrinsic value of such a stock is around $3.11 at present. But in a year’s time, as we get closer to future dividend payouts, the intrinsic value will rise. 

A simple way of thinking about it is that in a year’s time, the intrinsic value will have risen 10% to meet our 10% discount rate, or required rate of return. As such, the intrinsic value will be $3.42 in one year. The intrinsic value will continue to rise 10% each year until we receive our first dividend payment in year 11.
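
Here is a small sketch of that trajectory, reusing the discounting function from earlier (the dividend schedule is the 20-year one from the table above):

```python
# Intrinsic value of the deferred-dividend stock rises ~10% a year at first.
def intrinsic_value(dividends, rate=0.10):
    return sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))

# No dividends for 10 years, then $1 x 9 years, then $6 ($1 + $5 liquidation).
dividends = [0.0] * 10 + [1.0] * 9 + [6.0]

for year in range(4):
    remaining = dividends[year:]   # each year, the schedule moves one year closer
    print(f"Year {year}: ${intrinsic_value(remaining):.2f}")
# Year 0: $3.11 -> Year 1: $3.42 -> Year 2: $3.77 -> Year 3: $4.14
```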

The intrinsic value curve will look like this for the first 10 years:

The intrinsic value is a smooth curve for stocks that do not yet pay a dividend.

Using intrinsic value charts

Intrinsic value charts can be useful in helping investors know whether a stock is under- or overvalued based on their required rate of return.

Andrew Brenton, CEO of Turtle Creek Asset Management, whose main fund has produced a 20% annualised return since 1998 (as of December 2022), uses his estimates of intrinsic value to make portfolio adjustments. 

If a stock goes above his intrinsic value, it means that it will not be able to earn his required rate of return. In that case, he lowers his portfolio weighting of the stock and vice versa.

While active management of the portfolio using this method can be rewarding as in the case of Turtle Creek, it is also fairly time-consuming.

Another way to use intrinsic value charts is to ensure you are getting a good entry price for your stock. If a stock trades at a price above your intrinsic value calculation, it may not be able to achieve your desired rate of return.

Final thoughts

Calculating the intrinsic value of a company can help investors achieve their return goals and ensure that they maintain discipline when investing in a company.

However, there are limitations. 

For one, intrinsic value calculations require an accurate projection of future payments to the shareholder. In many cases, it is hard for investors to predict these payments with accuracy and confidence. We simply have to rely on our best judgement. 

We are also often limited by the fact that we may not hold a stock to perpetuity or to its natural end of life and liquidation. If we need to sell the stock prematurely, we may be beholden to market conditions at the time of sale. 

It is also important to note that intrinsic value is not the same for everyone. I may be willing to attribute a higher intrinsic value to a company if my required rate of return is lower than yours. So each individual investor has to set his own target return to calculate intrinsic value.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

What’s Your Investing Edge?

What’s your investing edge? That’s the question many investors find themselves asking when building a personal portfolio. Here are some ways to gain an edge.

Warren Buffett probably has the most concise yet the best explanation of how to value a stock. He said: “Intrinsic value can be defined simply: it is the discounted value of the cash that can be taken out of a business during its remaining life.”

This is how all stocks should theoretically be valued.  In a perfect market where cash flows are certain and discount rates remain constant, all stocks should provide the same rate of return. 

But this is not the case in the real world. Stocks produce varying returns, allowing investors to earn above-average returns. 

Active stock pickers have developed multiple techniques to try to obtain these above-average returns to beat the indexes. In this article, I’ll go through some investing styles, why they can produce above-average returns, and the pros and cons of each style.

Long-term growth investing

One of the more common approaches today is long-term growth investing. But why does long-term investing outperform the market?

The market underestimates the growth potential

One reason is that market participants may underestimate the pace or durability of the growth of a company. 

Investors may not be comfortable projecting far into the future; they are often only willing to underwrite growth over the next few years and may assume that high growth fades away beyond that. 

While this is true for most companies, there are high-quality companies that are exceptions. If investors can find these companies that beat the market’s expectations, they can achieve better-than-average returns when the growth materialises. The chart below illustrates how investors can potentially make market-beating returns.

Let’s say the market’s required rate of return is 10%. The line at the bottom is what the market thinks the intrinsic value is, based on a 10% required return. But the company exceeds the market’s expectations, resulting in the stock price following the middle line instead, for a 15% annual return.

The market underwrites a larger discount rate

Even if the market has high expectations for a company’s growth, it may demand a higher rate of return because it is uncertain that the growth will play out. The market is then only willing to pay a lower price for the business, creating an opportunity to earn higher returns.

The line below shows what investors can earn, which is more than the 10% return they would get if the market were more confident about the company.

Deep value stocks

Alternatively, another group of investors may prefer to invest in companies whose share prices are below their intrinsic values now. 

Rather than looking at future intrinsic values and waiting for the growth to play out, some investors simply opt to buy stocks trading below their intrinsic values and hope that the stock price closes the gap. The chart below illustrates how this will work.

The black line is the intrinsic value of the company based on a 10% required return. The beginning of the red line is where the stock price is at. The red line is what investors hope will happen over time as the stock price closes the gap with its intrinsic value. Once the gap closes, investors then exit the position and hop on the next opportunity to repeat the process.

Pros and cons

All investing styles have their own pros and cons. 

  1. Underappreciated growth
    For long-term investing in companies with underappreciated growth prospects, investors need to be right about the future growth of the company. To do so, investors must have a keen understanding of the business background, the growth potential, the competition, the probability that the growth plays out, and why the market may be underestimating the company’s growth.

This requires in-depth knowledge of the company and conviction that the management team can execute better than the market expects.

  2. Underwriting larger discount rates
    For companies that the market has high hopes for but is only willing to underwrite at a larger discount rate due to the uncertainty around the business, investors also need in-depth knowledge of the company and more certainty than the market that the growth will eventually play out.
    Again, this requires a good grasp of the business fundamentals and the probability of the growth playing out.
  3. Undervalued companies
    Investors who invest in companies based on valuations being too low now also need a keen understanding of the business. Opportunities can arise from short-term misconceptions about a company, but investors must have a view of the company that is differentiated from the rest of the market.
    A near-term catalyst is often required for the market to realise the discrepancy. A catalyst can be in the form of dividend increases or management unlocking shareholder value through spin-offs, among other possibilities. This style of investing often requires more hard work as investors need to identify where the catalyst will come from. Absent a catalyst, the stock may remain undervalued for long periods, resulting in less-than-optimal returns. In addition, new opportunities need to be found after each exit.

What’s your edge?

Active fundamental investors can use many different styles in their quest to beat the market. While each style has its own limitations, if done correctly, all of these techniques can achieve market-beating returns over time.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

When Shouldn’t You Pay a Premium For a Growing Company?

Return on retained capital and the reinvestment opportunity are two factors that impact valuation and returns for an investor.

You may assume that a faster-growing business always deserves a premium valuation but that’s not always the case. Growth is not the only criterion that determines valuation. The cost of growth matters just as much.

In this article, I will explore four things:

(I) Why growth is not the only factor that determines value
(II) Why companies with high returns on retained capital deserve a higher valuation
(III) How much we should pay for a business by looking at its reinvestment opportunities and returns on retained capital
(IV) Two real-life companies that have generated tremendous returns for shareholders based on high returns on retained capital

Growth is not the only factor

To explain why returns on retained capital matter, let’s examine a simple example.

Companies A and B both earn $1 per share in the upcoming year. Company A doesn’t reinvest its earnings. Instead, it gives its profits back to shareholders in the form of dividends. Company B, on the other hand, is able to reinvest all of its profits back into its business for an 8% return each year. The table below illustrates the earnings per share of the two companies over the next 5 years:

Company B is clearly growing its earnings per share much quicker than Company A. But that does not mean we should pay a premium valuation. We need to remember that Company B does not pay a dividend, whereas Company A pays $1 per share in dividends each year. Shareholders can reinvest that dividend to generate additional returns.

Let’s assume that an investor can make 10% a year from reinvesting the dividend collected from Company A. Here is how much the investor “earns” from being a shareholder of Company A compared to Company B after reinvesting the dividends earned each year:

The table just above shows that investors can earn more from investing in Company A and reinvesting the dividends than from investing in Company B. Company B’s return on retained capital is lower than the return we can get from reinvesting our dividends. In this case, we should pay less for Company B than Company A.
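
Since the tables in this section are images in the original post, here is a hedged Python sketch of the comparison. I read the shareholder’s annual “earnings” from Company A as the $1 of earnings per share plus the return on a side pot of reinvested dividends, which is one reasonable way to frame it:

```python
# Company A pays out everything (reinvested at 10%); Company B retains all at 8%.
reinvest_rate = 0.10   # return a shareholder earns on reinvested dividends
retained_rate = 0.08   # Company B's return on retained capital

pot = 0.0              # side pot of reinvested Company A dividends
for year in range(1, 6):
    income_a = 1.0 + pot * reinvest_rate          # $1 EPS + return on the pot
    income_b = (1 + retained_rate) ** (year - 1)  # Company B's EPS
    print(f"Year {year}: Company A owner earns ${income_a:.2f}, "
          f"Company B owner earns ${income_b:.2f}")
    pot = pot * (1 + reinvest_rate) + 1.0         # reinvest this year's dividend
```

From year 2 onward, the Company A owner earns more each year, which is why Company B’s faster headline growth alone does not justify a premium.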

Retaining earnings to grow a company can be a powerful tool, but it is deploying those retained earnings effectively that drives real value for shareholders.

High-return companies

Conversely, investors should pay a premium for a company that generates a higher return on retained capital. Let’s look at another example.

Companies C and D both will generate $1 per share in earnings this year. Company C reinvests all of its earnings to generate a 10% return on retained capital. Company D, on the other hand, is able to generate a 20% return on retained capital. However, Company D only reinvests 50% of its profits and returns the rest to shareholders as dividends. The table below shows the earnings per share of both companies in the next 5 years:

As you may have figured, both companies are growing at exactly the same rate. This is because while Company D is generating double the returns on retained capital, it only reinvests 50% of its profit. The other 50% is returned to shareholders as dividends.

But don’t forget that investors can reinvest Company D’s dividends for more returns. The table below shows what shareholders can “earn” if they are able to generate 10% returns on reinvested dividends:

So while Companies C and D are growing at exactly the same rates, investors should be willing to pay a premium for Company D because it is generating higher returns on retained capital.
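
As with the earlier pair, the tables here are images in the original post; a small sketch, under the same look-through framing as before, shows how Company D’s owner pulls ahead despite identical EPS growth:

```python
# C reinvests 100% at 10%; D reinvests 50% at 20% and pays the rest out.
pot = 0.0   # Company D's dividends, reinvested by the shareholder at 10%
for year in range(1, 6):
    eps = 1.10 ** (year - 1)        # identical EPS growth for both companies
    earn_c = eps                    # Company C owner: EPS only
    earn_d = eps + pot * 0.10       # Company D owner: EPS + return on the pot
    print(f"Year {year}: C owner ${earn_c:.2f}, D owner ${earn_d:.2f}")
    pot = pot * 1.10 + 0.5 * eps    # reinvest this year's Company D dividend
```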

How much of a premium should we pay?

What the above examples show is that growth is not the only thing that matters. The cost of that growth matters more. Investors should be willing to pay a premium for a company that is able to generate high returns on retained capital.

But how much of a premium should an investor be willing to pay? We can calculate that premium using a discounted cash flow (DCF) model.

Let’s use Companies A, B, C, and D as examples again, but this time let’s add Company E into the mix. Company E reinvests 100% of its earnings at a 20% return on retained capital. The table below shows the earnings accruing to each company’s shareholders, with dividends reinvested at 10%:

Let’s assume that each company’s reinvestment opportunity lasts for 10 years before it is exhausted; after that, every company returns 100% of its earnings to shareholders each year and earnings remain flat. Since dividends can be reinvested at a 10% return, that is our opportunity cost of capital, so we should use a 10% discount rate to calculate how much an investor should pay for each company. The table below shows the price per share and price-to-earnings (P/E) multiple one can pay for each:
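Under those stated assumptions, the DCF is mechanical enough to sketch in a few lines of Python. The `justified_pe` helper below is my own illustrative construction: it discounts ten years of dividends while the company reinvests, then a flat perpetuity with 100% payout, all at the 10% hurdle rate:

```python
# Illustrative DCF sketch: `years` of reinvestment at `rorc`, then a flat
# perpetuity with all earnings paid out, discounted at the hurdle rate.
def justified_pe(retention, rorc, years=10, discount=0.10):
    growth = retention * rorc      # earnings growth = retention x return
    eps, price = 1.00, 0.0
    for t in range(1, years + 1):
        price += (1 - retention) * eps / (1 + discount) ** t  # PV of dividend
        eps *= 1 + growth
    price += (eps / discount) / (1 + discount) ** years       # PV of perpetuity
    return price   # with $1 of year-1 EPS, the price equals the P/E multiple

for name, retention, rorc in [("A", 0.0, 0.0), ("B", 1.0, 0.08),
                              ("C", 1.0, 0.10), ("D", 0.5, 0.20),
                              ("E", 1.0, 0.20)]:
    print(f"Company {name}: justified P/E of about {justified_pe(retention, rorc):.1f}")
```

Under these assumptions, the justified multiples come out to roughly 10, 8.3, 10, 14.5, and 23.9 times earnings for Companies A to E respectively. Note that Company C, which reinvests at exactly the 10% hurdle rate, deserves no higher a multiple than the no-growth Company A.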

We can see that companies with higher returns on retained capital deserve higher P/E multiples. In addition, if a company can redeploy more of its earnings at high rates of return, it deserves an even higher valuation. This is why Company E deserves a higher multiple than Company D even though both deploy their retained capital at the same 20% rate of return.

If a company is generating relatively low returns on retained capital, it is better for it to return cash to shareholders in the form of dividends, since shareholders can generate more returns by redeploying that cash elsewhere. This is why Company B deserves the lowest valuation: its return on retained capital sits below the 10% “hurdle rate”, so the management team’s capital allocation decisions are destroying shareholder returns even though the company is growing.

Real-life example #1

Let’s look at two real-life examples. Both are exceptional businesses that have generated tremendous returns for shareholders.

The first company is Constellation Software Inc (TSE: CSU), a holding company that acquires vertical market software (VMS) businesses to grow. Constellation has a remarkable track record of acquiring VMS businesses at very low valuations, thus enabling it to generate double-digit returns on incremental capital invested.

From 2011 to 2021, Constellation generated a total of US$5.8 billion in free cash flow. It was able to redeploy US$4.1 billion of that free cash flow to acquire new businesses and it paid out US$1.3 billion in dividends. Over that time, the annual free cash flow of the company grew steadily and materially from US$146 million in 2011 to US$1.2 billion in 2021.

In other words, Constellation retained around 78% of its free cash flow and returned 22% to shareholders. That 78% retained was able to drive 23% annualised growth in free cash flow, putting the return on retained capital at a whopping 30% per year (23% ÷ 78%). It is hence not surprising that Constellation’s stock price is up by around 33 times since 2011.

Today, Constellation sports a market cap of around US$37 billion and generated around US$1.3 billion in free cash flow on a trailing basis after accounting for one-off working capital headwinds. This translates to around 38 times its trailing free cash flow. Is that expensive?

Let’s assume that Constellation can continue to retain the same proportion of free cash flow at similar rates of return for the next 10 years before its reinvestment opportunities dry up. In this scenario, we can pay around 34 times free cash flow to generate a 10% annualised return. Given these assumptions, Constellation may be slightly expensive for an investor who wishes to earn an annual return of at least 10%.
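As a rough check under the same modelling structure as before (10 years of 23% free-cash-flow growth with 22% of free cash flow paid out each year, flat thereafter, all discounted at 10%), a few lines of Python land close to that 34-times figure:

```python
# Constellation sketch: 23% FCF growth for 10 years (78% retention at ~30%),
# 22% of FCF paid out along the way, then a flat perpetuity, discounted at 10%.
fcf, price = 1.00, 0.0
for year in range(1, 11):
    price += 0.22 * fcf / 1.10 ** year   # PV of each year's payout
    fcf *= 1.23
price += (fcf / 0.10) / 1.10 ** 10       # PV of the flat perpetuity
print(f"Justified multiple: about {price:.0f}x")   # prints roughly 34x
```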

Real-life example #2

Simulations Plus (NASDAQ: SLP) is a company that provides modelling and simulation software for drug discovery and development. From FY2011 to FY2022 (its financial year ends in August), Simulations Plus generated a total of US$100 million in free cash flow. It paid out US$47 million in dividends during that time, retaining 53% of its free cash flow.

Over that period, Simulations Plus’s free cash flow per share also grew from US$0.15 in FY2011 to US$0.82 in FY2022. This translates to 14% annualised growth while retaining/reinvesting just 53% of its free cash flow. The company’s return on retained capital was thus around 26% (14% ÷ 53%).

Simulations Plus’s stock price has skyrocketed from US$3 at the end of 2011 to US$42 today. At the current price, the company trades at around 47 times trailing free cash flow per share. Is this expensive?

Since Simulations Plus is still a small company in a fragmented but growing industry, its reinvestment opportunity can potentially last 20 years. Let’s assume that it maintains a 26% return on retained capital, that we can reinvest our dividends at a 10% rate of return, and that the reinvestment opportunity dries up after 20 years. In this scenario, we should be willing to pay around 44 times annual free cash flow for the business. Again, today’s share price may be slightly expensive if we want to achieve a 10% rate of return.

The bottom line

Investors often assume that we should pay up for a faster-growing business. However, the cost of growth matters too. When looking at a business, we need to analyse both the company’s growth profile and the cost of that growth.

The reinvestment opportunity matters too. If a company has a high return on retained capital but retains only a small percentage of its annual profits to reinvest, then growth will be slow.

Finally, the duration of the reinvestment opportunity needs to be taken into account. A company that can redeploy 100% of its earnings at high rates of return for 20 years deserves a higher multiple than one that can only do so for 10 years.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.