The Latest Thoughts From American Technology Companies On AI (2024 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q1 earnings season.

The way I see it, artificial intelligence, or AI, really leapt into the zeitgeist in late 2022 and early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are software products from OpenAI that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the first quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb has been using AI for a long time and has made a lot of progress in the last 12 months, including (1) a computer vision AI model trained with 100 million photos that allows hosts to organise all their photos by room, which leads to higher conversion rates, (2) an AI-powered feature for hosts to reply to guests quickly, and (3) a reservation screening technology that has reduced parties at Airbnb listings

We’ve been using AI for a long time. In the last 12 months, we’ve made a lot of progress. I’ll just give you 3 examples of things we’ve done with AI. We made it easier to host. We have a computer vision model that we trained with 100 million photos, and that allows hosts to have the AI model organize all their photos by room. Why would you want to do this? Because this increases conversion rate when you do this. We launched last week AI-powered quick replies for hosts. So basically, it predicts the right kind of question or answer for a host to pre-generate to provide to guests. And this has been really helpful. And then we’ve made a really big impact on reducing parties in Airbnb with a reservation screening technology.

Airbnb’s management is going much bigger on generative AI; management thinks the biggest near-term impact generative AI can have on Airbnb’s business is in customer service; management thinks that generative AI in the realm of customer service can benefit Airbnb a lot more than hotels and online travel agents (OTAs); AI can solve difficult customer service challenges for Airbnb

So now we’re going much bigger on generative AI. I think we’re going to see the biggest impact is going to be on customer service in the near term. I think more than hotels, probably even more than OTAs, Airbnb will benefit from generative AI. And the reason why, it’s just a simple structural reason. We have the most, like, varied inventory. We don’t have any SKUs, and we’re an incredibly global platform. So it’s a very difficult customer service challenge. But imagine an AI agent that can actually, like, read a corpus of 1,000 pages of policies and be able to help adjudicate and help a customer service agent help a guest from Germany staying with a host in Japan. It’s a very difficult problem that AI can really supplement.

Airbnb’s management wants to bring AI capabilities from customer service to search and to the broader experience; the end game is to provide an AI-powered concierge

Over time, we’re going to bring the AI capabilities from customer service to search and to the broader experience. And the end game is to provide basically an AI-powered concierge. 

Alphabet (NASDAQ: GOOG)

Alphabet’s management gave a reminder that Alphabet has been an AI-first company since 2016; Alphabet started building TPUs (tensor processing units) in 2016

We’ve been an AI-first company since 2016, pioneering many of the modern breakthroughs that power AI progress for us and for the industry…

… You can imagine we started building TPUs in 2016, so we’ve definitely been gearing up for a long time.

Alphabet rolled out Gemini 1.5 Pro in February, a foundation AI model with a breakthrough in long context understanding and multimodal capabilities; Gemini 1.5 Pro has been embraced by developers and enterprise customers in a wide range of use cases

In February, we rolled out Gemini 1.5 Pro, which shows dramatic performance enhancements across a number of dimensions. It includes a breakthrough in long context understanding, achieving the longest context window of any large-scale foundation model yet. Combining this with Gemini’s native multimodal understanding across audio, video, text, code and more, it’s highly capable. We are already seeing developers and enterprise customers enthusiastically embrace Gemini 1.5 and use it for a wide range of things.

Alphabet’s management thinks that the company has the best infrastructure for AI; Gemini’s training and inference is done with Alphabet’s custom TPU (tensor processing unit) chips; Google Cloud offers the latest generation of Nvidia GPUs (graphics processing units) and Alphabet’s own TPUs

We have the best infrastructure for the AI era… Our data centers are some of the most high-performing, secure, reliable, and efficient in the world. They’ve been purpose-built for training cutting-edge AI models and designed to achieve unprecedented improvements in efficiency. We have developed new AI models and algorithms that are more than 100x more efficient than they were 18 months ago. Our custom TPUs, now in their fifth generation, are powering the next generation of ambitious AI projects. Gemini was trained on and is served using TPUs…

…We offer an industry-leading portfolio of NVIDIA GPUs along with our TPUs. This includes TPU v5p, which is now generally available, and NVIDIA’s latest generation of Blackwell GPUs. 

Alphabet’s management is seeing generative AI cause a shift in what people can do with Search, and they think this will lead to a new stage of growth, similar to the outcomes of prior shifts in Search; Alphabet has been experimenting with SGE (Search Generative Experience) for nearly a year and the company is now bringing AI overviews to the main Search results page; Alphabet has served billions of queries with its generative AI features; people who use the AI overviews in Google Search increase their search usage and report higher satisfaction with search results; ads that are above or below SGE results were found by users to be helpful; management is confident that SGE with ads will remain relevant; management thinks that the use of generative AI can help Google answer more complex questions and expand the type of queries it can serve

We have been through technology shifts before, to the web, to mobile, and even to voice technology. Each shift expanded what people can do with Search and led to new growth. We are seeing a similar shift happening now with generative AI. For nearly a year, we’ve been experimenting with SGE in search labs across a wide range of queries. And now we are starting to bring AI overviews to the main Search results page. We are being measured in how we do this, focusing on areas where gen AI can improve the search experience while also prioritizing traffic to websites and merchants. We have already served billions of queries with our generative AI features. It’s enabling people to access new information, to ask questions in new ways and to ask more complex questions. Most notably, based on our testing, we are encouraged that we are seeing an increase in search usage among people who use the new AI overviews as well as increased user satisfaction with the results…

…We shared in March how folks are finding ads either above or below the SGE results helpful. We’re excited to have a solid baseline to keep innovating on and confident in the role SGE, including Ads, will play in delighting users and expanding opportunities to meet user needs…

… I think with generative AI in Search, with our AI overviews, I think we will expand the type of queries we can serve our users. We can answer more complex questions as well as, in general, that all seems to carry over across query categories. Obviously, it’s still early, and we are going to be measured and put user experience at the front, but we are positive about what this transition means…

…On SGE in Search, we are seeing early confirmation of our thesis that this will expand the universe of queries where we are able to really provide people with a mix of actual answers linked to sources across the Web and bring a variety of perspectives, all in an innovative way. 

The cost of producing SGE responses has decreased by 80% from when SGE was first introduced a year ago because of work Alphabet has done on its Gemini models and TPUs

A number of technical breakthroughs are enhancing machine speed and efficiency, including the new family of Gemini models and a new generation of TPUs. For example, since introducing SGE about a year ago, machine costs associated with SGE responses have decreased 80% from when first introduced in Labs driven by hardware, engineering, and technical breakthroughs.

Alphabet’s immense reach – 6 products with >2 billion monthly users each, and 15 products with 0.5 billion users – is helpful in distributing AI to users; management has brought AI features to many Alphabet products

We have 6 products with more than 2 billion monthly users, including 3 billion Android devices. 15 products have 0.5 billion users, and we operate across 100-plus countries. This gives us a lot of opportunities to bring helpful gen AI features and multimodal capabilities to people everywhere and improve their experiences. We have brought many new AI features to Pixel, Photos, Chrome, Messages and more. We are also pleased with the progress we are seeing with Gemini and Gemini Advanced through the Gemini app on Android and the Google app on iOS.

Alphabet’s management thinks the company has a clear path to monetisation of AI services through ads, cloud, and subscriptions; in 2024 Q1, Alphabet introduced Gemini Advanced, a subscription service for access to its most advanced Gemini model

We have clear paths to AI monetization through Ads and Cloud as well as subscriptions…

… Our Cloud business continues to grow as we bring the best of Google AI to enterprise customers and organizations around the world. And Google One now has crossed 100 million paid subscribers, and in Q1, we introduced a new AI premium plan with Gemini Advanced.

Established enterprises are using Google Cloud for their AI needs (for example: (1) Discover Financial has begun deploying generative AI tools to its nearly 10,000 call center agents, (2) McDonald’s is using gen AI to enhance its customer and employee experiences, and (3) WPP is integrating Gemini models into its AI-powered marketing operating system); more than 60% of funded generative AI (gen AI) start-ups and nearly 90% of gen AI unicorns are also using Google Cloud; more than 1 million developers are now using Alphabet’s generative AI tools; customers can now also ground their generative AI with Google Search and their own data 

At Google Cloud Next, more than 300 customers and partners spoke about their generative AI successes with Google Cloud, including global brands like Bayer, Cintas, Mercedes-Benz, Walmart and many more…

Today, more than 60% of funded gen AI start-ups and nearly 90% of gen AI unicorns are Google Cloud customers. And customers like PayPal and Kakao Brain are choosing our infrastructure… 

…On top of our infrastructure, we offer more than 130 models, including our own models, open source models and third-party models. We made Gemini 1.5 Pro available to customers as well as Imagen 2.0 at Cloud Next. And we shared that more than 1 million developers are now using our generative AI across tools, including AI Studio and Vertex AI. We spoke about how customers like Bristol-Myers Squibb and Etsy can quickly and easily build agents and connect them to their existing systems. For example, Discover Financial has begun deploying gen AI-driven tools to its nearly 10,000 call center agents to achieve faster resolution times for customers. Customers can also now ground their gen AI with Google Search and their own data from their enterprise databases and applications. In Workspace, we announced that organizations like Uber, Pepperdine University and PennyMac are using Gemini in Google Workspace, our AI-powered agent that’s built right into Gmail, Docs, Sheets and more…

…To help McDonald’s build the restaurant of the future, we’re deepening our partnership across cloud and ads. Part of this includes them connecting Google Cloud’s latest hardware and data technologies across restaurants globally and starting to apply Gen AI to enhance its customer and employee experiences. Number two, WPP. At Google Cloud Next, we announced a new collaboration that will redefine marketing through the integration of our Gemini models with WPP Open, WPP’s AI-powered marketing operating system, already used by more than 35,000 of its people and adopted by key clients, including The Coca-Cola Company, L’Oreal and Nestle. We’re just getting started here and excited about the innovation this partnership will unlock. 

Alphabet’s management has AI solutions to help advertisers with predicting ad conversions and to match ads with relevant searches; management thinks Alphabet’s ability to help advertisers find customers and grow their advertising ROI (return on investment) is getting better as the company’s AI models improve

We’ve talked about whole solutions like Smart Bidding use AI to predict future ad conversions and their value in helping businesses stay agile and responsive to rapid shifts in demand and how products like broad match leverage LLMs to match ads to relevant searches and help advertisers respond to what millions of people are searching for…

…As advances accelerate in our underlying AI models, our ability to help businesses find users at speed and scale and drive ROI just keeps getting better.

Alphabet’s management introduced Gemini into Performance Max (PMax) in February and early results show advertisers using PMax asset generation are 63% more likely to publish a campaign with good or excellent ad strength, while those who improve their PMax ad strength to excellent see 6% more conversions on average; PMax asset generation is available to all US advertisers and is starting to be rolled out internationally

In February, we rolled Gemini into PMax. It’s helping curate and generate text and image assets so businesses can meet PMax asset requirements instantly. This is available to all U.S. advertisers and starting to roll out internationally in English, and early results are encouraging. Advertisers using PMax asset generation are 63% more likely to publish a campaign with good or excellent ad strength. And those who improve their PMax ad strength to excellent see 6% more conversions on average.

Advertisers who use Alphabet’s ACA (automatically created assets) feature that is powered by generative AI see conversions increase by 5%

We’re also driving improved results for businesses opting into automatically created assets, which are supercharged with gen AI. Those adopting ACA see, on average, 5% more conversions at a similar cost per conversion in Search and Performance Max campaigns.

Alphabet’s Demand Gen AI-powered service helps advertisers engage with new and existing customers across YouTube, Shorts, Gmail, and Discover; movie studio Lionsgate tested Demand Gen for a movie’s promotion and saw that it delivered an 85% more efficient CPC (cost per click) and 96% more efficient cost per page view compared to social benchmarks; Lionsgate has since used Demand Gen for two more films; Alphabet recently introduced new generative image tools in Demand Gen

And then there’s Demand Gen. Advertisers are loving its ability to engage new and existing customers and drive purchase consideration across our most immersive and visual touch points like YouTube, Shorts, Gmail and Discover. Hollywood film and TV studio, Lionsgate, partnered with Horizon Media to test what campaign type will deliver the most ticketing page views for its The Hunger Games: Ballad of Songbirds and Snakes film. Over a 3-week test, Demand Gen was significantly more efficient versus social benchmarks with an 85% more efficient CPC and 96% more efficient cost per page view. Lionsgate has since rolled out Demand Gen for 2 new titles. We’re also bringing new creative features to Demand Gen. Earlier this month, we announced new generative image tools to help advertisers create high-quality assets in a few steps with a few simple prompts. This will be a win for up-leveling visual storytelling and testing creative concepts more efficiently.

Google Cloud had 28% revenue growth in 2024 Q1 (versus 26% in 2023 Q4), driven by an increasing contribution from AI; management sees the growth of Google Cloud being underpinned by the benefits AI provides for customers, and management wants to invest aggressively in cloud while remaining focused on profitable growth; Alphabet’s big jump in capex in 2024 Q1, to $12 billion (from $6.3 billion in 2023 Q1), was mostly for technical infrastructure and reflects management’s confidence in the opportunities offered by AI; management expects Alphabet’s quarterly capex for the rest of 2024 to be roughly at or above the 2024 Q1 level; management has no view on 2025 capex at the moment; management sees Google Cloud hitting an inflection point because of AI

Turning to the Google Cloud segment. Revenues were $9.6 billion for the quarter, up 28%, reflecting significant growth in GCP with an increasing contribution from AI and strong Google Workspace growth, primarily driven by increases in average revenue per seat. Google Cloud delivered operating income of $900 million and an operating margin of 9%…

…With respect to Google Cloud, performance in Q1 reflects strong demand for our GCP infrastructure and solutions as well as the contribution from our Workspace productivity tools. The growth we are seeing across Cloud is underpinned by the benefit AI provides for our customers. We continue to invest aggressively while remaining focused on profitable growth…

…With respect to CapEx, our reported CapEx in the first quarter was $12 billion, once again driven overwhelmingly by investment in our technical infrastructure, with the largest component for servers followed by data centers. The significant year-on-year growth in CapEx in recent quarters reflects our confidence in the opportunities offered by AI across our business. Looking ahead, we expect quarterly CapEx throughout the year to be roughly at or above the Q1 level, keeping in mind that the timing of cash payments can cause variability in quarterly reported CapEx…

…And then with respect to 2025, as you said, it’s premature to comment so nothing to add on that…

…On the Cloud side, obviously, it’s definitely a point of inflection overall. I think the AI transformation is making everyone think about their whole stack, and we are engaged in a number of conversations. I think paid AI infrastructure, people are really looking to Vertex AI, given our depth and breadth of model choice, or using Workspace to transform productivity in your workplace, et cetera. So I think the opportunities there are all related to that, both all the work we’ve built up and AI being a point of inflection in terms of driving conversations. I think you’ll see us do it both organically and with a strong partner program as well. So we’ll do it with a combination.

Alphabet’s management thinks the AI transition is a once-in-a-generation opportunity; it’s the first time they think Alphabet can work on AI in a horizontal way

I think the AI transition, I think it’s a once-in-a-generation kind of an opportunity. We’ve definitely been gearing up for this for a long time. You can imagine we started building TPUs in 2016, so we’ve definitely been gearing up for a long time…

… The real opportunities we see is the scale of research and innovation, which we have built up and are going to continue to deliver. I think for the first time, we can work on AI in a horizontal way and it impacts the entire breadth of the company, be it Search, be it YouTube, be it Cloud, be it Waymo and so on. And we see a rapid pace of innovation in that underlying.

Alphabet’s management thinks that, with regards to monetising the opportunity of smartphone-based AI searches, there will be search use-cases that can be fulfilled on-device, but there will be many, many search use-cases that will require the internet

[Question] As users start searching on smartphones and those searches are basically rendered on the model, on the phone, without accessing the web, how do you guys anticipate monetizing some of these smartphone-based behaviors that are kind of run on the edge?

[Answer] If you look at what users are looking for, people are looking for information and an ability to connect with things outside. So I think there will be a set of use cases which you will be able to do on device. But for a lot of what people are looking to do, I think you will need the richness of the cloud, the Web, and you have to deliver it to users. So again, to my earlier comments, I think through all these moments, you saw what we have done with Samsung with Circle to Search. I think it gives a new way for people to access Search conveniently wherever they are. And so we view this as a positive way to bring our services to users in a more seamless manner. So I think it’s positive from that perspective. In terms of on-device versus cloud, there will be needs which can be done on-device, and we should do so to help from a privacy standpoint. But there are many, many things for which people will need to reach out to the cloud. And so I don’t see that as being a big driver in the on-cloud versus off-cloud in any way.

Amazon (NASDAQ: AMZN)

Amazon’s management recently launched a new generative AI tool for third-party sellers to quickly create product detail pages on Amazon using just the sellers’ URL to their websites; more than 100,000 third-party sellers on Amazon are already using at least one of Amazon’s generative AI tools

We’ve recently launched a new generative AI tool that enables sellers to simply provide a URL to their own website, and we automatically create high-quality product detail pages on Amazon. Already, over 100,000 of our selling partners have used one or more of our gen AI tools. 

Amazon’s management is seeing AWS customers being excited about leveraging generative AI to change their customer experiences and businesses; AWS’s AI business is already at a multibillion-dollar revenue rate; AWS AI’s business is driven by a few things, including the fact that many companies are still building their models; management expects more models to be built on AWS over time because of the depth of AI offerings AWS has

Our AWS customers are also quite excited about leveraging gen AI to change the customer experiences and businesses. We see considerable momentum on the AI front where we’ve accumulated a multibillion-dollar revenue run rate already…

… I mentioned we have a multibillion-dollar revenue run rate that we see in AI already, and it’s still relatively early days. And I think that there’s — at a high level, there’s a few things that we’re seeing that’s driving that growth. I think first of all, there are so many companies that are still building their models. And these range from the largest foundational model builders, like Anthropic, which you mentioned, who every 12 to 18 months are building new models. And those models consume an incredible amount of data with a lot of tokens, and they’re significant to actually go train. And a lot of those are being built on top of AWS, and I expect an increasing amount of those to be built on AWS over time because of our operational performance and security, as well as our chips, both what we offer from NVIDIA and our own custom silicon. But if you take Anthropic, as an example, they’re training their future models on our custom silicon on Trainium. And so I think we’ll have a real opportunity for a lot of those models to run on top of AWS.

Amazon’s management’s framework for thinking about generative AI consists of 3 layers –  the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon continues to add capabilities in all 3

You heard me talk about our approach before, and we continue to add capabilities at all 3 layers of the gen AI stack. At the bottom layer, which is for developers and companies building models themselves, we see excitement about our offerings…

…The middle layer of the stack is for developers and companies who prefer not to build models from scratch but rather seek to leverage an existing large language model, or LLM, customize it with their own data and have the easiest and best features available to deploy secure high-quality, low-latency, cost-effective production gen AI apps…

…The top of the stack are the gen AI applications being built. 

Amazon’s management thinks AWS has the broadest selection of Nvidia compute instances but also sees high demand for Amazon’s custom silicon, Trainium and Inferentia, as they provide favourable price performance benefits; larger quantities of Amazon’s latest Trainium chip, Trainium2, will arrive in 2024 H2 and early 2025; Anthropic’s future models will be trained on Trainium

We have the broadest selection of NVIDIA compute instances around, but demand for our custom silicon, Trainium and Inferentia, is quite high given its favorable price performance benefits relative to available alternatives. Larger quantities of our latest-generation Trainium2 are coming in the second half of 2024 and early 2025…

…But if you take Anthropic, as an example, they’re training their future models on our custom silicon on Trainium. 

SageMaker, AWS’s fully-managed machine learning service, has helped (1) Perplexity AI train models 40% faster, (2) Workday reduce inference latency by 80%, and (3) NatWest reduce time to value for AI from 12-18 months to less than 7 months; management is seeing an increasing number of AI model builders standardising on SageMaker

Companies are also starting to talk about the eye-opening results they’re getting using SageMaker. Our managed end-to-end service has been a game changer for developers in preparing their data for AI, managing experiments, training models faster, lowering inference latency, and improving developer productivity. Perplexity.ai trains models 40% faster with SageMaker. Workday reduces inference latency by 80% with SageMaker, and NatWest reduces its time to value for AI from 12 to 18 months to under 7 months using SageMaker. This changes how challenging it is to build your own models, and we see an increasing number of model builders standardizing on SageMaker.

Amazon’s management thinks Amazon Bedrock, a LLM-as-a-service offering, has the broadest selection of LLMs (large language models) for customers in addition to retrieval augmented generation (RAG) and other features; Bedrock offers high-profile LLMs – such as Anthropic’s Claude 3 and Meta’s Llama 3 – in addition to Amazon’s own Titan models; Custom Model Import is a new feature from Bedrock that satisfies a customer request (the ability to import models from SageMaker or elsewhere into Bedrock in a simple manner) that nobody has yet met; management is seeing customers being excited about Custom Model Import; Bedrock has tens of thousands of customers

This is why we built Amazon Bedrock, which not only has the broadest selection of LLMs available to customers but also unusually compelling model evaluation, retrieval augmented generation, or RAG, to expand a model’s knowledge base, guardrails to safeguard what questions applications will answer, agents to complete multistep tasks, and fine-tuning to keep teaching and refining models. Bedrock already has tens of thousands of customers, including adidas, New York Stock Exchange, Pfizer, Ryanair and Toyota. In the last few months, Bedrock’s added Anthropic’s Claude 3 models, the best-performing models on the planet right now; Meta’s Llama 3 models; Mistral’s various models; Cohere’s new models; and new first-party Amazon Titan models.

A week ago, Bedrock launched a series of other features, but perhaps most importantly, Custom Model Import. Custom Model Import is a sneaky big launch as it satisfies a customer request we’ve heard frequently and that nobody has yet met. As increasingly more customers are using SageMaker to build their models, they’re wanting to take advantage of all the Bedrock features I mentioned earlier that make it so much easier to build high-quality production-grade gen AI apps. Bedrock Custom Model Import makes it simple to import models from SageMaker or elsewhere into Bedrock before deploying their applications. Customers are excited about this, and as more companies find they’re employing a mix of custom-built models along with leveraging existing LLMs, the prospect of these 2 linchpin services in SageMaker and Bedrock working well together is quite appealing…

…And the primary example we see there is how many companies, tens of thousands of companies, already are building on top of Amazon Bedrock.
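The retrieval augmented generation (RAG) feature mentioned in the Bedrock quotes above follows a general pattern that can be illustrated without any AWS services: retrieve the documents most relevant to a query, then prepend them as context to the prompt sent to an LLM. The sketch below is purely illustrative — the function names and the naive keyword-overlap retriever are my own assumptions for demonstration, not Bedrock's actual API, which handles embedding, vector search, and prompt assembly internally.

```python
# A minimal, illustrative sketch of the RAG pattern. Real systems (such as
# Bedrock's knowledge bases) use vector embeddings and an actual LLM call;
# here retrieval is naive keyword overlap and no model is invoked.

def retrieve(query, documents, top_k=1):
    """Rank documents by how many query words they share, highest first."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Augment the user query with retrieved context before an LLM call."""
    context = retrieve(query, documents)
    return (
        "Answer using only the context below.\n"
        "Context:\n" + "\n".join(context) + "\n"
        "Question: " + query
    )

# Hypothetical policy snippets standing in for an enterprise knowledge base.
docs = [
    "Refunds are issued within 14 days of cancellation.",
    "Hosts must respond to booking requests within 24 hours.",
]
prompt = build_prompt("How fast are refunds issued after cancellation?", docs)
```

The point of the pattern is that the model answers from retrieved, company-specific context rather than only its training data — which is why Jassy describes grounding and RAG as key to "production-grade" generative AI apps.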

Amazon’s management has announced the general availability of Amazon Q, a highly-capable generative AI-powered assistant; Amazon Q helps developers generate code, test code, debug code, and can save developers months of work when moving from older versions of Java to newer ones; Amazon Q has an Agents capability which can autonomously perform a range of tasks, including (1) implementing application features, and (2) parsing a company’s entire trove of internal data to create summaries and surface insights; Amazon Q also has Q Apps, which lets employees describe in natural language what app they want to build on top of internal data; management believes that Q is the most functionally-capable AI-powered assistant for software development and data, as Q outperforms competitors; many companies are already using Amazon Q

And today, we announced the general availability of Amazon Q, the most capable generative AI-powered assistant for software development and leveraging company’s internal data.

On the software development side, Q doesn’t just generate code. It also tests code, debugs coding conflicts, and transforms code from one form to another. Today, developers can save months using Q to move from older versions of Java to newer, more secure and capable ones. In the near future, Q will help developers transform their .NET code as well, helping them move from Windows to Linux.

Q also has a unique capability called Agents, which can autonomously perform a range of tasks, everything from implementing features, documenting, and refactoring code to performing software upgrades. Developers can simply ask Amazon Q to implement an application feature such as asking it to create an add to favorites feature in a social sharing app, and the agent will analyze their existing application code and generate a step-by-step implementation plan, including code changes across multiple files and suggested new functions. Developers can collaborate with the agent to review and iterate on the plan, and then the agent implements it, connecting multiple steps together and applying updates across multiple files, code blocks and test suites. It’s quite handy. On the internal data side, most companies have large troves of internally relevant data that resides in wikis, Internet pages, Salesforce, storage repositories like Amazon S3 and a bevy of other data stores and SaaS apps that are hard to access. It makes answering straightforward questions about company policies, products, business results, code, people, and many other topics hard and frustrating. Q makes this much simpler. You can point Q at all of your enterprise data repositories and it will search all this data, summarize logically, analyze trends, engage in dialogue with customers about this data.

We also introduced today a powerful new capability called Q Apps, which lets employees describe in natural language what apps they want to build on top of this internal data, and Q Apps will quickly generate that app. This is going to make it so much easier for internal teams to build useful apps from their own data.

Q is not only the most functionally capable AI-powered assistant for software development and data but also setting the standard for performance. Q has the highest-known score and acceptance rate for code suggestions, outperforms all other publicly benchmarkable competitors in catching security vulnerabilities, and leads all software development assistants in connecting multiple steps together and applying automatic actions. Customers are gravitating to Q, and we already see companies like Brightcove, British Telecom, Datadog, GitLab, GoDaddy, National Australia Bank, NCS, Netsmart, Slalom, Smartsheet, Sun Life, Tata Consultancy Services, Toyota, and Wiz using Q, and we’ve only been in beta until today.

Amazon’s management believes that AWS has a meaningful edge in security elements when it comes to generative AI, and this has led to companies moving their AI focus to AWS

I’d also caution folks not to overlook the security and operational performance elements of these gen AI services. It’s less sexy but critically important. Most companies care deeply about the privacy of the data in their AI applications and the reliability of their training and production apps. If you’ve been paying attention to what’s been happening in the last year or so, you can see there are big differences between providers on these dimensions. AWS has a meaningful edge, which is adding to the number of companies moving their AI focus to AWS.

Amazon’s management sees Amazon’s capex increasing meaningfully in 2024 compared to 2023 ($48.4 billion in 2023) because of AWS’s accelerating growth and high demand for generative AI; the capex in 2024 will go mostly towards technology infrastructure; the capex of $14 billion in 2024 Q1 will be the low quarter for the year

We expect the combination of AWS’ reaccelerating growth and high demand for gen AI to meaningfully increase year-over-year capital expenditures in 2024, which given the way the AWS business model works is a positive sign of the future growth…

…As a reminder, we define these as the combination of CapEx plus equipment finance leases. In 2023, overall capital investments were $48.4 billion…

…We do see, though, on the CapEx side that we will be meaningfully stepping up our CapEx and the majority of that will be in our — to support AWS infrastructure and specifically generative AI efforts…

…We’re talking about CapEx. Right now, in Q1, we had $14 billion of CapEx. We expect that to be the low quarter for the year.

Amazon’s management is very bullish on AWS, as 85% or more of global IT spend remains on-premise, even though AWS is already at a $100 billion-plus revenue run rate; in addition, there’s demand for generative AI, most of which will be created in the next few decades from scratch and on the cloud

We remain very bullish on AWS. We’re at $100 billion-plus annualized revenue run rate, yet 85% or more of the global IT spend remains on-premises. And this is before you even calculate gen AI, most of which will be created over the next 10 to 20 years from scratch and on the cloud. There is a very large opportunity in front of us. 

Amazon’s management thinks the generative AI opportunity is something they have not seen since the cloud or internet

We have a lot of growth in front of us, and that’s before the generative AI opportunity, which I don’t know if any of us have seen a possibility like this in technology in a really long time, for sure, since the cloud, perhaps since the Internet. 

Amazon’s management thinks much more money will be spent on AI inference than on model training; management sees quite a few companies that are building their generative AI applications to do inference on AWS

I think the thing that people sometimes don’t realize is that while we’re in the stage that so many companies are spending money training models, once you get those models into production, which not that many companies have, but when you think about how many generative AI applications will be out there over time, most will end up being in production when you see the significant run rates. You spend much more in inference than you do in training because you train only periodically, but you’re spinning out predictions and inferences all the time. And so we also see quite a few companies that are building their generative AI applications to do inference on top of AWS.
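Jassy’s point about inference eventually outspending training can be illustrated with a toy cost model. All of the dollar figures and request volumes below are hypothetical, chosen only to show the shape of the arithmetic: training costs recur a handful of times per year, while inference costs scale with continuous production traffic.

```python
# Toy model (all numbers hypothetical) of why inference spend can dwarf
# training spend: training runs periodically, inference runs continuously.

def annual_spend(train_runs_per_year: int, cost_per_train_run: float,
                 inferences_per_day: float, cost_per_inference: float):
    """Return (training, inference) spend per year for the toy model."""
    training = train_runs_per_year * cost_per_train_run
    inference = inferences_per_day * 365 * cost_per_inference
    return training, inference

# Say a model is retrained quarterly at $2M a run, then serves 50M
# requests a day at $0.001 each once it is in production.
training, inference = annual_spend(4, 2_000_000, 50_000_000, 0.001)
print(f"training: ${training:,.0f}/yr")    # training: $8,000,000/yr
print(f"inference: ${inference:,.0f}/yr")  # inference: $18,250,000/yr
```

Even with these invented numbers, inference spend overtakes training spend at fairly modest traffic, which is the dynamic management describes.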

Amazon’s management sees both training and inference being really big drivers for AWS; this is helped by the fact that these AI models will work with companies’ data and the security surrounding the data is important for companies, and AWS has a meaningful edge in security

We see both training and inference being really big drivers on top of AWS. And then you layer on top of that the fact that so many companies, their models and these generative AI applications are going to have their most sensitive assets and data. And it’s going to matter a lot to them what kind of security they get around those applications. And yes, if you just pay attention to what’s been happening over the last year or 2, not all the providers have the same track record. And we have a meaningful edge on the AWS side so that as companies are now getting into the phase of seriously experimenting and then actually deploying these applications to production, people want to run their generative AI on top of AWS.

Apple (NASDAQ: AAPL)

Apple’s management continues to feel bullish about Apple’s opportunity in generative AI; Apple is making significant investments in the area and will be sharing details soon; management thinks Apple has advantages with AI given its unique combination of hardware, software, services, custom silicon (with industry-leading neural engines), and privacy

We continue to feel very bullish about our opportunity in generative AI. We are making significant investments, and we’re looking forward to sharing some very exciting things with our customers soon. We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create. 

Apple’s management does not expect Apple’s capex to inflect higher, nor the composition of the capex to change much, even as the company leans into AI

[Question] As Apple leans more into AI and generative AI, should we expect any changes to the historical CapEx cadence that we’ve seen in the last few years of about $10 billion to $11 billion per year? Or any changes to how we may have historically thought about the split between tooling, data center, and facilities?

[Answer]  We are obviously very excited about the opportunity with GenAI. We obviously are pushing very hard on innovation on every front, and we’ve been doing that for many, many years. Just during the last 5 years, we spent more than $100 billion in research and development. As you know, on the CapEx front, we have a bit of a hybrid model where we make some of the investments ourselves. In other cases, we share them with our suppliers and partners. On the manufacturing side, we purchase some of the tools and manufacturing equipment. In some of the cases, our suppliers make the investment. And we do something similar on the data center side. We have our own data center capacity and then we use capacity from third parties. It’s a model that has worked well for us historically, and we plan to continue along the same lines going forward.

Apple’s management will soon share their thoughts on how Apple intends to monetise AI on its devices – but not today

[Question] You’ve obviously mentioned your excitement around generative AI multiple times. I’m just curious how Apple is thinking about the different ways in which you can monetize this technology because, historically, software upgrades haven’t been a big factor in driving product cycles. And so could AI be potentially different?

[Answer] I don’t want to get in front of our announcements obviously. I would just say that we see generative AI as a very key opportunity across our products, and we believe that we have advantages that set us apart there. And we’ll be talking more about it in — as we go through the weeks ahead.

Arista Networks (NYSE: ANET)

Arista Networks’ management sees an addressable market of US$60 billion in client-to-cloud AI networking

Amidst all the network consolidation, Arista is looking to establish ourselves as the pure-play networking innovator, for the next era, addressing at least a $60 billion TAM in data-driven client-to-cloud AI networking.

Arista Networks’ management is pleased with the momentum they are seeing in the company’s customer segments, including the Cloud and AI Titans segment; management is becoming increasingly constructive about hitting their 2025 target of US$750 million in AI revenue; the 2025 target of US$750 million is not a hockey-stick target, but a glide path

We are quite pleased with the momentum across all our 3 sectors: Cloud and AI Titans, Enterprise and Providers. Customer activity is high as Arista continues to impress our customers and prospects with our undeniable focus on quality and innovation…

… A good AI network needs a good data strategy, delivered by our highly differentiated EOS and network data lake architecture. We are, therefore, becoming increasingly constructive about achieving our AI target of $750 million in 2025…

…When you think about the $750 million target that we have become more constructive on, per Jayshree’s prepared remarks, that’s a glide path. So it’s not 0 in ’24; it’s a glide path to ’25.

Traditional networking discards data as the network changes state, but recent developments in AI show how important it is to gather and store large data sets – this is a problem Arista Networks’ management is solving through the company’s NetDL (Network Data Lake) platform, which streams every piece of network data in real time and archives the full data history

From the inception of networking decades ago, networking has involved rapidly changing data. Data about how the network is operating, which paths through the network are best, and how the network is being used. But historically, most of this data was simply discarded as the network changed state, and that which was collected can be difficult to interpret because it lacks context. Network addresses and port numbers by themselves provide little insight into what users are doing or experiencing.

Recent developments in AI have proved the value of data. But to take advantage of these breakthroughs, you need to gather and store large data sets, labeled suitably for machine learning. Arista is solving this problem with NetDL: we continually monitor every device, not simply taking snapshots, but rather streaming every network event, every counter, every piece of data in real time, archiving the full history in NetDL. Alongside this device data, we also collect flow data and inbound network telemetry data gathered by our switches. Then we enrich this performance data further with user, service and application layer data from external sources outside the network, enabling us to understand not just how each part of the network is performing, but also which users are using the network for what purposes, and how the network behavior is influencing their experience. NetDL is a foundational part of the EOS stack, enabling advanced functionality across all of our use cases. For example, in AI fabrics, NetDL enables fabric-wide visibility, integrating network data and NIC data to enable operators to identify misconfigurations or misbehaving hosts and pinpoint performance bottlenecks.

Any slowdown in the network when running generative AI training tasks can reduce processor performance by 30% or more

As generative AI training tasks evolve, they are made up of many thousands of individual iterations. Any slowdown due to the network can critically impact application performance, creating inefficient wait states and idling away processor performance by 30% or more. The time taken to reach coherence, known as job completion time, is an important benchmark, achieved by building a properly scaled-out AI network to improve the utilization of these precious and expensive GPUs.
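The arithmetic behind that 30% figure is straightforward: whatever fraction of each iteration the GPUs spend waiting on the network is capacity idled away. A minimal sketch, with made-up per-iteration timings:

```python
# Hypothetical illustration of how network stalls idle away GPU capacity
# and stretch job completion time (JCT). Timings are invented for the sketch.

def effective_utilization(compute_ms: float, stall_ms: float) -> float:
    """Fraction of wall-clock time the GPUs spend computing per iteration."""
    return compute_ms / (compute_ms + stall_ms)

# 70 ms of compute and 30 ms of network wait per iteration means 30% of
# the (expensive) GPU capacity is idled away, as the quote describes.
print(f"{effective_utilization(70.0, 30.0):.0%}")  # 70%

# Over many thousands of iterations, shaving the stall directly shrinks JCT.
iterations = 10_000
jct_s = iterations * (70.0 + 30.0) / 1000          # 1000 seconds
jct_faster_s = iterations * (70.0 + 15.0) / 1000   # 850 seconds
```

This is why job completion time, rather than raw link speed, is the benchmark management keeps returning to.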

A Cloud and AI Titan customer of Arista Networks used the company’s product to build a 24,000 node GPU cluster for complex AI training tasks; Arista Networks’ product offered an improvement of at least 10% in job completion performance across all packet sizes versus InfiniBand; in Arista Networks’ four recent AI Ethernet clusters that were won versus InfiniBand, management is seeing all four projects migrate from trials to pilots; Arista Networks will be connecting thousands of GPUs in the four projects this year and management expects to connect 10,000 to 100,000 GPUs in 2025; ethernet was traditionally considered to have loss properties while InfiniBand was traditionally considered to be lossless, but when ethernet is used in actual GPU clusters, ethernet is 10% faster than InfiniBand; management expects improvement in ethernet’s performance relative to InfiniBand in the future, driven partly by the Ultra Ethernet Consortium

In a recent blog from one of our large Cloud and AI Titan customers, Arista was highlighted for building a 24,000 node GPU cluster based on our flagship 7800 AI Spine. This cluster tackles complex AI training tasks that involve a mix of model and data parallelization across thousands of processors and ethernet is proving to offer at least 10% improvement of job completion performance across all packet sizes versus InfiniBand…

…If you recall, in February, I shared with you that we are progressing well in 4 major AI Ethernet clusters that we won versus InfiniBand recently. In all 4 cases, we are now migrating from trials to pilots, connecting thousands of GPUs this year, and we expect production in the range of 10,000 to 100,000 GPUs in 2025…

…Historically, as you know, when you look at InfiniBand and Ethernet in isolation, there are a lot of advantages to each technology. Traditionally, InfiniBand has been considered lossless and Ethernet is considered to have some loss properties. However, when you actually put a full GPU cluster together along with the optics and everything, and you look at the coherence of the job completion time across all packet sizes, data has shown (and this is data that we have gotten from third parties, including Broadcom) that at just about every packet size in a real-world environment comparing those technologies, the job completion time of Ethernet was approximately 10% faster. So you can look at these things in silos. You can look at it in a practical cluster, and in a practical cluster we are already seeing improvements on Ethernet. Now don’t forget, this is just Ethernet as we know it today. Once we have the Ultra Ethernet Consortium and some of the improvements you’re going to see on packet spraying and dynamic load balancing and congestion control, I believe those numbers will get even better.

Arista Networks’ management is witnessing an inflection of AI networking and expects the trend to continue both in the short and long run; management is seeing ethernet emerging as critical infrastructure for both front-end and back-end AI data centers; AI applications require seamless communication between the front-end (includes CPUs, or central processing units) and back-end (includes GPUs and AI accelerators); management is seeing ethernet at scale becoming the de facto network and premium choice for scaled-out AI training workloads

We are witnessing an inflection of AI networking and expect this to continue throughout the year and decade. Ethernet is emerging as a critical infrastructure across both front-end and back-end AI data centers. AI applications simply cannot work in isolation and demand seamless communication among the compute nodes, consisting of back-end GPUs and AI accelerators as well as front-end nodes like the CPUs, alongside storage and IP/WAN systems as well…

…Ethernet at scale is becoming the de facto network and premier choice for scale-out AI training workloads.

Arista Networks’ management thinks that visibility on new AI and cloud projects is getting better and has now improved to at least 6 months

In summary, as we continue to set the direction of Arista 2.0 networking, our visibility to new AI and cloud projects is improving and our enterprise and provider activity continues to progress well…

…In the Cloud and AI Titans in November, we were really searching for even 3 months visibility; 6 would have been amazing. Today, I think after a year of tough situations for us, where the Cloud Titans were pivoting rather rapidly to AI and not thinking about the Cloud as much, we’re now seeing a more balanced approach where they’re still doing AI, which is exciting, but they’re also expanding their regions on the Cloud. So I would say our visibility has now improved to at least 6 months and maybe it gets longer as time goes by.

Arista Networks’ management still sees Infiniband as the de facto network of choice for AI workloads, but ethernet is gaining ground; management sees ethernet as being the eventual winner against InfiniBand because ethernet has a long history of 50 years that gives it an advantage (Metcalfe’s law) 

And then sometimes we see them, obviously, when they’re pushing InfiniBand, which has been, for the most part, the de facto network of choice. You might have heard me say, last year or the year before, that I was outside looking into this AI networking. But today, we feel very pleased that we are able to be the scale-out network for NVIDIA’s GPUs and NICs based on Ethernet…

…This InfiniBand topic keeps coming up. And I’d just like to point out that Ethernet is about 50 years old. And over those 50 years, Ethernet has come head-to-head with a bunch of technologies like Token ring, SONET, ATM, FDDI, HIPPI, Scalable Coherent Interconnect, [ Mirrornet ]. And all of these battles have one thing in common. Ethernet won. And the reason why is because of Metcalfe’s law, the value of a network is quadratic in the number of nodes of the interconnect. And so anybody who tries to build something which is not Ethernet, is starting off with a very large quadratic disadvantage. And any temporary advantage they have because of the — some detail of the tech cycle is going to be quickly overwhelmed by the connectivity advantage you have with Ethernet.
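The “quadratic disadvantage” in that Metcalfe’s-law argument is just the count of possible node pairs in an interconnect, which a few lines make concrete:

```python
# Metcalfe's-law sketch: the value of a network scales roughly with the
# number of possible connections, which is quadratic in the node count.

def possible_links(n: int) -> int:
    """Distinct node pairs in an n-node network: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the installed base roughly quadruples the connectivity value,
# which is the head start an incumbent interconnect like Ethernet enjoys.
print(possible_links(1_000))  # 499500
print(possible_links(2_000))  # 1999000, roughly 4x
```

Any challenger interconnect starts with a small installed base, so on this measure its value deficit grows quadratically, not linearly, with the gap in node count.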

Arista Networks’ management does not see Nvidia as a direct competitor for ethernet; management also believes that Arista Networks’ focus and experience are advantages

We don’t see NVIDIA as a direct competitor yet on the Ethernet side. I think it’s 1% of their business. It’s 100% of our business. So we don’t worry about that overlap at all. And we think we’ve got 20 years of experience, from founding to now, to make our Ethernet switching better and better, both on the front end and back end. So we’re very confident that Arista can build the scale-out network and work with NVIDIA’s scale-up GPUs.

Within AI networking, Arista Networks’ management is seeing the first use case emerging to be the build-out of the fastest training workloads and clusters

The first use case that’s emerging for AI networking is, let’s just build the fastest training workloads and clusters. And they’re looking at performance. Power is a huge consideration, the cooling of the GPUs is a huge part of it. You would be surprised to hear a lot of times, it’s just waiting on the facilities and waiting for the infrastructure to be set up, right?

Arista Networks’ management is seeing Tier 2 cloud providers starting to pick up AI initiatives, although the Tier 2 providers are not close to the level the activity as the Cloud Titans

The Tier 2 cloud providers, I want to speak to them for a moment because not only are they strong for us right now, but they are starting to pick up some AI initiatives as well. So they’re not as large as close as the Cloud Titans, but the combination of the Service Providers and the Tier 2 Specialty Providers is also seeing some momentum.

Arista Networks is seeing GPU lead times improve significantly 

The GPU, the number of GPUs, the location of the GPUs, the scale of the GPUs, the locality of these GPUs, should they go with Blackwell, should they build with a scale-up inside the server or scale out to the network: the whole center of gravity is shifting. What’s nice to watch, which is why we’re more constructive on the 2025 numbers, is that the GPU lead times have significantly improved, which means more and more of our customers will get more GPUs, which in turn means they can build out and scale their network.

Arista Networks’ management is not seeing any pause in their customers’ investments in GPU clusters and networking just to wait for the delivery of Nvidia’s latest Blackwell AI chips; Arista Networks’ networking products can perform the required networking tasks well regardless of what GPU is used

[Question] I want to go back to AI, the road map and the deployment schedule for Blackwell. So it sounds like it’s a bit slower than maybe initially expected with initial customer delivery late this year. How are you thinking about that in terms of your road map specifically and how that plays into what you’re thinking about ’25 in a little bit more detail. And does that late delivery maybe put a little bit of a pause on maybe some of the cloud spend in the fall of this year as there seems to be somewhat of a technology transition going on towards Blackwell away from the Legacy product?

[Answer] We’re not seeing a pause yet. I don’t think anybody is going to wait for Blackwell necessarily in 2024 because they’re still bringing up their GPU clusters. And how a cluster is divided across multiple tenants, the choice of host, memory, storage architectures, optimizations on the GPU for collective communication, libraries, specific workloads, resilience, visibility: all of that has to be taken into consideration. All this to say, a good scale-out network has to be built, no matter whether you’re connecting to today’s GPUs or future Blackwells. And so they’re not going to pause the network because they’re waiting for Blackwell. They’re going to get ready for the network, whether it connects to a Blackwell or a current H100. So as we see it, the training workloads and the urgency of getting the best job completion time is so important that they’re not going to spare any investments on the network side, and the network side can be ready no matter what the GPU is.

ASML (NASDAQ: ASML)

ASML’s management sees no change to the company’s outlook for 2024 from what was mentioned in the 2023 Q4 earnings call, with AI-related applications still driving demand, Memory demand being driven by DRAM technology node transitions to support DDR5 and HBM, and Logic customers digesting capacity additions made in 2023

Looking at the market segments, we see a similar environment as communicated last quarter, with demand momentum from AI-related applications. Memory demand is primarily driven by DRAM technology node transitions in support of advanced memories such as DDR5 and HBM. Logic customers continue to digest the significant capacity additions made over the past year

ASML’s management sees some macro uncertainties as still being present, but the long-term trends in the company’s business (AI, electrification, energy transition) are intact

There are still some uncertainties. I would say primarily macro uncertainties. That’s still clearly there…

…If you look at the trends in the industry, if you look at, and I’m talking about the cyclicality trends in the industry, so like the utilization going up, inventory downstream being managed to more normal levels. I think it’s pretty clear that the industry is in its upturn and therefore we do believe that by 2024 we’re going to see a recovery. Clearly a recovery of the industry. So then fast forward to 2025. Then what do we find ourselves in? First off, I think we will find ourselves in 2025 in the midst of the upturn. So that’s a positive. Second – and we’ve talked about that many times – the secular trends are really strong. If you look at AI, if you look at electrification, if you look at the energy transition. It’s all very strong, very positive momentum behind it. So the secular trends are very, very strong. That is also something that I think will yield in 2025. Finally, if you just look at all the fab openings that have been indicated by our customers. The recent news on positive outcomes of CHIPS Act money allocation. All of that is very strong, very supportive for new fab openings across the globe. I think by 2025 you will see all three of those coming together. New fab openings, strong secular trends and the industry in the midst of its upturn. So that’s why we’re doing what we’re doing. Which is really preparing for that ramp, for that momentum that we see being built up.

ASML’s management thinks that AI will be driving demand for leading-edge and high-performance compute; AI is itself driven by massive amounts of data and the overlay of smart software over the data; management also thinks that IoT (Internet of Things) will be an area with plenty of AI applications

You’re basically saying what will drive leading-edge, high-performance compute. But you’re absolutely right. I mean, when you think about high-performance compute, and especially in the context of AI, and I’ve said this many, many times before, AI is driven by massive amounts of data and about also understanding the correlation between those data elements and then overlaying that with smart software. But — and I also believe, it’s actually what I’m seeing and what I’m hearing is that IoT in the industrial space will actually be in — will be an area where we will see a lot of AI applications. Well, in order to collect all that data, you need sensors because you’ve got all kinds of examples, whether it’s the car or whether it’s life science, medical equipment, it’s about sensing, and that is really the domain of mainstream semiconductors.

ASML’s management is seeing the software world enjoying 30% to 50% increases in productivity because of the use of AI

And when you think about AI, I mean, some of these examples, and especially in the software space where you see productivity, just the calculated productivity advantages of 30% to 50%, then the value of the next-generation transistor will be huge.

Coupang (NYSE: CPNG)

Coupang’s management is exploring both the company’s own foundational AI models as well as those from third-parties; AI has been a core part of Coupang’s strategy and management has been deploying the technology in many areas of the company’s business; management is excited about the potential of AI, but will be testing its ability to generate returns for the business

On AI, we are exploring, both for us, as you mentioned, foundational models as well as our own. Machine learning and AI continues to be — have been a core part of our strategy. We’ve deployed them in many facets of our business, from supply chain management to same-day logistics. We’re also seeing tremendous potential with large language models in a number of areas from search and ads, to catalog and operations, among others. There’s exciting potential for AI that we see and we see opportunities for it to contribute even more significantly to our business. But like any investment we make, we’ll test and iterate and then invest further only in the cases where we see the greatest potential for return.

Datadog (NASDAQ: DDOG)

Datadog’s management has announced general availability of Bits AI for incident management, where Bits AI can produce auto-generated incident summaries for incident responders

In the GenAI space, we announced general availability of Bits AI for incident management. By using Bits AI for incident management, incident responders get auto-generated incident summaries to quickly understand the context and scope of a complex incident. And users can also query Bits AI about related incidents and form tasks on the fly, from incident creation to resolution.

There’s growing interest in AI from Datadog’s customers, and the company’s next-gen AI customers accounted for 3.5% of ARR (was 3% in 2023 Q4); the percentage of ARR from next-gen AI customers is a metric that management thinks will become less relevant over time as AI adoption broadens

We’re also continuing to see more interest in AI from our customers. As a data point, ARR for next-gen AI customers was about 3.5% of our total, a strong sign of the growing ecosystem of companies in this area…

…I’m not sure this is a metric we’ll keep bringing up. It was interesting for us to look at this small group of early AI-native companies to get a sense of what might come next in the world of AI. But I think as we — as time goes by and as AI adoption broadens, I think it becomes less and less relevant. 

Datadog has AI integrations that allow customers to pull their AI data into the Datadog platform; around 2,000 customers are already using 1 or more of the AI integrations

To help customers understand AI technologies and bring them into production applications, our AI integrations allow customers to pull their AI data into the Datadog platform. And today, about 2,000 of our customers are using 1 or more of these AI integrations. And we’ve continued to keep up with the rapid innovation in this space. For example, adding a new integration in Q1 with the NVIDIA Triton [indiscernible] server. 

Datadog’s management has announced general availability for Event Management in the cloud service management area; Event Management reduces the volume of alerts and events Datadog’s customers have to deal with; with Event Management, Datadog now has a full AI solution that helps teams automate remediation, proactively prevent outages and reduce the impact of incidents.

In the cloud service management area, we released Event Management into general availability. Our customers face increasing complexity at scale, causing the volume of alerts and events to explode, which makes it difficult for teams to identify, prioritize, summarize and route issues to the right responders. Event Management addresses this challenge by automatically reducing a massive volume of events and alerts into actionable insights. These are then used to generate tickets, call an incident or trigger an automated remediation. By combining Event Management with Watchdog, Bits AI and workflow automations, Datadog now provides a full AI solution that helps teams automate remediation, proactively prevent outages and reduce the impact of incidents…

…We just announced in GA, the Event Management product, which is the main missing building block we had for AIOps platform.

Datadog’s Azure business is growing faster than Azure itself, and the AI-driven part of Datadog’s Azure business is growing faster than the AI-driven part of Azure itself

The hyperscaler that is the most open, or transparent, in terms of numbers is Microsoft, as they disclose how much of their growth comes from AI more specifically. And I will say that if you compare our business to theirs, the Azure part of our business is growing faster than Azure itself. And the AI-driven part of our Azure business is also growing faster than what you see in the overall Azure number. So we think we have similar exposure, and we track to the same trends broadly

Datadog’s AI exposure leans toward inferencing and applications in production a lot more than the training of models

I will say also on AI adoption that some of the revenue jumps you might see from the cloud providers might relate to supply of GPUs coming online and a lot of training clusters being provisioned. And those typically won’t generate a lot of new usage for us. We tend to be more correlated with the live applications, production applications and inference workloads that tend to follow after that, and that are more tied to all of these applications going into production.

Datadog has products for monitoring what AI models are doing, but those products are not in general availability yet; management expects to have more announcements on these monitoring products in the near future; Datadog’s customers that are the most scaled on AI workloads are model providers, and they tend to have their own monitoring infrastructure for the quality of the models; the needs of the model providers for monitoring infrastructure are not representative of the needs of the bulk of the market, but there may still be overlaps in the future if the situation with the cloud hyperscalers is used as a guide

We have products for monitoring, not just the infrastructure, but what the LLMs are doing. Those products are still not in GA, so we’re working with a smaller number of design partners for that. As I think not only these products are maturing, but also the industry around us is maturing and more of these applications are getting into production. You should expect to hear more from us on that topic in the near future. The customers we have that are the most scaled on AI workloads are the model providers themselves, and they tend to have their own infrastructure for monitoring the quality of the models…

…On the tooling, I would say there’s a handful of players that have been building that tooling for a few years in a way that’s very specialized to what they do internally. They are not necessarily representative of the bulk of the market. So in those situations, we’re always careful about overfitting products to a group that might not be the right target customer group in the end, in the same way that building infrastructure monitoring for the cloud providers to use internally might not be an exact fit for what the rest of the world needs. That being said, I mean, look, we work a lot with those companies, and they have a number of needs that some of them they can meet internally and some of them they don’t. And if I go back to the example of hyperscalers, we actually have teams at the hyperscalers that use us for application or infrastructure or logs internally, even though they’ve built a lot of that tooling themselves. So I think everything is possible in the long run. But our focus is really on the vast majority of the customer base that’s going to either use those API-based products or tune and run their own models.

Datadog’s management is seeing a trend of AI-adopters starting with an API-accessible AI model to build applications, before offloading some of the workload to open-sourced AI models

We think they are a good bellwether in terms of what the adoption of AI is going to be from all the other companies, and we definitely see a trend where customers start with an API-driven or API-accessible model, build applications and then offload some of that application to other models that typically come from the open source and they might train, fine-tune themselves to get to a lower cost and lower time to respond.
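The pattern Datadog describes can be sketched in a few lines of Python: begin with a hosted, API-accessible model, then route a share of traffic to a cheaper self-hosted open-source model once it is fine-tuned. The two client classes, the per-call prices, and the token-count routing rule are hypothetical stand-ins, not any vendor's actual API:

```python
class HostedAPIModel:
    # Illustrative price; real API pricing varies by model and tokens.
    cost_per_call = 0.03
    def complete(self, prompt):
        return f"[hosted] answer to: {prompt}"

class SelfHostedModel:
    # A fine-tuned open-source model running on owned infrastructure.
    cost_per_call = 0.002
    def complete(self, prompt):
        return f"[self-hosted] answer to: {prompt}"

class ModelRouter:
    """Send short/simple prompts to the cheap self-hosted model and keep
    the hosted API model for everything else, tracking spend as we go."""
    def __init__(self, simple_max_tokens=50):
        self.hosted = HostedAPIModel()
        self.local = SelfHostedModel()
        self.simple_max_tokens = simple_max_tokens
        self.spend = 0.0

    def complete(self, prompt):
        approx_tokens = len(prompt.split())  # crude token estimate
        model = self.local if approx_tokens <= self.simple_max_tokens else self.hosted
        self.spend += model.cost_per_call
        return model.complete(prompt)

router = ModelRouter()
print(router.complete("Summarize this log line"))       # routed self-hosted
print(router.complete("long analytical prompt " * 30))  # routed to hosted API
```

In practice the routing signal would be something richer than prompt length (task type, quality requirements, latency budget), but the cost-driven offloading logic is the same.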

Management is seeing a lot of interest in Datadog’s new AI-related products; management thinks its AI-related products are a joy to use

We see a lot of interest in the new products. These are new products; we just announced in GA the Event Management product, which is the main missing building block we had for an AIOps platform. And we also just released into GA Bits for incident management. So there’s a lot of demand for it. And I will say it: Bits for incident management is a joy to use.

Etsy (NASDAQ: ETSY)

Etsy’s management believes the company’s product team is getting more efficient with machine learning

We had double-digit growth in the number of experiments per product engineer that utilize machine learning as well as in our annualized gross GMS from experiments. And the total number of experiments run per engineer increased 20%. Some of this progress can be directly tied to work we told you about last year to democratize ML. These metrics give me confidence that the bold moves to improve customer experience can build over time and play a key role to get Etsy growing again.

Etsy’s management thinks the application of AI is very useful for the company’s Gift Mode initiative

Large language models were really helpful for Gift Mode. So for example, there are 200 different personas in Gift Mode. And then within each persona, there are 3 to 5 different gift ideas, and the ability to ask large language models, what are 200 examples of personas, and it wasn’t quite this simple, but it does give you a head start on that. If I’m a foodie who also loves to travel, what are 3 things I might buy on Etsy, 3 different ideas for gifts on Etsy, like, it does help to come up with a lot of ideas more quickly. On productivity gains, large language models are starting to help us with coding productivity as well.
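The two-step workflow Etsy describes, asking a model for personas and then for gift ideas per persona, can be sketched as below. The `ask_llm` function here is a stub standing in for a real LLM API call, and in practice the outputs would be curated by humans rather than shipped as-is:

```python
def ask_llm(prompt):
    # Stub: a real implementation would call a hosted language model here.
    if "personas" in prompt:
        return "The Foodie Traveler\nThe Plant Parent\nThe Vintage Collector"
    return "Idea 1\nIdea 2\nIdea 3"

def build_gift_mode(num_personas=3, ideas_per_persona=3):
    """First ask for personas, then ask for gift ideas for each persona."""
    personas = ask_llm(f"List {num_personas} gift-buyer personas, one per line.")
    catalog = {}
    for persona in personas.splitlines():
        ideas = ask_llm(
            f"Suggest {ideas_per_persona} gift ideas on Etsy for: {persona}")
        catalog[persona] = ideas.splitlines()
    return catalog

catalog = build_gift_mode()
for persona, ideas in catalog.items():
    print(persona, "->", ideas)
```

The value of the model here is ideation speed: generating a first draft of 200 personas in minutes rather than brainstorming them from scratch.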

Etsy’s management finds the use of machine learning (ML) to be particularly useful in removing products that violate Etsy’s policies

We’re doing more than ever to suppress and remove listings that violate our policies. And advances in ML have been particularly powerful as enablers here. In the first quarter, we removed about 115% more listings for violating our handmade policy than in the prior year…

…For example, does this same item exist also on AliExpress. And we assume right now, if that item exists on AliExpress, we assume it’s mass produced and we take it down. You as a seller can appeal that, you can tell us how you made it yourself, and it still ended up on AliExpress. And by the way, that’s true sometimes. You can appeal that, but our default now is we take that down. And that’s just one example. Gen AI is actually going to be, I think, more and more helpful at understanding how much value did this particular seller truly add to the product.

Etsy’s management has used machine learning to improve the estimation of delivery time for products

In terms of shipping timeliness, I’m pleased to report that our initiative to tighten estimated delivery dates, which we believe are an important effort to improve buyer perceptions of our reliability as well as to grow GMS, are already paying off. Our fulfillment team recently launched a new machine learning model, which reduced our estimate of USPS transit times by greater than 1 day, resulting in a nearly tripling of the percentage of eligible orders for which Etsy is now able to show an estimated delivery date of 7 days or less.
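As a rough illustration of what tightening delivery estimates can mean (this is a toy heuristic, not Etsy's actual model), one can quote processing time plus a high percentile of historically observed transit times for a shipping lane, instead of a fixed conservative buffer:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile of a list of values."""
    values = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(values)) - 1)
    return values[k]

def estimated_delivery_days(historical_transit_days, processing_days=1, pct=90):
    """Quote processing time plus the pct-th percentile of observed transit.
    A lower percentile gives a tighter (more competitive) quote, at the cost
    of a higher chance of missing it."""
    return processing_days + percentile(historical_transit_days, pct)

# Hypothetical observed USPS transit times (days) for a given lane:
observed = [2, 2, 3, 3, 3, 4, 4, 5, 6, 9]
print("quote at p90: ", estimated_delivery_days(observed), "days")            # 7
print("quote at p100:", estimated_delivery_days(observed, pct=100), "days")   # 10
```

A learned model plays the same role as the percentile here, just with far more features (carrier, origin, seasonality), which is how a better estimate can shave more than a day off the quoted window.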

Fiverr (NYSE: FVRR)

Fiverr’s management continues to see AI having a net positive impact on the company’s business; AI-related services saw 95% year-on-year growth in GMV on Fiverr’s platform, with chatbot development being especially popular; a hospitality company and an online learning platform are two examples of companies that have used Fiverr for AI chatbot development

AI continued to have a net positive impact on our business, as complex services continue to grow faster and represent a bigger portion of our business. Demand for AI-related services remained strong, as evidenced by 95% year-over-year growth in GMV from AI service categories. Chatbot development was especially popular this quarter as businesses look for ways to lean into GenAI technology to better engage with customers. For example, we have seen a hospitality company building a conversational tool for customers to manage bookings or an online learning platform creating a personalized learning menu and tutoring sessions for children. 

Fiverr has a pool of 10,000 AI experts and it is growing

With an over 10,000 and growing AI expert pool, Fiverr has become the destination for businesses to get help implementing GenAI and take their business to the next level.

Fiverr’s management is seeing very promising signals on Fiverr Neo, the company’s AI assistant for matching buyers with sellers; one-third of buyers who received seller recommendations from Neo sent a project brief to a seller; overall order conversion with Neo is nearly 3x that of the Fiverr marketplace average; management is excited about the potential of AI matching technology 

We have also seen very promising signals on Fiverr Neo, the AI matching assistant that we launched last year. Neo enables our buyers to have a more natural purchasing path by creating a conversational experience that leverages the catalog data and search algo. Answers and steps are provided based on buyers questions and the stage of the search. As a result, we saw that nearly one-third of the buyers who received seller recommendations from Neo ended up sending a project brief to the seller and the overall order conversion is nearly 3 times that of the marketplace average. This really gives us confidence and excitement in the potential we could unlock by investing in AI matching technology.

Fiverr’s product innovation pace had picked up in recent years; the latest set of product innovations will be focused on deepening trust and leveraging AI

Our product innovation pace picked up even more in recent years as the scale of our marketplace significantly expanded. This includes monetization products, such as Promoted Gigs and Seller Plus; AI innovations such as Logo Maker, AI Audition, to the latest ground-breaking Fiverr Neo; Business Solutions offerings, such as Project Partner and Fiverr Certified; and numerous products and features such as Fiverr Discover, Milestones and Subscriptions that empower our community to work better and smarter. We are always leading the curve of innovation that powers growth not only for us, but for the industry.

As our teams work towards our July product release, we are focusing on deepening trust and leveraging AI to reimagine every aspect of the customer journey. This includes improving our catalog and building new experiences to enable high-stakes, high-trust work to happen on Fiverr. We are strengthening our muscle in knowing our customers better in order to provide them with better matching, better recommendations and better customer care, all of which leads to more trust for Fiverr as a platform. We are already seeing some of the benefits in unlocking wallet share and driving a mix shift towards complex services on Fiverr, and we are going to see more impact down the road.

All the work that Fiverr facilitates happens on Fiverr, so management believes that the company has a lot of data (for perspective, in 2023, 38 million files were exchanged on Fiverr, and 2.5 million messages were sent daily between buyers and sellers) to leverage with generative AI to take the matching experience for buyers and sellers to a new level

Second, data and AI matching. Fiverr is unique in the sense that we are not just a platform that connects businesses with freelancers, the entire work actually happens on Fiverr. And that is really the secret sauce that enables us to do matching in such a simple, accurate and seamless way. With Generative AI, there’s incredible potential to take that experience to a whole new level. Just to give you some idea of the scale we operate. In 2023, over 38 million files were exchanged on our platform, and on average, 2.5 million messages were sent between buyers and sellers on a daily basis. We are experimenting with GenAI technology on how to unlock the potential of that massive data on Fiverr in order to enable buyers and sellers to have more information, search and browse in new ways, ask more complex questions, and ultimately, make better, more informed choices on Fiverr.

Fiverr’s management is seeing the presence of AI having a negative impact on the simple, low-value services on the company’s marketplace, but AI is overall a net-positive for Fiverr; management gave an example of how only simple language translation services are being impacted by AI, but the complex translation services are not

We mentioned in the previous earnings the fact that the negative impact that we’re seeing from AI is mostly around the very simple types of services. Those are normally services that would sell for $10, $15, which is — I mean, we are moving. I mean, the majority of contribution is coming from more complex services anyway. And as I said, we continue to see AI as a net positive. So it’s contributing more than the offsetting factors of simple products. It happens across several categories in several verticals, but there’s nothing specific to call out. Even if you look at the areas that you might think that AI would influence significantly like translation. But what you’re seeing is actually the very simple services around translation are being affected, the more complex types of services are not. I mean, if you would publish a book and then want to translate it into a different language that you don’t command, I would doubt that you would let AI translate it and go publish the outcome without actually verifying it.

Fiverr’s management is sure that many experts use AI as part of their workflow, but they do not rely on the AI blindly

I’m sure many experts actually use AI tools in their process of work, but they don’t rely on blindly letting AI run the work for them, but it is more of the modern tech that they use in order to amplify their creative process.

Mastercard (NYSE: MA)

Scam Protect is a new service launched by Mastercard’s management to protect users against cybercrime; Scam Protect combines Mastercard’s identity biometric AI and open banking capabilities

Cybercrime is a growing concern, last year alone, people in the United States lost over $12 billion to Internet scams. Scam Protect builds on the cybersecurity protections we have delivered for years, combines our identity biometric AI and open banking capabilities to identify and prevent scams before they occur.

Mastercard is partnering with Verizon to design new AI tools to identify and block scammers

By combining Mastercard’s Identity Insights with Verizon’s robust network technologies, new AI-powered tools will be designed to more accurately identify and block scammers.

Mastercard’s management has continued to enhance the company’s solutions with generative AI; Decision Intelligence Pro is a real-time transaction fraud solution for banks that is powered by generative AI to improve scoring and fraud detection by 20%; management sees tremendous opportunity with generative AI and has created a central role for AI

We continue to enhance our solutions with generative AI to deliver even more value, a world-leading real-time fraud solution, Decision Intelligence, has been helping banks score and safely approve billions of transactions, ensuring the safety of consumers and the entire payments networks for years. The next-generation technology, Decision Intelligence Pro is supercharged by generative AI to improve the overall score and boost fraud detection rates on average by 20%…

…We see tremendous opportunity on the AI side, particularly on the generative AI side, and we’ve created a central role for that. 

Meta Platforms (NASDAQ: META)

Meta is building a number of different AI services, including Meta AI (an AI assistant), creator AIs, business AIs, internal coding and development AIs, and hardware for AI interactions

We are building a number of different AI services from Meta AI, our AI assistant that you can ask any question across our apps and glasses, to creator AIs that help creators engage their communities and that fans can interact with, to business AIs that we think every business eventually on our platform will use to help customers buy things and get customer support, to internal coding and development AIs, to hardware like glasses for people to interact with AIs and a lot more.

Meta’s management released the company’s new version of Meta AI recently and it is powered by the company’s latest foundational model, Llama 3; management’s goal is for Meta AI to be the world’s leading AI service; tens of millions of people have tried Meta AI and the user feedback has been very positive; Meta AI is currently in English-speaking countries, but will be rolled out in more languages and countries in the coming months; management believes that the Llama 3 version of Meta AI is the most intelligent AI assistant; Meta AI can be used within all of Meta’s major apps; besides being able to answer queries, Meta AI can also create animations as well as generate images while users are typing, which is a magical experience; Meta AI can also be used in Search within Meta’s apps, and Feed and Groups on Facebook

Last week, we had the major release of our new version of Meta AI that is now powered by our latest model, Llama 3. And our goal with Meta AI is to build the world’s leading AI service, both in quality and usage. The initial rollout of Meta AI is going well. Tens of millions of people have already tried it. The feedback is very positive. And when I first checked in with our teams, the majority of feedback we were getting was people asking us to release Meta AI for them wherever they are. So we’ve started launching Meta AI in some English speaking countries, and we’ll roll out in more languages and countries over the coming months…

…We believe that Meta AI with Llama 3 is now the most intelligent AI assistant that you can freely use. And now that we have the superior quality product, we’re making it easier for lots of people to use it within WhatsApp, Messenger, Instagram, and Facebook…

…In addition to answering more complex queries, a few other notable and unique features from this release: Meta AI now creates animations from still images, and now generates high quality images so fast that it can create and update them as you’re typing, which is pretty awesome. I’ve seen a lot of people commenting about this experience online and how they’ve never seen or experienced anything like it before…

…Along with using Meta AI within our chat surfaces, people will now be able to use Meta AI in Search within our apps, as well as Feed and Groups on Facebook. We expect these integrations will complement our social discovery strategy as our recommendation systems help people to discover and explore their interests while Meta AI enables them to dive deeper on topics they’re interested in. 

Meta’s foundational AI model, Llama 3, has three versions with different numbers of parameters; management thinks the two smaller versions are both best-in-class for their scale; the 400+ billion parameter version of Llama 3 is still undergoing training and is on track to be industry-leading; management thinks the Llama 3 models will improve from further open source contributions

I’m very pleased with how Llama 3 has come together so far. The 8B and 70B parameter models that we released are best-in-class for their scale. The 400+B parameter model that we’re still training seems on track to be industry-leading on several benchmarks. And I expect that our models are just going to improve further from open source contributions. 

Meta’s management wants the company to invest significantly more in the coming years to build more advanced AI models and the largest scale AI services in the world, but the AI investments will come ahead of any meaningful revenue-generation from these new AI products

This leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world. As we’re scaling capex and energy expenses for AI, we’ll continue focusing on operating the rest of our company efficiently. But realistically, even with shifting many of our existing resources to focus on AI, we’ll still grow our investment envelope meaningfully before we make much revenue from some of these new products…

…We anticipate our full-year 2024 capital expenditures will be in the range of $35-40 billion, increased from our prior range of $30-37 billion as we continue to accelerate our infrastructure investments to support our AI roadmap. While we are not providing guidance for years beyond 2024, we expect capex will continue to increase next year as we invest aggressively to support our ambitious AI research and product development efforts.

Meta’s management thinks there are a few ways to build a massive AI business for Meta – these include business messaging, introducing ads and paid content in AI interactions, and selling access to powerful AI models and AI compute – in addition to the benefits to Meta’s current digital advertising business through the use of AI; management thinks business messaging is one of Meta’s nearer-term opportunities; management’s long-term vision for business messaging is to have AI agents that can accomplish goals rather than merely be a chatbot that replies to messages; management thinks that the capabilities of Meta’s business messaging AI technology will see massive improvements in as short as a year’s time

There are several ways to build a massive business here, including scaling business messaging, introducing ads or paid content into AI interactions, and enabling people to pay to use bigger AI models and access more compute. And on top of those, AI is already helping us improve app engagement which naturally leads to seeing more ads, and improving ads directly to deliver more value…

… The cost of engaging with people in messaging is still very high. But AI should bring that down just dramatically for businesses and creators. And I think that, that has the potential. That’s probably the — beyond just increasing engagement and increasing the quality of the ads, I think that, that’s probably one of the nearer-term opportunities, even though that will — it’s not like next quarter or the quarter after that scaling thing, but it’s — but that’s not like a 5-year opportunity either…

…I think that the next phase for a lot of these things are handling more complex tasks and becoming more like agents rather than just chat bots, right? So when I say chatbot, what I mean is if you send a message and it replies to your message, right? So it’s almost like almost a 1:1 correspondence. Whereas what an agent is going to do is you give it an intent or a goal, then it goes off and probably actually performs many queries on its own in the background in order to help accomplish your goal, whether that goal is researching something online or eventually finding the right thing that you’re looking to buy…  I think basically, the larger models and then the more advanced future versions that will be smaller as well are just going to enable much more interesting interactions like that. So I mean if you think about this, I mean, even some of the business use cases that we talked about, you don’t really just want like sales or customer support chatbot that can just respond to what you say. If you’re a business, you have a goal, right? You’re trying to support your customers well and you’re trying to position your products in a certain way and encourage people to buy certain things that map to their interests and would they be interested in? And that’s more of like a multiturn interaction, right?

So the type of business agent that you’re going to be able to enable with just a chatbot is going to be very naive compared to what we’re going to have in a year even, but beyond that, too, is just the reasoning and planning abilities if these things grow to be able to just help guide people through the business process of engaging with whatever your goals are as a creator of a business. So I think that that’s going to be extremely powerful. 
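The chatbot-versus-agent distinction drawn above can be sketched in a few lines: a chatbot maps one message to one reply, while an agent takes a goal and runs multiple queries in the background before answering. The tool function and the decomposition step here are hypothetical placeholders:

```python
def search_catalog(query):
    # Stand-in tool: a real agent would query a product catalog, the web, etc.
    return [f"result for '{query}'"]

def chatbot_reply(message):
    """Chatbot: one message in, one reply out, no background work."""
    return f"Reply to: {message}"

def agent_run(goal, max_steps=3):
    """Agent: decompose a goal into sub-queries, gather results, then answer."""
    findings = []
    for step in range(max_steps):
        sub_query = f"{goal} (aspect {step + 1})"  # trivially simple planner
        findings.extend(search_catalog(sub_query))
    return f"Plan complete for '{goal}': {len(findings)} findings gathered."

print(chatbot_reply("Is this lamp in stock?"))
print(agent_run("find a desk lamp under $40 that ships this week"))
```

In a real system the planner would be the language model itself, deciding at each step which tool to call next, which is what makes the multi-turn business use cases Zuckerberg describes possible.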

Meta’s AI recommendation system is currently delivering 30% of posts on the Facebook feed (up 2x over the last couple of years) and more than 50% of the content people see on Instagram (the first time this threshold has been reached)

Right now, about 30% of the posts on Facebook feed are delivered by our AI recommendation system. That’s up 2x over the last couple of years. And for the first time ever, more than 50% of the content people see on Instagram is now AI recommended.

Revenue from two of Meta’s end-to-end AI-powered advertising tools, Advantage+ Shopping and Advantage+ App Campaigns, have more than doubled since last year; test results for the single-step automation feature of Advantage+ has resulted in a 28% decrease in cost per click or per objective for advertisers; Meta has significant runway to broaden adoption of the end-to-end automation features of Advantage+ and the company has enabled more conversion types

If you look at our two end-to-end AI-powered tools, Advantage+ Shopping and Advantage+ App Campaigns, revenue flowing through those has more than doubled since last year…

…So on the single-step automation, Advantage Plus audience, for example, has seen significant growth in adoption since we made it the default audience creation experience for most advertisers in Q4, and that enables advertisers to increase campaign performance by just using audience inputs as a suggestion rather than a hard constraint. And based on tests that we ran, campaigns using Advantage Plus audience targeting saw on average, a 28% decrease in cost per click or per objective compared to using our regular targeting.

On the end-to-end automation products like Advantage Plus shopping and Advantage Plus app campaigns, we’re also seeing very strong growth…  We think there’s still significant runway to broaden adoption, so we’re trying to enable more conversion types for Advantage Plus shopping. In Q1, we began expanding the list of conversions that businesses could optimize for. So previously, it only supported purchase events, and now we’ve added 10 additional conversion types. And we’re continuing to see strong adoption now across verticals.

Meta’s management continues to develop Meta’s own AI chips; Meta’s Training and Inference Accelerator chip is less expensive for Meta and has already been running some of Meta’s recommendation workloads

We’ll also keep making progress on building more of our own silicon. Our Meta Training and Inference Accelerator chip has successfully enabled us to run some of our recommendations-related workloads on this less expensive stack, and as this program matures over the coming years we plan to expand this to more of our workloads as well.

Meta’s management sees a market for a fashionable pair of AI glasses without holographic displays; management thinks that glasses are the ideal device for an AI assistant because the glasses can see what you see and hear what you hear; management recently launched Meta AI with Vision on its AI glasses; Meta’s AI glasses continue to do well and are sold out in many styles and colours

I used to think that AR glasses wouldn’t really be a mainstream product until we had full holographic displays — and I still think that will be awesome and is mature state of the product. But now it seems pretty clear that there’s also a meaningful market for fashionable AI glasses without a display. Glasses are the ideal device for an AI assistant because you can let them see what you see and hear what you hear, so they have full context on what’s going on around you as they help you with whatever you’re trying to do. Our launch this week of Meta AI with Vision on the glasses is a good example where you can now ask questions about things you’re looking at…

…The Ray-Ban Meta glasses that we built with Essilor Luxottica continue to do well and are sold out in many styles and colors, so we’re working to make more and release additional styles as quickly as we can.

Meta’s management is improving the monetisation efficiency of the company’s products partly by using larger AI models in its new ads ranking architecture, Meta Lattice (which was rolled out last year) in place of smaller models, as well as using AI to provide more automation – ranging from point-automation to end-to-end automation – for advertisers through its Advantage+ portfolio; Meta Lattice drove improved ad performance over the course of 2023 when it was deployed across Facebook and Instagram

The second part of improving monetization efficiency is enhancing marketing performance. Similar to our work with organic recommendations, AI is playing an increasing role in these efforts. First, we are making ongoing ads modeling improvements that are delivering better performance for advertisers. One example is our new ads ranking architecture, Meta Lattice, which we began rolling out more broadly last year. This new architecture allows us to run significantly larger models that generalize learnings across objectives and surfaces in place of numerous, smaller ads models that have historically been optimized for individual objectives and surfaces. This is not only leading to increased efficiency as we operate fewer models, but also improving ad performance. Another way we’re leveraging AI is to provide increased automation for advertisers. Through our Advantage+ portfolio, advertisers can automate one step of the campaign set up process – such as selecting which ad creative to show – or automate their campaign completely using our end-to-end automation tools, Advantage+ Shopping and Advantage+ App ads. We’re seeing growing use of these solutions, and we expect to drive further adoption over the course of the year while applying what we learn to our broader ads investments…

…We’ve talked a little bit about the new model architecture at Meta Lattice that we deployed last year that consolidates smaller and more specialized models into larger models that can better learn what characteristics improve ad performance across multiple services, like Feed and Reels and multiple types of ads and objectives at the same time. And that’s driven improved ad performance over the course of 2023 as we deployed it across Facebook and Instagram to support multiple objectives.

Meta’s recommendation products historically each had their own AI models, and a new model architecture to power multiple recommendation products was being developed recently; the new model architecture was tested last year on Facebook Reels and generated 8%-10% increases in watch time; the new model architecture has been extended beyond Reels and management is hopeful that the new architecture will unlock better video recommendations over time

Historically, each of our recommendation products, including Reels, in-feed recommendations, et cetera, has had their own AI model. And recently, we’ve been developing a new model architecture with the aim for it to power multiple recommendations products. We started partially validating this model last year by using it to power Facebook Reels. And we saw meaningful performance gains, 8% to 10% increases in watch time as a result of deploying this. This year, we’re actually planning to extend the singular model architecture to recommend content across not just Facebook Reels, but also Facebook’s video tab as well. So while it’s still too early to share specific results, we’re optimistic that the new model architecture will unlock increasingly relevant video recommendations over time. And if it’s successful, we’ll explore using it to power other recommendations.
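The architectural shift described here, from one model per surface to one shared model with per-surface heads, can be illustrated with a toy ranker. The dimensions and weights below are made-up values; real systems use large learned networks:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class MultiSurfaceRanker:
    """One shared 'backbone' scores every candidate; small per-surface heads
    (Reels, video tab) reuse that shared signal instead of each surface
    training its own model from scratch."""
    def __init__(self):
        # Shared projection applied to every candidate's features.
        self.shared = [0.5, 1.0, -0.25]
        # Per-surface heads: weight on the shared signal, weight on recency.
        self.heads = {"reels": [1.0, 0.2], "video_tab": [0.3, 1.0]}

    def score(self, surface, features):
        shared_signal = dot(self.shared, features)
        recency = features[0]
        w_shared, w_recency = self.heads[surface]
        return w_shared * shared_signal + w_recency * recency

ranker = MultiSurfaceRanker()
candidates = {"clip_a": [0.9, 0.4, 0.1], "clip_b": [0.2, 0.8, 0.0]}
for surface in ("reels", "video_tab"):
    best = max(candidates, key=lambda c: ranker.score(surface, candidates[c]))
    print(surface, "top pick:", best)
```

The efficiency argument is that improvements to the shared backbone (learned from all surfaces at once) lift every surface's recommendations, which is why validating it on Facebook Reels first and then extending it is a sensible rollout path.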

Meta’s management is seeing adoption of Meta’s generative AI (GenAI) ad creative features across verticals and different advertiser sizes; some of these features are enjoying outsized adoption; Meta expects improvements to its underlying foundational AI models to improve the output quality of its GenAI ad creative features

The more near-term version is around the GenAI ad creative features that we have put into our ads creation tools. And it’s early, but we’re seeing adoption of these features across verticals and different advertiser sizes. In particular, we’ve seen outsized adoption of image expansion with small businesses, and this will remain a big area of focus for us in 2024, and I expect that improvements to our underlying foundation models will enhance the quality of the outputs that are generated and support new features on the road map. But right now, we have features supporting text variations, image expansion and background generation, and we’re continuing to work to make those more performant for advertisers to create more personalized ads at scale.

In early tests of using business AIs for business messaging, Meta’s management is receiving positive feedback from users

The longer-term piece here is around business AIs. We have been testing the ability for businesses to set up AIs for business messaging that represent them in chats with customers starting by supporting shopping use cases such as responding to people asking for more information on a product or its availability. So this is very, very early. We’ve been testing this with a handful of businesses on Messenger and WhatsApp, and we’re hearing good feedback with businesses saying that the AIs have saved them significant time while consumers noted more timely response times. And we’re also learning a lot from these tests to make these AIs more performant over time as well.

Meta’s management has gotten more optimistic and ambitious on AI compared to just 3 months ago because of the company’s work with Llama 3 and Meta AI

[Question] Can you just talk about what’s changed most in your view in the business and the opportunity now versus 3 months ago? 

[Answer]  I think we’ve gotten more optimistic and ambitious on AI. So previously, I think that our work in this — I mean when you were looking at last year, when we released Llama 2, we were very excited about the model and thought that, that was going to be the basis to be able to build a number of things that were valuable that integrated into our social products. But now I think we’re in a pretty different place. So with the latest models, we’re not just building good AI models that are going to be capable of building some new good social and commerce products. I actually think we’re in a place where we’ve shown that we can build leading models and be the leading AI company in the world. And that opens up a lot of additional opportunities beyond just ones that are the most obvious ones for us. So that’s — this is what I was trying to refer to in my opening remarks where I just view the success that we’ve seen with the way that Llama 3 and Meta AI have come together as a real validation technically that we have the talent, the data and the ability to scale infrastructure to do leading work here.

Meta’s AI capex can be categorised into 2 buckets: core AI work, which takes a very ROI-driven (return on investment driven) approach and still generates very strong returns, and generative AI plus other advanced research work, which has tremendous potential but has yet to produce returns; the capacity Meta is building for the 2 buckets is fungible

We’ve broadly categorized our AI investments into 2 buckets. I think of them as sort of core AI work and then strategic bets, which would include Gen AI and the advanced research efforts to support that. And those are just really at different stages as it relates to being able to measure the return and drive revenue for our business.

So with our core AI work, we continue to have a very ROI-driven approach to investment, and we’re still seeing strong returns as improvements to both engagement and ad performance have translated into revenue gains.

Now the second area, strategic bets, is where we are much earlier. Mark has talked about the potential that we believe we have to create significant value for our business in a number of areas, including opportunities to build businesses that don’t exist on us today. But we’ll need to invest ahead of that opportunity to develop more advanced models and to grow the usage of our products before they drive meaningful revenue. So while there is tremendous long-term potential, we’re just much earlier on the return curve than with our core AI work.

What I’ll say though is we’re also building our systems in a way that gives us fungibility in how we use our capacity, so we can flex it across different use cases as we identify what are the best opportunities to put that infrastructure toward.

Meta is already shifting a lot of resources from other parts of the company into its AI efforts

I would say broadly, we actually are doing that in a lot of places in terms of shifting resources from other areas, whether it’s compute resources or different things in order to advance the AI efforts. 

Meta has partnered with Google and Bing for Meta AI’s search citations, but management has no intention to build a search ads business

[Question] You partnered with Google and Bing for Meta AI organic search citations. So I guess stepping back, do you think that Meta AI longer term could bring in search advertising dollars at some point?

[Answer] On the Google and Microsoft partnerships, yes, I mean we work with them to have real-time information in Meta AI. It’s useful. I think it’s pretty different from search. We’re not working on search ads or anything like that. I think this will end up being a pretty different business.

Microsoft (NASDAQ: MSFT)

Azure took market share again in 2024 Q1; Microsoft’s management thinks that (1) Azure offers the most diverse selection of AI accelerators, including those from Nvidia, AMD, and Microsoft’s own custom chips, (2) Azure offers the best selection of foundational AI models, including LLMs and SLMs (small language models), and (3) Azure’s Models as a Service offering makes it easy for developers to work with LLMs and SLMs without having to worry about technical infrastructure; >65% of Fortune 500 use Azure OpenAI service; hundreds of paid customers are using Azure’s Models as a Service to access third-party AI models including those from Cohere, Meta, and Mistral; Azure grew revenue by 31% in 2024 Q1 (was 30% in 2023 Q4), with 7 points of growth from AI services (was 6 points in 2023 Q4); Azure’s non-AI consumption business also saw broad greater-than-expected demand 

Azure again took share as customers use our platforms and tools to build their own AI solutions. We offer the most diverse selection of AI accelerators, including the latest from NVIDIA, AMD as well as our own first-party silicon…

…More than 65% of the Fortune 500 now use Azure OpenAI service. We also continue to innovate and partner broadly to bring customers the best selection of frontier models and open source models, LLMs and SLMs…

…Our Models as a Service offering makes it easy for developers to use LLMs and SLMs without having to manage any underlying infrastructure. Hundreds of paid customers from Accenture and EY to Schneider Electric are using it to take advantage of API access to third-party models including, as of this quarter, the latest from Cohere, Meta and Mistral…

… Azure and other cloud services revenue grew 31% ahead of expectations, while our AI services contributed 7 points of growth as expected. In the non-AI portion of our consumption business, we saw greater-than-expected demand broadly across industries and customer segments as well as some benefit from a greater-than-expected mix of contracts with higher in-period recognition. 

Microsoft’s management continues to build on the company’s partnership with OpenAI for AI work

Our AI innovation continues to build on our strategic partnership with OpenAI. 

Microsoft’s management thinks that Phi-3, announced by Microsoft recently, is the most capable and cost-effective SLM and it’s being trialed by a number of companies

With Phi-3, which we announced earlier this week, we offer the most capable and cost-effective SLM available. It’s already being trialed by companies like CallMiner, LTIMindtree, PwC and TCS.

Azure AI customers are growing and spending more with Microsoft; over half of Azure AI customers use Microsoft’s data and analytics tools and they are building applications with deep integration between these tools and Azure AI

All up, the number of Azure AI customers continues to grow and average spend continues to increase…

… Over half of our Azure AI customers also use our data and analytics tools. Customers are building intelligent applications running on Azure, PostgreSQL and Cosmos DB with deep integrations with Azure AI. TomTom is a great example. They’ve used Cosmos DB along with Azure OpenAI service to build their own immersive in-car infotainment system. 

GitHub Copilot now has 1.8 million paid subscribers, up 35% sequentially; even established enterprises are using GitHub Copilot; >90% of Fortune 100 companies are GitHub customers; GitHub’s revenue was up 45% year-on-year

GitHub Copilot is bending the productivity curve for developers. We now have 1.8 million paid subscribers with growth accelerating to over 35% quarter-over-quarter and continues to see increased adoption from businesses in every industry, including Itau, Lufthansa Systems, Nokia, Pinterest and Volvo Cars. Copilot is driving growth across the broader GitHub platform, too. AT&T, Citigroup and Honeywell all increased their overall GitHub usage after seeing productivity and code quality increases with Copilot. All up, more than 90% of the Fortune 100 are now GitHub customers, and revenue accelerated over 45% year-over-year.

Microsoft has new AI-powered features within its low-code and no-code tools for building applications; 30,000 organisations – up 175% sequentially – across all industries have used Copilot Studio to customise or build their own copilot; Cineplex used Copilot Studio to build a copilot for customer service agents to significantly reduce the time needed to handle queries; Copilot Studio can be really useful for enterprises to ground their AIs with enterprise data, and people are really excited about it

Anyone can be a developer with new AI-powered features across our low-code, no-code tools, which makes it easier to build an app, automate workflow or create a Copilot using natural language. 30,000 organizations across every industry have used Copilot Studio to customize Copilot for Microsoft 365 or build their own, up 175% quarter-over-quarter. Cineplex, for example, built a Copilot for customer service agents, reducing query handling time from as much as 15 minutes to 30 seconds…

…Copilot Studio is really off to the races in terms of the product that most people are excited because one of the things in the enterprise is you want to ground your copilot with the enterprise data, which is in all of these SaaS applications, and Copilot Studio is the tool to use there to make that happen.

More than 330,000 organisations, including half of the Fortune 100, have used AI-features within Microsoft’s Power Platform

All up, over 330,000 organizations, including over half of Fortune 100, have used AI-powered capabilities in Power Platform, and Power Apps now has over 25 million monthly active users, up over 40% year-over-year.

In 2024 Q1, Microsoft’s management made Copilot available to all organisations; nearly 60% of Fortune 500 are using Copilot; many large companies have purchased more than 10,000 Copilot seats each; management is seeing higher usage of Copilot from early adopters, including a 50% jump in Copilot-assisted interactions per user in Teams; Microsoft has added more than 150 Copilot capabilities since the start of the year, including Copilot for Service, Copilot for Sales, Copilot for Finance, and Copilot for Security

This quarter, we made Copilot available to organizations of all types and sizes from enterprises to small businesses. Nearly 60% of the Fortune 500 now use Copilot, and we have seen accelerated adoption across industries and geographies with companies like Amgen, BP, Cognizant, Koch Industries, Moody’s, Novo Nordisk, NVIDIA and Tech Mahindra purchasing over 10,000 seats. We’re also seeing increased usage intensity from early adopters, including a nearly 50% increase in the number of Copilot-assisted interactions per user in Teams, bridging group activity with business process workflows and enterprise knowledge…

…We’re accelerating our innovation, adding over 150 Copilot capabilities since the start of the year…

… This quarter, we made our Copilot for Service and Copilot for Sales broadly available, helping customer service agents and sellers at companies like Land O’Lakes, Northern Trust, Rockwell Automation and Toyota Group generate role-specific insights and recommendations from across Dynamics 365 and Microsoft 365 as well as third-party platforms like Salesforce, ServiceNow and Zendesk. And with our Copilot for Finance, we are drawing context from Dynamics as well as ERP systems like SAP to reduce labor-intensive processes like collections and contract and invoice capture for companies like Dentsu and IDC…

…A great example is Copilot for Security, which we made generally available earlier this month, bringing together LLMs with domain-specific skills informed by our threat intelligence and 78 trillion daily security signals to provide security teams with actionable insights.

Microsoft’s management is seeing ISVs (independent software vendors) build their own Copilot integrations, with Adobe being an example

ISVs are also building their own Copilot integrations. For example, new integrations between Adobe Experience Cloud and Copilot will help marketeers access campaign insights in the flow of their work. 

Copilot in Windows is now available on 225 million PCs, up 2x sequentially; Microsoft’s largest PC partners have announced AI PCs in recent months; management recently introduced new Surface devices that come with NPUs (neural processing units) to power on-device AI experiences; management thinks that Copilot gives Microsoft an opportunity to create a new category of devices purpose-built for AI

When it comes to devices, Copilot in Windows is now available on nearly 225 million Windows 10 and Windows 11 PCs, up 2x quarter-over-quarter. With Copilot, we have an opportunity to create an entirely new category of devices purpose built for this new generation of AI. All of our largest OEM partners have announced AI PCs in recent months. And this quarter, we introduced new Surface devices, which includes integrated NPUs to power on device AI experiences like auto framing and live captions. And there’s much more to come. In just a few weeks, we’ll hold a special event to talk about our AI vision across Windows and devices.

More than 200 healthcare organisations are using Microsoft’s DAX Copilot

In health care, DAX Copilot is being used by more than 200 health care organizations, including Providence, Stanford Health care and WellSpan Health. 

Established auto manufacturers are using Microsoft’s AI solutions to improve their factory operations

And in manufacturing, this week at Hannover Messe, customers like BMW, Siemens and Volvo Penta shared how they’re using our cloud and AI solutions to transform factory operations.

LinkedIn AI-assisted messages have a 40% higher acceptance rate and are accepted >10% faster by job seekers; LinkedIn’s AI-powered collaborative articles now have more than 12 million contributions and helped engagement on LinkedIn reach a new record in 2024 Q1; LinkedIn Premium’s revenue was up 29% year-on-year in 2024 Q1, with AI features helping to produce the growth

Features like LinkedIn AI-assisted messages are seeing a 40% higher acceptance rate and are accepted over 10% faster by job seekers, saving hirers time and making it easier to connect them to candidates. Our AI-powered collaborative articles, which have reached over 12 million contributions, are helping increase engagement on the platform, which reached a new record this quarter. New AI features are also helping accelerate LinkedIn Premium growth with revenue up 29% year-over-year.

Microsoft’s management expects capex to increase materially sequentially in 2024 Q2 (FY2024 Q4) because of cloud and AI infrastructure investments; management sees near-term AI demand as being higher than available capacity; capex in FY2025 is expected to be higher than in FY2024, but this will be driven ultimately by the amount of AI inference demand; operating margin in FY2025 is expected to be down by only 1 point compared to FY2024

We expect capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-outs and the timing of finance leases. We continue to bring capacity online as we scale our AI investments with growing demand. Currently, near-term AI demand is a bit higher than our available capacity…

…In FY ’25, that focus on execution should again lead to double-digit revenue and operating income growth. To scale to meet the growing demand signal for our cloud and AI products, we expect FY ’25 capital expenditures to be higher than FY ’24. These expenditures over the course of the next year are dependent on demand signals and adoption of our services, so we will manage that signal through the year. We will also continue to prioritize operating leverage. And therefore, we expect FY ’25 operating margins to be down only about 1 point year-over-year, even with our significant cloud and AI investments as well as a full year of impact from the Activision acquisition…

… Then, Amy referenced what we also do on the inference side, which is, one, we first innovate and build products. And of course, we have an infrastructure business that’s also dependent on a lot of ISVs building products that run on our infrastructure. And it’s all going to be demand driven. In other words, we track very closely what’s happening with inference demand, and that’s something that we will manage, as Amy said in her remarks, very, very closely.

Microsoft’s management expects Azure to grow revenue by 30%-31% in constant currency, similar to stronger-than-expected 2024 Q1 results, driven by AI

For Intelligent Cloud, we expect revenue to grow between 19% and 20% in constant currency or USD 28.4 billion to USD 28.7 billion. Revenue will continue to be driven by Azure, which, as a reminder, can have quarterly variability primarily from our per user business and in-period revenue recognition depending on the mix of contracts. In Azure, we expect Q4 revenue growth to be 30% to 31% in constant currency or similar to our stronger-than-expected Q3 results. Growth will be driven by our Azure consumption business and continued contribution from AI with some impact from the AI capacity availability noted earlier.

Management’s AI-related capital expenditure plans for Microsoft has two layers to it, namely, training and inference; for training, management wants Microsoft to have capacity to train large foundation models and stay a leader in that area; for inference, management is watching inference demand

[Question] It looks like Microsoft is on track to ramp CapEx over 50% year-on-year this year to over $50 billion. And there’s media speculation of more spending ahead with some reports talking about like $100 billion data center. So obviously, investments are coming well ahead of the revenue contribution, but what I was hoping for is that you could give us some color on how you as the management team try to quantify the potential opportunities that underlie these investments because they are getting very big. 

[Answer]  At a high level, the way we, as a management team, talk about it is there are 2 sides to this, right? There is training and there’s inference. What — given that we want to be a leader in this big generational shift and paradigm shift in technology, that’s on the training side. We want to be able to allocate the capital required to essentially be training these large foundation models and stay on the leadership position there. And we’ve done that successfully all the way today, and you’ve seen it flow through our P&L, and you can continue to see that going forward. Then, Amy referenced what we also do on the inference side, which is, one, we first innovate and build products. And of course, we have an infrastructure business that’s also dependent on a lot of ISVs building products that run on our infrastructure. And it’s all going to be demand driven. In other words, we track very closely what’s happening with inference demand, and that’s something that we will manage, as Amy said in her remarks, very, very closely.

Microsoft’s management feels good about demand for Azure, because (1) they think Azure is a market-share taker since it has become the go-to choice for anybody who is working on an AI project, (2) they are seeing that AI projects on Azure do not stop with just calling AI models and there are many other cloud computing services in Azure that are required, (3) there’s migration to Azure, and (4) the optimisation cycle from the recent past has given more budget for people to start new workloads

[Question] How would you characterize the demand environment? On one hand, you have bookings in Azure both accelerating year-over-year in the quarter, but we’re seeing a lot of future concern, hesitation from other vendors we all cover. So I think everyone would love to get your sense of budget health for customers this year.

[Answer] On the Azure side, which I think is what you specifically asked, we feel very, very good about the — we’re fundamentally a share taker there because if you look at it from our perspective, at this point, Azure has become a port of call for pretty much anybody who is doing an AI project. And so that’s sort of been a significant help for us in terms of acquiring even new customers…

…The second thing that we’re also seeing is AI just doesn’t sit on its own. So AI projects obviously start with calls to AI models, but they also use a vector database. In fact, Azure Search, which is really used by even ChatGPT, is one of the fastest growing services for us. We have Fabric integration to Azure AI, and Cosmos DB integration. So the data tier, the dev tools is another place where we are seeing great traction. So we are seeing adjacent services in Azure that get attached to AI…

… lastly, I would say, migration to Azure as well. So this is not just all an AI story. 

We are also looking at customers — I mean, this is something that we have talked about in the past, which is there’s always an optimization cycle. But there’s also — as people optimize, they spend money on new project starts, which will grow and then they’ll optimize. So it’s a continuous side of it. So these are the 3 trends that are playing out on Azure in terms of what at least we see on demand side.

Microsoft’s management thinks that a good place to watch for the level of maturation for AI will be what’s happening in terms of standard issues for software teams; they are seeing Copilots increasingly becoming “standard issue” for software teams; they think companies will need to undergo a cultural shift to fully embrace AI tools and it will take some time, but the rate of adoption of Copilot is also faster than anything they have seen in the past

[Question] We’re seeing companies shifting their IT spending to invest in and learn about AI rather than receiving additional budgets for AI. At some point for AI to be transformative, as everyone expects, it needs to be accretive to spending. Satya, when do you believe AI will hit the maturity level?

[Answer] A good place to start is to watch what’s happening in terms of standard issues for software teams, right? I mean if you think about it, they bought tools in the past. Now you basically buy tools plus Copilot, right? So you could even say that this is characterized as perhaps shift of what is OpEx dollars into effectively tool spend because it gives operating leverage to all of the OpEx dollars you’re spending today, right? That’s really a good example of, I think, what’s going to happen across the board. We see that in customer service. We see that in sales. We see that in marketing, anywhere there’s operations…

…one of the interesting rate limiters is culture change inside of organizations. When I say culture change, that means process change…  That requires not just technology but in fact, companies to go do the hard work of culturally changing how they adopt technology to drive that operating leverage. And this is where we’re going to see firm-level performance differences…

…And so yes, it will take time to — for it to percolate through the economy. But this is faster diffusion, faster rate of adoption than anything we have seen in the past. As evidenced even by Copilot, right, it’s faster than any suite we have sold in the past.

Netflix (NASDAQ: NFLX)

Netflix has been working with machine learning (ML) for almost two decades, with ML being foundational for the company’s recommendation systems; management thinks that generative AI can be used to help creators improve their story-telling, and there will always be a place for creators

[Question]  What is the opportunity for Netflix to leverage generative AI technology in the near and long term? What do you think great storytellers should be focused on as this technology continues to emerge quickly? 

[Answer] Worth noting, I think, that we’ve been leveraging advanced technologies like ML for almost 2 decades. These technologies are the foundation for our recommendation systems that help us find these largest audiences for our titles and deliver the most satisfaction for members. So we’re excited to continue to involve and improve those systems as new technologies emerge and are developed.

And we also think we’re well positioned to be in the vanguard of adoption and application of those new approaches from our just general capabilities that we’ve developed and how we’ve already developed systems that do all these things.

We also think that we have the opportunity to develop and deliver new tools to creators to allow them to tell their stories in even more compelling ways. That’s great for them, it’s great for the stories, and it’s great for our members. 

And what should storytellers be focused on? I think storytellers should be focused on great storytelling. It is incredibly hard and incredibly complex to deliver thrilling stories through film, through series, through games. And storytellers have a unique and critical role in making that happen, and we don’t see that changing.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue had incredibly strong growth in 2024 Q1, driven by demand for the Hopper GPU computing platform; compute revenue was up by 5x while networking revenue was up by 3x

Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year.

Nvidia’s management thinks that cloud providers are getting a 5x return on spending on Nvidia’s AI products over 4 years; management also thinks that cloud providers serving LLMs (large language models) via APIs (application programming interfaces) can earn $7 in revenue for every $1 spent on Nvidia’s H200 servers through running inference 

Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers’ investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over 4 years…

…H200 nearly doubles the inference performance of H100, delivering significant value for production deployments. For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over 4 years.
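The token economics in that quote can be sanity-checked with quick arithmetic. The sketch below uses the 24,000 tokens-per-second figure from the call, but the server price and per-token API price are purely illustrative assumptions of mine; Nvidia did not disclose the prices behind the $7 figure:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

# Figure from the earnings call
tokens_per_second = 24_000          # HGX H200 server serving Llama 3, per the call
years = 4

# Assumptions (NOT from the call) -- illustrative only
server_cost_usd = 300_000           # hypothetical HGX H200 server price
price_per_million_tokens = 0.70     # hypothetical API price, USD per 1M tokens

# Total tokens served over the 4-year window
total_tokens = tokens_per_second * SECONDS_PER_YEAR * years

# Revenue at the assumed API price, and revenue per dollar of server cost
revenue = total_tokens / 1_000_000 * price_per_million_tokens
revenue_per_dollar = revenue / server_cost_usd

print(f"Tokens over {years} years: {total_tokens:.2e}")
print(f"Revenue: ${revenue:,.0f}")
print(f"Revenue per $1 of server cost: ${revenue_per_dollar:.2f}")
```

With these assumed prices the result lands close to the $7-per-$1 figure quoted, but different price or server-cost assumptions would shift it proportionally.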

Nvidia’s management sees Nvidia GPUs as offering the best time-to-train AI models, the lowest cost to train AI models, and the lowest cost to run inference on AI models

For cloud rental customers, NVIDIA GPUs offer the best time-to-train models, the lowest cost to train models and the lowest cost to inference large language models.

Leading LLM (large language model) providers are building on Nvidia’s AI infrastructure in the cloud

Leading LLM companies such as OpenAI, Adept, Anthropic, Character.ai, Cohere, Databricks, DeepMind, Meta, Mistral, XAi, and many others are building on NVIDIA AI in the cloud.

Tesla is using Nvidia’s GPUs for its FSD (Full Self Driving) version 12 software for AI-powered autonomous driving; Nvidia’s management sees automotive as the largest enterprise vertical within its Data Center business this year

We supported Tesla’s expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version 12, their latest autonomous driving software based on vision. Video transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion revenue opportunity across on-prem and cloud consumption.

Meta Platforms’ Llama 3 LLM was trained on a large cluster of Nvidia GPUs

A big highlight this quarter was Meta’s announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Llama 3 is openly available and has kickstarted a wave of AI development across industries.

Nvidia’s management sees inferencing of AI models growing as generative AI makes its way into more consumer internet applications

As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales both with model complexity as well as with the number of users and number of queries per user, driving much more demand for AI compute.

Nvidia’s management sees inferencing accounting for 40% of Data Center revenue over the last 4 quarters

In our trailing 4 quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly.

Nvidia’s management is seeing companies build AI factories (large clusters of AI chips); Nvidia worked with more than 100 customers in 2024 Q1 to build AI factories that range in size from hundreds to tens of thousands of GPUs

Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out.  In Q1, we worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

Nvidia’s management is seeing growing demand from nations for AI infrastructure and they see revenue from sovereign AI reaching high single-digit billions in 2024

From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI. Sovereign AI refers to a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use. For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank to build out the nation’s sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe’s most powerful cloud native AI supercomputer. In Italy, Swisscom Group will build the nation’s first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputer Centre is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA’s accelerated AI factories across Southeast Asia…

…From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year.

Nvidia’s revenue in China is down significantly in 2024 Q1 because of export restrictions for leading AI chips; management expects to see strong competitive forces in China going forward

We ramped new products designed specifically for China that don’t require export control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.

Because of improvements in CUDA algorithms, Nvidia’s management has been able to drive a 3x improvement in LLM inference speed on the H100 chips, which translates to a 3x cost reduction when serving AI models

Thanks to CUDA algorithm innovations, we’ve been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3.
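The arithmetic behind that claim is straightforward: at a fixed cost per GPU-hour, the cost per token served is inversely proportional to throughput, so a 3x speedup is a 3x cost reduction. A back-of-envelope sketch (all figures are hypothetical, not from the call):

```python
# Back-of-envelope sketch: why a 3x inference speedup translates into a
# roughly 3x cost reduction per token served. All numbers are hypothetical.

def cost_per_million_tokens(gpu_hour_cost: float, tokens_per_second: float) -> float:
    """Cost to generate 1 million tokens on one GPU at a given throughput."""
    seconds_needed = 1_000_000 / tokens_per_second
    return gpu_hour_cost * seconds_needed / 3600

GPU_HOUR_COST = 4.0                # hypothetical $ per GPU-hour
BASELINE_TPS = 1_000.0             # hypothetical tokens/sec before optimization
OPTIMIZED_TPS = 3 * BASELINE_TPS   # 3x speedup from software improvements alone

before = cost_per_million_tokens(GPU_HOUR_COST, BASELINE_TPS)
after = cost_per_million_tokens(GPU_HOUR_COST, OPTIMIZED_TPS)
print(f"before: ${before:.2f}/M tokens, after: ${after:.2f}/M tokens")
print(f"cost reduction: {before / after:.1f}x")
```

The notable point is that the hardware is unchanged; the cost reduction comes entirely from software, which is the argument management makes later about CUDA investments compounding over time.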

Nvidia’s management sees demand for the company’s latest AI chips exceeding supply well into 2025

We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year.

Nvidia’s strong networking growth in 2024 Q1 was driven by InfiniBand

Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2.

Nvidia’s management has started shipping its own Ethernet solution for AI networking called Spectrum-X Ethernet; management believes that Spectrum-X is optimised for AI from the ground up, and delivers 1.6x higher networking performance for AI workloads compared with traditional Ethernet; Spectrum-X is already ramping with multiple customers, including in a GPU cluster with 100,000 GPUs; Spectrum-X opens a new AI networking market for Nvidia and management thinks it can be a multi-billion-dollar product line within a year; management is going all-in on Ethernet for AI networking, but they still see InfiniBand as the superior solution; InfiniBand started as a computing fabric and became a network, whereas Ethernet was a network that is becoming a computing fabric

In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up. It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies to overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year…

…But we’re all in on Ethernet, and we have a really exciting road map coming for Ethernet. We have a rich ecosystem of partners. Dell announced that they’re taking Spectrum-X to market. We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market.

And so for companies that want the ultimate performance, we have InfiniBand computing fabric. InfiniBand is a computing fabric, Ethernet is a network. And InfiniBand, over the years, started out as a computing fabric, became a better and better network. Ethernet is a network and with Spectrum-X, we’re going to make it a much better computing fabric. And we’re committed, fully committed, to all 3 links, NVLink computing fabric for single computing domain, to InfiniBand computing fabric, to Ethernet networking computing fabric. And so we’re going to take all 3 of them forward at a very fast clip.

Nvidia’s latest AI chip-platform, Blackwell, delivers 4x faster training speeds, 30x faster inference speeds, and 25x lower total cost of ownership, compared to the H100 chip and enables real-time generative AI on trillion-parameter LLMs; the Blackwell platform includes Nvidia’s InfiniBand and Ethernet switches; management has built Blackwell to be compatible with all kinds of data centers; the earliest deployers of Blackwell include Amazon, Google, Meta, and Microsoft; Nvidia’s management is on a 1-year development rhythm, so a new chip will follow Blackwell within the next 12 months

At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series designed for a trillion-parameter scale AI. Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number of Hoppers launched and representing every major computer maker in the world… 

…Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI…

…I can announce that after Blackwell, there’s another chip. And we are on a 1-year rhythm.

Nvidia’s management has introduced AI software called Nvidia Inference Microservices that allow developers to quickly build and deploy generative AI applications across a broad range of use cases including text, speech, imaging, vision, robotics, genomics, and digital biology

We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration, network computing, and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake and Stability AI. NIMs will be offered as part of our NVIDIA AI Enterprise software platform for production deployment in the cloud or on-prem.

Nvidia’s GPUs that are meant for gaming on personal computers (PCs) can also be used for running generative AI applications on PCs; Nvidia and Microsoft have a partnership that helps Windows run LLMs up to 3x faster on PCs equipped with Nvidia’s GeForce RTX GPUs

From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor cores. Now with an installed base of over 100 million, GeForce RTX GPUs are perfect for gamers, creators, and AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs. NVIDIA has the full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs…

…Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs.

Nvidia’s management is seeing game developers using the company’s AI services to create lifelike non-playable characters in games

Top game developers, including NetEase Games, Tencent and Ubisoft are embracing NVIDIA Avatar Character Engine (sic) [ Avatar Cloud Engine ] to create lifelike avatars to transform interactions between gamers and non-playable characters.

Nvidia’s management thinks that the combination of generative AI and the Omniverse can drive the next wave of professional visualisation growth; the Omniverse has helped Wistron to reduce production cycle times by 50% and defect rates by 40%

We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth…

…Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enable Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%.

Nvidia’s management sees generative AI driving a platform shift in the full computing stack

With generative AI, inference, which is now about fast token generation at massive scale, has become incredibly complex. Generative AI is driving a from-foundation-up full stack computing platform shift that will transform every computer interaction. From today’s information retrieval model, we are shifting to an answers and skills generation model of computing. AI will understand context and our intentions, be knowledgeable, reason, plan and perform tasks. We are fundamentally changing how computing works and what computers can do, from general purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills and, at the industrial level, from producing software to generating tokens, manufacturing digital intelligence.

Nvidia’s management sees token generation from LLMs driving multi-year build out of AI factories

Token generation will drive a multiyear build-out of AI factories…

… Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out.

Nvidia’s management does not think that the demand they are seeing for the company’s AI chips is a pull-ahead of demand, because the chips are being consumed

[Question] How are you ensuring that there is enough utilization of your products and that there isn’t a pull-ahead or a holding behavior because of tight supply, competition or other factors? 

[Answer] The demand for GPUs in all the data centers is incredible. We’re racing every single day. And the reason for that is because applications like ChatGPT and GPT-4o, and now it’s going to be multi-modality, Gemini and its ramp and Anthropic, and all of the work that’s being done at all the CSPs are consuming every GPU that’s out there. There’s also a long line of generative AI startups, some 15,000, 20,000 startups that are in all different fields, from multimedia to digital characters, of course, all kinds of design tool application, productivity applications, digital biology, the moving of the AV industry to video so that they can train end-to-end models to expand the operating domain of self-driving cars, the list is just quite extraordinary. We’re racing actually. Customers are putting a lot of pressure on us to deliver the systems and stand those up as quickly as possible. And of course, I haven’t even mentioned all of the sovereign AIs who would like to train all of their regional natural resource of their country, which is their data, to train their regional models. And there’s a lot of pressure to stand those systems up. So anyhow, the demand, I think, is really, really high and it outstrips our supply.

Nvidia’s management thinks that AI is not merely a chips problem – it is a system problem

The third reason has to do with the fact that we build AI factories. And this is becoming more apparent to people that AI is not a chip problem only. It starts, of course, with very good chips and we build a whole bunch of chips for our AI factories, but it’s a systems problem. In fact, even AI is now a systems problem. It’s not just one large language model. It’s a complex system of a whole bunch of large language models that are working together. And so the fact that NVIDIA builds this system causes us to optimize all of our chips to work together as a system, to be able to have software that operates as a system, and to be able to optimize across the system.

Nvidia’s management sees the highest performing AI chip as having the lowest total cost of ownership (TCO)

Today, performance matters in everything. This is at a time when the highest performance is also the lowest cost because the infrastructure cost of carrying all of these chips cost a lot of money. And it takes a lot of money to fund the data center, to operate the data center, the people that goes along with it, the power that goes along with it, the real estate that goes along with it, and all of it adds up. And so the highest performance is also the lowest TCO.
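The reasoning can be made concrete with a toy calculation (all figures are hypothetical, not Nvidia’s): because the fixed costs of a data-center slot (power, real estate, staff) are paid regardless of which chip sits in it, a chip that costs twice as much but delivers three times the work still has the lower TCO per unit of work.

```python
# Toy sketch of the "highest performance is the lowest TCO" argument.
# Fixed data-center costs are paid per chip slot regardless of performance,
# so a faster chip amortizes them over more work. Numbers are hypothetical.

def tco_per_unit_of_work(chip_price: float, fixed_cost_per_slot: float,
                         relative_performance: float) -> float:
    """Total cost of ownership divided by the work delivered over the chip's life."""
    return (chip_price + fixed_cost_per_slot) / relative_performance

# Hypothetical figures: the faster chip costs 2x more but delivers 3x the work.
slow = tco_per_unit_of_work(chip_price=10_000, fixed_cost_per_slot=15_000,
                            relative_performance=1.0)
fast = tco_per_unit_of_work(chip_price=20_000, fixed_cost_per_slot=15_000,
                            relative_performance=3.0)

print(f"slow chip TCO per unit of work: {slow:,.0f}")
print(f"fast chip TCO per unit of work: {fast:,.0f}")
```

Under these assumed numbers the more expensive chip is roughly half the cost per unit of work delivered, which is the sense in which the highest performance can also be the lowest TCO.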

From the point of view of Nvidia’s management, customers do not mind buying Nvidia’s AI chips today even though better ones are going to come out tomorrow because they are still very early in their build-out of their AI infrastructure, and they want to ship AI advancements fast

[Question]  I’ve never seen the velocity that you guys are introducing new platforms at the same combination of the performance jumps that you’re getting…  it’s an amazing thing to watch but it also creates an interesting juxtaposition where the current generation of product that your customers are spending billions of dollars on is going to be not as competitive with your new stuff very, very much more quickly than the depreciation cycle of that product. So I’d like you to, if you wouldn’t mind, speak a little bit about how you’re seeing that situation evolve itself with customers. 

[Answer]  If you’re 5% into the build-out versus if you’re 95% into the build-out, you’re going to feel very differently. And because you’re only 5% into the build-out anyhow, you build as fast as you can… there’s going to be a whole bunch of chips coming at them, and they just got to keep on building and just, if you will, performance-average your way into it. So that’s the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them. Let me give you an example of time being really valuable, why this idea of standing up a data center instantaneously is so valuable and getting this thing called time-to-train is so valuable. The reason for that is because the next company who reaches the next major plateau gets to announce a groundbreaking AI. And the second one after that gets to announce something that’s 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better?

All of Nvidia’s AI-related hardware products runs on its CUDA software; management thinks that AI performance for Nvidia AI-hardware users can improve over time simply from improvements that the company will be making to CUDA in the future

And all of it — the beautiful thing is all of it runs CUDA. And all of it runs our entire software stack. So if you invest today on our software stack, without doing anything at all, it’s just going to get faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers and everything just runs. 

Shopify (NASDAQ: SHOP)

Shopify Magic is Shopify’s suite of AI products and management’s focus is on providing AI tools for merchants to simplify business operations and enhance productivity

Touching briefly on AI. Our unique position enables us to tap into the immense potential of AI for entrepreneurship and our merchants. Currently, the most practical applications of AI are found in tools that simplify business operations and enhance productivity, all of which we’ve been developing deeper capabilities with our AI product suite, Shopify Magic. 

Shopify’s management is using AI tools for precision marketing, and drove a 130% increase in merchant ads within its primary marketing channel from 2023 Q4 to 2024 Q1 while still being within payback guardrails

Our goal is to always get the most out of every existing channel up to our guardrail limits and continuingly find and experiment with new channels. That is what we build our tools and our AI models to do, and we’re using them to create some incredibly compelling opportunities. Let me give you a very recent example. At the end of last year and early into January, we drove significant efficiency improvements in one of our primary channels in performance marketing, where teams have created and leveraged advanced models using AI and machine learning, which now allows us to target our audiences with unprecedented precision. Using these models and strategies, we drove nearly 130% increase in merchant ads within our primary marketing channel from Q4 to Q1, while still remaining squarely within our payback guardrails.

Shopify has produced good revenue growth despite its headcount remaining flat for 3 quarters; management thinks Shopify can keep headcount growth low while the business continues to grow; the use of AI internally is an important element of how Shopify can continue to drive growth while keeping headcount growth low; an example of an internal use-case of AI is merchant support, where Shopify has (1) seen more than half of support interactions being assisted, and often fully resolved, by AI, (2) been able to provide 24/7 live support in 8 additional languages that were previously offered only for certain hours, (3) decreased the duration of support interactions, (4) reduced the reluctance of merchants to ask questions, and (5) reduced the amount of toil on support staff

We know our team is one of our most valuable assets. And given that it makes up over half of our cost base, we believe we’ve architected ourselves to be faster and more agile, which has enabled us to consistently deliver 25% revenue growth, excluding logistics, all while keeping our headcount flat for 3 straight quarters. More importantly, because of the structure and the automation we have worked to put in place, we think we can continue to operate against very limited headcount growth while achieving a continued combination of consistent top line growth and profitability…

…We continue to remain disciplined on headcount with total headcount remaining essentially flat for the past 3 quarters, all while maintaining and, in fact, accelerating our product innovation capabilities and continuing the top line momentum of our business. How we leverage AI internally is an important element of how we are able to do that…

During Q1, over half of our merchant support interactions were assisted with AI and often fully resolved with the help of AI. AI has enabled 24/7 live support in 8 additional languages that previously were offered only certain hours of the day. We have significantly enhanced the merchant experience. The average duration of support interactions has decreased. And the introduction of AI has helped reduce the reluctance that some merchants previously had towards asking questions that they might perceive as trivial or naive. Additionally, our support staff has experienced a significant reduction in the amount of toil that is part of their jobs. We are improving the merchant support process and achieving much greater efficiency than ever before.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management confirmed that there are no major damages to the company’s fabs and major operations from the recent earthquake in Taiwan – the largest in the region in 25 years – so there are no major disruptions to the supply of AI chips

On April 3, an earthquake of 7.2 magnitude struck Taiwan, and the maximum magnitude at our fabs was 5. Safety systems and protocols at our fabs were initiated immediately and all TSMC personnel are safe. Based on TSMC’s deep experience and capabilities in earthquake response and damage prevention, as well as regular disaster drills, the overall tool recovery in our fabs reached more than 70% within the first 10 hours, and the fabs were fully recovered by the end of the third day. There were no power outages, no structural damage to our fabs, and there’s no damage to our critical tools, including all our EUV lithography tools. That being said, a certain number of wafers in process were impacted and had to be scrapped, but we expect most of the lost production to be recovered in the second quarter and thus, minimal impact to our second quarter revenue. We expect the total impact from the earthquake to reduce our second quarter gross margin by about 50 basis points, mainly due to the losses associated with wafer scraps and material loss…

…Although it was the largest earthquake in Taiwan in the last 25 years, we worked together tirelessly and were able to resume full operation at all our fabs within 3 days with minimal disruptions, demonstrating the resilience of our operations in Taiwan.

TSMC’s management is seeing a strong surge in AI-related demand, and thinks that this supports their view of a structural acceleration in demand for energy-efficient computing

The continued surge in AI-related demand supports our already strong conviction that structural demand for energy-efficient computing is accelerating in an intelligent and connected world. 

TSMC’s management sees the company as a key enabler of AI; the increase in complexity of AI models, regardless of the approaches taken, requires increasingly powerful semiconductors, and this is where TSMC’s value increases, because the company excels at manufacturing the most advanced semiconductors

TSMC is a key enabler of AI applications. AI technology is evolving to use increasingly complex AI models, which need to be supported by more powerful semiconductor hardware. No matter what approach is taken, it requires the use of the most advanced semiconductor process technologies. Thus, the value of our technology position is increasing as customers rely on TSMC to provide the most advanced process and packaging technology at scale, with a dependable and predictable cadence of technology offerings. In summary, our technology leadership enables TSMC to win business and enables our customers to win business in the AI market.

TSMC’s management is seeing nearly every AI innovator working with the company

Almost all the AI innovators are working with TSMC to address the insatiable AI-related demand for energy-efficient computing power. 

TSMC’s management is forecasting the company’s revenue from AI processors to more than double in 2024 and account for low-teens percentage of total revenue; management expects AI processor revenue to grow at 50% annually over the next 5 years and account for more than 20% of TSMC’s total revenue by 2028; management has a narrow definition of AI processors and expect them to be the strongest growth driver for TSMC’s overall HPC (high performance computing) platform and overall revenue over the next few years

We forecast the revenue contribution from server AI processors to more than double this year and account for a low-teens percent of our total revenue in 2024. For the next 5 years, we forecast it to grow at a 50% CAGR and increase to higher than 20% of our revenue by 2028. Server AI processors are narrowly defined as GPUs, AI accelerators, and CPUs performing training and inference functions, and do not include networking, edge or on-device AI. We expect server AI processors to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.
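A quick compounding check shows the forecast hangs together (illustrative numbers only: the 13% starting share and 15% growth for the rest of the business are my assumptions, not TSMC’s):

```python
# Illustrative compounding check: if AI processor revenue is a low-teens share
# of total revenue in 2024 and compounds at 50% per year while the rest of the
# business grows more slowly, its share of total revenue by 2028 comfortably
# exceeds 20%. The 13% starting share and 15% rest-of-business growth rate
# are assumptions for illustration, not figures from the call.

def ai_revenue_share(ai_share_start: float, ai_cagr: float,
                     rest_growth: float, years: int) -> float:
    """AI revenue as a fraction of total revenue after compounding for `years`."""
    ai = ai_share_start * (1 + ai_cagr) ** years
    rest = (1 - ai_share_start) * (1 + rest_growth) ** years
    return ai / (ai + rest)

share_2028 = ai_revenue_share(ai_share_start=0.13, ai_cagr=0.50,
                              rest_growth=0.15, years=4)
print(f"AI share of revenue in 2028: {share_2028:.0%}")
assert share_2028 > 0.20  # consistent with management's >20% forecast
```

Even with a brisk 15% annual growth rate assumed for the rest of the business, a 50% CAGR from a low-teens base pushes the AI share well past the 20% threshold management cites.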

TSMC’s management thinks that strong HPC and AI demand means that it is strategically important for the company to expand its global manufacturing footprint

Given the strong HPC and AI-related demand, it is strategically important for TSMC to expand our global manufacturing footprint to continue to support our U.S. customers, increased customer trust, and expand our future growth potential.

TSMC has received strong support from the US government for its Arizona fabs and one of them has been upgraded to be a fab for 2nm process technology to support AI-demand, and it is scheduled for volume production in 2028; management is confident that the Arizona fabs will have the same quality as TSMC’s Taiwan fabs

In Arizona, we have received a strong commitment and support from our U.S. customers and plan to build 3 fabs, which help to create greater economies of scale…

…Our second fab has been upgraded to utilize 2-nanometer technologies to support the strong AI-related demand in addition to the previously announced 3-nanometer. We recently completed the topping-off ceremony, in which the last construction beam was raised into place, and volume production is scheduled to begin in 2028…

…We are confident that once we begin volume production, we will be able to deliver the same level of manufacturing quality and reliability in each of our fabs in Arizona as from our fabs in Taiwan.

TSMC’s management believes the company’s 2nm technology is industry-leading and nearly every AI innovator is working with the company on its 2nm technology; management thinks 2nm will enable TSMC to capture AI-related growth opportunities in the years ahead

Finally, I will talk about our N2 status. Our N2 technology leads the industry in addressing the insatiable need for energy-efficient computing, and almost all AI innovators are working with TSMC…

… With our strategy of continuous enhancement, N2 and its derivative will further extend our technology leadership position and enable TSMC to capture the AI-related growth opportunities well into future.

TSMC’s management is seeing very, very strong AI-related data center demand, while traditional server demand is slow; there is a shift in wallet-share from hyperscalers from traditional servers to AI servers and that is favourable for TSMC because TSMC has a lower presence in the traditional CPU-centric server space; TSMC is doubling its production capacity for AI-related data centre chips, but it’s still not enough to meet its customers’ demand

However, AI-related data center demand is very, very strong. And traditional server demand is slow, lukewarm…

…The budget for hyperscale players, their wallet-share shift from traditional servers to AI servers, is favorable for TSMC. And we are able to capture most of the semiconductor content in the AI server area as we defined it: GPU, AI accelerator, networking processor, et cetera. Well, we have a lower presence in those CPU-only, CPU-centric traditional servers. So we expect our growth will be very healthy…

…Let me say it again, the demand is very, very strong, and we have done our best, we put in all the effort to increase the capacity. It will probably more than double this year as compared with last year. However, that is not enough to meet the customers’ demand, and we leverage our OSAT partners to complement TSMC’s capacity to fulfill our customers’ needs. Still not enough, of course.

TSMC’s management is working on selling TSMC’s value in the manufacture of AI chips

[Question] I think it’s clear that AI is producing a large profit pool at your customers. And the HBM is also driving super-normal returns for memory players. So my question is, does TSMC believe they’re getting their fair share of the returns in the AI value chain today? And is there scope for TSMC to raise pricing for AI chips in the future?

[Answer] We always say that we want to sell our value, but it is a continuous process for TSMC. And let me tell you that we are working on it. We are happy that our customers are doing well. And if customers do well, TSMC does well.

TSMC’s management still expects the company’s capex intensity (capex as a percentage of revenue) to level off somewhere around the mid-30s range in the next several years even with the AI-boom, but they are ready to increase capex if necessary

[Question] My second question relates to the upward expectations you gave for the AI accelerators. Curious how, in that light, you’re looking at the CapEx, and whether we’re entering a higher growth or investment cycle where capital intensity could need to rise above that mid-30s range that you set

[Answer] We work with our customers closely and our CapEx and capacity planning are always based on the long-term structural market demand profile that is underpinned by the multiyear megatrends….  The capital intensity, in the past few years, it was high as we invested heavily to meet the strong customer demand. Now the increase — the rate of increase for the capex is leveling off, so this year and the next several years, we are expecting that the capital intensity is somewhere at the mid-30s level. But as I just said, if there are opportunities in the future years, then we will invest accordingly.

TSMC’s management wants to support all of TSMC’s AI customers’ needs, and not just the needs of its major AI customer (presumably Nvidia)

 We want to make sure that all our customers get supported, probably not enough this year. But for next year, we try. We try very hard. And you mentioned about giving up some market share, that’s not my consideration. My consideration is to help our customers to be successful in their market…

…[Question] So since your major customers said there’s no room for other types of AI computing chips, but it seems like TSMC is happy to assist some similar customers, right? So is that the right interpretation of your comments?

[Answer] Yes.

Most of TSMC’s AI customers are using the 5nm or 4nm technologies, but they are working with TSMC on even more advanced nodes – such as 3nm and 2nm – because the advanced nodes are more energy-efficient, and energy efficiency in AI data centres is really important; in the past, TSMC’s leading-edge nodes initially saw only smartphone demand, but with 2nm, TSMC will see demand from both smartphones and HPC, so 2nm’s early revenue is expected to be even larger than 3nm’s

[Question] I think currently, most of the AI accelerator, mostly in 5-nanometers, which is N minus 1 comparing to a smartphone for now. So when do we expect them to catch up or surpass in terms of technology node? Do we see them to be the technology driver in 2 nanometers or above?

[Answer] Today, all the AI accelerators, most of them are in the 5- or 4-nanometer technology. My customers are working with TSMC for the next node, even for the next, next node. They have to move fast because, as I said, the power consumption has to be considered in the AI data center. So energy efficiency is very important. Our 3-nanometer is much better than the 5-nanometer. And again, it will be improved in the 2-nanometer. So all I can say is all my customers are working on this kind of trend from 4-nanometer to 3 to 2…

…[Question] Do we see a bigger revenue in the first 2 years of the 2 nanometers because in the past, it’s only smartphone, but in 2-nanometer, it would be both smartphone and HPC customers.

[Answer] With the demand that we’re seeing, we do expect N2 revenue contribution to be even larger than N3, just like 3 is a larger contribution or larger node than 5, et cetera, et cetera.

TSMC’s management is seeing die sizes increase with edge-AI or on-device AI; management thinks that the replacement cycle for smartphones and PCs will be a little accelerated in the future and the edge-AI trend will be very positive for TSMC

Let me mention the edge-AI or the on-device AI. The first order of magnitude is the die size: we saw that with an AI neural processor inside, the die size will be increased, okay? That’s the first thing we observed. And it’s happening. And then for the future, I would think that the replacement cycle for smartphones and PCs will be accelerated a little bit in the future, at least. It’s not happening yet, but we do expect that will happen soon. And all in all, I would say that on-device AI will be very positive for TSMC because we capture the larger share of the market.

Tencent (NASDAQ: TCEHY)

Weixin users are increasingly supplementing their consumption of social content in chat and Moments with algorithmically recommended content on official accounts and video accounts, and engagement with mini programs; this trend was driven by AI-powered recommendations

For Weixin, users are increasingly supplementing their stable consumption of social graph-supplied content in chat and Moments with consumption of algorithmically recommended content in official accounts and video accounts, and engagement with Mini Programs’ diverse range of services. This trend benefits from our heavy investment in AI, which makes the recommendations better and better over time.

Official accounts achieved healthy year-on-year pageview growth, driven by AI-powered recommendation algorithms

For official accounts, which enable creators to share text and images on chosen topics with interested followers, pageviews achieved healthy year-on-year growth, as AI-powered recommendation algorithms allow us to provide targeted, high-quality content more effectively.

Tencent’s online advertising revenue was up 26% in 2024 Q1 because of increased engagements from AI-powered ad targeting; ad spend from all major categories increased in 2024 Q1 except for automotives; during the quarter, Tencent upgraded its ad tech platform and made generative AI-powered ad creation tools available to boost ad creation efficiency and better targeting

For online advertising, our revenue was RMB 26.5 billion in the quarter up 26% year-on-year, benefiting from increased engagements in AI-powered ad targeting. Ad spend from all major categories except automotive increased year-on-year, particularly from games, internet services and consumer goods sectors. During the quarter, we upgraded our ad tech platform to help advertisers manage ad campaigns more effectively, and we made generative AI-powered ad creation tools available to all advertisers. These initiatives enable advertisers to create ads more efficiently and to deliver better targeting.

Hunyuan (Tencent’s foundational LLM) was scaled up using the mixture-of-experts approach; management is deploying Hunyuan in more Tencent services; management is open-sourcing a version of Hunyuan that provides text-to-image generative AI

And for Hunyuan, the main model achieved significant progress as we’ve scaled up using the mixture of experts approach, and we’re deploying Hunyuan in more of our services. Today, we announced that we’re making a version of Hunyuan providing text-to-image generative AI available on an open-source basis.

Tencent’s operating capex was RMB6.6b in 2024 Q1, up massively from a low base in 2023 Q1 but down slightly sequentially, because of spending on GPUs and servers to support Hunyuan and the AI ad recommendation algo

Operating CapEx was RMB 6.6 billion, up 557% year-on-year from a low base quarter last year, mainly driven by investment in GPUs and servers to support our Hunyuan and AI ad recommendation algorithms.
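As a quick sanity check on the low-base effect, the two reported figures imply a 2023 Q1 base of roughly RMB 1 billion (the base itself is derived below, not reported):

```python
# Implied prior-year base from the reported level and growth rate.
capex_q1_2024 = 6.6   # RMB billions, reported
yoy_growth = 5.57     # +557% year-on-year, reported

implied_base = capex_q1_2024 / (1 + yoy_growth)
print(f"implied 2023 Q1 operating capex: ~RMB {implied_base:.1f}b")
```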

Tencent’s management expects advertising revenue growth to decelerate from 2024 Q1’s level, but still expects to outpace the broader industry because (1) Tencent’s ad load is still small relative to the advertising real estate available, and (2) AI will help the advertising business and can easily double or even triple Tencent’s currently low click-through rates; management thinks Tencent’s advertising business will benefit from AI disproportionately vis-a-vis competitors who also use AI because Tencent has been under-monetising and has lower click-through rates, so any AI-driven improvements will have a bigger impact; Hunyuan is part of the AI technologies that management has deployed for the advertising business

Around advertising, I’d say that, as you would expect, given the economy is mixed, advertiser sentiment is also quite mixed and it’s certainly a challenging environment in which to sell advertising. The first quarter for us is a slightly unusual quarter because it’s a small quarter for advertising due to the Chinese New Year effect. And so sometimes the accelerations or the decelerations get magnified as a result. So we would expect our advertising growth to be less rapid in subsequent quarters of the year than it was in the first quarter and more similar to consensus expectations for our advertising revenue growth for the rest of the year. But that said, we think that we are in a good position to continue taking share of the market at a rapid rate, given we’re very early in increasing our ad load on video accounts, which is currently around 1/4 of the ad loads of our major competitors with short video products.

And also given we’re early in capturing the benefits of deploying AI to our ad tech stack. And we think that we will — we are benefiting and will continue to benefit disproportionately from applying AI to our ad tech because historically, as a social media platform, our click-through rates were low. And so starting from that lower base, we can — we have seen we can double or triple click-through rates in a way that’s not possible for ad services that are starting from much higher click through rates…

… [Question] In the future, given that competitors such as ByteDance or Alibaba also apply AI to their ad businesses, how do you think AI will change ad market share in the longer term?

[Answer] Your question was around the number of competitors that are obviously applying AI as well. And we believe that all of them will benefit from AI, too. But we think that the biggest beneficiaries will be those companies, of which we are one, that have very substantial under-monetized time spent and are now able to monetize that time spent more effectively by deploying AI, because the deployment of AI enables an upward structural shift in click-through rates, and that shift is most pronounced for those inventories where the click-through rates were lower to begin with, such as social media inventory. Those tools also allow advertisers who previously were able to create advertisements for search, which are text in nature, but not to create advertisements for social media, which are image and video in nature, to now use generative AI to create advertisements for social media. So in general, we think there’ll be a reallocation of advertising spend toward those services which have high time spent, high engagement and are now able to deliver increasing click-through rates and increasing transaction volume, more commensurate with their time-spent and engagement superiority…

…  So on ad tech, we’re innovating around the process of targeting the ads using artificial intelligence. We’re innovating around helping advertisers manage their advertising campaigns. And then most recently, we’ve been — we are now deploying Hunyuan to facilitate advertisers, creating the advertising content.
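Management’s argument that low click-through-rate inventory benefits disproportionately can be made concrete with toy numbers. A minimal sketch, where all CTR figures are hypothetical and ad revenue is assumed to scale roughly with CTR:

```python
# Illustrative only: why the same absolute targeting improvement lifts
# low-CTR social inventory proportionally more than high-CTR inventory.

def revenue_multiple(base_ctr: float, improved_ctr: float) -> float:
    """If ad pricing scales roughly with CTR, revenue scales with the
    CTR multiple."""
    return improved_ctr / base_ctr

# Hypothetical: AI targeting adds 1 percentage point of CTR to both.
social = revenue_multiple(base_ctr=0.01, improved_ctr=0.02)  # 1% -> 2%
search = revenue_multiple(base_ctr=0.05, improved_ctr=0.06)  # 5% -> 6%

print(f"low-CTR inventory: {social:.2f}x revenue")
print(f"high-CTR inventory: {search:.2f}x revenue")
```

Under these made-up numbers, the low-CTR inventory doubles while the high-CTR inventory gains only 20%, which is the shape of the "double or triple click-through rates" claim in the quote.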

Tencent’s management thinks that WeChat will be a great distribution channel for AI products, but they are still figuring out the best use case for AI (including Tencent’s own Hunyuan LLM); management is actively testing, and they will roll out the products they think are the best over time

I think we do believe that, with the right product, our WeChat platform and our other products, which have a lot of user engagement, will be great distribution channels for these AI products. But I think at this point in time, everybody is actually trying out different products that may work. No one has really come up with a killer application yet, with the exception of probably OpenAI’s question-and-answer product. So I think you should be confident that we have been developing the technology, and we have best-in-class technology in Hunyuan. At the same time, we are actively creating and testing out different products to see what would make sense for our existing products, and as the time comes, these products will be rolled out on our platform.

Tencent’s management thinks that Hunyuan is currently best being deployed in Tencent’s gaming business for customer service purposes; management has been deploying AI in Tencent’s games, but not necessarily generative AI; Hunyuan will be useful for developing games when it gains multi-modal capabilities, especially in creating high-quality videos, but it will be some time before Hunyuan reaches that level

I think Hunyuan can assist the game business in multiple ways. Right now, the best contributor is actually on the customer service front. When Hunyuan is deployed to answer questions in the customer service bot for a lot of our games, it actually achieves a very high customer satisfaction level. And AI, in general, has already been deployed in our games, but not necessarily the generative AI technology yet. In terms of Hunyuan, I think, over time, when we can move Hunyuan into multi-modal, and especially if we can start creating really high-quality, high-fidelity videos, then that would actually be helpful. Before that happens, Hunyuan can be used in NPCs and create certain interactive experiences, but it’s not going to be able to take over the very heavy lifting of content creation in gaming yet. I think it’ll probably be a couple more generations before it can be used for game production.

Tesla (NASDAQ: TSLA)

Tesla’s FSD v12 is a pure AI-based self driving technology; FSD v12 is now turned on for all North American Tesla vehicles – around 1.8 million vehicles – that are running on Hardware 3 or later and it is used on around half of the vehicles, with the percentage of users increasing each week; more than 300 million miles have been driven with FSD v12; management thinks that it’s only a matter of time before Tesla’s autonomous driving capabilities exceed human reliability

Regarding FSD V12, which is the pure AI-based self-driving, if you haven’t experienced this, I strongly urge you to try it out. It’s profound and the rate of improvement is rapid. And we’ve now turned that on for all cars, with the cameras and inference computer, everything from Hardware 3 on, in North America. So it’s been pushed out to, I think, around 1.8 million vehicles, and we’re seeing about half of people use it so far and that percentage is increasing with each passing week. So we now have over 300 million miles that have been driven with FSD V12…

…I think it should be obvious to anyone who’s driving V12 in a Tesla that it is only a matter of time before we exceed the reliability of humans and we’ve not much time with that. 

Tesla’s management believes that the company’s vision-based approach with end-to-end neural networks for full self driving is better than other approaches, because it mimics the way humans drive, and the global road networks are designed for biological neural nets and eyes

Since the launch of Full Self-Driving — Supervised Full Self-Driving, it’s become very clear that the vision-based approach with end-to-end neural networks is the right solution for scalable autonomy. And it’s really how humans drive. Our entire road network is designed for biological neural nets and eyes. So naturally, cameras and digital neural nets are the solution to our current road system…

… I think we just need to — it just needs to be obvious that our approach is the right approach. And I think it is. I think now with 12.3, if you just have the car drive you around, it is obvious that our solution with a relatively low-cost inference computer and standard cameras can achieve self-driving. No LiDARs, no radars, no ultrasonic, nothing.

Tesla has reduced the subscription price of FSD to US$99 a month; management is talking to one major auto manufacturer on licensing Tesla’s FSD software; it will take time for third-party automakers to use Tesla’s autonomous driving technology as a massive design change is needed for the vehicles even though all that is needed is for cameras and an inference computer to be installed

To make it more accessible, we’ve reduced the subscription price to $99 a month, so it’s easy to try out…

…We’re in conversations with one major automaker regarding licensing FSD…

…I think we just need to — it just needs to be obvious that our approach is the right approach. And I think it is. I think now with 12.3, if you just have the car drive you around, it is obvious that our solution with a relatively low-cost inference computer and standard cameras can achieve self-driving. No LiDARs, no radars, no ultrasonic, nothing… No heavy integration work for vehicle manufacturers…

… So I wouldn’t be surprised if we do sign a deal. I think we have a good chance we do sign a deal this year, maybe more than one. But yes, it would be probably 3 years before it’s integrated with a car, even though all you need is cameras and our inference computer. So just talking about a massive design change.

Tesla’s management has been expanding the company’s core AI infrastructure and the company is no longer training-constrained; Tesla has 35,000 H100 GPUs that are currently working, and management expects to have 85,000 H100 GPUs by end-2024 for AI training

Over the past few months, we’ve been actively working on expanding Tesla’s core AI infrastructure. For a while there, we were training-constrained in our progress. We are, at this point, no longer training-constrained, and so we’re making rapid progress. We’ve installed and commissioned, meaning they’re actually working, 35,000 H100 computers or GPUs. GPU is a wrong word, they need a new word. I always feel like a [ wentz ] when I say GPU because it’s not. GPU stands — G stands for graphics. Roughly 35,000 H100s are active, and we expect that to be probably 85,000 or thereabouts by the end of this year in training, just for training.
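For rough context on the quoted GPU counts, a back-of-envelope sketch. The ~989 TFLOP/s dense BF16 figure per H100 is NVIDIA’s published spec, not from the call, and sustained training utilization runs well below peak:

```python
# Upper-bound estimate of peak training throughput from the quoted counts.
H100_DENSE_BF16 = 0.989e15  # ~989 TFLOP/s per H100 (published spec)

for gpus in (35_000, 85_000):
    peak = gpus * H100_DENSE_BF16
    print(f"{gpus:>6,} H100s -> ~{peak / 1e18:.0f} EFLOP/s peak")
```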

Tesla’s AI robot, Optimus, is able to do simple factory tasks and management thinks it can do useful tasks by the end of this year; management thinks Tesla can sell Optimus by the end of next year; management still thinks that Optimus will be an incredibly valuable product if it comes to fruition; management thinks that Tesla is the best-positioned manufacturer of humanoid robots with efficient AI inference to be able to reach production at scale

[Question] What is the current status of Optimus? Are they currently performing any factory tasks? When do you expect to start mass production?

[Answer] We are able to do simple factory tasks or at least, I should say, factory tasks in the lab. We do think we will have Optimus in limited production in the factory — in an actual factory itself, doing useful tasks before the end of this year. And then I think we may be able to sell it externally by the end of next year. These are just guesses. As I’ve said before, I think Optimus will be more valuable than everything else combined. Because if you’ve got a sentient humanoid robot that is able to navigate reality and do tasks at request, there is no meaningful limit to the size of the economy. So that’s what’s going to happen. And I think Tesla is best positioned of any humanoid robot maker to be able to reach volume production with efficient inference on the robot itself.

The vision of Tesla’s management for autonomous vehicles is for the company to own and operate some autonomous vehicles within a Tesla fleet, and for the company to be an Airbnb- or Uber-like platform for other third-party owners to put their vehicles into the fleet; management thinks Tesla’s fleet can be tens of millions of cars worldwide – even more than 100 million – and as the fleet grows, it will act as a positive flywheel for Tesla in terms of producing data for training

And something I should clarify is that Tesla will be operating the fleet. So you can think of like how Tesla — you think of Tesla like some combination of Airbnb and Uber, meaning that there will be some number of cars that Tesla owns itself and operates in the fleet. There will be some number of cars — and then there’ll be a bunch of cars where they’re owned by the end user. That end user can add or subtract their car to the fleet whenever they want, and they can decide if they want to only let the car be used by friends and family or only by 5-star users or by anyone. At any time, they could have the car come back to them and be exclusively theirs, like an Airbnb. You could rent out your guestroom or not any time you want. 

So as our fleet grows, we have 7 million cars — 9 million cars, going to eventually tens of millions of cars worldwide. With a constant feedback loop, every time something goes wrong, that gets added to the training data and you get this training flywheel happening in the same way that Google Search has the sort of flywheel. It’s very difficult to compete with Google because people are constantly doing searches and clicking and Google is getting that feedback loop. So same with Tesla, but at a scale that is maybe difficult to comprehend. But ultimately, it will be tens of millions…

… And then I mean if you get like to the 100 million vehicle level, which I think we will, at some point, get to, then — and you’ve got a kilowatt of useable compute and maybe your own Hardware 6 or 7 by that time, then you really — I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, probably more than any company.

Tesla’s management thinks that the company can sell AI inference compute capacity that’s sitting in Tesla vehicles when they are not in use; Tesla cars are running Hardware 3 and Hardware 4 now, while Hardware 5 is coming; unlike smartphones or computers, the computing capacity of Tesla vehicles is entirely within Tesla’s control, and the company has skills on deploying compute workloads to each individual vehicle

I think there’s also some potential here for an AWS element down the road where if we’ve got very powerful inference because we’ve got a Hardware 3 in the cars, but now all cars are being made with Hardware 4. Hardware 5 is pretty much designed and should be in cars hopefully towards the end of next year. And there’s a potential to run — when the car is not moving, to actually run distributed inference. So kind of like AWS, but distributed inference. Like it takes a lot of computers to train an AI model, but many orders of magnitude less compute to run it. So if you can imagine a future [ path ] where there’s a fleet of 100 million Teslas, and on average, they’ve got like maybe a kilowatt of inference compute, that’s 100 gigawatts of inference compute distributed all around the world. It’s pretty hard to put together 100 gigawatts of AI compute. And even in an autonomous future where the car is perhaps used instead of being used 10 hours a week, it is used 50 hours a week. That still leaves over 100 hours a week where the car inference computer could be doing something else. And it seems like it will be a waste not to use it…

…And then I mean if you get like to the 100 million vehicle level, which I think we will, at some point, get to, then — and you’ve got a kilowatt of useable compute and maybe your own Hardware 6 or 7 by that time, then you really — I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, probably more than any company…

…Yes, probably because it takes a lot of intelligence to drive the car anyway. And when it’s not driving the car, you just put this intelligence to other uses, solving scientific problems or answering [ this horse ] or something else… We’ve already learned about deploying workloads to these nodes… And unlike laptops and our cell phones, it is totally under Tesla’s control. So it’s easier to distribute workloads to different nodes, as opposed to asking users for permission on their own cell phones, which would be very tedious…

… So like technically, yes, I suppose like Apple would have the most amount of distributed compute, but you can’t use it because you can’t get the — you can’t just run the phone at full power and drain the battery. So whereas for the car, even if you’ve got a kilowatt-level inference computer, which is crazy power compared to a phone, if you’ve got a 50 or 60 kilowatt-hour pack, it’s still not a big deal. Whether you plug it in or not, you could run for 10 hours and use 10 kilowatt-hours of your kilowatt of compute power.
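The arithmetic behind the "100 gigawatts" figure in these quotes works out simply. A sketch, where the 100 million vehicles, roughly 1 kW of inference compute per car, and 50 driving hours per week are management’s figures, and everything else is derived:

```python
# Fleet-level distributed inference: total capacity and idle time.
fleet_size = 100_000_000   # vehicles (management's long-term figure)
kw_per_car = 1.0           # ~1 kW of inference compute per car
driving_hours = 50         # hours/week driven in an autonomous future
hours_per_week = 168

total_gw = fleet_size * kw_per_car / 1e6          # kW -> GW
idle_share = (hours_per_week - driving_hours) / hours_per_week

print(f"fleet inference capacity: {total_gw:.0f} GW")
print(f"idle, potentially rentable time: {idle_share:.0%} of the week")
```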

Safety is very important for Tesla; management has been conducting safety-training for Tesla’s AI-powered self driving technology through the use of millions of clips of critical safety events collected from Tesla vehicles; the company runs simulations for safety purposes before pushing out a new software version to early users and before it gets pushed to external users; once the new software is with external users, it’s constantly monitored by Tesla; FSD v12’s feedback loop of issues, fixes, and evaluations happens automatically because the AI model learns on its own based on data it is getting

Yes, we have multiple years of validating the safety. In any given week, we train hundreds of neural networks that can produce different trajectories for how to drive the car, and replay them through the millions of clips that we have already collected from our users and our own QA. Those are critical events, like someone jumping out in front, or other critical events that we have gathered in a database over many, many years, and we replay through all of them to make sure that we are net improving safety.

And then we have simulation systems, where we also try to recreate this and test it in a closed-loop fashion. Once this is validated, we give it to our QA networks. We have hundreds of them in different cities: San Francisco, Los Angeles, Austin, New York, a lot of different locations. They are also driving this and collecting real-world miles, and we have an estimate of what the critical events are and whether they are a net improvement compared to the previous week’s builds. And once we have confidence that the build is a net improvement, then we start shipping to early users, like 2,000 employees initially that would like the build. They will give feedback on whether it’s an improvement or whether they’re noting some new issues that we did not capture in our own QA process. And only after all of this is validated do we go to external customers.

And even when we go external, we have like live dashboards of monitoring every critical event that’s happening in the fleet sorted by the criticality of it. So we are having a constant pulse on the build quality and the safety improvement along the way. And then any failures like Elon alluded to, we’ll get the data back, add it to the training and that improves the model in the next cycle. So we have this like constant feedback loop of issues, fixes, evaluations and then rinse and repeat.

And especially with the new V12 architecture, all of this is automatically improving without requiring much engineering intervention, in the sense that engineers don’t have to be creative in how they code the algorithms. It’s mostly learning on its own based on data. So for every failure, like this is how a person chooses to drive this intersection or something like that, we get the data back, we add it to the neural network, and it learns from that training data automatically, instead of some engineers saying that, oh, here, you must rotate the steering wheel by this much or something like that. There are no hard inference conditions. If everything is neural network, it’s pretty soft; it’s a probabilistic distribution based on the new data that it’s getting.
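The release gate described in these quotes can be sketched in a few lines. The structure below is entirely hypothetical (not Tesla’s code): replay candidate builds against a library of critical-event clips and ship only a net safety improvement.

```python
from typing import Callable, List

def failure_rate(model: Callable[[list], bool], clips: List[list]) -> float:
    """Fraction of critical-event clips the model fails to handle safely."""
    return sum(1 for clip in clips if not model(clip)) / len(clips)

def gate_release(candidate, baseline, clips) -> bool:
    """Ship the candidate build only if it is a net safety improvement."""
    return failure_rate(candidate, clips) <= failure_rate(baseline, clips)

# Toy clip library: each "clip" is stand-in sensor data for one critical event.
clips = [[i] for i in range(10)]
baseline = lambda clip: clip[0] >= 2   # current build fails clips 0 and 1
candidate = lambda clip: clip[0] >= 1  # new build fails clip 0 only

print(gate_release(candidate, baseline, clips))  # True: net improvement
```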

Tesla’s management has good insight on the level of improvement Tesla’s AI-powered self-driving technology can be over a 3-4 month time frame, based on a combination of model size scaling, data scaling, training compute scaling, and architecture scaling

And we do have some insight into how good the things will be in, let’s say, 3 or 4 months, because we have advanced models that are far more capable than what is in the car, but that have some issues with them that we need to fix. So there’ll be a step-change improvement in the capabilities of the car, but it will have some quirks that need to be addressed in order to release it. As Ashok was saying, we have to be very careful in what we release to the fleet or to customers in general. So if we look at, say, 12.4 and 12.5, which really could arguably even be V13, V14, because each is pretty close to a total retrain of the neural nets and, in each case, substantially different, we have good insight into where the model is and how well the car will perform in, say, 3 or 4 months…

… In terms of scaling, people generally talk about model scaling, where they increase the model size a lot and get corresponding gains in performance, but we have also figured out scaling laws along other axes in addition to model size scaling. There is data scaling: you can increase the amount of data you use to train the neural network, and that also gives similar gains. You can also scale up training compute: you can train it for much longer on more GPUs or more Dojo nodes, and that also gives better performance. And you can also have architecture scaling, where you come up with better architectures that, for the same amount of compute, produce better results. So with a combination of model size scaling, data scaling, training compute scaling and architecture scaling, we can basically extrapolate: okay, with continued scaling at this rate, we can predict future performance.
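The extrapolation described here is the standard scaling-law recipe: fit a power law plus an irreducible floor on smaller runs, then predict performance at larger scale. A sketch with invented coefficients (Tesla’s actual curves are not public):

```python
# Toy scaling law: loss falls as a power of training compute toward a floor.
def predicted_loss(compute: float, a: float = 10.0, alpha: float = 0.05,
                   floor: float = 0.5) -> float:
    """loss ~= a * compute**(-alpha) + floor, with made-up coefficients."""
    return a * compute ** -alpha + floor

# Each doubling of compute gives a smooth, predictable improvement.
for c in (1e21, 2e21, 4e21):
    print(f"compute {c:.0e} FLOPs -> predicted loss {predicted_loss(c):.3f}")
```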

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management will soon roll out a game-changing AI-fueled forecasting tool on the company’s Kokai platform

We are quickly approaching some of the biggest UX and product rollouts of Kokai that nearly all of our customers will begin to use and see benefits from over the next few quarters, including a game-changing AI-fueled forecasting tool.

Trade Desk’s management has been using AI since 2016; management has always thought about AI as a copilot for humans even before Trade Desk was founded

We’ve been deploying AI in our platform since we launched Koa in 2016…

… To that end, we’ve known since before our company existed that the complexity of assessing millions of ad opportunities every second, along with hundreds of variables for each impression, is beyond the scope of any individual human. We have always thought about AI as a copilot for our hands-on keyboard traders.

Through Kokai, Trade Desk is bringing AI to many decision-points in the digital advertising process; Trade Desk is also incorporating AI into new relevance indices in Kokai for advertisers to better understand the relevance of different ad impressions in reaching their target audience; US Cellular used Trade Desk’s TV Quality Index to improve its conversion rate by 71%, reach 66% more households, and decrease cost per acquisition by 24%

And with Kokai, we are bringing the power of AI to a broader range of key decision points than ever, whether it’s in relevance scoring, forecasting, budget optimization, frequency management or upgraded measurement. AI is also incorporated into a series of new indices that score relevance, which advertisers can use to better understand the relevance of different ad impressions in reaching their target audience. For example, U.S. Cellular worked with their agency, Harmelin Media, to leverage our TV Quality Index to better reach new customers. Their conversion rates improved 71%. They reached 66% more households by optimizing frequency management, and their cost per acquisition decreased 24%. I think it’s important to understand how we’re putting AI to work in Kokai because this kind of tech dislocation will bring new innovators.

Visa (NYSE: V)

Visa’s management is using AI to improve the company’s risk offerings; the company’s Visa Protect for account-to-account payments feature is powered by AI-based fraud detection models; another of the features, Visa Deep Authorization, is powered by a deep-learning recurrent neural network model for risk scoring of e-commerce payments specifically in the USA

Across our risk offerings, we continue to bolster them through our technology, innovation, and AI expertise and are expanding their utility beyond the Visa network. Recently, we announced 3 such capabilities in our Visa Protect offering. The first is the expansion of our signature solutions, Visa Advanced Authorization and Visa Risk Manager for non-Visa card payments, making them network-agnostic. This allows issuers to simplify their fraud operations into a single fraud detection solution. The second is the release of Visa Protect for account-to-account payments, our first fraud prevention solution built specifically for real-time payments, including P2P digital wallets, account-to-account transactions and Central Bank’s instant payment systems. Powered by AI-based fraud detection models, this new service provides a real-time risk score that can be used to identify fraud on account-to-account payments. We’ve been piloting both of these in a number of countries, and our strong results thus far have informed our decision to roll these out globally. The third solution is Visa Deep Authorization. It is a new transaction risk scoring solution tailored specifically to the U.S. market to better manage e-commerce payments powered by a world-class deep-learning recurrent neural network model and petabytes of contextual data…

…What we found in the U.S. e-commerce market is that, on the one hand, it’s the most developed e-commerce market on the planet. On the other hand, it’s become the place of the most sophisticated fraud and attack vectors that we see anywhere in the world. And so what we are bringing to market with Visa Deep Authorization is an e-commerce transaction risk scoring platform and capability that is specifically tailored and built for the unique sets of attack vectors that we’re seeing in the U.S. So as I was mentioning in my prepared remarks, it’s built on deep learning technology that’s specifically tuned to some of the sequential and contextual view of accounts that we’ve had in the U.S. market. 
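The "deep-learning recurrent neural network" phrasing can be illustrated with a toy model. This is a minimal sketch assuming nothing about Visa’s actual architecture or features: the weights and the single amount-based feature below are invented for illustration. The key idea is that a recurrent hidden state folds the cardholder’s transaction history into each score, so context, not just the current transaction, drives the risk estimate.

```python
import math

def score_sequence(amounts, w_in=0.8, w_rec=0.5, w_out=2.0):
    """Return a fraud risk score in (0, 1) after each transaction,
    using a one-unit recurrent state over normalized amounts."""
    h, scores = 0.0, []
    for amt in amounts:
        h = math.tanh(w_in * amt + w_rec * h)          # recurrent update
        scores.append(1 / (1 + math.exp(-w_out * h)))  # sigmoid risk score
    return scores

# A sudden large (normalized) amount after small ones pushes the score up.
scores = score_sequence([0.1, 0.1, 3.0])
print([f"{s:.2f}" for s in scores])
```

The sequential view is what distinguishes this from scoring each transaction in isolation: the same large transaction would score differently after a different history.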

Wix (NASDAQ: WIX)

Wix’s management released its AI website builder in 2024 Q1, which is the company’s cornerstone product; the AI website builder utilises a conversational AI chat experience where users describe their intent and goals, and it is based on Wix’s decade-plus of knowledge in website creation and user behaviour; the AI-generated sites include all relevant pages, business solutions (such as scheduling and e-commerce), and functions; management thinks the AI website builder is a unique product in the market; management is seeing strong utilisation of the AI website builder, with hundreds of thousands of sites already created in the few months since launch by both Self Creators and Partners

Notably, this quarter, we released the highly anticipated AI website builder. This is our cornerstone AI product. It leverages our 10-plus years of web creation expertise and unparalleled knowledge based on users’ behavior through a conversational AI chat experience. Users describe their intent and goals. Our AI technology then creates a professional, unique, and fully built-out website that meets the users’ needs. Importantly, the AI-generated site includes all relevant pages with personalized layout themes, text, images and business solutions such as scheduling, e-commerce and more. Best of all, these websites are fully optimized with Wix’s reliable infrastructure, including security and performance, as well as built-in marketing, SEO, CRM and analytics tools. There is truly nothing like this on the market. Excitingly, feedback on the AI website builder has been incredible. In just a few short months since its launch, hundreds of thousands of sites have already been created using this tool by both Self Creators and Partners. This strong response and utilization is a testament to the depth of our AI expertise and the strength of our product.

Wix released AI-powered image enhancement tools within Wix Product Studio in April which allow users to edit images in a high-quality manner through prompts

In April, we released a suite of AI-powered image enhancement tools that provide users with the capability to create professional images on their own. High-quality images are an essential part of a professional website but often hard to achieve without the help of a professional photographer. Now users will be able to easily erase objects, generate images, edit them to add or replace objects with a simple prompt, all without ever leaving the Wix Product Studio. 

Wix will be releasing more AI products in 2024; the upcoming products include AI business assistants; the AI business assistants are in beta testing and management is seeing great feedback

These new capabilities are just the start of a robust pipeline of AI-enabled products still to come this year, including a variety of vertical AI business assistants that will be released through the year. A couple of these assistants are currently in beta testing and seeing great results and feedback. 

Wix is seeing that its AI products are resulting in better conversion of users into premium subscribers; management believes that Wix’s AI products will be a significant driver of Self Creators growth in the years ahead

We are seeing a tangible benefit from our entire AI offering, particularly better conversion of users into premium subscriptions. I strongly believe that our AI capability will be a significant driver of Self Creators growth in 2024 and beyond.

Both existing and new users of the Wix platform will be exposed to Wix’s AI tools very frequently

[Question] I wanted to kind of follow on to that and just kind of understand with respect to the AI tools. Do you see this primarily impacting the new customers? 

[Answer] When users are building their websites, all the website creation tools are visible to them and are helping them. Most of our users will stay a few years or more than that with the same website and sometimes — and they’ll update it, but they’re not going to recreate it. So, in that sense, of course, the exposure is limited. But the integration of the vertical assistants is something that means that every time you go to the website, you’re going to have a recommendation, and the ideas and things you can do with AI. So, the exposure will be pretty much every time you go into the website. And that is significantly higher. And if you think about the fact that we have a lot of people that run their business on top of Wix, it means that all of those guys will be daily or almost daily exposed to new products with AI…

…You’re going to find AI tools, but they are not going to replace what you already know how to do. Sometimes, if you want to change an image, for example, it’s easier to click on change image instead of writing to the prompt, hey, please change the third image from the top, right? So, it’s always about the combination of how you do things in a balanced way, while allowing users to feel comfortable with the changes, not move beyond that. 

Wix’s management believes that AI will be a boon for new technologies and innovation and will lead to more growth for Wix

I believe that there’s so much potential for new things coming with AI, so much potential with new things coming with market trends and new technologies introduced into the market that I believe that we’re going to continue to see significant innovation, growing innovation coming from small businesses and bigger businesses in the world, which will probably result in the formation of additional growth for us. 

Zoom Video Communications (NASDAQ: ZM)

Zoom is now far beyond just video conferencing, and AI is infused across its platform

Our rapid innovation over the years has taken us far beyond video conferencing. Every step of the way has been guided by our mission to solve customer problems and enable greater productivity. In the process, we have very deliberately created a communication and collaboration powerhouse with AI infused natively across the platform.

Zoom’s management announced Zoom Workplace, an AI-powered collaboration platform in March; Zoom Workplace already has AI-powered features but will soon have Ask AI Companion; Zoom Workplace also improves other Zoom products through AI Companion capabilities; the AI features in Zoom Workplace are provided at no additional cost

In March we announced Zoom Workplace, our AI-powered collaboration platform designed to help our customers streamline communications, improve productivity, increase employee engagement, and optimize in-person time. Within the launch of Zoom Workplace are new enhancements and capabilities like multi-speaker view, document collaboration, AI-powered portrait lighting, along with upcoming features and products like Ask AI Companion, which will work across the platform to help employees make the most of their time. The Workplace launch also boosts Zoom Phone, Team Chat, Events and Whiteboard with many more AI Companion capabilities to help make customers more productive…

…When you look at our Workplace customers, guess what, AI is not only a part of that but also at no additional cost, right? So that is our vision.

Expedia has signed a quadruple-digit seat deal for Zoom Revenue Accelerator, which includes AI products that can help Expedia to drive revenue

Let me thank Expedia, who needs no introduction, for becoming a Lighthouse Zoom Revenue Accelerator customer in the quarter, leaning heavily into our AI products to drive revenue. A power user of Zoom Phone for years, they wanted to better automate workflows, coach sellers and drive efficiencies. We partnered with them on an initial quadruple-digit seat Zoom Revenue Accelerator deal, which includes working directly with their team to improve and tailor the product based on their business model and industry-specific use case.

Centerstone, a nonprofit organisation, expanded Zoom Phone and Zoom Contact Center in 2024 Q1 to leverage AI to provide better care for its beneficiaries

Let me also thank Centerstone, a nonprofit health system specializing in mental health and substance use disorder treatments for individuals, families, and veterans, for doubling down on Zoom. Seeing strong value from their existing Zoom Meetings, Phone and Rooms deployment, in Q1, they expanded Zoom Phone and added Zoom Contact Center in order to leverage AI to provide better care, and Zoom Team Chat in order to streamline communications all from a single platform.

Zoom AI Companion is now enabled in >700,000 customer accounts just 8 months after launch; AI Companion improves the value proposition of all of Zoom’s products and is provided to customers at no additional cost; AI Companion also helps Zoom improve monetisation, as its presence in Zoom’s Business Services enables Zoom to charge a premium price, with the AI features as a key differentiator; management will leverage AI Companion to build a lot of new things

Zoom AI Companion has grown significantly in just eight months with over 700,000 customer accounts enabled as of today. These customers range all the way from solopreneurs up to enterprises with over 100,000 users…

… I think AI Companion not only help our Meetings, Phone, or Team Chat, it’s across the entire Zoom Workplace platform plus all the Business Services, right? Our approach, if you look at our Workplace, the deployment, right, for the entire collaboration platform not only makes all those services better but also customers appreciate it, right, without charging the customers more, right? We do add more value to customers at no additional cost, right? That’s kind of the power part of the Zoom company. At the same time, in terms of monetization, as I mentioned earlier, if you look at our Business Services, AI is a key differentiation, right, AI and we charge a premium price as well, and that’s the value. At the same time, we also are going to leverage AI Companion to build a lot of new things, new services like Ask AI that will be introduced later this year and also some other new services that we’re working on as well.

One of Zoom’s management’s key priorities is to embed AI across all of Zoom Workplace and Business Services

Embedding AI across all aspects of Zoom Workplace and Business Services is a key priority as we continue to drive productivity and engagement for our customers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Coupang, Datadog, Etsy, Fiverr, Mastercard, Meta Platforms, Microsoft, Netflix, Shopify, TSMC, Tesla, The Trade Desk, Visa, Wix, and Zoom. Holdings are subject to change at any time.

Have Apple’s Share Buybacks Been Good For Shareholders?

Apple has used a staggering amount of cash to buy back shares. Has it been a good use of capital for shareholders?

Apple has spent a whopping US$651.4 billion on share repurchases from September 2011 to December 2023. 

For perspective, Broadcom, the 9th largest company listed on the US stock market, currently has a market cap of US$617 billion. Apple could have bought the 9th largest listed company in the US using the cash it spent on buybacks. This brings us to the question, were Apple’s share buybacks the best use of its cash?

How much return did the buybacks create?

To judge if Apple made the right decision, we need to look at how much earnings per share growth the buybacks achieved.

Back in September 2011, Apple had roughly 26 billion shares outstanding on a split-adjusted basis. As of 20 October 2023, the date of the regulatory report for the fiscal year ended 30 September 2023 (FY2023), Apple had 15.55 billion shares outstanding. This is a 40% drop in shares outstanding. The lower share count, achieved through buybacks, has had a profound impact on Apple’s earnings per share.

In FY2023, Apple generated US$97 billion in net income and US$6.13 in diluted earnings per share. If the buybacks didn’t happen and Apple’s shares outstanding remained at 26 billion for FY2023 – instead of 15.55 billion – its diluted earnings per share would only be US$3.73 instead of US$6.13. Said another way, if Apple opted not to reduce its share count, the company would have needed its net income in FY2023 to be higher by US$62 billion in order to generate a similar diluted earnings per share figure.

So, Apple’s US$651 billion investment in share buybacks has created US$62 billion in “annual net income” to the company, and possibly more in the future as Apple’s net income continues to climb.
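The arithmetic above can be double-checked with a short script. This is a minimal sketch using only the figures quoted in this article (FY2023 net income and diluted EPS, plus the two share counts); all inputs are approximations from the text, not fresh data:

```python
# Checking the buyback arithmetic with the article's quoted figures.
net_income = 97e9      # FY2023 net income, US$
eps_diluted = 6.13     # FY2023 diluted EPS, US$, as quoted
shares_2011 = 26e9     # shares outstanding, Sep 2011 (split-adjusted)
shares_2023 = 15.55e9  # shares outstanding, Oct 2023

# Buybacks cut the share count by roughly 40%
reduction = 1 - shares_2023 / shares_2011
print(f"Share count reduction: {reduction:.0%}")  # 40%

# Without buybacks, FY2023 EPS would have been about $3.73
eps_no_buyback = net_income / shares_2011
print(f"EPS at the 2011 share count: ${eps_no_buyback:.2f}")  # $3.73

# Extra net income needed at the old share count to match $6.13 in EPS
extra_income = eps_diluted * shares_2011 - net_income
print(f"Extra income needed: ${extra_income / 1e9:.0f} billion")  # $62 billion
```

The numbers tie out with the article: a roughly 40% reduction in share count, and about US$62 billion of additional annual net income that would otherwise have been needed to produce the same earnings per share.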

Could it have done better?

Although it’s clear now that Apple’s buybacks have had a positive impact on its diluted earnings per share, the next question is whether the buybacks were the best use of the company’s capital.

Broadcom, the company whose market cap is close to the cumulative amount Apple has spent on buybacks, generated net income of US$14 billion in its most recent fiscal year.

If Apple had bought Broadcom instead, it would only have generated US$14 billion more in net profit, far less than the implied US$62 billion growth achieved from buying back its own shares. This would have resulted in substantially less earnings per share growth than the buybacks. In comparison, Apple’s buybacks seem like a good investment decision. 

I know that using Broadcom as an example may not be the best comparison as Apple could have bought Broadcom for much less in 2012. Nevertheless, it gives some perspective on the different possible uses of capital.

Conclusion

Were buybacks the single best use of cash for Apple? Probably not. But were they a bad investment? Definitely not. The return on investment through Apple’s buyback program has resulted in a large jump in its earnings per share. The US$62 billion “increase” in annual earnings could also continue to rise if Apple’s earnings grow over time. Although there could possibly have been better investments, I think Apple made a decent decision to focus on buybacks over the past few years.

But should Apple continue buying back shares? This is the question on everyone’s lips right now, especially with Apple recently announcing a new US$110 billion buyback authorisation.

Buybacks provide a good return only if shares are trading at cheap valuations. Apple’s management needs to continue evaluating the company’s valuation when making future buyback decisions. With Apple’s valuation increasing in the past few years, management will need to decide if conducting buybacks today still provides good value for shareholders or if other forms of investments will be more impactful.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Apple Inc. Holdings are subject to change at any time.

Insights From Berkshire Hathaway’s 2024 Annual General Meeting

Warren Buffett shared plenty of wisdom at the recent Berkshire Hathaway AGM.

Warren Buffett is one of my investment heroes. On 4 May 2024, he held court at the 2024 Berkshire Hathaway AGM (annual general meeting).

For many years, I’ve anticipated the AGM to hear his latest thoughts. But this year’s session was especially poignant because Buffett’s long-time right-hand man, the great Charlie Munger, passed away last November, and it was the first Berkshire AGM in decades where Buffett’s Batman did not have Munger’s Robin at his side. For me, there were three especially touching moments during the meeting. 

First, the AGM kicked off with a highlights-reel of Munger’s notable zingers and it was a beautiful tribute to his wisdom. Second, Munger received a standing ovation from the AGM’s attendees after the highlights-reel was played. Third, while answering a question, Buffett turned to his side and said “Charlie?” before he could catch himself; Buffett then followed up: “I had actually checked myself a couple times already, but I slipped. I’ll slip again.”

Beyond the endearing sentimentality, the Berkshire meeting contained great insights from Buffett and other senior Berkshire executives that I wish to share and document. Before I get to them, I would like to thank my friend Thomas Chua for performing a great act of public service. Shortly after the AGM ended, Thomas posted a transcript of the session at his excellent investing website Steady Compounding.

Without further ado, the italicised passages between the two horizontal lines below are my favourite takeaways after I went through Thomas’ transcript.


Berkshire shares are slightly undervalued in Buffett’s eyes, but Berkshire has trouble buying its shares in a big way because its shareholders do not think about selling

Buffett: And our stock is at a level where it adds slightly to the value when we buy in shares. But we would. We would really buy it in a big way, except you can’t buy it in a big way because people don’t want to sell it in a big way, but under certain market conditions, we could deploy quite a bit of money in repurchases…

…We can’t buy them like a great many other companies because it just doesn’t trade that way. The volume isn’t the same because we have investors, and the investors, the people in this room, really, they don’t think about selling. 

Apple is a very high-quality business to Buffett and Berkshire plans to own it for a long time

Buffett: And that’s sort of the story of why we own American Express, which is a wonderful business. We own Coca Cola, which is a wonderful business, and we own Apple, which is an even better business. And we will own, unless something really extraordinary happens, we will own Apple and American Express and Coca Cola when Greg takes over this place. 

Buffett sold a small portion of his Apple shares, despite it being a high-quality business, because he wants to build Berkshire’s cash position; he can’t find anything attractive in the current environment

Becky Quick: In your recent shareholder letter, I noticed that you have excluded Apple from this group of businesses. Have you or your investment manager’s views of the economics of Apple’s business or its attractiveness as an investment changed since Berkshire first invested in 2016?…

Buffett: We will have Apple as our largest investment, but I don’t mind at all, under current conditions, building the cash position. I think when I look at the alternative of what’s available, the equity markets, and I look at the composition of what’s going on in the world, we find it quite attractive…

…I don’t think anybody sitting at this table has any idea of how to use it [referring to Berkshire’s US$182 billion cash pile] effectively. And therefore, we don’t use it. And we don’t use it now at 5.4%. But we wouldn’t use it if it was at 1%. Don’t tell the Federal Reserve that…

…It’s just that things aren’t attractive, and there’s certain ways that can change, and we’ll see whether they do.

Buffett thinks higher taxes in America are likely to come given current fiscal policies that are resulting in large fiscal deficits

Buffett: I would say with the present fiscal policies, I think that something has to give, and I think that higher taxes are quite likely. If the government wants to take a greater share of your income, or mine or Berkshire’s, they can do it. And they may decide that someday they don’t want the fiscal deficit to be this large, because that has some important consequences, and they may not want to decrease spending a lot, and they may decide they’ll take a larger percentage of what we earn and we’ll pay it. 

Buffett thinks Berkshire’s biggest investments will remain within the USA because it has a strong, productive economy and he understands the USA the best

Buffett: Well, our primary investments will always be in the United States… You won’t find us making a lot of investments outside the United States, although we’re participating through these other companies in the world economy. But I understand the United States rules, weaknesses, strengths, whatever it may be. I don’t have the same feeling for economies generally around the world. I don’t pick up on other cultures extremely well. And the lucky thing is, I don’t have to, because I don’t live in some tiny little country that just doesn’t have a big economy. I’m in an economy already, that is, after starting out with half a percent of the world’s population, has ended up with well over 20% of the world’s output in an amazingly short period of time. So we will be American oriented.

Munger only pounded the table twice with Buffett on investing matters, and they were for BYD and Costco

Buffett: But Charlie twice pounded the table with me and just said, you know, buy, buy, buy. And BYD was one of them and Costco was the other. And we bought a certain amount of Costco and we bought quite a bit of BYD. But looking back, he already was aggressive. But I should have been more aggressive in Costco. It wasn’t fatal that we weren’t. But he was right big time in both companies.

The energy needed for AI and data centres will double or triple today’s total energy demand by the mid-2030s, even though it took 100-plus years for total energy demand to rise to today’s level; utilities will need to invest massive amounts of capital to meet this demand

Greg Abel: If we look at the demand that’s in place for MidAmerican’s Iowa utility over the next, say, into the mid-2030s associated with AI and the data centers, that demand doubles in that short period of time, and it took 100 years plus to get where we are today, and now it’s going to double…If we then go to, say, Nevada, where we own two utilities there and cover the lion’s share in Nevada, if you go over a similar timeframe and you look at the underlying demand in that utility and say, go into the later 2030s, it triples the underlying demand and billions and billions of dollars have to be put in.

The electric utility industry is a lousier business compared to many others that Berkshire owns stakes in

Abel: The electric utility industry will never be as good as, I mean, just remotely as good as, you know, the kind of businesses we own in other arenas. I mean, you look at the return on tangible equity at Coca Cola or American Express or to really top it off, Apple. It’s just, it’s, you know, it’s just a whole different game.

Buffett thinks the impact of AI on human society – both good and bad – is yet to be seen…

Buffett: I don’t know anything about AI, but that doesn’t mean I deny its existence or importance or anything of the sort. And last year I said that we let a genie out of the bottle when we developed nuclear weapons, and that genie has been doing some terrible things lately. And the power of that genie is what, you know, scares the hell out of me. And I don’t know any way to get the genie back in the bottle. And AI is somewhat similar… 

…We may wish we’d never seen that genie, or it may do wonderful things, and I’m certainly not the person that can evaluate that, and I probably wouldn’t have been the person that could have evaluated it during World War Two, when we tested a 20,000-ton bomb that we felt was absolutely necessary for the United States, and would actually save lives in the long run. But we also had Edward Teller, I think it was, who was on a parallel with Einstein, in terms of saying, you may, with this test, ignite the atmosphere in such a way that civilization doesn’t continue. And we decided to let the genie out of the bottle and it accomplished the immediate objective. But whether it’s going to change the future of society, we will find out later.

… but he also thinks AI could enable scammers in a very powerful way

Buffett: Fairly recently, I saw an AI image in front of my eyes on the screen, and it was me and it was my voice, wearing the kind of clothes I wear, and my wife or my daughter wouldn’t have been able to detect any difference. And it was delivering a message that in no way came from me. So when you think of the potential for scamming people, if you can reproduce images that even I can’t tell apart, saying, I need money, you know, it’s your daughter, I’ve just had a car crash, I need $50,000 wired. I mean, scamming has always been part of the American scene, but if I was interested in investing in scamming, it’s going to be the growth industry of all time, and it’s enabled in a big way.

Munger was Buffett’s best investment sparring partner apart from himself; Munger was also a trusted partner in so many other areas of Buffett’s life

Buffett: In terms of managing money, there wasn’t anybody better in the world to talk to for many, many decades than Charlie. And that doesn’t mean I didn’t talk to other people. But if I didn’t think I could do it myself, I wouldn’t have done it. So to some extent, I talked to myself on investments…

…When I found Charlie, for example, in all kinds of matters, not just investment, I knew I’d have somebody. Well, I’ll put it this way, you can think about this: Charlie, in all the years we worked together, not only never once lied to me, ever, but he didn’t even shape things so that he told half lies or quarter lies to sort of stack the deck in the direction he wanted to go. He absolutely considered it of the utmost importance that he never lied.

Climate change can be good for insurers if they’re able to price policies appropriately and reset prices periodically

Ajit Jain: Climate risk is certainly a factor that has come into focus in a very, very big way more recently. Now, the one thing that mitigates the problem for us, especially in some of the reinsurance operations we are in, is our contractual liabilities are limited to a year in most cases. So as a result of which, at the end of a year, we get the opportunity to reprice, including the decision to get out of the business altogether if we don’t like the pricing in the business. But the fact that we are making bets that tie us down to one year at a time certainly makes it possible for us to stay in the business longer term than we might have otherwise because of climate change. I think the insurance industry, in spite of climate change, in spite of increased risk of fires and flooding, it’s going to be an okay place to be in. 

Buffett: Climate change increases risks, and in the end it makes our business bigger over time. But if we misprice them, we’ll also go broke. But we do it one year at a time, overwhelmingly…

Jain: The only thing I’d add is that climate change, much like inflation, if done right, can be a friend of the risk bearer…

Buffett: If you look at GEICO, it had 175,000 policies roughly in 1950, and it was getting roughly $40 a car. So that was $7 million of volume. You know, now we’re getting over $2,000. Well, with all the advances in technology and everything like that, if we had been wedded to this formula, what we did with $40, we’d have had a terrible business. But in effect, by making the cars much safer, they’ve also made it much more expensive to repair. And a whole bunch of things have happened, including inflation. So now we have a $40 billion business from something that was $7 million back when I called on it. So if we’d operated in a non-inflationary world, GEICO would not be a $40 billion company.
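As a sanity check on the round numbers in Buffett’s GEICO anecdote, the implied figures work out as follows. This is a rough sketch; the per-policy premiums are Buffett’s approximations, not precise data:

```python
# GEICO figures from the quote above, all approximate.
policies_1950 = 175_000
premium_1950 = 40                 # ~US$ per car in 1950
volume_1950 = policies_1950 * premium_1950
print(f"${volume_1950:,}")        # $7,000,000 -- the "$7 million of volume"

volume_now = 40e9                 # the "$40 billion business" today
premium_now = 2_000               # "over $2,000" per policy
policies_now = volume_now / premium_now
print(f"{policies_now:,.0f} policies implied")  # 20,000,000 policies implied

# Wedded to the $40 premium, the same book of business would be tiny:
print(f"${policies_now * premium_1950 / 1e6:,.0f} million")  # $800 million
```

In other words, even with the policy count growing over a hundredfold, premium growth (driven partly by inflation and costlier repairs) accounts for most of the jump from US$7 million to US$40 billion, which is Buffett’s point about inflation being a friend of the risk bearer when priced right.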

Buffett is looking at investment opportunities in Canada

Buffett: We do not feel uncomfortable in any way, shape or form putting our money into Canada. In fact, we’re actually looking at one thing now. But, you know, they still have to meet our standards in terms of what we get for our money. But we don’t have any mental blocks about that country.

Ajit Jain is very, very important in Berkshire’s insurance operations, but there are institutionalised practices in Berkshire’s insurance operations established by Jain that cannot be imitated by competitors, so the insurance operations will still be in a good place even if Jain leaves the scene

Buffett: We won’t find another Ajit, but we have an operation that he has created and that’s at least part of it. There are certain parts of it that are almost impossible for competitors to imitate, and if I was in their shoes, I wouldn’t try and imitate them. And so we’ve institutionalized some of our advantages, but Ajit is. Well, his presence allowed us to do it and he did it. But now we’ve created a structure that didn’t exist when he came in 1986. Nothing close to it existed with us or with anybody else…

Jain: The fact of the matter is, nobody is irreplaceable. And we have Tim Cook here in the audience, I believe, who has proved that and has set an example for a lot of people who follow.

Biographies are a wonderful way to have conversations with great people from the past

Buffett: Sometimes people would say to me or Charlie at one of these meetings, you know, if you could only have lunch with one person who lived over the last 2,000 or so years, who would you want to have it with? Charlie says, I’ve already met all of them, you know, because he read all the books.

Figure out who you want to spend the last day of your life with, and meet them often

Buffett: What you should probably ask yourself is who you would want to start spending the last day of your life with. And then figure out a way to start meeting them tomorrow, and meet them as often as you can. Why wait until the last day? And don’t bother with the others.

Cybersecurity is now big business for insurers, but Berkshire is very careful with it because the total amount of losses are tough to know; Berkshire tries to not write cybersecurity insurance

Jain: Cyber insurance has become a very fashionable product these days over these last few years. It is at least a $10 billion market right now globally, and profitability has also been fairly high. I think at least 20% of the total premium has ended up as profit in the pockets of the insurance carriers…

…we at Berkshire tend to be very, very careful when it comes to taking on cyber insurance liabilities, for two reasons. One is it’s very difficult to know what is the quantum of losses that can be subject to a single occurrence, and the aggregation potential of cyber losses, especially if some cloud operation comes to a standstill. That aggregation potential can be huge, and not being able to have a worst-case cap on it is what scares us. Secondly, it’s also very difficult to have some sense of what we call loss cost, or what the cost of goods sold could potentially be, not just for a single loss, but for losses over time. They have been fairly well contained: out of 100 cents of the premium dollar, losses over the last four or five years, I think, have not been beyond forty cents, leaving a decent profit margin. But having said that, there’s not enough data to be able to hang your hat on and say what your true loss cost is.

So in our insurance operations, I have told the people running the operations that I’ve discouraged them from writing cyber insurance, except to the extent they need to write it so as to satisfy certain client needs. I have told them, no matter how much you charge, you should tell yourself that each time you write a cyber insurance policy, you’re losing money…

…And our approach is to sort of stay away from it right now until we can have access to some meaningful data and hang our hat on data…

Buffett: I remember the first time it happened, I think, was in 1968, when there were riots in various cities. I think it was the Bobby Kennedy death that set it off, or the Martin Luther King death, I’m not sure which one. But in any event, when you write a policy, you have a limit in that policy. But the question is, what is one event? So if somebody is assassinated in some town and that causes losses at thousands of businesses all over the country, and you’ve written all those thousands of policies, do you have one event, or do you have a thousand events? And there’s no place where that kind of a dilemma enters into more than cyber. Because if you think about it, let’s say you’re writing $10 million of limit per risk, and that’s fine; if you lose 10 million for some event, you can take it. But the problem is if that one event turns out to affect 1,000 policies and somehow they’re all linked together in some way and the courts decide that way.

The transition from fossil fuels to renewable energy will take time and currently, it’s not possible to transition completely away from fossil fuels

Abel: When you think of a transition that’s going on within the energy sector, we are transitioning from carbon resources to renewable resources, as was noted, but it will not occur overnight. That transition will take many years. And the renewable resources we use, be it solar or wind, are intermittent, so we do try to combine them with batteries. But at this point in time, we cannot transition completely away from the carbon resources…

Buffett: But solar will never be the only source of electricity because, well, Greg may know more about this, but I’m barring some real breakthroughs in storage and that sort of thing. Right? 

Abel: Yeah. Generally, a battery right now that can do it in an economical way is a four-hour battery. And when you think of the time the sun isn’t available, that’s a challenge. Now, there’s a lot of technology advancement that’s stretching that out, and if you throw dollars at it you can accomplish things, but the reality is that there’s a careful balance between reliability and economics.

Buffett knows, sadly, that there’s not much gas left in the tank for him (and also seemed to take a dig at old politicians who are overstaying their welcome)

Buffett: We’ll see how the next management plays the game out at Berkshire. Fortunately, you don’t have too long to wait on that. Generally, I feel fine, but I know a little bit about actuarial tables, and I just. Well, I would say this. I shouldn’t be taking on any four-year employment contracts, like several people are doing in this world, at an age where you can’t be quite that sure where you’re going to be in four years.

Berkshire has special cultural aspects that would be really attractive for the right kind of person

Buffett: We’ve got an entity that if you really aspire to be a certain kind of manager, of a really large entity, there’s nothing like it in the world. So we’ve got something to offer the person who we want to have…

Abel: The culture we have at Berkshire and that being our shareholders, being our partners and our managers of our business, having that ownership mentality, that’s never going to change and that will attract the right managers at every level. So I think, as Warren said, we have a very special company in Berkshire, but it’s that culture that makes it special, and that’s not going to change.

A great manager cannot fix a terrible business, but will thrive when handed a great business

Buffett: The right CEO can’t make a terrible business great. Tom Murphy, who was the best, he was the best business manager I’ve ever known. And Tom Murphy, you know, he said the real key was buying the right business. And now Murph brought a million other attributes to it after that.

But, you know, Charlie had a saying on that. But basically, we could have brought in Tom Murphy and told him his job was to run the textile business, and it would have done a little bit better, but it still would have failed. And one of the reasons I stuck with the textile business as long as I did was that I liked Ken Chace so much, and I thought he was a terrific guy, and he was a very good manager. If he’d been a jerk, you know, we’d have quit the textile business much faster, and we’d have been better off. So the answer was for him to get into the TV business, like Murph had done, ad-supported.

You know, Murph figured that out early, and he started with a pathetic operation, which was a VHF in Albany, New York, competing against GE and everything. And he was operating out of a home for retired nuns, and he only painted the side that faced the street. He had one car dashing around town, and he called it news truck number six. But from that, he built an incredible company, and he built it because he was the best manager I’ve ever met. But beyond that, he was in a good business. And the key will be to have Tom Murphy and then hand them a bunch of good businesses, and he or she will know what to do with it.

Having the resources and the will to act when everyone else does not is a great advantage

Buffett: We’ve gotten from $20 million of net worth to $570 billion. And, you know, there aren’t as many things to do, but we can do a few big things better than anybody else can. And there will be occasional times when we’re the only one willing to act. At those times, we want to be sure that the US government thinks we’re an asset to the situation and not a liability or a supplicant, as the banks were, we’ll say, in 2008 and ’09. They were all tarred with the same brush, but we want to be sure that the brush that determines our future is not tarred. I don’t think anybody’s got a better position to do it than Berkshire…

…It wasn’t that people didn’t have money in 2008. It’s that they were paralyzed. And we did have the advantage of having some capital and an eagerness to act, and a government that, in effect, looked at us as an asset instead of a liability.

If autonomous driving takes off and the number of traffic accidents fall, car insurance prices will fall, but on the other hand, the cost of repair of accidents has also skyrocketed, so the overall impact on car insurance prices may be somewhat muted

Buffett: Let’s say there are only going to be three accidents in the United States next year for some crazy reason. Anything that reduces accidents is going to reduce costs. But that’s been harder to do than people have expected. But obviously, if it really happens, the figures will show it, and our data will show it, and the prices will come down…

… If accidents get reduced 50%, it’s going to be good for society and it’s going to be bad for insurance companies’ volume. But, you know, good for society is what we’re looking for. You might find this kind of interesting: the number of people killed per 100 million passenger miles driven. When I was young, I think it was like 15, but even post-World War II, it only fell to like seven or thereabouts. And Ralph Nader has probably done more for the American consumer than just about anybody in history, because that seven or six has now come down to under two. And I don’t think it would have come down that way without him… 

…The point I want to make in terms of Tesla and the fact that they feel that because of their technology, the number of accidents do come down, and that is certainly provable. But I think what needs to be factored in as well is the repair cost of each one of these accidents has skyrocketed. So if you multiply the number of accidents times the cost of each accident, I’m not sure that total number has come down as much as Tesla would like us to believe.

It’s not easy to solve climate change because it involves developed economies (including the USA) telling developing economies they cannot live in the same way today that the developed economies did in the past

Buffett: All of climate change, it’s got a terrible problem just in the fact that the United States particularly has been the one that’s caused the problem the most. And then we’re asking poorer societies to say, well, you’ve got to change the way you live, because we live the way we did. But that really hasn’t been settled yet. It’s a fascinating problem to me, but I don’t have anything to add to how you really slice through the world. 

The prototype of a Berkshire shareholder is a person with a wealthy portfolio, and an even wealthier heart

Buffett: I know she is the prototype. She may have more zeroes, but she’s the prototype of a good many Berkshire Hathaway shareholders. It’ll be the first thing we talk about when we come back. But some of you may have noticed whenever it was a few weeks back, when Ruth Gottesman gave $1 billion to Albert Einstein to take care of all of us, and Ruth doesn’t like a lot of attention drawn to herself. But here’s how they felt at Albert Einstein when they announced that Ruth Gottesman had just made a decision to take care of all of the costs of education at Albert Einstein, and it’s going to be in perpetuity. So let’s just show the film.

Albert Einstein College of Medicine personnel: I’m happy to share with you that starting in August this year, the Albert Einstein College of Medicine will be tuition free.

Buffett: And that’s why Charlie and I have had such fun running Berkshire. She transferred a billion dollars to other people. She happened to do it with Berkshire stock, and, you know, they offered to rename the school after her and everything like that. But she said, Albert Einstein, that’s a pretty good name to start with. So there’s no ego involved in it, no nothing. She just decided that she’d rather have 100-plus students, closer to 150 eventually, be able to start out debt-free and proceed in life. And she did it happily, and she did it without asking that, you know, her name be put up in neon lights on all four sides, and I salute her…

…There are all kinds of public companies, and wealthy public companies, throughout America, and there are certainly cases where, in one family, somebody has made a very large amount of money and is devoting it, or much of it, to philanthropy; the Walton family would be the number one example at Walmart, and certainly Bill Gates did the same thing at Microsoft. But what is unusual about Berkshire is that a very significant number of Berkshire shareholders, located all over the United States, not just in Omaha, have contributed $100 million or more to their local charities, usually without people knowing about it. I think it’s many multiples of any other public company in the country. It’s not that others haven’t put a whole lot into philanthropy, and I don’t know the details of the family, but clearly there’s a huge sum of money, and the Walton family, I’m sure, has done all kinds of philanthropic things and will continue to do so.

But I don’t think you’ll find any company where a group of shareholders who aren’t related to each other. So many of them have done something along the lines of what Ruth did a few weeks ago, just to exchange a little piece of paper that they’ve held for five decades, and they’ve lived well themselves. They haven’t denied their family anything, but they don’t feel that they have to create a dynasty or anything, and they give it back to society. And a great many do it anonymously. They do it in many states to some extent…

…But I have to say one thing that was astounding: the same day, we bought a billion dollars’ worth of Berkshire Class A stock from Ruth. And I guess we were actually buying it from the school at that point, because she’d just given it to them, so the transaction was with them. Mark Millard in our office bought a billion dollars from them, but he also bought $500 million worth of stock from somebody else that nobody will ever have heard of, in a different state. And I won’t elaborate beyond that, but we have had a very significant number of people, and there’s more to come…

…It sort of restores your faith in humanity, that people defer their own consumption within a family for decades and decades, and then they can do something like this. And they will. I think it may end up being 150 people pursuing different lives, talented people and diverse people fulfilling a dream of being a doctor and not having to incur incredible debt to do it, or whatever may be the case. There’s a million different examples…

If you understand businesses, you understand stocks

Buffett: If you understand businesses, you understand common stocks. I mean, if you really know how business works, you are an investment manager. How much you manage may be just your own funds, or maybe other people’s. And if you’re really primarily interested in getting assets under management, which is where the money is, you know, you don’t really have to understand that sort of thing. But that’s not the case with Ted or Todd, obviously.

Getting extraordinary results in the long-term is not easy, but getting decent results is, if your behaviour is right

Buffett: We’re not positioned, however, to earn extraordinary returns versus what American business generally earns. I would hope we could be slightly better, but nobody’s going to be dramatically better over the next century. It gets very hard to predict who the winners will be. If you look back, as we did a few meetings ago, at the top 20 companies in the world at ten-year intervals, you realize the game isn’t quite as easy as it looks. Getting a decent result actually should be reasonably easy if you just don’t get talked out of doing what has worked in the past, don’t get carried away with fads, and don’t listen to people who have different interests in mind than the interests of our shareholders.

Distribution businesses are not wonderful businesses, but they can perform really well if there’s a great manager at the helm

Buffett: For example, with many of the items, the manufacturers just don’t want to tie up their capital. If you have a million-plus SKUs – stock keeping units – it’s like selling jelly beans or something like that. And you’re serving a purpose to a degree, but it isn’t your product, in effect. I mean, you’re just a good system for the producer of the equipment to get it to the end user without tying up a lot of capital or being in a business they don’t want to be in. We understand that, but there’s no magic to it. With TTI, you had a marvelous man running things, and when you get a marvelous person running something, to some extent, there are a lot of good people underneath…

…The distribution business is not a wonderful business, but it is a business, and it’s a business that, if it’s big enough, it’s one we would look at and we would buy additional

Buffett and Munger were able to make decisions really quickly because they had built up a tremendous knowledge base over time

Buffett: Charlie and I made decisions extremely fast. But in effect, after years of thinking about the parameters that would enable us to make the quick decision when it presented itself…

…I think the psychologists call this apperceptive mass. But there is something that comes along that takes a whole bunch of observations that you’ve made and knowledge you have and then crystallizes your thinking into action. Big action in the case of Apple. And there actually is something, which I don’t mean to be mysterious, but I really can’t talk about, but it was perfectly legal, I’m sure, you know, that. It just happened to be something that entered the picture that took all the other observations. And I guess my mind reached what they call apperceptive mass, which I really don’t know anything about, but I know the phenomenon when I experience it. And that is, we saw something that I felt was, well, enormously enterprise…

…You know, why do you choose the person you married? You know, there are all these different potential spouses in the room, and then something happens and you decide that this is the one for you. I think Rodgers and Hammerstein wrote about that in “Some Enchanted Evening.” Well, our idea of an enchanted evening, Charlie’s and mine, is to come up with a business. And there is an aspect of knowing a whole lot and having a whole lot of experiences, and then seeing something that turns on the light bulb…

Abel: Warren, he mentioned Oxy [Occidental Petroleum], which I think is a great example where you made the original decision basically on a weekend with some thought. But as the more you learned about Oxy and the asset position they had, their ability to operate in an exceptional manner, and then a strong CEO around capital allocation. I think your confidence, which was reflected in continuing to acquire more shares, is sort of that type of process.

Buffett: Yeah, it’s exactly to the point. I just learned more as I went along. I’d heard of Occidental Petroleum. Occidental Petroleum happens to be, not exactly a descendant, but a continuation of Cities Service, which was the first stock I bought. And, of course, I knew a lot about the oil and gas business, but I didn’t know anything about geology. I knew the economics of it. I had a lot of various things stored in my mind about the business, but I never heard of Vicki until, I guess, it was a Friday or Saturday, and we met on Sunday morning. We made a deal, but that was one sort of deal. And then as time passed, all kinds of different events happened. You know, Icahn came in. I mean, there are a million things you couldn’t predict at the start, and I formed certain opinions as I went along, but then I learned more as I went along. And then at a point when I heard an investor call that Vicki was on, it put things together for me in a way. It didn’t mean I knew I had a sure thing or anything like that. I don’t know what the price of oil is going to be next year. But I knew that it was something to act on. So we did, and we’re very happy we did, and we still don’t know what the price of oil is going to be next year. Nobody does. But I think the odds are very good that it was a good decision – but not a cinch – and we’ve got options to buy more stock, and when we get through with it, it could be a worthwhile investment for Berkshire.

Buffett invested in Apple after learning about consumer behaviour from his prior investments

Buffett: People have speculated on how I’ve decided to really put a lot of money into Apple…

…One thing that Charlie and I both learned a lot about was consumer behavior. That didn’t mean we thought we could run a furniture store or anything else. But we did learn a lot when we bought a furniture chain in Baltimore. And we quickly realized that it was a mistake. But having made that mistake, made us smarter about actually thinking through what the capital allocation process would be and how people were likely to behave in the future with department stores and all kinds of things that we wouldn’t have really focused on. So we learned something about consumer behavior from that. We didn’t learn how to run a department store.

Now, the next one was See’s Candy. And See’s Candy was also a study of consumer behavior. We didn’t know how to make candy. There were all kinds of things we didn’t know. But we’ve learned more about consumer behavior as we go along.

And that sort of background, in a very general way, led up to the study of consumer behavior in terms of Apple’s products. In that case, I watched what was happening at the Furniture Mart in terms of people leaving the store. We were selling Apple products at a price where we weren’t even making any money, but they were just so popular that if we didn’t have them, people left the store and went to Best Buy or someplace. And if you know the Blumkins, they can’t stand anybody leaving the store, so they behaved accordingly…

… Maybe I’ve used this example before, but if you talk to most people who have an iPhone and a second car, the second car cost them $30,000 or $35,000, and if they were told that they could never have the iPhone again, or could never have the second car again, they would give up the second car. But the second car cost them 20 times as much. Now, people don’t think about their purchases that way, but I think about their behavior. And so we just decide, without knowing. I don’t know, there may be some little guy inside the iPhone or something. I have no idea how it works. But I also know what it means. I know what it means to people, and I know how they use it. And I think I know enough about consumer behavior to know that it’s one of the great products, maybe the greatest product of all time. And the value it offers is incredible.

Nobody knows what oil prices would do in the future

Buffett: We still don’t know what the price of oil is going to be next year. Nobody does.

Demand growth for the rail industry is going to be pretty flat, but it is an essential business for the American economy

Buffett: The reality is that for the rail industry, if you go back many, many years, demand is flat. There’s not a lot of growth in the industry. There are opportunities to become more efficient and effective, and our margins can go up. But the reality is that demand is going to be flat…

…As I mentioned in the annual report, railroads are absolutely essential to the country. That doesn’t mean they’re on the cutting edge of everything. They’re just essential to the country…

…If you shut down the railroads of the country, the effects would be incredible. And they would be impossible to construct now. 

Buffett is clear that part of his success is down to luck too

Buffett: I mean, it is absolutely true that if I had to do over again, there’d be a lot of different choices I would make, whether they would have ended up working out as well as things that worked out. It’s hard to imagine how they could have worked out any better…

…You still need luck. Anybody that says “I did it all myself” is just kidding; they’re delusional. And, you know, actually, living in a country where the life expectancy is pretty darn good, that alone is a huge plus. My sister’s here, and she was born female, and she’s just as smart as I was and everything. But even in my own family, who really did, particularly my dad, love us all equally in a terrific manner, and even though I was born ten years after the 19th Amendment was passed, he still basically told my sisters, “Marry young while you still have your looks,” and he told me, “The power in you is new in nature, and you really could do anything.” Well, I found there were a lot of things I couldn’t do, but the message given to females and males was incredibly different, from the most well-meaning and loving of parents.

It’s sometimes important to simply trust a smart person even if you have no idea what’s going on

Buffett: If you haven’t read it, it’s fascinating to go to Google and read the letter by Leo Szilard and Albert Einstein to President Roosevelt, written almost exactly a month before Germany and Hitler moved into Poland. And it laid out, well, Leo Szilard knew what was going to happen, or had a good hunch of what was going to happen, in terms of nuclear bomb development. And he couldn’t get through to Roosevelt. But he knew that a letter signed by Albert Einstein would. So it’s probably the most important letter ever written, and you can read it, which is just fascinating to me. But that started the Manhattan Project. Just everything flowed out of it. And I’ll bet anything that Roosevelt didn’t understand it, but he understood that Albert Einstein had sent a letter and probably knew what he was talking about, and he’d better start the Manhattan Project. It is just unbelievable what happens in this world. 

Buffett thinks the more important thing to worry about with the US economy would be inflation and fiscal deficit, not the amount of US debt

Quick: Randy Jeffs from Irvine, California. The March 25, 2024 Wall Street Journal reported that the Treasury market is about sixfold larger than before the 2008-2009 crisis. Do you think that at some point in time the world market will no longer be able to absorb all of the US debt being offered? 

Buffett: The answer, of course, is I don’t know. But my best speculation is that US debt will be acceptable for a very long time because there’s not much alternative. But it won’t be the quantity. The national debt was nothing to speak of for a long, long time. It won’t be the quantity.

It will be whether in any way inflation gets let loose in a way that really threatens the whole world economic situation. And there really isn’t any alternative to the dollar as a reserve currency. You get a lot of people who give you a lot of speeches on that, but that really is the answer. Paul Volcker worried about that back before 1980, and I happened to have a little contact with him at that time. He was an amazing fellow who, in effect, decided that he had to act or the financial system would fall apart in some way that he couldn’t predict. And he did it, and he had people threatening his life and doing all kinds of things, but he was the man for that crisis. But it wasn’t the quantity of US debt being offered that threatened the system then. It was the fact that inflation and the future value of the dollar, the cash-is-trash type of thinking, were setting up something that could really affect the future of the world in terms of its economic system. And Paul Volcker took it on…

…I don’t worry about the quantity, I worry about the fiscal deficit. But I’m not a worrier, just generally, I think about it, but I don’t sit and get up and work myself into a stew about it in the least. But I can’t help thinking about it… 

…I think media enters into this and the focus is on the Fed and they just love it because things are always happening and economists are always saying what’s going to happen with the Fed and everything else. But the fiscal deficit is what should be focused on. And Jay Powell is not only a great human being, but he’s a very, very wise man, but he doesn’t control fiscal policy. And every now and then he sends out a kind of a disguised plea for please pay attention to this because that’s where the trouble will be if we have it.

If you’ve been lucky in life, help pull up others too

Quick: On March 4, Charlie’s will was filed with the county of Los Angeles. The first codicil contained an unusual provision. It reads, “Averaged out, my long life has been a favored one, made better by duty, imposed by family tradition, requiring righteousness and service. Therefore, I follow an old practice that I wish was more common now, inserting an ethical bequest that gives priority not to property, but to transmission of duty.” If you were to make an ethical bequest to Berkshire shareholders, what duties would you impose and why?

Buffett: I’d probably say read Charlie. I mean, he’s expressed it well. And I would say that if you’re not financially well off but you’re being kind, you’re doing something that most of the rich people don’t do, even when they give away money. But that’s on the question of whether you’re rich or poor. And I would say, if you’re lucky in life, make sure a bunch of other people are lucky, too.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Apple and Tesla. Holdings are subject to change at any time.

 

Different Types of Stock-Based Compensation and What Investors Need to Know About Them

Stock-based compensation can come in many forms. Here are some of the most commonly used and what they mean for investors.

I’ve been studying stock-based compensation across a wide range of companies for a few years now.

One thing I’ve learnt is that stock-based compensation can come in a variety of different forms. Each has a different impact on the future cash flows from a company that accrue to shareholders.

With this in mind, here is a short primer on four of the most common types of stock-based compensation and how they impact the shareholder.

Restricted stock units

Restricted stock units, or RSUs, are the most common form of stock-based compensation I’ve encountered. They are essentially the issuance of new shares to employees.

Well-known companies such as Meta Platforms and Zoom Video Communications issue RSUs to employees. Typically, employees are given RSUs when they join the company. However, these RSUs only turn into shares – or “vest” – over a period of time. Only when they vest can an employee sell them or receive any dividends from holding them.

You can find the number of RSUs outstanding for a company in the notes section of its financial reports. By looking at the RSUs outstanding, you can gauge the future diluted share count once all these RSUs have vested.
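As a rough illustration, the future diluted share count is simply the current count plus unvested RSUs. The figures below are hypothetical:

```python
# Hypothetical figures for illustration only: a company with 500 million
# shares outstanding and 20 million unvested RSUs disclosed in its filings.
shares_outstanding = 500_000_000
rsus_outstanding = 20_000_000

# RSUs vest into newly issued shares, so the future count is simply additive.
future_share_count = shares_outstanding + rsus_outstanding

# Each existing share's claim on the business shrinks proportionally.
dilution = rsus_outstanding / future_share_count
print(f"Future share count: {future_share_count:,}")
print(f"Dilution to existing holders: {dilution:.2%}")
```

In practice, some RSUs are forfeited before vesting (for example, when employees leave), so actual dilution is usually somewhat lower than this simple sum suggests.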

Options

Another common form of stock-based compensation is options. Options give employees the right to buy new shares of a company at a predetermined price by a certain date.

The good thing about options for shareholders is that unlike RSUs, the company receives cash when these options are exercised. This increases the company’s cash balance and if the exercise price is above the book value, it also increases the book value per share.

If the stock price is below the exercise price at expiry, employees will not exercise their options and the options will expire worthless. When this happens, no new shares are issued and no cash changes hands.

A well-known company that predominantly uses options as its form of stock-based compensation is Netflix.
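A common way analysts estimate the net dilution from in-the-money options is the treasury stock method, which assumes the cash the company receives on exercise is used to buy back shares at the market price. A quick sketch with made-up numbers:

```python
# Treasury stock method sketch, with hypothetical numbers. The method assumes
# the cash received on exercise is used to repurchase shares at the current
# market price, so only the shortfall adds to the share count.
options_outstanding = 10_000_000
exercise_price = 40.0   # what employees pay per share on exercise
market_price = 100.0    # current share price

cash_received = options_outstanding * exercise_price
shares_repurchased = cash_received / market_price

# Net new shares are what remain after the assumed buyback.
net_new_shares = options_outstanding - shares_repurchased
print(f"Net dilutive shares: {net_new_shares:,.0f}")
```

The closer the exercise price is to the market price, the smaller the net dilution; options struck far below the market price behave almost like RSUs.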

Performance stock units

As its name suggests, performance stock units, or PSUs, are converted to shares for an employee based on a company achieving certain performance goals. PSUs are typically only given to senior executives of a company.

Companies that use PSUs tend to combine them with other forms of stock-based compensation.

Employee share purchase plan

Employee share purchase plans, or ESPPs for short, are programs that allow a company’s employees to purchase new shares of the company with a portion of their salary.

So instead of getting all their salary in cash, they receive some in cash and some in shares. Employees are often incentivized to purchase shares using the ESPP as they can buy new shares of a company at a discount to market prices.

However, employees are only allowed to purchase up to a certain number of shares per year. Companies such as Medtronic offer an ESPP to employees who wish to make use of it.

Like options, this form of dilution is not as bad for shareholders because the company receives cash in return, unlike with RSUs and PSUs.

Round up

The types of stock-based compensation that investors need to be most concerned about are usually RSUs and PSUs. These are dilutive to shareholders, and companies do not receive any cash in return.

ESPPs are the least concerning, as they usually result in only minimal dilution: there is a cap on how many shares an employee can purchase in a year, and the discount that employees get tends to be small.

Options are also not as concerning as long as the exercise price is reasonably high. Most companies issue options that have an exercise price similar to the market price at the point of grant. However, some companies such as Wise issue options that have exercise prices much lower than market prices. In this case, these options are practically like RSUs and shareholders need to monitor the impact closely.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Meta Platforms, Netflix, Wise, and Zoom Video Communications. Holdings are subject to change at any time.

Debunking An Investment Myth

Instead of fretting over stock prices, it is better to focus on how much cash the company can generate and return to shareholders.

There are some investing beliefs that are widely accepted but may not be entirely true. One such belief is the idea that a company has a “common” intrinsic value. 

When investors think of investing in stocks, the thought is often that a stock has the same intrinsic value for everyone, and eventually the stock price will gravitate toward that intrinsic value. But this may not be the case.

Intrinsic value is dependent on the circumstances of an investor.

Imagine a stock that consistently and predictably pays out $1 per share in dividends every year for eternity. An investor who seeks investments that will give a return of 10% a year will be willing to pay $10 per share. In other words, $10 is the “intrinsic value” to that investor. On the other hand, another investor may be highly connected and able to find high-return investments that give him 20% a year. This investor will only pay $5 per share for the above company. His intrinsic value is thus $5 per share.
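The arithmetic above is just the perpetuity formula: price equals the annual dividend divided by the investor's required rate of return. A minimal sketch (the function name and numbers are illustrative, taken from the example in the text):

```python
def perpetuity_value(annual_dividend: float, required_return: float) -> float:
    """Price an eternal, constant dividend stream at a given required return."""
    return annual_dividend / required_return

dividend = 1.00  # $1 per share in dividends, every year, forever

# The same dividend stream is worth different amounts to different investors:
print(perpetuity_value(dividend, 0.10))  # investor requiring 10% a year -> $10 per share
print(perpetuity_value(dividend, 0.20))  # investor requiring 20% a year -> $5 per share
```

The same cash flows produce two different "intrinsic values" purely because the two investors' opportunity costs differ.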

As you can see, the intrinsic value for the same share is very different.

Intrinsic value changes with rates

Besides the circumstances of each investor, the intrinsic value of a stock can also change when the risk-free rate changes. If the risk-free rate goes up, theoretically, investors will gravitate towards the now higher-yielding bonds. As such, stocks will require a higher rate of return, and hence their intrinsic value falls.
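Extending the earlier perpetuity arithmetic, the effect of rates can be sketched by assuming the required return is the risk-free rate plus a fixed equity risk premium (the 5% premium and the rate levels below are hypothetical, for illustration only):

```python
def intrinsic_value(annual_dividend: float, risk_free: float, premium: float) -> float:
    """Perpetuity value where required return = risk-free rate + equity risk premium."""
    return annual_dividend / (risk_free + premium)

dividend, premium = 1.00, 0.05  # $1 annual dividend, assumed 5% equity risk premium

# As the risk-free rate rises, the same dividend stream is worth less:
for rf in (0.01, 0.03, 0.05):
    print(f"risk-free {rf:.0%}: intrinsic value = ${intrinsic_value(dividend, rf, premium):.2f}")
```

Nothing about the company changed between the three lines of output; only the rate environment did.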

As the last couple of years have shown, interest rates can have a very big impact on stock prices.

While all this is happening, the company in question is still the same company.

So despite being the same company, it can have different intrinsic values to different people, and its intrinsic value can also change from day to day with the prevailing risk-free rate.

So what?

This naturally leads to the question, what price will a stock trade at if its intrinsic value differs from person to person and from day to day?

I believe that it’s impossible to know what price a stock should or would trade at. There are too many factors in play. It depends on the market as a whole and with so many market participants, it is almost impossible to know how the stock will be priced.

Given this, instead of focusing on price, we can focus on the dividends that will be distributed to the investor in the future. This means we do not need to predict price movements and our returns are based on the returns that the company will pay to shareholders. Doing this will ensure we are not beholden to fluctuations in stock prices which are difficult to predict.

What’s more predictable is how a company will perform and whether it can generate cash flows and dividends to the shareholders. As such, I prefer focusing my efforts on looking for types of companies with predictable earnings and paying a price that fits my personal investing returns requirement.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

A Radical Idea To Improve Stock-Based Compensation

Here’s a radical idea to improve stock-based compensation so that employees are inclined to drive long term shareholder value.

The idea of giving stock-based compensation is to turn employees into partners. In theory, giving employees stock will make them part-owners of a business and drive them to think and act like a business owner.

However, the reality is that the way SBC programmes of many companies are designed today actually does not motivate employees to think or act like business owners.

In today’s world, SBC is predominantly given in the form of RSUs or options that vest over three to four years. This means that employees are given a fixed number of shares or options every month over a three- to four-year period. Although this turns employees into shareholders, it may not adequately motivate them to think like owners of a business.

The reason is that employees can sell the shares as soon as they receive them. Many employees are also not inclined to hold the stock for a long period of time, instead opting to sell when prices go up. Employees may also consider their contribution to the company too small to make any difference to the stock price.

As such, this form of SBC does not make employees think like shareholders at all. In fact, I would argue that cash-based compensation would be a better motivator for employees.

Complete lock-up

One way that companies try to get around this is to have a lock-up period, during which employees are not allowed to sell the shares they receive. The lock-up period can range from months to years.

But I think that this is still not enough. Employees need to think like perpetual shareholders, whose returns are driven by cash flows and ultimately the dividends paid to shareholders.

As such, my radical proposal is for SBC to have perpetual lock-ups. This means that employees who receive SBC are never allowed to sell unless they are forced to sell via a buyout.

By having perpetual lock-ups, employees become true long-term shareholders whose returns are tied to how much cash flow a company is able to return to shareholders.

In this way, employees really have to think hard about how to maximise the company’s cash flow so that the company can pay them a growing stream of dividends in the future, instead of just fretting over stock prices. Stock prices are not entirely within a company’s control, as they can also fluctuate based on sentiment and interest rates. Cash flow, on the other hand, is entirely driven by management decisions and employee actions.

Although perpetual lock-ups may not seem enticing to employees at first, if the company is able to grow and pay dividends in the future, employees are entitled to a regular and growing stream of cash income.

Possible pushbacks

I know there are many possible pushbacks to this proposal.

For one, some employees may not want to wait so long to receive dividends, as an early-stage company may take years, if not decades, to start paying dividends. Such a long lock-up will not be attractive to employees who want to get rich quick. But that’s the reality of being a long-term shareholder of a business. True business owners are not there to flip the business to someone else but to reap the growing cash flows that the business builds over time. These are patient business builders, and that is exactly what we want from employees.

Another pushback would be that it would encourage management to pay dividends instead of investing in other, higher-return opportunities. Although this is possible, management who have received shares and are long-term thinkers should be willing to forgo some cash dividends today to earn a much larger stream of future cash dividends. Ultimately, a perpetual lock-up should drive management to maximise dividend cash flow to themselves over the entire life cycle of the business and not just maximise dividend payments for the near term.

Final words

A perpetual lock-up sounds like a radical idea, but it may make employees really think like long-term business partners.

The current model for stock-based compensation, with vesting periods and short lock-ups, just does not have the same effect in my view. Employees end up focusing on how to drive short-term price movements, or they just aren’t motivated at all to think like a business owner. In that case, cash incentives and the current form of SBC are not much different.

The only true way to make employees act and think like long-term shareholders is to make them one. And perpetual lock-ups are probably the best way to do this.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2023 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q4 earnings season.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI (2023 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management thinks the company is a leader in delivering generative AI and has a highly differentiated approach through the use of proprietary data and by being friendly with intellectual property

We’re a leader in delivering generative AI across all our clouds. We’re taking a highly differentiated approach across data, models, and interfaces. Our proprietary data is built on decades of deep domain expertise across creative, documents and customer experience management. We leverage large language models as well as have invested in building and delivering our proprietary models in the creative, document, and marketing domains. Our IP-friendly approach is a differentiator for creators and enterprises.

Adobe’s management sees an immense market opportunity for the company in AI and thinks the company is uniquely positioned to capture the opportunity; Adobe’s end-to-end generative AI solution, GenStudio, is already seeing success with enterprises; GenStudio is a generative AI application that helps marketers plan, create, store, and deliver marketing content; GenStudio is integrated across Creative Cloud and Experience Cloud

Every student, communicator, creative professional, and marketer is now focused on leveraging generative AI to imagine, ideate, create and deliver content and applications across a plethora of channels. Adobe is uniquely positioned through the combination of Express, Firefly, Creative Cloud, Acrobat, and Experience Cloud to deliver on this immense market opportunity. The success we are already seeing with our GenStudio offering in the enterprise is validation of our leadership, and we expect that success to translate into other segments as we roll out these solutions throughout the year…

…Adobe GenStudio is a generative AI-first application that allows marketers to quickly plan, create, store, deliver, and measure marketing content in a single intuitive offering. With state-of-the-art generative AI powered by Firefly services, marketers can create on-brand content with unprecedented scale and agility to deliver personalized experiences. Adobe GenStudio natively integrates with multiple Adobe applications across Creative Cloud and Experience Cloud, including Express, Firefly, Workfront, Experience Manager, Customer Journey Analytics and Journey Optimizer. It can be used by brands and their agency partners to unlock new levels of creativity and efficiency in marketing campaigns.

Adobe’s management is seeing strong usage, value and demand for its AI solutions across all customer segments

We’re driving strong usage, value and demand for our AI solutions across all customer segments.

Acrobat AI Assistant uses generative AI to summarise long PDFs, answer questions through a chat interface, help with generating emails, reports, and presentations; AI Assistant has strong data security; Adobe’s management is pleased with the English language beta of AI Assistant and Adobe will be releasing other languages later in the year; management will monetise AI Assistant through a monthly add-on for Reader and Acrobat users; management thinks there’s a lot of monetisation opportunity with AI Assistant, including consumption-based monetisation

The world’s information, whether it’s an enterprise legal contract, a small business invoice, or a personal school form, lives in trillions of PDFs. We were thrilled to announce Acrobat AI Assistant, a massive leap forward on our journey to bring intelligence to PDFs. With AI Assistant, we’re combining the power of generative AI with our unique understanding of the PDF file format to transform the way people interact with and instantly extract additional value from their most important documents. Enabled by a proprietary attribution engine, AI Assistant is deeply integrated into Reader and Acrobat workflows. It instantly generates summaries and insights from long documents, answers questions through a conversational interface, and provides an on-ramp for generating e-mails, reports and presentations. AI Assistant is governed by secure data protocols so that customers can use the capabilities with confidence. We’re pleased with the initial response to the English language beta and look forward to usage ramping across our customer base as we release other languages later in the year. We will monetize this functionality through a monthly add-on offering to the hundreds of millions of Reader users as well as the Acrobat installed base across individuals, teams, and enterprises…

…Everyone is looking at AI Assistant in Acrobat. I certainly hope all of you are using it. It should make your lives more effective. Not just for insight, we think that there’s a lot of opportunity for monetization of insight for AI Assistant on our core base of Acrobat users but also, for the first time, doing consumption-based value. So the hundreds of millions of monthly active users of Reader will also be able to get access to AI Assistant and purchase an add-on pack there, too. So it’s a really broad base to look at how we monetize that.

Adobe’s generative AI model for creative work, Adobe Firefly, has been released for around a year and has been integrated into Creative Cloud and within Adobe Express; users of Creative Cloud and Adobe Express have generated >6.5 billion creative assets to-date (was 4.5 billion in 2023 Q3) across images, vectors, designs, and text effects; Firefly has a web-based interface which has seen tremendous adoption; enterprises can now embed Firefly into their own workflow through Firefly Services, which is commercially safe for enterprises to use

Adobe Express is inspiring millions of users of all skill levels to design more quickly and easily than ever before. In the year since we announced and released Adobe Firefly, our creative generative AI model, we have aggressively integrated this functionality into both our Creative Cloud flagship applications and more recently, Adobe Express, delighting millions of users who have generated over 6.5 billion assets to date.

In addition to creating proprietary foundation models, Firefly includes a web-based interface for ideation and rapid prototyping, which has seen tremendous adoption. We also recently introduced Firefly Services, an AI platform which enables every enterprise to embed and extend our technology into their creative production and marketing workflows. Firefly Services is currently powered by our commercially safe models and includes the ability for enterprises to create their own custom models by providing their proprietary data sets as well as to embed this functionality through APIs into their e-mail, media placement, social, and web creation process…

…The 6.5 billion assets generated to date include images, vectors, designs, and text effects. 

IBM is an early adopter of Firefly and has used it to generate marketing assets much faster than before and that have produced much higher engagement

Early adopters like IBM are putting Firefly at the center of their content creation processes. IBM used Adobe Firefly to generate 200 campaign assets and over 1,000 marketing variations in moments rather than months. The campaign drove 26x higher engagement than its benchmark and reached more key audiences.

Firefly is now available in mobile workflows through the Adobe Express mobile app beta and has a first-of-its-kind integration with TikTok’s creative assistant; the introduction of Firefly for enterprises has helped Adobe win enterprise clients in the quarter

The launch of the new Adobe Express mobile app beta brings the magic of Adobe Firefly AI models directly into mobile workflows. The first-of-its-kind integration with TikTok’s creative assistant makes the creation and optimization of social media content quicker, easier and more effective than ever before…

…  The introduction of Firefly services for enterprises drove notable wins in the quarter, including Accenture, IPG, and Starbucks. Other key enterprise wins include AECOM, Capital Group, Dentsu, IBM, Nintendo, and RR Donnelley.

During 2023 Q4 (FY2024 Q1), Adobe’s management saw the highest adoption of Firefly-powered tools in Photoshop since the release of Generative Fill in May 2023

Generative Fill in Photoshop continues to empower creators to create in new ways and accelerate image editing workflows. Q1 saw the highest adoption of Firefly-powered tools in Photoshop since the release of Generative Fill in May 2023, with customers adopting these features across desktop, web and most recently, iPad, which added Generative Fill and Generative Expand in December.

Adobe’s management expects Adobe’s AI-powered product features to drive an acceleration in the company’s annual recurring revenue (ARR) in the second half of the year; management thinks the growth drivers are very clear

You can expect to see the product advances in Express with Firefly on mobile, Firefly services and AI Assistant in Acrobat drive ARR acceleration in the second half of the year…

…As we look specifically at Creative Cloud, I just want to sort of make sure everyone takes a step back and looks at our strategy to accelerate the business because I think the growth drivers here are very clear. We are focused on expanding access to users with things like Express on mobile. We want to introduce new offers across the business with things like AI Assistant and also existing capabilities for Firefly coming into our core Firefly, our core Photoshop and Illustrator and flagship applications. We want to access new budget pools with the introduction of Firefly services and GenStudio as we talked about…

…And as we enter the back half of the year, we have capabilities for Creative Cloud pricing with Firefly that have already started rolling out late last year as we talked about, and we’ll be incrementally rolling out throughout the year. We’re ramping Firefly services and Express in enterprise. As we talked about, we saw a very good beginning of that rollout at the — toward the end of Q1. We also expect to see the second half ramping with Express Mobile and AI Assistant coming through. So we have a lot of the back-end capabilities set up so that we can start monetizing these new features, which are still largely in beta starting in Q3 and beyond…

…We are very excited about all the innovation that’s coming out, that’s just starting to ramp in terms of monetization and/or still in beta on the Creative Cloud side. We expect that to come out in Q3 and we’ll start our monetization there. So we continue to feel very confident about the second half acceleration of Creative Cloud…

…Usage of Firefly capabilities in Photoshop was at an all-time high in Q1, Express exports more than doubling with the introduction of Express mobile in beta now, going to GA in the coming months, AI Assistant, Acrobat, same pack pattern. You can see that momentum as we look into the back half of the year. And from an enterprise standpoint, the performance in the business was really, really superb in Q1, strongest Q1 ever in the enterprise. So there’s a lot of fundamental components that we’re seeing around performance of the business that give us confidence as we look into the back half of the year.

Adobe’s management believes that the roll out of personalisation at scale has been limited by the ability of companies to create content variations and this is where generative AI can help

Today, rollout of personalization at scale has been limited by the number of content variations you can create and the number of journeys you can deploy. We believe harnessing generative AI will be the next accelerant with Creative Cloud, Firefly services and GenStudio providing a comprehensive solution for the current supply chain and generative experience model automating the creation of personalized journeys.

Adobe’s management believes that AI augments human ingenuity and expands the company’s market opportunity

We believe that AI augments human ingenuity and expands our addressable market opportunity.

Adobe’s management is monetising Adobe’s AI product features in two main ways – via generative packs and via whole products – and they are progressing in line with expectations; management thinks that it’s still early days for Adobe in terms of monetising its AI product features

I think where there’s tremendous interest and where, if you look at it from an AI monetization, the 2 places that we’re monetizing extremely in line with our expectations, the first is as it relates to the Creative Cloud pricing that we’ve rolled out. And as you know, the generative packs are included for the most part in how people now buy Creative Cloud, that’s rolling out as expected. And the second place where we are monetizing it is in the entire enterprise as it relates to Content and GenStudio. And I’m really happy about how that’s monetizing it. And that’s a combination, Brent, of when we go into an enterprise for creation, whether we provide Creative Cloud or a combination of Express, what we are doing with asset management and AEM, Workflow as well as Firefly services to enable people to do custom models as well as APIs. We’re seeing way more monetization earlier, but again, very much in line with expected…

…As it relates to the monetization of AI, I think we’re in early stages as it relates to experimentation. So we’re looking at both what the value utilization is as well as experimentation. The value utilization is actually really positive for us. I think as it relates to the monetization and the experimentation, we have the generative packs, as you know, in Creative Cloud. I think you will see us more and more have that as part of the normal pricing and look at pricing, because that’s the way in which we want to continue to see people use it. I think in Acrobat, as you’ve seen, we are not actually using the generative packs. We’re going to be using more of an AI Assistant model, which is a monthly model. As it relates to the enterprise, we have both the ability to do custom models, which depends on how much content that they are creating as well as an API and metering. We’ve rolled that out and we started to sell that as part of our GenStudio solution.

Adobe’s management pushed out the enforcement of generative AI credit limits beyond April 2024 because Adobe is still in user-acquisition mode for its AI product features

[Question] You pushed out the enforcement of generative credit limits for some products beyond April that were originally expected sooner. What’s the thinking behind this decision? And what are you seeing thus far in terms of credit consumption and purchasing patterns of those credit packs? 

[Answer] In terms of the timing of the — when we start enforcing credits, don’t read anything into that other than right now we are still very much in acquisition mode. We want to bring a lot of users in. We want to get them using the products as much as possible. We want them coming back and using it…

…So right now, look, the primary point is about proliferation and usage. 

Adobe recently released generative AI capabilities in music composition, voice-dubbing, and lip-syncing; these capabilities will require a lot of generative AI credits from users

In the last few weeks, we’ve done a couple of sneaks that could also be instructive. Last month, we snuck music composition where you can take any music track, you can give it a music type like hip-hop or orchestra or whatever, and it will transform that initial track into this new type of music. Just this morning, we snuck our ability to do auto-dubbing and lip-syncing where you give it a video of someone talking in, say, English and then you can translate it automatically to French or Spanish or Portuguese or whatever. As you can imagine, those actions will not take 1 credit. Those actions will be much more significant in terms of what they cost.

Adobe’s management thinks that developments in generative AI models for video, such as the recent release of Sora by OpenAI, are a net positive for Adobe; Adobe is also developing its own generative AI video models and will be releasing them later this year

[Question] Clearly, a lot of news around video creation using generative AI during the quarter, of course, with the announcement of Sora. Maybe the question for you folks is can we just talk a little bit about how you think about the market impact that generative AI can have in the video editing market and how maybe Firefly can participate in that trend?

[Answer] So really great advances, but net-net, video is going to be even more of a need for editing applications in order to truly take advantage of generative AI…

…We see the proliferation of video models to be a very good thing for Adobe. We’re going to work with OpenAI around Sora. You’re going to see us obviously developing our own model. You’re going to see others develop their model. All of that creates a tailwind because the more people generate video clips, the more they need to edit that content, right? So whether it’s Premier or After Effects or Express, they have to assemble those clips. They have to color correct those clips. They have to tone-match. They have to enable transitions. So we’re excited about what we’re building, but we’re just as excited about the partnerships that we see with OpenAI and others coming down this path. And if you take a step back, you should expect to see more from us in the weeks ahead with imaging and vector, design, text effects, in the months ahead with audio and video and 3D. We’re very excited about what all of this means, not just for the models, but for our APIs and our tools.

Adobe’s management thinks that Adobe is in a great position to attract talent for technical AI work because they believe that the company has one of the best AI research labs and can provide access to AI computing hardware; management also thinks that Adobe is in a great position to attract talent for AI sales

And so that’s not just an Adobe perspective, but it’s playing out, obviously, in the enterprises as they look at what are the models that they can consider using for production workflows. We’re the only one with the full suite of capabilities that they can do. It’s a really unique position to be in. But it’s also being noticed by the research community, right? And as the community starts looking at places, if I’m a PhD that wants to go work in a particular environment, I start to ask myself the question of which environment do I want to pick. And a lot of people want to do AI in a responsible way. And that has been a very, very good opportunity for us to bring in amazing talent. So we are investing. We do believe that we have the best — one of the best, if not the best, research labs around imaging, around video, around audio, around 3D, and we’re going to continue to attract that talent very quickly. We’ve already talked about we have the broadest set of creative models for imaging, for vector, for design, for audio, for 3D, for video, for fonts and text effects. And so this gives us a broad surface area to bring people in. And that momentum that starts with people coming in has been great.

The second part of this, too, is managing access to GPUs while maintaining our margins. We’ve been able to sort of manage our cost structure in a way that brings in the talent and gives them the necessary GPUs to do their best work…

…Regarding the sales positions in enterprises. In enterprise, we’re in a strong position because what we — this area of customer experience management, it remains a clear imperative for enterprise customers. Everybody is investing in this personalization, at scale and current supply chain. These help drive both growth and profitability. So when you look at these areas, these, from an enterprise perspective, these are a must-have. This is not a need-to-have. And that’s helping us really attract the right kind of talent. We just onboarded, this week, a VP of Sales who had prior experience, a lot of experience in Cisco and Salesforce, et cetera. 

Adobe’s management believes that Adobe’s tools will be in demand even in an AI dominated world and will not be automated away

[Question] I just wanted to give you an opportunity to debunk this hypothesis that is going around that AI, it is generating videos and pictures, but the next step is, it’s going to do the actual editing and put out Premier Pro use or whatnot. So that is probably the existential threat that people are debating.

[Answer] So as it relates to generative content, I’m going to sort of break it up into 2 parts. One is around the tooling and how you create the content and the second is around automation associated with the content…

…So I think the core part of this is that as more of this content creates, you need more toolability, the best models are going to be the models that are safe to use and have control built-in from the ground up. And I think we have the best controls of anyone in the industry. And they need to be able to be done in an automated fashion that can embed into your workflow. 

Adobe’s management believes that as generative AI models proliferate in society, the demand for Adobe’s products will increase partly because there will be a rise in the number of interfaces that people use for creative content

I think the first question that I hear across many folks is, hey, with the advent of AI and the increase in the number of models that people are seeing, whether they be image models or video models, does that mean that the number of seats, both for Adobe and in the world, do they increase? Or do they decrease? To me, there’s no question in my mind that when you talk about the models and interfaces that people will use to do creative content, that the number of interfaces will increase. So Adobe has to go leverage that massive opportunity. But big picture, models will only cause more opportunity for interfaces. And I think we’re uniquely qualified to engage in that, so that’s the first one.

Adobe’s management wants Adobe to work with many different AI models, even those from third-parties

Do we only leverage the Adobe model? Or is there a way in which we can leverage every other model that exists out there? Much like we did with plug-ins, with all of our Creative applications, any other model that’s out there, we will certainly provide ways to integrate that into our applications, so anybody who’s using our application benefits not just from our model creation but from any other model creation that’s out there…

…But long term certainly, as I’ve said with our partnerships, we will have the ability for Adobe, in our interfaces, to leverage any other model that’s out there, which again further expands our opportunity.

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks that AI will be a long-term growth driver for the company, but it’s still early days; management sees three layers to the AI stack – the first being compute and LLMs (large language models), the second being fine-tuning the models, and the third being the building of AI applications – and most of the AI spend today is at the first layer where MongoDB does not compete; MongoDB’s customers are still at the experimentation and prototyping stages of building their initial AI applications and management expects the customers to take time to move up to the second and third layers; management believes that MongoDB will benefit when customers start building AI application

While I strongly believe that AI will be a significant driver of long-term growth for MongoDB, we are in the early days of AI, akin to the dial-up phase of the Internet era. To put things in context, it’s important to understand that there are 3 layers to the AI stack. The first layer is the underlying compute and LLMs. The second layer is the fine-tuning of models and building of AI applications. And the third layer is deploying and running applications that end users interact with. MongoDB’s strategy is to operate at the second and third layers to enable customers to build AI applications by using their own proprietary data together with any LLM, closed or open source, on any computing infrastructure.

Today, the vast majority of AI spend is happening in the first layer, that is, investments in compute to train and run LLMs, neither of which are areas in which we compete. Our enterprise customers today are still largely in the experimentation and prototyping stages of building their initial AI applications, first focused on driving efficiencies by automating existing workflows. We expect that it will take time for enterprises to deploy production workloads at scale. However, as organizations look to realize the full benefit of these AI investments, they will turn to companies like MongoDB, offering differentiated capabilities in the upper layers of the stack. Similar to what happened in the Internet era, when value accrued over time to companies offering services and applications leveraging the built-out Internet infrastructure, platforms like MongoDB will benefit as customers build AI applications to drive meaningful operating efficiencies, create compelling customer experiences and pursue new growth opportunities…

…While it’s early days, we expect that AI will not only support the overall growth of the market, but also compel customers to revisit both their legacy workloads and build more ambitious applications. This will allow us to win more new and existing workloads and to ultimately establish MongoDB as a standard in enterprise accounts.

MongoDB’s management is already seeing the company’s platform resonate with AI startups that are building applications across wide use cases, and this gives management confidence that MongoDB is a good fit for sophisticated AI workloads

We already see our platform resonating with innovative AI startups building exciting applications for use cases such as real-time patient diagnostics for personalized medicine, cyber threat data analysis for risk mitigation, predictive maintenance for maritime fleets and auto generated animations for personalized marketing campaigns…

…we do see some really interesting start-ups who are building on top of MongoDB. So it gives us confidence about our platform fit for these sophisticated workloads. 

There are three elements that are important when migrating an application from a relational database to a non-relational database, and MongoDB’s current Relational Migrator offering helps automate the first two; the third element – rewriting the application code – is manually intensive, and management believes that generative AI can tremendously improve the experience there

There are 3 elements to migrating an application: transforming the schema, moving the data and rewriting the application code. Our current Relational Migrator offering is designed to automate large parts of the first 2 elements, but rewriting application code is the most manually intensive element. Gen AI holds tremendous promise to meaningfully reduce the cost and time of rewriting application code. We will continue building AI capabilities into Relational Migrator, but our view is that the solution will be a mix of products and services.
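
To make the first element, schema transformation, concrete, here is a hypothetical sketch in plain Python. This is not MongoDB's Relational Migrator itself; the tables, field names and `to_documents` helper are invented for illustration. It folds a one-to-many relational join (customers and their orders, linked by a foreign key) into the kind of nested document a document database favours:

```python
# Illustrative sketch (not Relational Migrator): turn two relational tables,
# joined by a foreign key, into one nested document per customer.
customers = [
    {"customer_id": 1, "name": "Ada", "email": "ada@example.com"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "total": 25.00},
    {"order_id": 11, "customer_id": 1, "total": 40.00},
]

def to_documents(customers, orders):
    """Embed each customer's orders inside the customer document."""
    docs = []
    for c in customers:
        docs.append({
            "_id": c["customer_id"],
            "name": c["name"],
            "email": c["email"],
            # The one-to-many join becomes an embedded array:
            "orders": [
                {"order_id": o["order_id"], "total": o["total"]}
                for o in orders
                if o["customer_id"] == c["customer_id"]
            ],
        })
    return docs

docs = to_documents(customers, orders)
print(docs[0])  # one document, with both orders embedded
```

The schema and data steps lend themselves to this kind of mechanical mapping, which is why they can be automated; the application code that issued SQL joins against the old tables is the part that still has to be rewritten by hand, and where the quote suggests generative AI can help.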

Samsung Electronics’ Digital Appliances division migrated from its previous MySQL database to MongoDB Atlas; the Samsung Smart Home Service can leverage MongoDB’s document database model to collect real-time data for training AI services; the migration improved response times by >50% and the latency was reduced from 3 seconds to 18 milliseconds

Samsung Electronics’ Digital Appliances division transitioned from their previous MySQL database to MongoDB Atlas to manage clients’ data more effectively. By leveraging MongoDB’s document model, Samsung’s Smart Home Service can collect real-time data from the team’s AI-powered home appliances and use it for a variety of data-driven initiatives such as training AI services. Their migration to MongoDB Atlas improved response times by more than 50%, and latency was reduced from 3 seconds to 18 milliseconds, significantly improving availability and developer productivity.

MongoDB’s management believes that the performance and cost of AI applications are still not up to mark, using ChatGPT as an example

And also these technologies are maturing, but from both a performance and a cost point of view. If you’ve played with ChatGPT or any of the other chatbots or large language models out there, you’ll know that it takes these applications 1 to 3 seconds to get a response, depending on the type of question you’re asking. And so naturally, a chatbot is a very simple but easy-to-understand use case, but to embed that technology into a sophisticated application making real-time decisions based on real-time data, the performance and, to some degree, the cost of these architectures are still not there…

…The performance of some of these systems is — I would classify as okay, not great. The cost of inference is quite expensive. So people have to be quite careful about the types of applications they deploy.
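
The cost concern can be made concrete with back-of-envelope arithmetic. The per-token prices, traffic volume and token counts below are purely assumed placeholder figures for illustration, not any vendor's actual pricing:

```python
# Back-of-envelope estimate of LLM inference cost for an embedded application.
# All prices and volumes here are assumed placeholders, not real vendor pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed, in dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # assumed, in dollars

def monthly_inference_cost(requests_per_day, input_tokens, output_tokens):
    """Dollar cost of 30 days of LLM calls at the assumed prices above."""
    per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return requests_per_day * 30 * per_request

# An app making 100,000 calls a day, ~1,500 tokens in and 500 tokens out each:
cost = monthly_inference_cost(100_000, 1_500, 500)
print(f"${cost:,.0f} per month")
```

At these assumed rates the bill lands in the tens of thousands of dollars per month, which illustrates why the speaker says people "have to be quite careful about the types of applications they deploy".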

MongoDB’s management thinks that this year is when MongoDB’s customers start rolling out a few AI applications and learn from them; it will be at least another year before the positive impacts of AI on MongoDB’s business really start showing up

Customers are still in the learning phase. They’re experimenting, they’re prototyping. But I would say you’re not seeing a lot of customers really deploy AI applications at scale. So I think this year is a year where they’re going to probably roll out a few applications and learn…

… I think it’s going to show up in a business when people are deploying AI apps at scale, right? So I think that’s going to be at least another year.

MongoDB’s management believes that the company is very well-positioned to capture AI application workloads because of the technologies underneath its platform and because it is capable of working with a wide range of AI models

We feel very good about our positioning because from an architecture point of view, the document model, the flexible schema, the ability to handle real-time data, performance at scale, the unified platform, the ability to handle data, metadata and vector data with the same query language, same semantics, et cetera, is something that makes us very, very attractive…

… We feel like we’re well positioned. We feel that people really resonate with the unified platform, one way to handle data, metadata and vector data; that we are open and composable; that we integrate not only with all the different LLMs, we are integrated with different embedding models; and we also integrate with some of the emerging application frameworks that developers want to use. So we think we’re well positioned
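
The idea of querying data, metadata and vector data together can be sketched in miniature. The example below is illustrative plain Python, not MongoDB's actual query language (in MongoDB Atlas this kind of search is expressed within an aggregation pipeline, e.g. via a `$vectorSearch` stage); the documents, fields and `vector_search` helper are all invented:

```python
# Conceptual sketch of "one query over regular fields + vector data": filter
# documents on an ordinary field, then rank the survivors by vector similarity
# to a query embedding. In-memory Python, purely for illustration.
import math

docs = [
    {"_id": 1, "category": "support", "text": "reset password", "embedding": [0.9, 0.1]},
    {"_id": 2, "category": "billing", "text": "invoice overdue", "embedding": [0.2, 0.8]},
    {"_id": 3, "category": "support", "text": "account locked",  "embedding": [0.7, 0.3]},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def vector_search(docs, query_vec, category, limit=2):
    """Metadata filter and vector ranking expressed as one operation."""
    candidates = [d for d in docs if d["category"] == category]
    candidates.sort(key=lambda d: cosine(d["embedding"], query_vec), reverse=True)
    return candidates[:limit]

results = vector_search(docs, query_vec=[1.0, 0.0], category="support")
print([d["_id"] for d in results])  # most similar "support" documents first
```

The point of the sketch is the shape of the operation: when the operational data and the embeddings live in the same document, a single query can combine a structured filter with semantic ranking instead of stitching together a separate vector store.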

MongoDB’s management is seeing that AI-related decisions are being made at the senior levels of a company, and so MongoDB is engaging with customers at that senior level

The other thing that we’re finding is, unlike a typical sale where someone is deciding to either build a new workload or modernize a workload, the AI decision is more of a centralized decision than ever. So it allows us to actually go higher in the organization. So we’re actually engaging with customers at much more senior levels because, obviously, this is coming down as a top-down initiative.

MongoDB’s management is seeing the first wave of AI use cases among companies being focused on reducing costs, code generation, and increasing developer productivity

In regards to use cases, we’re seeing most customers focus on driving efficiencies in their business because their existing baseline of costs is well known. So it’s much easier for them to determine how much value they can derive by using some of these new AI technologies. So I see the first wave of applications being around reducing costs. You’ve seen some announcements by customers focusing on things like customer support and customer service, where they have found ways to dramatically reduce their cost. That’s not surprising to me. I think code generation and increasing developer productivity is another area. I think those are going to be the 2 areas where there’s low-hanging fruit.

MongoDB’s management is seeing high interest in AI across almost every industry

In terms of across industries, I think it’s — obviously, there’s some constraints on some customers based on the regulated nature of their industry. But in general, we see basically high interest across almost every industry that we operate in.

Customers migrate off relational databases to MongoDB for three key reasons – (1) their data model has become brittle with relational architecture, (2) their legacy systems are not scaling properly, and (3) the costs have become high – and they are accompanied by a compelling event; customers also conduct the migration to make their data more AI-enabled

Even pre-IPO, we had a meaningful number of customers migrating off relational to MongoDB. They come in 3 categories of reasons why. First is that the data model has become so brittle with the relational architecture that it is very hard to build new features and be responsive to their customers. And so they just feel like their ability to innovate has slowed down. The second reason is that the system is just not scaling or performing given the increased number of users or the large amount of data that they have to process, and they realize that they have to get off a legacy platform. And the third reason is just the cost of the underlying platform relative to the ROI of that application. So it typically falls in one of those three buckets. Sometimes customers may have all 3 or maybe 2 of the 3 that are driving that demand. And then there’s typically some compelling event. Maybe there’s some milestones they want to hit. Maybe there’s a renewal coming up with the incumbent vendor that’s driving them to potentially move off that vendor as quickly as possible…

… On top of the 3 reasons I gave you in terms of why people move, there is now a fourth reason, which is enabling their data and their applications to be more AI-enabled. And so it’s not just moving to a more modern platform, but making them more AI-enabled. And so that’s also something that’s getting customers’ interest.

Okta (NASDAQ: OKTA)

Okta’s management has built a strong pipeline of products that are powered by Okta AI

We’re expanding on the world’s most robust and modern identity platform, and we have a strong pipeline of products and functionality powered by Okta AI.

Okta’s management believes that Spera (the company’s new acquisition) is going to help with threat protection; threat protection with Okta AI and the Spera product will be packaged and monetised as add-ons

And so you’re seeing customers really starting, as they lean in and do more with modern identity, to also ask at the same time, what is the class of tools, technologies and capabilities that are going to protect that? And that’s where offerings like Identity Threat Protection with Okta AI or the Spera product are really going to help. And so in terms of how we’re going to price and package and monetize these things, think of them as both additional capabilities with an additional licensing fee.

Salesforce (NYSE: CRM)

Salesforce’s management believes that the company is the world’s No.1 AI CRM

Salesforce is the world’s #1 AI CRM, #1 in sales, #1 in service, #1 in marketing, #1 data cloud, incredible.

In Salesforce’s management’s conversations with CEOs, they are hearing three things that the CEOs want – productivity, higher value customer relationships, and higher margins – and these things can happen through AI; Salesforce’s management thinks that company-leaders know they need to make major investments in AI right now

As I talk to CEOs around the world, they tell me they want 3 things. You may have heard me say this already, but I’ll say it again. One, they want more productivity, and they’re going to get that productivity through the fundamental augmentation of their employees through artificial intelligence. It’s happening. It’s empirical. Number two is they want higher value customer relationships, which is also going to happen through this AI. And they want higher margins, which we are seeing empirically as well when they use this artificial intelligence in these next-generation products. As we look at productivity, as we look at higher value customer relationships, as we look at higher margins, how do our customers get these things? How are they achieving these goals? It is AI. It is why every CEO and company knows they need to make major investments in AI right now.

Salesforce’s management thinks that the current AI moment will give companies an unprecedented level of intelligence and Salesforce’s Einstein 1 platform can help companies achieve this

And I believe this is the single most important moment in the history of the technology industry. It’s giving companies an unprecedented level of intelligence that will allow them to connect with their customers in a whole new way.  And with our Einstein 1 Platform, we’re helping out our customers transform for the AI future.

Salesforce’s management thinks that popular AI models are not trusted solutions for enterprises because they are trained on amalgamated public data and could hallucinate, providing customers with services that do not exist; this was the exact problem faced by an airline recently, and the airline was a Salesforce customer who did not want to work with Salesforce’s AI technologies

The truth is that these AI models are all trained on amalgamated public data. You all understand that. You’ve all seen the New York Times lawsuit against OpenAI, or others who are really saying, hey, this is all amalgamated stolen public data, much of it used without permission, unlicensed, but amalgamated into these single consolidated data stores…

These AI models, well, they could be considered very confident liars, producing misinformation, hallucinations. Hallucinations are not a feature, okay?…

…And there’s a danger though for companies, for enterprises, for our customers that these are not trusted solutions. And let me point out why that is, especially for companies who are in regulated markets. Why this is a big, big deal. These models don’t know anything about the company’s customer relationships and, in some cases, are just making it up. Enterprises need to have the same capabilities that are captivating consumers, those amazing things, but they need to have it with trust and they need to have it with security, and it’s not easy. Look, we all read the story. It just happened last week. An airline chatbot, prompted by a passenger booking a flight, offered a 90-day refund window. It turns out the chatbot, running on one of these big models (we don’t have to use any brand names here; we all know who it was), hallucinated the option. It did not exist… 

…The airline said, “Oh, listen, that was just the chatbot. It gets that way sometimes. You know what, that’s just a separate technical entity, a separate legal entity. We’re not going to hold liability for that.” Well, guess what? That defense did not work in a court of law. The court of law said that the AI chatbot that made up that incredible new policy for that company — well, that company was going to be held responsible, liable for that policy; that they were going to be held liable for the work of that chatbot. Just as they would for a human employee, they were being held liable for a digital employee…

…And that story I told you on the script — when I saw that last week, I’m like, I’m putting this in the script. This company, which is a great company and a customer of ours, but did not use our technology, went out there and used some kind of rogue AI that they picked off the Internet. Some engineer just cobbled it together, hooked it up, and then it started spewing these hallucinations and falsehoods around their loyalty program, and the courts are holding them liable. Good. Let every CEO wake up and realize, we are on the verge of one of the greatest transformations in the history of technology, but trust must be our highest value.

Salesforce’s management believes that there are three essential components for enterprises to deliver trusted AI experiences, (1) a compelling user interface, (2) a high-quality AI model, and (3) data and metadata; management thinks that Salesforce excels in all three components; management has found that Salesforce customers who are experts on AI have realised that it is the integration of AI models with data and metadata that is the important thing in powering AI experiences, and this is why customers are turning to Salesforce

The reality for every enterprise is that to deliver trusted AI experiences, you need these 3 essential components now.

You need that compelling user interface. There’s no question, a natural and effortless experience. And at Salesforce, we have some of the most intuitive user interfaces that deliver insights and intelligence across sales and service and marketing and commerce and industries. Many of you are on Slack right now. Many of you are on Tableau. Many of you are on MuleSoft, one of our other products.

Okay. Now what else do you need? Number two, you need a world-class AI model. And now we know there’s many, many models available. Just go to Hugging Face, which is a company that we’re an investor in, or look at all the other models. And by the way, not only the thousands of models right now, but there are tens of thousands, hundreds of thousands of models coming. And all the models that are available today will be obsolete 12 months from now. So we have to have an open, extensible and trusted framework inside Salesforce to be receptacles for these models. That’s why Einstein 1 is so important. Then you have to be able to use these AI models: the ones that Salesforce is developing, or these public models on Hugging Face or other things, or even bring your own model. Customers are even making their own models, fantastic. Of course, we have great partnerships with OpenAI, with Anthropic, with Cohere and with many other AI model providers. This is the second key component. One is the UI, the second is the model, all right?…

…Now we’re in the enterprise. In the enterprise, you need deep integration of data and metadata for the AI to understand and deliver the critical insights and intelligence that customers need across their business — across sales, service, marketing, commerce, whatever it is. That deep integration of the data and metadata, that’s not so easy. That’s not just some amalgamated stolen public data set. In the enterprise, that deep integration of data and metadata — oh, that’s what Salesforce does. We are a deep integration of data and metadata. That is why it’s very, very exciting…

…And they try to stitch together a variety of AI tools and copilots and this and that and whatever. I’ve had so many funny conversations with so many customers that come to me as experts in AI. And then I just say to them, but how are you going to deliver this experience? And then finally, they realize, “Oh, I need the deep integration with the data and the metadata.” The reason why the metadata is so important is because it describes the data. That’s why so many companies are turning to Salesforce for their AI transformation. Only Salesforce offers these critical layers of AI for our customers — the UI, the model and the deep integration of the data and the metadata — that make the AI smart and intelligent and insightful, and without the hallucinations and without all of the other problems. For more than 2 decades, we’ve been trusted with our customers’ data and metadata. And we have a lot of it.

Salesforce’s management believes that most AI models that are available today will be obsolete in 12 months’ time, and that a flood of new AI models is coming soon; because of this, Salesforce needs to have an open, extensible framework to work with all kinds of models, and this is where Einstein 1 has an important role to play

And by the way, not only the thousands of models right now, but there are tens of thousands, hundreds of thousands of models coming. And all the models that are available today will be obsolete 12 months from now. So we have to have an open, extensible and trusted framework inside Salesforce to be receptacles for these models. That’s why Einstein 1 is so important.

Salesforce’s management believes that data is even more important for AI than chips, and this is why management is so excited about Salesforce: Because the company has one of the largest data repositories in the world for its customers

I love NVIDIA, by the way, and what Jensen has done is amazing, and they are delivering, very much, in the era of the gold rush, the Levi’s jeans to the gold miners. But we all know where the gold is: the data. The gold is the data. And that’s why we’re so excited about Salesforce, because we are one of the very largest repositories of enterprise data and metadata in the world for our customers. And customers are going to start to realize this right now…

…For more than 2 decades, we’ve been trusted with our customers’ data and metadata. And we have a lot of it.

There is a lot of trapped data in Salesforce’s customers which is hindering their AI work; Salesforce’s Data Cloud helps to integrate all the disparate data sources, and it is why the service is Salesforce’s fastest-growing product ever; Data Cloud is now integrated across the entire Salesforce platform, and management is totally focused on Data Cloud in FY2025; using Data Cloud adds huge value for customers who are using other Salesforce services; Data Cloud and Einstein 1 are built on the same metadata framework – which allows customer apps to securely access and understand the data that is on Salesforce’s platform – and this prevents hallucinations and it is something only Salesforce can do

Many of our customers also have islands of trapped data across thousands of systems…

… Trapped data is all over the enterprise. Now, where could trapped data be? You might be using a great company like Snowflake — and I love Snowflake — or Databricks or Microsoft, or you might be using an Amazon system or even something like Google’s BigQuery — all these various databases…

…if you’re using Salesforce, Sales Cloud, Service Cloud, Tableau, Slack, we need to be able to, through our zero copy, automatically integrate into our data cloud, all of those systems and then seamlessly provide that data back into these amazing tools. And that is what we are doing because so many of our customers have islands of trapped data in all of these systems, but the AI is not going to work because it needs to have the seamless amalgamated data experience of data and metadata, and that’s why our Data Cloud is like a rocket ship.

The entire AI revolution is built on this foundation of data, and it’s why we’re so excited about this incredible data cloud. It’s now deeply integrated into all of our apps into our entire platform. Its self-service for all of our customers to turn on. It is our fastest-growing product ever. It’s our total focus for fiscal year ’25.

With Salesforce Data Cloud, we can unlock this trapped data and bring together all of their business and customer data into one place for AI, all while keeping their data safe and secure. It’s all running inside our Einstein Trust Layer, and we’ve deployed it to all of our customers. We’ve now unleashed the copilot as well to all of our customers, deeply built on our data and metadata. And while other copilots just sit and spin because they can’t figure out what the data means — and if you haven’t seen the demonstrations, you can see these copilots spin — when they use Salesforce, all of a sudden it becomes intelligent, and that is the core of the Einstein 1 platform. All of our apps, all of our AI capabilities, all of the customer data in 1 deeply integrated, trusted metadata platform — that’s why we’re seeing incredible demand for Data Cloud. Data Cloud brings it all together…

…We’ve never seen traction like this of a new product because you can just easily turn on the Data Cloud and it adds huge value to Sales Cloud. It adds huge value to Service Cloud, the Marketing Cloud and the CDP…

… Because Data Cloud and all of Einstein 1 is built on our metadata framework, as I just described, every customer app can securely access and understand the data, use any model, use any UI or workflow, and integrate with the platform. That means less complexity, more flexibility, faster innovation — but also, we want to say goodbye to these hallucinations. We want to say goodbye to all of these crazy experiences we’re having with these bots that don’t know what they’re doing because they have no data or metadata, okay? Or the data and metadata that they have is productivity data — the highest-level data — not deeply integrated data. So only Salesforce can do this.

Payroll company ADP has been a long-time customer of Salesforce but wanted to evaluate other AI solutions; ADP realised that the data and metadata component was lacking in other AI solutions and it is something only Salesforce can provide

We all know the HR and payroll leader, ADP, and their incredible new CEO, [indiscernible] — amazing. ADP has been a great Sales Cloud customer for 2 decades. They’ve used Einstein for years. They are one of the first customers we ever had…

…And the company wanted to transform customer service with AI to give their agents real-time insights, next best actions, auto-generated case summaries. But I have to say to you, it was a little bit embarrassing: Salesforce was not #1 on their list. And I said to them, “How can that be? We’re the #1 service cloud. We’re #1 in this. We’re #1 in that.” It didn’t work. “No, we’re going to go evaluate this. We’re going to look at all the different solutions. We’re going to look at all the new AI models. We think we’re just going to hook this model up to this, and we’re going to do that.” And it sounded like a big Rube Goldberg invention, what was going to happen there. And so we had to go in, and we just wanted to partner with them and say, “All right, show us what you want to do. We’re going to work with you, we’re going to be trusted partners. Let’s go.”

But like a lot of our customers moving into AI, ADP realized it didn’t have a comprehensive, deeply integrated platform of data and metadata that could bring together all of this into a single source of truth — and then you get the incredible customer service, then you get the results that you’re looking for — deeply integrated with their sales systems, with marketing and custom applications. And ADP discovered that only Salesforce can do this. We were able to show ADP how we could unlock trapped data with Data Cloud zero-copy, and drive intelligence, productivity and efficiency for their sales team with Einstein to levels unimagined just a year ago

Salesforce has a new copilot, Einstein Copilot, which management believes is the first conversational AI assistant that is truly trusted; Einstein Copilot can read across all the data and metadata in Salesforce’s platform to surface valuable sales actions to take, which is something human users cannot do; management believes that other copilots cannot do what Einstein Copilot currently can without deep data integration; management thinks that Einstein Copilot is a cut above other copilots

We’re now incredibly excited to work with all of our customers to take their AI to the next level with Einstein Copilot, which is going live tomorrow. Einstein Copilot — which, if you haven’t seen it, please come to TrailheadDX next week — is the first conversational AI assistant for the enterprise that’s truly trusted. It’s amazing. It can answer questions. It can summarize. It can create new content, dynamically automate tasks on behalf of the user, from a single consistent user experience embedded directly within our platform.

But let me tell you the 1 thing it can do that’s more important than all of that: it is able to read across all the data and metadata in our platform to get that insight instantly. And you’re going to see that — so a sales rep might ask Einstein Copilot which lead to focus on, or what the most important thing to do with this opportunity is. And it may say, you need to resolve this customer’s case because this escalation has been around for a week, or you’d better go and answer that lead that came in on the Marketing Cloud if you want to move this opportunity forward — because it’s reading across the entire data set. That is something that individual users cannot do but the copilot can. With access to customer data and the metadata in Salesforce, including all this real-time data and website engagement, and the ability to read through the data set, Einstein Copilot has all the context to understand the question and surface the lead that has the highest value and likelihood to convert. And it can also instantly generate the action plan with the best steps to close the deal, such as suggesting optimal meeting times based on the lead contact’s known preferences, and even drafting emails. If you haven’t seen the video that I put on my Twitter feed last night, there’s a 5-minute video that goes through all of these incredible things that it’s able to do. There’s never been an enterprise AI capability quite like it. It’s amazing…

… I assure you, without the deep integration of the data and the metadata across the entire platform — the copilot’s deep integration of that data — they cannot do it. I assure you they cannot, because they don’t have the data and the metadata, which is so critical to making an AI assistant successful…

And I encourage you to try the demos yourself, to put our copilot up against any other copilot. Because I’ll tell you that I’ve seen enterprise copilots from these other companies in action, and they just spin and spin and spin…

…I’ve used those copilots from the competitors, have not seen them work yet….

…Einstein is the only copilot with the ability to truly understand what’s going on with your customer relationships. It’s one conversational AI assistant, deeply connected to trusted customer data and metadata.

Einstein 1 is driving sales price uplift in existing Salesforce customers, while also attracting new customers to Salesforce; Salesforce closed 1,300 Einstein deals in FY2024; Einstein 1 has strong early signs after being launched for just 4-plus months

In fact, we continue to see significant average sales price uplift from existing customers who upgrade to Einstein 1 Edition. It’s also attracting new customers to Salesforce: 15% of the companies that purchased our Einstein 1 Edition in FY ’24 were net new logos…

… In FY ’24, we closed 1,300 Einstein deals, as more customers are leveraging our generative and predictive AI capabilities…

… I think the way to think about the price uplift moving to Einstein 1 Edition — which used to be our Unlimited Edition+ — is really about the value that we’re providing to our customers, because at the end of the day, our ability to get increased price is about the value that we’re going to provide. And so as customers start to ramp up their abilities on AI, ramp up their learnings and understand what it means for them economically, our ability to get price will be dictated by that. Early signs of that are pretty strong. We feel good about the progress we’ve seen. It’s only been in market for 4-plus months now in FY ’24, but we’re encouraged by what we’re seeing.

Slack now comes with AI-search features; Salesforce’s management thinks Slack can become a conversational interface for any application

We just launched Slack AI with features like AI search, channel recaps and thread summaries to meet the enormous demand for embedded AI in the flow of work from customers like Australia Post and OpenAI. It’s amazing to see what Slack has accomplished in a decade. And frankly, it’s just the beginning. We have a great vision for the future of Slack as a conversational interface for any application.

Bajaj Finance in India is using Einstein for AI experiences and in 2023 Q4, Bajaj became Salesforce’s second-largest Data Cloud customer globally

India continues to be a bright spot for us, growing new business at 35% year-over-year, and we continue to invest in the region to meet the needs of customers, including Bajaj Finance. I had the great opportunity to meet with their CEO, Rajeev Jain, in January, and a top priority for him was using Einstein to deliver predictive and generative AI across their entire lending business, which they run on Salesforce. In Q4, Bajaj became the second-largest Data Cloud customer globally, building their AI foundation on the Einstein 1 Platform.

Salesforce’s management would be very surprised if other companies can match Salesforce’s level when it comes to AI

Because if you see anyone else being able to deliver on the promise of enterprise AI at the level of quality and scale and capability of Salesforce, I’ll be very surprised. 

Salesforce is deploying its own AI technologies internally and management is seeing the benefits

We are big believers in Salesforce on Salesforce. We are deploying our own AI technology internally. Our sales teams are using it. Absolutely, we are seeing benefits right now. But the biggest benefit we’ve seen actually has been in our support operation, with case summaries — our ability to tap into the knowledge base faster, to get knowledge surfaced within the flow of work. And so it absolutely is part of our margin expansion strategy going forward, which is: how do we leverage our own AI to drive more efficiencies in our business, to augment the work that’s being done in sales and in service and in marketing, and even into our commerce efforts as well…

…We have to be customer #1 and use it, and I’m excited that we are.

Tencent (NASDAQ: TCEHY)

Tencent’s management thinks that its foundational AI model, Tencent Hunyuan, is now among the best large language models in China and worldwide; Hunyuan excels in multiturn conversations, logical inference and numerical reasoning; Hunyuan has 1 trillion parameters; Hunyuan is increasingly used by Tencent for co-pilot services in the company’s SaaS products; management’s focus for Hunyuan is on its text-related capabilities, especially text-to-video

Our Tencent Hunyuan foundation model is now among the top tier of large language models in China, with a notable strength in advanced logical reasoning…

… After deploying leading-edge technologies such as the mixture-of-experts (MoE) architecture, our foundation model, Tencent Hunyuan, is now achieving top-tier Chinese-language performance among large language models in China and worldwide. The enhanced Hunyuan excels particularly in multiturn conversations, logical inference and numerical reasoning, areas which have been challenging for large language models. We have scaled the model up to the 1 trillion parameter mark, leveraging the MoE architecture to enhance performance and reduce inference costs, and we are rapidly improving the model’s text-to-picture and text-to-video capabilities. We’re increasingly integrating Hunyuan to provide co-pilot services for enterprise SaaS products, including Tencent Meeting and Tencent Docs…

…Among our enterprise Software-as-a-Service products, we deployed AI for real-time content comprehension in Tencent Meeting, deployed AI for prompt-based document generation in Tencent Docs, and rolled out a paid customer acquisition tool for e-commerce…

… At this point in time, we are actually very focused on the text technology because this is actually the fundamental of the model. And from text, we have built out text-to-picture; from text, we built out text-to-video capabilities. And the next important evolution is actually what we have seen with [indiscernible], right? [indiscernible] has done an incredible job with text to a [ long ] video, and this is something which we would be developing in [ the next turn ]. While we continue to improve the fundamental text capability of Hunyuan, at the same time, we’ll be developing the text-to-video capability, because we actually think this is very relevant to our core business, which is a content-driven business in the areas of short video, long video and games. And that’s the area in which we’ll be developing and moving our Hunyuan into. 

Tencent’s management is developing new generative AI tools for internal content production; management thinks that the main benefits of using AI for internal content production is not to reduce costs, but to enable more rapid monetisation and thus, higher revenue generation

 And we are also developing new gen AI tools for effective content production internally…

…We are increasingly going to be deploying AI, including generative AI, in areas such as accelerating the creation of animated content, which is a big business for Tencent Video and a profitable business for Tencent Video; in terms of game content, as we discussed earlier; and potentially in terms of creating [ code ] in general. But the benefit will show up not in substantial cost reductions — it will show up in more rapid content creation, and therefore, more rapid monetization and revenue generation.

Tencent’s management is starting to see significant benefits to Tencent’s business results from deploying AI technology in the company’s businesses; the benefits are particularly clear in Tencent’s advertising business, especially in the short term; Tencent has seen a 100% increase in click-through rates over the past 18 months for some of its most important advertising inventories through the use of AI

More generally, deploying AI technology in our existing businesses has begun to deliver significant revenue benefits. This is most obvious in our advertising business, where our AI-powered ad tech platform is contributing to more accurate ad targeting, higher ad click-through rates and thus, faster advertising revenue growth rates. We’re also seeing early-stage business opportunities in providing AI services to Tencent Cloud customers…

…In terms of the AI short-term benefits, I think the financial benefits should be much more indexed towards the advertising side, because if you think about the size of our advertising business as, call it, RMB 100 billion [ a year ], and if you can just have a 10% increase, right, that’s RMB 10 billion, and mostly on profit, right? So that’s the scale of the benefits on the advertising side, especially as we see continued growth of our advertising business; and when we add in the Video Accounts e-commerce ecosystem, that just has a very long runway of growth potential, and also the low ad load right now within Video Accounts.

But on the other hand, if you look at the cloud and business services customers, then you are really facing a relatively nascent market. You still have to sell to these customers. And we spend a lot of time working with all the customers in different industries, trying to figure out what’s the best way of leveraging AI for their business. And then you have to go through a long sales cycle. And at the same time, it’s competitive, because your competitors will actually come in and say, “Oh, they can also provide a similar service.” And despite the fact that we believe we have a superior technology and product, it’s actually [ very careful ], and your competitor may actually come in and say they’re going to cut prices, even though theirs is an inferior product.

So all these things — the low margin, heavy competition and long sales cycle of the 2B business — would actually come into play on that side of the business. So when you compare the two sides of the equation, you can clearly see that ramping up advertising is actually going to be much more profitable in the short term. Of course, we’ll continue to do both, right?…

… Martin gave the example that if we can improve click-through rates by 10%, then that’s CNY 10 billion in incremental revenue, and probably CNY 8 billion in incremental gross operating profit. In reality, you should view 10% as being in the nature of a floor, not a ceiling. Facebook has seen substantially bigger improvements in click-through rates. For some of our most important inventories, we’ve actually seen our click-through rates increase by 100% in the past 18 months. So when we’re thinking about where the financial benefits of AI are, it’s advertising click-through rates, and therefore advertising revenue, first and foremost, and that’s a very high flow-through business for us.
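The flow-through arithmetic in the quotes above is simple enough to sketch. The revenue base and click-through uplift below are management’s own illustrative figures; the ~80% flow-through to gross profit is implied by their CNY 10 billion revenue / CNY 8 billion profit pairing:

```python
# Flow-through arithmetic from the quotes above. The revenue base and uplift
# are management's illustrative figures; the 80% flow-through is implied by
# their RMB 10 billion revenue / RMB 8 billion gross profit pairing.
base_ad_revenue_bn = 100.0  # RMB billions of ad revenue per year
ctr_uplift = 0.10           # 10% improvement, framed as a floor, not a ceiling
flow_through = 0.80         # share of incremental revenue reaching gross profit

incremental_revenue_bn = base_ad_revenue_bn * ctr_uplift
incremental_profit_bn = incremental_revenue_bn * flow_through

print(incremental_revenue_bn, incremental_profit_bn)  # ~10.0 and ~8.0 RMB billions
```

The point of the exercise is that on a revenue base this large, even a modest click-through improvement drops almost straight through to profit, which is why management prefers it to the long-sales-cycle cloud business in the short term.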

Tencent’s management believes that AI technology can be applied in its games business in terms of creating innovative gameplay as well as generating content in existing games, but these will take some time to manifest

In terms of the application of AI to games, then like many things, the boundary between [indiscernible] and reality is a function of how far forward [indiscernible] willing to look, and [ we’re willing to look very far ] forward. All of the areas you mentioned, such as AI-powered [ NPCs ], AI-accelerated graphical content generation and graphical asset generation, are areas that over the [ years ] to come — not over the months to come — will benefit meaningfully from the deployment of AI. And I think it’s also fair to say that the game industry has always been a mixture of, on the one hand, innovation around gameplay techniques, and on the other hand, deployment of enhanced content — renewed content — into existing gameplay. And it’s reasonable to believe that AI will be most beneficial for the second of those activities. But one will continue to require very talented individuals and teams to focus on the first of those opportunities, which is the creation of innovative gameplay.

Veeva Systems (NYSE: VEEV)

Veeva’s management has seen very specialised AI models being used for some time – prior to the introduction of large language models to the consumer public – to help with drug discovery, especially in areas such as understanding protein folding

[Question] what are you seeing out of life sciences companies in terms of how AI is changing things. Whether that’s accelerating drug development, whether that’s more targeted marketing, maybe if you could walk us through kind of what those conversations would look like? And what sort of role you think you can play in those changes?

[Answer] I would say the most direct impact — and it’s been happening for a while, before large language models as well — is with AI in drug discovery. Very, very targeted AI models that can do things like protein folding and analyzing retina images, things like that. This is very powerful, but very therapeutic-area specific, very close to the science in R&D, and there’s not just one AI model — there are multiple specialized AI models.

Veeva’s management has seen some experimentation going on with the use of large language models in improving general productivity in the life sciences industry

Then in terms of other areas, really, there’s a lot of experimentation with large language models. And what people look at them for are: (a) can I just have general productivity for my people — can they write an e-mail faster, check their e-mail faster, research some information faster? So that’s one thing that’s going on. Also, specific use cases like authoring: can I author a protocol faster? Can I author a regulatory document faster? Now, faster is one thing, but it also has to be very accurate. So I would say there’s experimentation on that; there’s not yet broad production use. And certainly, for some of these critical things, there has to be a lot of quality control. So those are probably the two biggest use cases — really three: research, general productivity and authoring.

Veeva’s management has developed a product to make Veeva’s data platform extract data in a much faster way so that it works well with AI applications, but otherwise, the company has not invested in LLMs (large language models) because they are not as relevant in the company’s field

And then as far as our role, we’ve been doing some really heavy work over the last 2 years on something in our Vault platform called the Direct Data API. And that’s a pretty revolutionary way of making the data come out of Vault in a transactionally consistent manner much, much faster — roughly 100x faster than it happens now. That’s going to be critical for all kinds of AI applications on top, which we may develop, which our customers may develop, and we’re also utilizing it for some really fast system-to-system transfer between our different Vault families. So that’s been the biggest thing that we’ve done. We haven’t really invested heavily in large language models. So far, we just don’t see quite the application in our application areas — not to say that that wouldn’t change in the future.

Veeva’s management thinks that the important thing for AI is data – AI models will be a commodity – and Veeva has the advantage in this

I would say we’re in a pretty good position, because the durable thing about AI is the data sources. The AI models will come on top, and that will be largely a tech commodity, but the control of and access to the data sources — that’s pretty important, and that’s kind of where Veeva plays.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Meta Platforms, MongoDB, Okta, Salesforce, Starbucks, Tencent, and Veeva Systems. Holdings are subject to change at any time.

Beware of This Valuation Misconception

Don’t value your shares based on cash flow to the firm, value it based on cash flow to the shareholder.

How should we value a stock? That’s one of the basic questions when investing. Warren Buffett answers this question extremely well. He says:

“Intrinsic value can be defined simply: It is the discounted value of the cash that can be taken out of a business during its remaining life.”

While seemingly straightforward, a lot of investors (myself included) have gotten mixed up between cash flow that a company generates and cash that is actually taken out of a business.

While the two may sound similar, they are in fact very different.

Key difference

The extra cash flow that a firm generates is termed free cash flow. This is the cash flow that the company generates from operations minus any capital expenditure paid.

But not all free cash flow to the firm is distributed to shareholders. Some of the cash flow may be used for acquisitions, some may be left in the bank, and some may be used for other investments such as buybacks or investing in other assets. Therefore, this is not cash that a shareholder will receive. The cash flow that is taken out of the business and paid to shareholders is only the dividend. 

When valuing a stock, it is important that we only take cash that will be returned to the shareholder as the basis of the valuation.

Extra free cash flow that is not returned to shareholders should not be considered when valuing a stock.

Common mistake

It is a pretty big mistake to value a stock based on the cash flow that the company generates as it can severely overstate the value of a business.

When using a discounted cash flow model, we should not take free cash flow to the firm as the basis of valuation but instead use future dividends to value a business.

But what if the company is not paying a dividend?

Well, the same should apply. In the case that there is no dividend yet, we need to account for that in our valuation by only modelling for dividend payments later in the future.
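The approach described above amounts to a dividend discount model. Here is a minimal two-stage sketch in Python; all the inputs (the dividend forecast, the 10% discount rate and the 3% terminal growth rate) are hypothetical illustrations, not figures from this article:

```python
def dividend_discount_value(dividends, discount_rate, terminal_growth):
    """Present value of forecast dividends plus a Gordon-growth terminal value.

    Per the article's argument, only cash actually paid out to shareholders
    (dividends) enters the valuation; retained free cash flow does not.
    """
    # Discount each explicitly forecast dividend back to today
    pv = sum(d / (1 + discount_rate) ** t for t, d in enumerate(dividends, start=1))
    # After the forecast horizon, assume dividends grow forever at terminal_growth
    terminal_value = dividends[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal_value / (1 + discount_rate) ** len(dividends)
    return pv

# Hypothetical company: no dividend for 3 years, then $1.00 and $1.10 per share,
# valued with a 10% discount rate and 3% perpetual growth thereafter.
value = dividend_discount_value([0.0, 0.0, 0.0, 1.00, 1.10], 0.10, 0.03)
print(round(value, 2))  # roughly 11.42 per share
```

Note that the first three forecast years contribute nothing to the value — only the dividends eventually paid out (and the terminal value they anchor) matter, which is exactly the article’s point about companies that do not yet pay a dividend.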

Bottom line

Using discounted cash flow to the firm to value a business can severely overstate its value. This can be extremely dangerous as it can be used to justify extremely unwarranted valuations, leading to buying overvalued stocks.

To be accurate, a company should be valued based only on how much it can return to shareholders.

That said, free cash flow to the firm is not a useless metric in valuation. It is actually the basis of what makes a good company.

A company that can generate strong and growing free cash flows should be able to return an increasing stream of dividends to shareholders in the future. Free cash flow to the firm can be called the “lifeblood” of sustainable dividends.

Of course, all of this also depends on whether management is able to make good investment decisions on the cash it generates.

Therefore, when investing in a company, two key things matter: one, how much free cash flow the firm generates; and two, how good management is at allocating that capital.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q4 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the fourth quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management believes that AI will allow the company to develop the most innovative and personalised AI interfaces in the world, and the company recently acquired GamePlanner AI to do so; Airbnb’s management thinks that popular AI services today, such as ChatGPT, are underutilising the foundational models that power the services; GamePlanner AI was founded by the creator of Apple’s Siri smart assistant

There is a new platform shift with AI, and it will allow us to do things we never could have imagined. While we’ve been using AI across our service for years, we believe we can become a leader in developing some of the most innovative and personalized AI interfaces in the world. In November, we accelerated our efforts with the acquisition of GamePlanner AI, a stealth AI company led by the co-founder and original developer of Siri. With these critical pieces in place, we’re now ready to expand beyond our core business. Now this will be a multiyear journey, and we will share more with you towards the end of this year…

…If you were to open, say, ChatGPT or Google, though the models are very powerful, the interface is really not an AI interface. It’s the same interface as the 2000s, in a sense, the 2010s. It’s a typical classical web interface. So we feel like the models, in a sense, are probably underutilized…

Airbnb’s management does not want to build foundational large language models – instead, they want to focus on the application layer

One way to think about AI is, let’s use a real-world metaphor. I mentioned we’re building a city. And in that city, we have infrastructure, like roads and bridges. And then on top of those roads and bridges, we have applications, like cars. So Airbnb is not an infrastructure company. Infrastructure would be a large language model or, obviously, GPUs. So we’re not going to be investing in infrastructure. So we’re not going to be building a large language model. We’ll be relying on, obviously, OpenAI; Google creates models; Meta creates models. Those are really infrastructure — they’re really developing infrastructure. But where we can excel is on the application layer. And I believe that we can build one of the leading and most innovative AI interfaces ever created. 

Airbnb’s management believes that the advent of generative AI represents a platform shift and it opens the probability of Airbnb becoming a cross-vertical company

Here’s another way of saying it. Take your phone and look at all the icons on your phone. Most of those apps have not fundamentally changed since the advent of generative AI. So what I think AI represents is the ultimate platform shift. We had the internet. We had mobile. Airbnb really rose during the rise of mobile. And the thing about a platform shift, as you know, is that there is also a shift in power. There’s a shift of behavior. And so I think this is a 0-0 ball game, where Airbnb, we have a platform that was built for one vertical — the short-term space. And I think with generative AI, and by developing a leading AI interface, we can provide an experience that’s so much more personalized than anything you’ve ever seen before.

Imagine an app that you feel like it knows you, it’s like the ultimate Concierge, an interface that is adaptive and evolving and changing in real-time, unlike no interface you’ve ever seen before. That would allow us to go from a single vertical company to a cross-vertical company. Because one of the things that we’ve noticed is the largest tech companies aren’t a single vertical. And we studied Amazon in the late ’90s, early 2000s, when they went from books to everything, or Apple when they launched the App Store. And these really large technology companies are horizontal platforms. And I think with AI and the work we’re doing around AI interfaces, I think that’s what you should expect of us.

Alphabet (NASDAQ: GOOG)

Alphabet’s Google Cloud segment saw accelerated growth in 2023 Q4 from generative AI

Cloud, which crossed $9 billion in revenues this quarter and saw accelerated growth driven by our GenAI and product leadership.

Alphabet closed 2023 by launching Gemini, a foundational AI model, which has state-of-the-art capabilities; Gemini Ultra is coming soon

We closed the year by launching the Gemini era, a new industry-leading series of models that will fuel the next generation of advances. Gemini is the first realization of the vision we had when we formed Google DeepMind, bringing together our 2 world-class research teams. It’s engineered to understand and combine text, images, audio, video and code in a natively multimodal way, and it can run on everything from mobile devices to data centers. Gemini gives us a great foundation. It’s already demonstrating state-of-the-art capabilities, and it’s only going to get better. Gemini Ultra is coming soon. The team is already working on the next versions and bringing it to our products.

Alphabet is already experimenting Gemini with Google Search; Search Generative Experience (SGE) saw its latency drop by 40% with Gemini

We are already experimenting with Gemini in Search, where it’s making our Search Generative Experience, or SGE, faster for users. We have seen a 40% reduction in latency in English in the U.S. 

Alphabet’s management thinks that SGE helps Google Search (1) answer new types of questions, (2) answer complex questions, and (3) surface more links; management believes that digital advertising will continue to play an important role in SGE; management has found that users find the ads placed above or below an AI overview of searches to be helpful; management knows what needs to be done to incorporate AI into the future experience of Google Search and they see AI assistants or agents as being an important component of Search in the future

By applying generative AI to Search, we are able to serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. People are finding it particularly useful for more complex questions like comparisons or longer queries. It’s also helpful in areas where people are looking for deeper understanding, such as education or even gift ideas. We are improving satisfaction, including answers for more conversational and intricate queries. As I mentioned earlier, we are surfacing more links with SGE and linking to a wider range of sources on the results page, and we’ll continue to prioritize approaches that add value for our users and send valuable traffic to publishers…

…As we shared last quarter, Ads will continue to play an important role in the new search experience, and we’ll continue to experiment with new formats native to SGE. SGE is creating new opportunities for us to improve commercial journeys for people by showing relevant ads alongside search results. We’ve also found that people are finding ads either above or below the AI-powered overview helpful as they provide useful options for people to take action and connect with businesses…

…Overall, one of the things I think people underestimate about Search is the breadth of Search — the amount of queries we see constantly on a new day, which we haven’t seen before. And so the trick here is to deliver that high-quality experience across the breadth of what we see in Search. And over time, we think Assistant will be very complementary. And we will again use generative AI there, particularly with our most advanced models in Bard, which allows us to act more like an agent over time, if I were to think about the future, and maybe go beyond answers and follow through for users even more. So that is, directionally, what the opportunity set is. Obviously, a lot of execution ahead. But it’s an area where I think we have a deep sense of what to do.

Alphabet’s latest Pixel 8 phones have an AI-powered feature that lets users search what they see on their phones without switching apps; the Pixel 8 phones use Gemini Nano for AI features

Circle to Search lets you search what you see on Android phones with a simple gesture without switching apps. It’s available starting this week on Pixel 8 and Pixel 8 Pro and the new Samsung Galaxy S24 Series…

…Pixel 8, our AI-first phone, was awarded Phone of the Year by numerous outlets. It now uses Gemini Nano with features like Magic Compose for Google Messages and more to come.

Alphabet’s management is seeing that advertisers have a lot of interest in Alphabet’s AI advertising solutions; the solutions include (1) the Automatically Created Assets (ACA) feature for businesses to build better ads and (2) conversational experiences – currently in beta testing – that have helped SMBs be 42% more likely to publish ads with good ad-strength

We are also seeing a lot of interest in our AI-powered solutions for advertisers. That includes our new conversational experience that uses Gemini to accelerate the creation of Search campaigns…

…As we look ahead, we’re also starting to put generative AI in the hands of more and more businesses to help them build better campaigns and even better performing ads. Automatically created assets help advertisers show more relevant search ads by creating tailored headlines and descriptions based on each ad’s context. Adoption was up with strong feedback in Q4. In addition to now being available in 8 languages, more advanced GenAI-powered capabilities are coming to ACA…

…And then last week’s big news was that Gemini will power new conversational experience in Google Ads. This is open and beta to U.S. and U.K. advertisers. Early tests show advertisers are building higher-quality search campaigns with less effort, especially SMBs who are 42% more likely to publish a campaign with good or excellent ad strength. 

Alphabet’s Google Cloud offers AI Hypercomputer (a supercomputing architecture for AI), which is used by high-profile AI startups such as Anthropic and Mistral AI

Google Cloud offers our AI Hypercomputer, a groundbreaking supercomputing architecture that combines our powerful TPUs and GPUs, AI software and multi-slice and multi-host technology to provide performance and cost advantages for training and serving models. Customers like Anthropic, Character.AI, Essential AI and Mistral AI are building and serving models on it.

Vertex AI, which is within Google Cloud, enables users to customise and deploy more than 130 generative AI models; Vertex AI’s API (application programming interface) requests jumped nearly six times from the first half of 2023 to the second half; Samsung is using Vertex AI to provide GenAI models in its Galaxy S24 smartphones while companies such as Shutterstock and Victoria’s Secret are also using Vertex AI

For developers building GenAI applications, we offer Vertex AI, a comprehensive enterprise AI platform. It helps customers like Deutsche Telekom and Moody’s discover, customize, augment and deploy over 130 GenAI models, including PaLM, MedPaLM, Sec-PaLM and Gemini as well as popular open source and partner models. Vertex AI has seen strong adoption with the API request increasing nearly 6x from H1 to H2 last year. Using Vertex AI, Samsung recently announced its Galaxy S24 Series smartphone with Gemini and Imagen 2, our advanced text-to-image model. Shutterstock has added Imagen 2 to their AI image generator, enabling users to turn simple text prompts into unique visuals. And Victoria’s Secret & Co. will look to personalize and improve the customer experience with Gemini, Vertex AI, Search and Conversations.

Duet AI, Alphabet’s AI agents for its Google Workspace and Google Cloud Platform (GCP) services, now has more than 1 million testers, and will incorporate Gemini soon; Duet AI for Developers is the only generative AI offering that supports the entire development and operations lifecycle for software development; large companies such as Wayfair, GE Appliances, and Commerzbank are already using Duet AI for Developers

Customers are increasingly choosing Duet AI, our packaged AI agents for Google Workspace and Google Cloud Platform, to boost productivity and improve their operations. Since its launch, thousands of companies and more than 1 million trusted testers have used Duet AI. It will incorporate Gemini soon. In Workspace, Duet AI is helping employees benefit from improved productivity and creativity at thousands of paying customers around the world, including Singapore Post, Uber and Woolworths. In Google Cloud Platform, Duet AI assists software developers and cybersecurity analysts. Duet AI for Developers is the only GenAI offering to support the complete development and operations life cycle, fine-tuned with the customer’s own core purpose and policies. It’s helping Wayfair, GE Appliances and Commerzbank write better software, faster with AI code completion, code generation and chat support. With Duet AI and Security Operations, we are helping cybersecurity teams at Fiserv, Spotify and Pfizer.

Alphabet’s management believes that the company has state-of-the-art compute infrastructure and that it will be a major differentiator in the company’s AI-related work; management wants Alphabet to continue investing in its infrastructure

Search, YouTube and Cloud are supported by our state-of-the-art compute infrastructure. This infrastructure is also key to realizing our big AI ambitions. It’s a major differentiator for us. We continue to invest responsibly in our data centers and compute to support this new wave of growth in AI-powered services for us and for our customers.

Alphabet’s AI-powered ad solutions are helping retailers with their omnichannel growth; a large big-box retailer saw a 60%+ increase in omnichannel ROAS (return on advertising spend) and a 22%+ increase in store traffic

Our proven AI-powered ad solutions were also a win for retailers looking to accelerate omni growth and capture holiday demand. Quick examples include a large U.S. big-box retailer who drove a 60%-plus increase in omni ROAS and a 22%-plus increase in store traffic using Performance Max during Cyber Five; and a well-known global fashion brand, who drove a 15%-plus higher omnichannel conversion rate versus regular shopping traffic by showcasing its store pickup offering across top markets through pickup later on shopping ads.

Alphabet’s management is using AI to make it easier for content creators to create content for YouTube (for example, creators can easily create backgrounds or translate their videos); management also believes the AI tools built for creators can be ported over to the advertising business to help advertisers

First, creation, which increasingly takes place on mobile devices. We’ve invested in a full suite of tools, including our new YouTube Create app for Shorts, to help people make everything from 15-second Shorts to 15-minute videos to 15-hour live streams with a production studio in the palm of their hands. GenAI is supercharging these capabilities. Anyone with a phone can swap in a new backdrop, remove background extras, translate their video into dozens of languages, all without a big studio budget. We’re excited about our first products in this area from Dream Screen for AI-generated backgrounds to Aloud for AI-powered dubbing…

…You are obviously aware of the made YouTube announcement where we introduced a whole lot of new complementary creativity features on YouTube, including Dream Screen, for example, and a lot of other really interesting tools and thoughts. You can obviously imagine that we can take this more actively to the advertising world already. As you know, it continues already to power AI, a lot of our video ad solutions and measurement capabilities. It’s part of video-rich campaigns. Multi-format ads are — actually, there is a generative creator music that actually makes it easier for creators to design the perfect soundtrack already. And as I said earlier, AI will unlock a new world of creativity. And you can see how this will — if you just look at where models are heading, where multimodal models are heading, where the generation capabilities of those models are heading, you can absolutely see how this will impact and positively impact and simplify the flow for creators, similar to what you see already emerging in some of our core products like ACA on the Search side.

Alphabet’s management expects the company’s capital expenditure in 2024 to be notably higher than in 2023 (it was US$20 billion in 2023), driven by investments in AI infrastructure

With respect to CapEx, our reported CapEx in the fourth quarter was $11 billion, driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. The step-up in CapEx in Q4 reflects our outlook for the extraordinary applications of AI to deliver for users, advertisers, developers, cloud enterprise customers and governments globally and the long-term growth opportunities that offers. In 2024, we expect investment in CapEx will be notably larger than in 2023.

Alphabet’s management is restructuring the company’s workforce not because AI is taking away jobs, but because management believes that AI solutions can deliver significant ROI (return on investments) and it’s important for Alphabet to have an organisational structure that can better build these solutions

But I also want to be clear, when we restructure, there’s always an opportunity to be more efficient and smarter in how we service and grow our customers. We’re not restructuring because AI is taking away roles that’s important here. But we see significant opportunities here with our AI-powered solution to actually deliver incredible ROI at scale, and that’s why we’re doing some of those adjustments.

Alphabet’s management thinks that Search is not just about generative AI

Obviously, generative AI is a new tool in the arsenal. But there’s a lot more that goes into Search: the breadth, the depth, the diversity across verticals, stability to follow through, getting actually access to rich, diverse sources of content on the web and putting it all together in a compelling way.

Alphabet’s management believes that AI features can help level the playing field for SMBs in the creation of effective advertising (when competing with large companies) and they will continue to invest in that area

Our focus has always been here on investing in solutions that really help level the playing field, and you mentioned several of those. So actually, SMBs can compete with bigger brands and more sophisticated advertisers. And so the feedback we’re always getting is they need easy solutions that could drive value quickly, and several of the AI-powered solutions that you’re mentioning are actually making the workflow and the whole on-ramp and the bidded targeting creative and so on, you mentioned that is so much easier for SMBs. So we’re very satisfied with what we’re seeing here. We will continue to invest. 

Amazon (NASDAQ: AMZN)

Amazon’s cloud computing service, AWS, saw an acceleration in revenue growth in 2023 Q4 and management believes this was driven partly by AI

If you look back at the revenue growth, it accelerated to 13.2% in Q4, as we just mentioned. That was an acceleration. We expect accelerating trends to continue into 2024. We’re excited about the resumption, I guess, of migrations that companies may have put on hold during 2023 in some cases and interest in our generative AI products, like Bedrock and Q, as Andy was describing

Amazon’s management reminded the audience that their framework for thinking about generative AI consists of three layers – the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon is investing heavily in all three

You may remember that we’ve explained our vision of three distinct layers in the gen AI stack, each of which is gigantic and each of which we’re deeply investing.

At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute…

…In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service…

…At the top layer of the stack is the application layer.

Amazon’s management is seeing revenues accelerate rapidly for AWS across all three layers of the generative AI stack and AWS is receiving significant interest from customers wanting to run AI workloads

Still relatively early days, but the revenues are accelerating rapidly across all three layers, and our approach to democratizing AI is resonating well with our customers. We have seen significant interest from our customers wanting to run generative AI applications and build large language models and foundation models, all with the privacy, reliability and security they have grown accustomed to with AWS

Amazon’s management is seeing that enterprises are still figuring out which layer of the generative AI stack they want to operate in; management thinks that most enterprises will operate in at least two layers, with the technically capable ones operating in all three

When we talk to customers, particularly at enterprises as they’re thinking about generative AI, many are still thinking through at which layers of those three layers of the stack I laid out that they want to operate in. And we predict that most companies will operate in at least two of them. But I also think, even though it may not be the case early on, I think many of the technically capable companies will operate at all three. They will build their own models, they will leverage existing models from us, and then they’re going to build the apps. 

At the first layer of the generative AI stack, AWS is offering the most expansive collection of compute instances with NVIDIA chips; AWS has built its own Trainium chips for training and Inferentia chips for inference; a new version of Trainium – Trainium 2 – was recently announced and it is 4x faster, and has 3x more memory, than the first generation of Trainium; large companies and prominent AI startups are using AWS’s AI chips

At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute, we offer the most expansive collection of compute instances with NVIDIA chips. We also have customers who like us to push the price performance envelope on AI chips just as we have with Graviton for generalized CPU chips, which are 40% more price-performant than other x86 alternatives. And as a result, we’ve built custom AI training chips named Trainium and inference chips named Inferentia. In re:Invent, we announced Trainium2, which offers 4x faster training performance and 3x more memory capacity versus the first generation of Trainium, enabling advantageous price performance versus alternatives. We already have several customers using our AI chips, including Anthropic, AirBnB, Hugging Face, Qualtrics, Rico and Snap.

At the middle layer of the generative AI stack, AWS has launched Bedrock, which offers LLMs-as-a-service; Bedrock is off to a very strong start with thousands of customers already using it just a few months after launch; Bedrock has added new models, including those from prominent AI startups, Meta’s Llama2, and Amazon’s own Titan family; customers are excited over Bedrock because building production-quality generative AI applications requires multiple iterations of models, and the use of many different models, and this is where Bedrock excels

In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service, we’ve launched Bedrock, which is off to a very strong start with many thousands of customers using the service after just a few months… We also added new models from Anthropic, Cohere, Meta with Llama 2, Stability AI and our own Amazon Titan family of LLMs. What customers have learned at this early stage of gen AI is that there’s meaningful iteration required in building a production gen AI application with the requisite enterprise quality at the cost and latency needed. Customers don’t want only one model. They want different models for different types of applications and different-sized models for different applications. Customers want a service that makes this experimenting and iterating simple. And this is what Bedrock does, which is why so many customers are excited about it.
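Amazon’s point that customers want to swap between different models for different applications can be illustrated with a small sketch. This only constructs the arguments a Bedrock runtime `invoke_model` call would take; the routing table and payload shape are illustrative assumptions, not Amazon’s implementation, and an actual call would go through `boto3.client("bedrock-runtime")`:

```python
import json

# Hypothetical routing table: task type -> Bedrock model ID (IDs illustrative)
MODEL_FOR_TASK = {
    "summarize": "anthropic.claude-v2",       # larger model where quality matters
    "classify": "amazon.titan-text-lite-v1",  # smaller, cheaper model for simple tasks
}

def build_invoke_request(task: str, prompt: str) -> dict:
    """Build the keyword arguments for a Bedrock invoke_model call.

    This only constructs the request; a real caller would pass it as
    boto3.client("bedrock-runtime").invoke_model(**request).
    """
    model_id = MODEL_FOR_TASK[task]
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "body": json.dumps({"inputText": prompt}),
    }

req = build_invoke_request("classify", "Is this review positive or negative?")
print(req["modelId"])  # amazon.titan-text-lite-v1
```

The point of a managed service like Bedrock is that iterating here is just a change of `modelId`, rather than re-hosting a different model.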

At the top layer of the generative AI stack, AWS recently launched Amazon Q, a coding companion; management believes that a coding companion is one of the very best early generative AI applications; Amazon Q is linked with more than 40 popular data-connectors so that customers can easily query their data repositories; Amazon Q has generated strong interest from developers

At the top layer of the stack is the application layer. One of the very best early gen AI applications is a coding companion. At re:Invent, we launched Amazon Q, which is an expert on AWS, writes code, debugs code, tests code, does translations like moving from an old version of Java to a new one and can also query customers various data repositories like Internet, Wikis or from over 40 different popular connectors to data in Salesforce, Amazon S3, ServiceNow, Slack, Atlassian or Zendesk, among others. And it answers questions, summarizes data, carries on a coherent conversation and takes action. It was designed with security and privacy in mind from the start, making it easier for organizations to use generative AI safely. Q is the most capable work assistant and another service that customers are very excited about…

…When enterprises are looking at how they might best make their developers more productive, they’re looking at what’s the array of capabilities in these different coding companion options they have. And so we’re spending a lot of time. Our enterprises are quite excited about it. It created a meaningful stir in re:Invent. And what you see typically is that these companies experiment with different options they have and they make decisions for their employee base, and we’re seeing very good momentum there.

Amazon’s management is seeing that security over data is very important to customers when they are using AI and this is an important differentiator for AWS because its AI services inherit the same security features as AWS – and AWS’s capabilities and track record in security are good

By the way, don’t underestimate the point about Bedrock and Q inheriting the same security and access control as customers get with AWS. Security is a big deal, an important differentiator between cloud providers. The data in these models is some of the company’s most sensitive and critical assets. With AWS’ advantaged security capabilities and track record relative to other providers, we continue to see momentum around customers wanting to do their long-term gen AI work with AWS.

Amazon has launched some generative AI applications across its businesses and is building more; one of the applications launched is Rufus, a shopping assistant, which allows consumers to receive thoughtful responses to detailed shopping questions; other generative AI applications being built and launched by Amazon include a customer-review-summary app, an app for customers to predict how they will fit in apparel, an app for inventory forecasts for each fulfilment centre, and an app to generate copy for ads based on a picture, or generate pictures based on copy; Rufus is seamlessly integrated into Amazon and management thinks Rufus could meaningfully change what discovery looks like for shoppers using Amazon

We’re building dozens of gen AI apps across Amazon’s businesses, several of which have launched and others of which are in development. This morning, we launched Rufus, an expert shopping assistant trained on our product and customer data that represents a significant customer experience improvement for discovery. Rufus lets customers ask shopping journey questions, like what is the best golf ball to use for better spin control or which are the best cold weather rain jackets, and get thoughtful explanations for what matters and recommendations on products. You can carry on a conversation with Rufus on other related or unrelated questions and retains context coherently. You can sift through our rich product pages by asking Rufus questions on any product features and it will return answers quickly…

…. So if you just look at some of our consumer businesses, on the retail side, we built a generative AI application that allowed customers to look at summary of customer review, so that they didn’t have to read hundreds and sometimes thousands of reviews to get a sense for what people like or dislike about a product. We launched a generative AI application that allows customers to quickly be able to predict what kind of fit they’d have for different apparel items. We built a generative AI application in our fulfillment centers that forecasts how much inventory we need in each particular fulfillment center…Our advertising business is building capabilities where people can submit a picture and an ad copy is written and the other way around. 

…  All those questions you can plug in and get really good answers. And then it’s seamlessly integrated in the Amazon experience that customers are used to and love to be able to take action. So I think that that’s just the next iteration. I think it’s going to meaningfully change what discovery looks like for our shopping experience and for our customers.

Amazon’s management believes generative AI will drive tens of billions in revenue for the company over the next few years

Gen AI is and will continue to be an area of pervasive focus and investment across Amazon primarily because there are a few initiatives, if any, that give us the chance to reinvent so many of our customer experiences and processes, and we believe it will ultimately drive tens of billions of dollars of revenue for Amazon over the next several years.

Amazon’s management expects the company’s full-year capital expenditure for 2024 to be higher than in 2023, driven by increased investments in infrastructure for AWS and AI

We define our capital investments as a combination of CapEx plus equipment finance leases. In 2023, full year CapEx was $48.4 billion, which was down $10.2 billion year-over-year, primarily driven by lower spend on fulfillment and transportation. As we look forward to 2024, we anticipate CapEx to increase year-over-year primarily driven by increased infrastructure CapEx to support growth of our AWS business, including additional investments in generative AI and large language models.

AWS’s generative AI revenue is pretty big in absolute numbers, but small in the context of AWS already being a $100 billion annual-revenue-run-rate business

If you look at the gen AI revenue we have, in absolute numbers, it’s a pretty big number. But in the scheme of a $100 billion annual revenue run rate business, it’s still relatively small, much smaller than what it will be in the future, where we really believe we’re going to drive tens of billions of dollars of revenue over the next several years. 

Apple (NASDAQ: AAPL)

Many of the features in Apple’s latest product, the Vision Pro virtual reality headset, are powered by AI

There’s an incredible amount of technology that’s packed into the product. There’s 5,000 patents in the product. And it’s, of course, built on many innovations that Apple has spent multiple years on, from silicon to displays and significant AI and machine learning, all the hand tracking, the room mapping, all of this stuff is driven by AI.

Apple has been spending a lot of time and effort on AI and management will share details later in 2024

As we look ahead, we will continue to invest in these and other technologies that will shape the future. That includes artificial intelligence where we continue to spend a tremendous amount of time and effort, and we’re excited to share the details of our ongoing work in that space later this year…

…In terms of generative AI, which I would guess is your focus, we have a lot of work going on internally as I’ve alluded to before. Our MO, if you will, has always been to do work and then talk about work and not to get out in front of ourselves. And so we’re going to hold that to this as well. But we’ve got some things that we’re incredibly excited about that we’ll be talking about later this year.

Apple’s management thinks there is a huge opportunity for Apple with generative AI but will only share more details in the future

Let me just say that I think there is a huge opportunity for Apple with gen AI and AI and without getting into more details and getting out in front of myself.

Arista Networks (NYSE: ANET)

Arista Networks’ management believes that AI at scale needs Ethernet at scale because AI workloads cannot tolerate delays; management thinks that 400 and 800-gigabit Ethernet will become important for AI back-end GPU clusters

AI workloads are placing greater demands on Ethernet as they have both data and compute-intensive across thousands of processes today. Basically, AI at scale needs Ethernet at scale. AI workloads cannot tolerate the delays in the network because the job can only be completed after all flows are successfully delivered to the GPU clusters. All it takes is one culprit or worst-case link to throttle an entire AI workload…

…. We expect both 400 and 800-gigabit Ethernet will emerge as important pilots for AI back-end GPU clusters. 

Arista Networks’ management is pushing the company and the Ultra Ethernet Consortium to improve Ethernet technology for AI workloads in three key ways; management believes that Ethernet is superior to InfiniBand for AI-related data networking because Ethernet provides flexible ordering of data transfer whereas InfiniBand is rigid

Three improvements are being pioneered by Arista and the founding members of the Ultra Ethernet Consortium to improve job completion time. Number one, packet spraying. AI network topology meets packet spraying to allow every flow to simultaneously access all parts of the destination. Arista is developing multiple forms of load balancing dynamically with our customers. Two is flexible ordering. Key to an AI job completion is the rapid and reliable bulk transfer with flexible ordering using Ethernet links to optimally balance AI-intensive operations, unlike the rigid ordering of InfiniBand. Arista is working closely with its leading vendors to achieve this. Finally, network congestion. In AI networks, there’s a common incast congestion problem whereby multiple uncoordinated senders can send traffic to the receiver simultaneously. Arista’s platforms are purpose-built and designed to avoid these kinds of hotspots, evenly spreading the load across multiple paths across a virtual output queuing (VoQ) lossless fabric.
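The point that “all it takes is one culprit or worst-case link to throttle an entire AI workload” is easy to see numerically: a training step finishes only when its slowest flow finishes, so job completion time is the maximum, not the average, of flow times. A minimal illustration (the flow times below are made up):

```python
# Flow completion times (ms) for one collective operation; values are made up.
flow_times_ms = [10.1, 10.3, 9.8, 10.0, 48.0]  # one congested link at 48 ms

mean_ms = sum(flow_times_ms) / len(flow_times_ms)
job_completion_ms = max(flow_times_ms)  # the job waits for every flow to land

print(f"mean flow time: {mean_ms:.2f} ms")
print(f"job completion: {job_completion_ms:.1f} ms")  # dominated by the one slow link
```

This is why the load-balancing and congestion-control work described above targets the tail of the flow distribution rather than the average.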

Arista Networks’ management thinks the company can achieve AI revenue of at least $750 million in 2025

We are cautiously optimistic about achieving our AI revenue goal of at least $750 million in AI networking in 2025…

…. So our AI performance continues to track well for the $750 million revenue goal that we set last November at Analyst Day. 

Arista Networks’ management sees the company becoming the gold-standard for AI data-networking

We have more than doubled our enterprise revenue in the last 3 years and we are becoming the gold standard for client-to-cloud-to-AI networking with 1 EOS and 1 CloudVision Foundation. 

In the last 12 months, Arista Networks has participated in a large number of AI project bids, and of the last five projects that pitted Ethernet against InfiniBand, Arista Networks won four; over the last 12 months, a lot has changed in terms of how InfiniBand was initially bundled into AI data centres; management believes that Ethernet will become the default standard for AI networking going forward

To give you some color on the last 3 months, I would say difficult to project anything in 3 months. But if I look at the last year, which maybe last 12 months is a better indication, we have participated in a large number of AI bids and when I say large, I should say they are large AI bids, but there are a small number of customers actually to be more clear. And in the last 4 out of 5, AI networking clusters we have participated on Ethernet versus InfiniBand, Arista has won all 4 of them for Ethernet, one of them still stays on InfiniBand. So these are very high-profile customers. We are pleased with this progress…

…The first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet become — versus InfiniBand becomes really important because as you may recall, a year ago, I told you we were outside looking in, everybody had an Ethernet — everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. And the popular product we are seeing right now and the back-end cluster for our AI is the Arista 7800 AI spine, which in a single chassis with north of 500 terabit of capacity can give you a substantial number of ports, 400 or 800. So you can connect up to 1,000 GPUs just doing that. And that kind of data parallel scale-out can improve the training time dimensions, large LLMs, massive integration of training data. And of course, as we shared with you at the Analyst Day, we can expand that to a 2-tier AI leaf and spine with a 16-way ECMP to support close to 10,000 GPUs nonblocking. This lossless architecture for Ethernet. And then the overlay we will have on that with the Ultra Ethernet Consortium in terms of congestion controls, packet spraying and working with a suite of [ UC ] mix is what I think will make Ethernet the default standard for AI networking going forward. 
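The port counts in the quote above can be sanity-checked with back-of-envelope arithmetic. A sketch, assuming nonblocking ports and taking the quoted figures at face value (these are assumptions, not vendor specifications):

```python
# Back-of-envelope check on the quoted figures (assumptions, not vendor specs):
chassis_capacity_tbps = 500  # "north of 500 terabit" per 7800 AI spine chassis
port_speed_gbps = 400        # 400G ports; 800G ports would halve the count

ports_per_chassis = chassis_capacity_tbps * 1000 // port_speed_gbps
print(ports_per_chassis)  # 1250, consistent with "connect up to 1,000 GPUs"
                          # at one port per GPU with headroom

# A 2-tier leaf-spine design with 16-way equal-cost multipath then multiplies
# the reachable GPU count, consistent with "close to 10,000 GPUs nonblocking".
```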

Arista Networks’ management believes that owners and operators of AI data centres would not want to work with white box data switches (non-branded and commoditised data switches) because data switches are mission critical in AI data centres, so users would prefer reliable and higher-quality data switches

I think white box is here to stay for a very long time if somebody just wants a throwaway commodity product, but how many people want throwaway commodity in the data center? They’re still mission-critical, and they’re even more mission-critical for AI. If I’m going to spend multimillion dollars on a GPU cluster, and then the last thing I’m going to do is put a toy network in, right? So to put this sort of in perspective, that we will continue to coexist with a white box. There will be use cases where Arista’s blue box or a stand-alone white box can run either SONiC or FBOSS but many times, the EOS software stack is really, really something they depend on for availability, analytics, automation, and there’s — you can get your network for 0 cost, but the cost of downtime is millions and millions of dollars.

Arista Networks is connecting more and more GPUs, and management believes that a picture of how a standard AI data centre Ethernet deployment will look is starting to form; AI is still a small part of Arista Networks’ business but one that should grow over time

On the AI side, we continue to track well. I think we’re moving from what I call trials, which is connecting hundreds of GPUs to pilots, which is connecting thousands of GPUs this year, and then we expect larger production clusters. I think one of the questions that we will be asking ourselves and our customers is how these production clusters evolve. Is it going to be 400, 800 or a combination thereof? The role of Ultra Ethernet Consortium and standards and the ecosystem all coming together, very similar to how we had these discussions in 400 gig will also play a large part. But we’re feeling pretty good about the activity. And I think moving from trials to pilots this year will give us considerable confidence on next year’s number…

…AI is going to come. It is yet to come — certainly in 2023, as I’ve said to you many, many times, it was a very small part of our number, but it will gradually increase.

Arista Networks’ management is in close contact with the leading GPU vendors when designing networking solutions for AI data centres

Specific to our partnership, you can be assured that we’ll be working with the leading GPU vendors. And as you know, NVIDIA has 90% or 95% of the market. So Jensen and I are going to partner closely. It is vital to get a complete AI network design going. We will also be working with our partners in AMD and Intel so we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.

Arista Networks’ management believes that the company is very well-positioned for the initial growth spurts in AI networking

Today’s models are moving very rapidly, relying on a high bandwidth, predictable latency, the focus on application performance requires you to be sole sourced initially. And over time, I’m sure it’ll move to multiple sources, but I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade.

ASML (NASDAQ: ASML)

ASML’s management believes that 2025 will be a strong year for the company for a few reasons: the secular trends in its favour (including AI, electrification, and the energy transition), customers moving through the upward part of the cycle, and the scheduled opening of many semiconductor fabrication plants

So essentially unchanged I would say in comparison to what we said last quarter. So if we start looking at 2025. As I mentioned before, we are looking at a year of significant growth and that is for a couple of reasons. First off, we think the secular trends in our industry are still very much intact. If you look at the developments around AI, if you look at the developments around electrification, around energy transition etcetera, they will need many, many semiconductors. So we believe the secular trends in the industry are still very, very strong. Secondly I think clearly by 2025 we should see our customers go through the up cycle. I mean the upward trend in the cycle. So that should be a positive. Thirdly, as we also mentioned last time it’s clear that many fab openings are scheduled that will require the intake of quite some tools in the 2025 time frame.

ASML’s management is seeing AI-related demand drive a positive inflection in the company’s order intake

And I think AI is now particularly something which could be on top of that because that’s clearly a technology transition. But we’ve already seen a very positive effect of that in our Q4 order intake…

…After a few soft quarters, the order intake for the quarter was very, very strong. Actually a record order intake at €9.2 billion. If you look at the composition of that, it was about 50/50 for Memory versus Logic. Around €5.6 billion out of the €9.2 was related to EUV, both Low NA and High NA.

ASML’s management is confident that AI will help to drive demand for the company’s EUV (extreme ultraviolet) lithography systems from the Memory-chips market in the near future

 In ’23, our Memory shipments were lower than the 30% that you mentioned. But if you look at ’25, and we also take into account what I just said about AI and the need for EUV in the DDR5 and in the HBM era, then the 30% is a very safe path and could be on the conservative side.

ASML’s management thinks that the performance of memory chips is a bottleneck for AI-related workloads, and this is where EUV lithography is needed; management was also positively surprised at how important EUV was for the development of leading-edge memory chips for AI

I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV…

…  And were we surprised? I must be — I say, yes, to some extent, we were surprised in the meetings we’ve had with customers and especially the Memory because we’re leading-edge Memory customers. We were surprised about the technology requirements of — for litho, EUV specifically and how it impacts how important it is for the rollout and the ramp of the memory solutions for AI. This is why we received more EUV orders than we anticipated because it was obvious in the detailed discussions and the reviews with our customers, that EUV is critical in that sense. And that was a bit of a surprise, that’s a positive surprise. 

[Question] Sorry, was that a function of EUV layer count or perhaps where they’re repurposing equipment? And so now they’re realizing they need more footprint for EUV.

[Answer] No, it is layer count and imaging performance. And that’s what led to the surprise, the positive surprise, which indeed led to more orders.

ASML’s management sees the early shoots of recovery observed in the Memory chip market as being driven by both higher utilisation across the board, and by the AI-specific technology transition

I think it’s — what we’re seeing is, of course, the information coming off our tools that we see the utilization rates going up. That’s one. Clearly, there’s also an element of technology transition. That’s also clear. I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV. So it’s a bit of a mix. I mean, yes, you’ve gone through, I think, the bottom of this memory cycle with prices going up, utilizations increasing, and that combined with the technology transition driven by AI. That’s a bit what we see today. So it’s a combination of both, and I think that will continue.

ASML’s management is considering whether its planned capacity buildout for EUV lithography systems is too low, partly because of AI-driven demand for leading-edge chips

We have said our capacity buildout will be 90 EUV Low-NA systems, 20 High-NA whereby internally, we are looking at that number as a kind of a base number where we’re investigating whether that number should be higher. The question is whether that 90 is going to be enough. Now we have to realize, we are selling wafer capacity, which is not only a function of the number of units, but also a function of the productivity of those tools. Now we have a pretty aggressive road map for the productivity in terms of wafers per hour. So it’s a complex question that you’re asking. But actually, we need to look at this especially against the math that we’re seeing for litho requirements in the area of AI, whether it’s HBM or whether it is Logic, whether the number of units and the road map on productivity, which gives wafers because the combination is wafer capacity, whether that is sufficient.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing growing engagement in AI with a 75% sequential jump in the use of next-gen AI integrations

In observability, we now have more than 700 integrations allowing our customers to benefit from the latest AWS, Azure and GCP abilities as well as from the newly emerging AI stack. We continued to see increasing engagement there with the use of our next-gen AI integrations growing 75% sequentially in Q4.

Datadog’s management continues to add capabilities to Bits AI, the company’s natural language incident management copilot, and is improving the company’s LLM (large language model) observability capabilities

In the generative AI and LLM space, we continued to add capability to Bits AI, our natural language incident management copilot. And we are advancing LLM observability to help customers investigate where they can safely deploy and manage their models in production.

Currently, 3% of Datadog’s annualised recurring revenue (ARR) comes from next-gen AI-native customers (up from 2.5% in 2023 Q3); management believes the AI opportunity will be far larger in the future as customers of every industry and size start incorporating AI in production; the AI-native customers are companies that Datadog’s management knows are substantially all based on AI

Today, about 3% of our ARR comes from next-gen AI native customers, but we believe the opportunity is far larger in the future as customers of every industry and every size start doing AI functionality in production…

…It’s hard for us to wrap our arms exactly around what is GenAI, what is not among our customer base and their workload. So the way we chose to do it is we looked at a smaller number of companies that we know are substantially all based on AI so these are companies like the modal providers and things like that. So 3% of ARR, which is up from what we had disclosed last time.

Microsoft disclosed that AI accounts for six percentage points of Azure’s growth, but Datadog’s management sees AI-native companies accounting for substantially more than six percentage points of Datadog’s own Azure business

I know one number that everyone has been thinking about is one cloud, in particular, Microsoft, disclosed that 6% of their growth was attributable to AI. And we definitely see the benefits of that on our end, too. If I look at our Azure business in particular, there is substantially more than 6% that is attributable to AI native as part of our Azure business. So we see completely this trend is very true for us as well. It’s harder to tell with the other cloud providers because they don’t break those numbers up.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business, and that Datadog is ideally positioned for these

We continue to believe digital transformation and cloud migration are long-term secular growth drivers of our business and critical motion for every company to deliver value and competitive advantage. We see AI adoption as an additional driver of investment and accelerator of technical innovation and cloud migration. And more than ever, we feel ideally positioned to achieve our goals and help customers of every size in every industry to transform, innovate and drive value through technology adoption.

Datadog experienced a big slowdown from its digitally native customers in the recent past, but management thinks that these customers could also be the first ones to fully leverage AI and thus reaccelerate earlier

We suddenly saw a big slowdown from the digital native over the past year. On the other hand, they might be the first ones to fully leverage AI and deploy it in production. So you might see some reacceleration earlier from some of them at least.

Datadog’s management sees the attach rates for observability going up for AI workloads versus traditional workloads

[Question] If you think about the very long term, would you think attach rates of observability will end up being higher or lower for these AI workloads versus traditional workloads?

[Answer] We see the attach rate going up. The reason for that is our framework for that is actually in terms of complexity. AI just adds more complexity. You create more things faster without understanding what they do. Meaning you need — you shift a lot of the value from building to running, managing, understanding, securing all of the other things that need to keep happening after that. So the shape of some of the products might change a little bit because the shape of the software that runs it changes a little bit, which is no different from what happened over the past 10, 15 years. But we think it’s going to drive more need for observability, more need for security products around that.

Datadog’s management is seeing AI-native companies using largely the same kind of Datadog products as everyone else, but the AI-native companies are building the models, so the tooling for understanding the models are not applicable for them

[Question] Are the product SKUs, these kind of GenAI companies are adopting, are they similar or are they different to the kind of other customer cohorts?

[Answer] Today, this is largely the same SKUs as everybody else. These are infrastructure, APM logs, profiling these kind of things that they are — or really the monitoring, these kind of things that these customers are using. It’s worth noting that they’re in a bit of a separate world because they’re largely the builders of the models. So all the tooling required to understand the models and — that’s less applicable to them. That’s more applicable to their own customers, which is also the rest of our customer base. And we see also where we see the bulk of the opportunity in the longer term, not in the handful of model providers that [ anybody ] is going to use.

Datadog has a much larger presence in inference AI workloads as compared to training AI workloads; Datadog’s management sees that the AI companies that are scaling the most on Azure are scaling on inference

There’s 2 parts to the AI workloads today. There’s training and there’s inference. The vast majority of the players are still training. There’s only a few that are scaling with inference. The ones that are scaling with inference are the ones that are driving our ARR because we are — we don’t — we’re not really present on the training side, but we’re very present on the inference side. And I think that also lines up with what you might see from some of the cloud providers, where a lot of the players or some of the players that are scaling the most are on Azure today on the inference side, whereas a lot of the other players still largely training on some of the other clouds.

Etsy (NASDAQ: ETSY)

Etsy’s management recently launched Gift Mode, a feature where a buyer can type in details of a person and occasion, and AI technology will match the buyer with a gift; Gift Mode has more than 200 recipient personas, and has good early traction with nearly 6 million visits in its first 2 weeks

So what’s Gift Mode? It’s a whole new shopping experience where gifters simply enter a few quick details about the person they’re shopping for, and we use the power of artificial intelligence and machine learning to match them with unique gifts from Etsy sellers. Creating a separate experience helps us know immediately if you’re shopping for yourself or someone else, hugely beneficial information to help our search engines solve for your needs. Within Gift Mode, we’ve identified more than 200 recipient personas, everything from rock climber to the crossword genius to the sandwich specialist. I’ve already told my family that when shopping for me, go straight to the music lover, the adventurer or the pet parent… 

…Early indications are that Gift Mode is off to a good start, including positive sentiment from buyers and sellers in our social channels, very strong earned media coverage and nearly 6 million visits in the first 2 weeks. As you test and shop in Gift Mode, keep in mind that this is just the beginning.

Etsy’s management is using AI to understand the return on investment of the company’s marketing spend

We’ve got pretty sophisticated algorithms that work on is this bid — is this click worth this much right now and how much should we bid. And so to the extent that CPCs rise, we naturally pull back. Or to the extent that CPC is lower, we naturally lean in. The other thing, by the way, it’s not just CPCs, it’s also conversion rates. So in times when people are really budget constrained, we see them actually — we see conversion rate across the industry go down. We see people comparison shop a lot more. And so we are looking at all of that and not humans, but machines using AI are looking in a very sophisticated way at what’s happening with conversion rate right now, what’s happening with CPCs right now. And therefore, how much is each visit worth and how much should we be bidding.
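The logic Etsy describes is, at its core, an expected-value calculation: a click is worth roughly (conversion rate × order value × take rate), and the system bids only up to that value, so rising CPCs or falling conversion rates automatically pull spend back. The sketch below is purely illustrative; the function names and figures are assumptions, not Etsy's actual system:

```python
# Hypothetical sketch of expected-value ad bidding, as Etsy describes it.
# All parameters and numbers are illustrative assumptions.

def click_value(conversion_rate: float, avg_order_value: float, take_rate: float) -> float:
    """Estimated value of one click to the marketplace."""
    return conversion_rate * avg_order_value * take_rate

def should_bid(cpc: float, conversion_rate: float, avg_order_value: float, take_rate: float) -> bool:
    """Bid only when the click's expected value covers its cost."""
    return click_value(conversion_rate, avg_order_value, take_rate) >= cpc

# When buyers are budget-constrained, conversion rates fall and the
# system pulls back on its own, exactly as described in the quote:
print(should_bid(cpc=0.50, conversion_rate=0.03, avg_order_value=40.0, take_rate=0.65))  # True
print(should_bid(cpc=0.50, conversion_rate=0.01, avg_order_value=40.0, take_rate=0.65))  # False
```

The point of the quote is the second call: nothing about the bidding rule changes, yet lower conversion rates mechanically reduce how many auctions clear the value threshold.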

Fiverr (NYSE: FVRR)

Fiverr’s management is seeing strong demand for the AI services vertical, with AI-related keyword searches growing sevenfold in 2023 

Early in January last year, we were the first in the market to launch a dedicated AI services vertical, creating a hub for businesses to hire AI talent. Throughout the year, we continued to see tremendous demand for those services, with searches that contain AI-related keywords in our marketplace growing sevenfold in 2023 compared to 2022.

Fiverr’s management has seen AI create a net-positive 4% impact on Fiverr’s business by driving a mix-shift from simple services – such as translation and voice-over – to complex services; complex services now represent 1/3 of Fiverr’s marketplace and are typically larger and longer-duration; complex categories are where a human touch is needed and adds value, while simple categories are where technology can do a good job without humans; Fiverr’s management thinks that simple categories will be automated away by AI while complex categories will become more important

Overall, we estimate AI created a net positive impact of 4% to our business in 2023 as we see a category mix shift from simple services such as translation and voice over to more complex services such as mobile app development, e-commerce management or financial consulting. In 2023, complex services represented nearly 1/3 of our marketplace, a significant step-up from 2022. Moreover, they are typically larger projects of longer duration, with an average transaction size 30% higher than those of simple services…

…What we’ve identified is there is a difference between what we call simple categories or tasks and more complex ones. And in the complex group, it’s really those categories that require human intervention and human inputs in order to produce a satisfactory result for the customer. And in these categories, we’re seeing growth that goes well beyond the overall growth that we’re seeing. And really, the simple ones are such where technology can actually do pretty much the entire work, which in those cases, they’re usually associated with lower prices and shorter-term engagements…

…So our assumption is that some of the simple paths are going to be — continue to be automated, which, by the way, is nothing new. I mean, it happened before even before AI, automation has been a part of our lives. And definitely, the more complex services is where I think the growth potential definitely lies. This is why we called out the fact that we’re going to double down on these categories and services.

Fiverr’s management believes that the opportunities created by AI will outweigh the jobs that are displaced

We believe that the opportunities created by emerging technologies far outweigh the jobs they replace. Human talent continues to be an essential part of unlocking the potential of new technologies. 

Fiverr’s management believes that AI will be a multiyear tailwind for the company

We are also seeing a shift into more sophisticated, highly skilled and longer-duration categories with bigger addressable markets. Data shows our marketplace is built to benefit from these technologies and labor market changes. Unlike single-vertical solutions with higher exposure to disruptive technologies and trend changes, Fiverr has developed a proprietary horizontal platform with hundreds of verticals, quickly leaning into ever-changing industry demand and trends. All in all, we believe AI will be a multiyear tailwind for us to drive growth and innovation. In 2023, we also made significant investments in AI that drove improvements in our overall platform.

A strategic priority for Fiverr’s management in 2024 is to develop AI tools to enhance the overall customer experience of the company’s marketplace

Our recent winter product release in January culminated these efforts in the second half of 2023 and revamped almost every part of our platform with an AI-first approach, from search to personalization from supply quality to seller engagement…

…Our third strategic priority is to continue developing proprietary AI applications unique to our marketplace to enhance the overall customer experience. The winter product release we discussed just now gives you a flavor of that, but there is so much more to do.

Mastercard (NYSE: MA)

Mastercard’s management is leveraging the company’s work on generative AI to build new services and solutions as well as to increase internal productivity

We also continue to develop new services and solutions, many of which leverage the work we are doing with generative AI. Generative AI brings more opportunity to drive better experiences for our customers and makes it easier to extract insights from our data. It can also help us increase internal productivity. We are working on many Gen AI use cases today to do just that. For example, we recently announced Shopping Muse. Shopping Muse uses generative AI to offer a conversational shopping tool that recreates the in-store human experience online and can translate consumers’ colloquial language into tailored recommendations. Another example is Mastercard Small Business AI. The tool will draw on our existing small business resources, along with content from a newly formed global media coalition, to help business owners navigate a range of business challenges. The platform, which is scheduled for pilot launch later this year, will leverage AI to provide personalized real-time assistance delivered in a conversational tone.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management launched a number of AI features – including a summary of customer reviews, a summary of product functions, push notifications about items left unpurchased in shopping carts, and capabilities for sellers to create coupons and answer buyer questions quickly – in 2023 for the ecommerce business

In 2023, we launched capabilities that enable sellers to create their own promotional coupons and answer buyer questions more quickly with the assistance of artificial intelligence…

…AI based features are already an integral part of the MELI experience, with many innovations launched in 2023, including: 

  • A summary of customer reviews on the product pages that concentrates the main feedback from buyers of that product.
  • On beauty product pages a summary of product functions and characteristics is automatically created to facilitate buyers choices.
  • Push notifications about items left unpurchased in shopping carts are now highly personalized and remind users why they may have chosen to buy a particular product.
  • We have also added an AI feature that helps sellers to respond to questions by preparing answers that sellers can send immediately, or edit quickly. 

Meta Platforms (NASDAQ: META)

The major goal of Meta’s management is for the company to have (1) a world-class AI assistant for all users, (2) an AI for each creator that their community can engage with, (3) an AI agent for every business, and (4) state-of-the-art open-source models for developers

Now moving forward, a major goal, we’ll be building the most popular and most advanced AI products and services. And if we succeed, everyone who uses our services will have a world-class AI assistant to help get things done, every creator will have an AI that their community can engage with, every business will have an AI that their customers can interact with to buy goods and get support, and every developer will have a state-of-the-art open-source model to build with.

Meta’s management thinks consumers will want a new AI-powered computing device that can see and hear what we are seeing and hearing, and this new computing device will be smart glasses, and will require full general intelligence; Meta has been conducting research on general intelligence for more than a decade, but it will now also incorporate general intelligence into product work – management thinks having product-targets when developing general intelligence helps to focus the work

I also think that everyone will want a new category of computing devices that let you frictionlessly interact with AIs that can see what you see and hear what you hear, like smart glasses. And one thing that became clear to me in the last year is that this next generation of services requires building full general intelligence. Previously, I thought that because many of the tools were social-, commerce- or maybe media-oriented that it might be possible to deliver these products by solving only a subset of AI’s challenges. But now it’s clear that we’re going to need our models to be able to reason, plan, code, remember and many other cognitive abilities in order to provide the best versions of the services that we envision. We’ve been working on general intelligence research and FAIR for more than a decade. But now general intelligence will be the theme of our product work as well…

…We’ve worked on general intelligence in our lab, FAIR, for more than a decade, as I mentioned, and we produced a lot of valuable work. But having clear product targets for delivering general intelligence really focuses this work and helps us build the leading research program.

Meta’s management believes the company has world-class compute infrastructure; Meta will end 2024 with 600,000 H100 (NVIDIA’s state-of-the-art AI chip) equivalents of compute; Meta is coming up with new data centre and chip designs customised for its own needs

The first is world-class compute infrastructure. I recently shared that, by the end of this year, we’ll have about 350,000 H100s, and including other GPUs, that will be around 600,000 H100 equivalents of compute…

…In order to build the most advanced clusters, we’re also designing novel data centers and designing our own custom silicons specialized for our workloads.

Meta’s management thinks that future AI models will be even more compute-intensive to train and run inference; management does not know exactly how much the compute this will be, but recognises that the trend has been of AI models requiring 10x more compute for each new generation, so management expects Meta to require growing infrastructure investments in the years ahead for its AI work

Now going forward, we think that training and operating future models will be even more compute-intensive. We don’t have a clear expectation for exactly how much this will be yet, but the trend has been that state-of-the-art large language models have been trained on roughly 10x the amount of compute each year…

…While we are not providing guidance for years beyond 2024, we expect our ambitious long-term AI research and product development efforts will require growing infrastructure investments beyond this year.
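The 10x-per-generation rule of thumb that management cites compounds very quickly, which is the whole basis for expecting "growing infrastructure investments". A back-of-envelope sketch (the baseline of 1.0 is an arbitrary unit, not a Meta figure):

```python
# Back-of-envelope: if each model generation needs ~10x the training
# compute of the last, requirements grow by factor**n after n generations.
# The baseline of 1.0 is an arbitrary illustrative unit, not a Meta figure.

def compute_needed(baseline: float, generations: int, factor: float = 10.0) -> float:
    """Training compute after `generations` steps of `factor`-x growth."""
    return baseline * factor ** generations

# Three generations out, the same trend implies 1,000x today's compute:
print(compute_needed(1.0, 3))  # 1000.0
```

This is why management hedges on the exact figure while still guiding to rising capex: even large errors in the per-generation multiple leave the trajectory steeply upward.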

Meta’s approach with AI is to open-source its foundation models while keeping product-implementations proprietary; Meta’s management thinks open-sourcing brings a few key benefits, in that open source software (1) is safer and more compute-efficient, (2) can become the industry standard, and (3) attracts talented people; management intends to continue open-sourcing Meta’s AI models 

Our long-standing strategy has been to build an open-source general infrastructure while keeping our specific product implementations proprietary. In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far, as well as industry standard tools like PyTorch that we’ve developed…

…The short version is that open sourcing improves our models. And because there’s still significant work to turn our models into products because there will be other open-source models available anyway, we find that there are mostly advantages to being the open-source leader, and it doesn’t remove differentiation for our products much anyway. And more specifically, there are several strategic benefits.

First, open-source software is typically safer and more secure as well as more compute-efficient to operate due to all the ongoing feedback, scrutiny and development from the community. Now this is a big deal because safety is one of the most important issues in AI. Efficiency improvements and lowering the compute costs also benefit everyone, including us. Second, open-source software often becomes an industry standard. And when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products. That’s subtle, but the ability to learn and improve quickly is a huge advantage. And being an industry standard enables that. Third, open source is hugely popular with developers and researchers. And we know that people want to work on open systems that will be widely adopted. So this helps us recruit the best people at Meta, which is a very big deal for leading in any new technology area…

…This is why our long-standing strategy has been to open source general infrastructure and why I expect it to continue to be the right approach for us going forward.

Meta is already training the next generation of its foundational Llama model, Llama 3, and progress is good; Meta is also working on research for the next generations of Llama models with an eye on developing full general intelligence; Meta’s management thinks that the company’s next few generations of foundational AI models could be in a totally different direction from other AI companies

In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far…

…While we’re working on today’s products and models, we’re also working on the research that we need to advance for Llama 5, 6 and 7 in the coming years and beyond to develop full general intelligence…

…A lot of last year and the work that we’re doing with Llama 3 is basically making sure that we can scale our efforts to really produce state-of-the-art models. But once we get past that, there’s a lot more kind of different research that I think we’re going to be doing that’s going to take our foundation models in potentially different directions than other players in the industry are going to go in because we’re focused on specific vision for what we’re building. So it’s really important as we think about what’s going to be in Llama 5 or 6 or 7 and what cognitive abilities we want in there and what modalities we want to build into future multimodal versions of the models.

Meta’s management sees unique feedback loops for the company’s AI work that involve both data and usage of its products; the feedback loops have been important in how Meta improved its AI systems for Reels and ads

When people think about data, they typically think about the corpus that you might use to train a model upfront. And on Facebook and Instagram, there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the common crawl data set. And people share large numbers of public text posts and comments across our services as well. But even more important in the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products. And this feedback is a big part of how we’ve improved our AI systems so quickly with Reels and Ads, especially over the last couple of years when we had to re-architect it around new rules.

Meta’s management wants hiring-growth in AI-related roles for 2024

AI is a growing area of investment for us in 2024 as we hire to support our road map…

…Second, we anticipate growth in payroll expenses as we work down our current hiring underrun and add incremental talent to support priority areas in 2024, which we expect will further shift our workforce composition toward higher-cost technical roles.

Meta’s management fully rolled out the Meta AI assistant and other AI chat experiences in the US at the end of 2023 and has begun testing generative AI features in the company’s Family of Apps; Meta’s focus in 2024 regarding generative AI is on launching Llama 3, making the Meta AI assistant useful, and improving AI Studio

With generative AI, we fully rolled out our Meta AI assistant and other AI chat experiences in the U.S. at the end of the year and began testing more than 20 GenAI features across our Family of Apps. Our big areas of focus in 2024 will be working towards the launch of Llama 3, expanding the usefulness of our Meta AI assistant and progressing on our AI Studio road map to make it easier for anyone to create an AI. 

Meta has been using AI to improve its marketing performance; Advantage+ is helping advertisers partially or fully automate the creation of ad campaigns; Meta has rolled out generative AI features to help advertisers with changing text and images in their ad campaigns – adoption of the features is strong and tests show promising performance gains, and Meta has a big focus in this area in 2024

We continue to leverage AI across our ad systems and product suite. We’re delivering continued performance gains from ranking improvements as we adopt larger and more advanced models, and this will remain an ongoing area of investment in 2024. We’re also building out our Advantage+ portfolio of solutions to help advertisers leverage AI to automate their advertising campaigns. Advertisers can choose to automate part of the campaign creation setup process, such as who to show their ad to with Advantage+ audience, or they can automate their campaign completely using Advantage+ shopping, which continues to see strong growth. We’re also now exploring ways to apply this end-to-end automation to new objectives. On the ads creative side, we completed the global rollout of 2 of our generative AI features in Q4, Text Variations and Image Expansion, and plan to broaden availability of our background generation feature later in Q1. Initial adoption of these features has been strong, and tests are showing promising early performance gains. This will remain a big area of focus for us in 2024…

…So we’re really scaling our Advantage+ suites across all of the different offerings there, which really helped to automate the ads creation process for different types of advertisers. And we’re getting very strong feedback on all of those different features, advantage+ Shopping, obviously, being the first, but Advantage+ Catalog, Advantage+ Creative, Advantage+ Audiences, et cetera. So we feel like these are all really important parts of what has continued to grow improvements in our Ads business and will continue to going forward.

Meta’s management’s guidance for 2024 capital expenditure is a slight increase from prior guidance (for perspective, 2023’s capex was $27.27 billion), driven by increased investments in servers and data centres for AI-related work

Turning now to the CapEx outlook. We anticipate our full year 2024 capital expenditures will be in the range of $30 billion to $37 billion, a $2 billion increase of the high end of our prior range. We expect growth will be driven by investments in servers, including both AI and non-AI hardware, and data centers as we ramp up construction on sites with our previously announced new data center architecture.

Meta’s management thinks AI will make all of the company’s products and services better, but is unsure how the details will play out

I do think that AI is going to make all of the products and services that we use and make better. So it’s hard to know exactly how that will play out. 

Meta’s management does not expect the company’s generative AI products to be a meaningful driver of revenue in 2024, but they expect the products to be meaningful contributors over time

We don’t expect our GenAI products to be a meaningful 2024 driver of revenue. But we certainly expect that they will have the potential to be meaningful contributors over time.

Microsoft (NASDAQ: MSFT)

Microsoft is now applying AI at scale, across its entire tech stack, and this is helping the company win customers

We have moved from talking about AI to applying AI at scale. By infusing AI across every layer of our tech stack, we are winning new customers and helping drive new benefits and productivity gains.

Microsoft’s management thinks that Azure offers (1) the best AI training and inference performance, (2) the widest range of AI chips, including those from AMD, NVIDIA, and Microsoft, and (3) the best selection of foundational models, including LLMs and SLMs (small language models); Azure AI now has 53,000 customers and more than 33% are new to Azure; Azure allows developers to deploy LLMs without managing underlying infrastructure

Azure offers the top performance for AI training and inference and the most diverse selection of AI accelerators, including the latest from AMD and NVIDIA as well as our own first-party silicon, Azure Maia. And with Azure AI, we provide access to the best selection of foundation and open source models, including both LLMs and SLMs all integrated deeply with infrastructure, data and tools on Azure. We now have 53,000 Azure AI customers. Over 1/3 are new to Azure over the past 12 months. Our new models of service offering makes it easy for developers to use LLMs from our partners like Cohere, Meta and Mistral on Azure without having to manage underlying infrastructure.

Azure revenue grew by 30% in 2023 Q4, with six percentage points of growth from AI services; most of those six points were driven by Azure OpenAI

Azure and other cloud services revenue grew 30% and 28% in constant currency, including 6 points of growth from AI services. Both AI and non-AI Azure services drove our outperformance…

…Yes, Azure OpenAI and then OpenAI’s own APIs on top of Azure would be the sort of the major drivers. But there’s a lot of the small batch training that goes on, whether it’s out of [indiscernible] or fine-tuning. And then a lot of people who are starting to use models as a service with all the other new models. But it’s predominantly Azure OpenAI today.

Microsoft’s management believes the company has built the world’s most popular SLMs; the SLMs have similar performance to larger models, but can run on laptops and mobile devices; both startups and established companies are exploring the use of Microsoft’s Phi SLM for applications

We have also built the world’s most popular SLMs, which offer performance comparable to larger models but are small enough to run on a laptop or mobile device. Anchor, Ashley, AT&T, EY and Thomson Reuters, for example, are all already exploring how to use our SLM, Phi, for their applications. 

Microsoft has added OpenAI’s latest models to the Azure OpenAI Service; Azure OpenAI is seeing increased usage from AI-first start-ups, and more than 50% of Fortune 500 companies are using it

And we have great momentum with Azure OpenAI Service. This quarter, we added support for OpenAI’s latest models, including GPT-4 Turbo, GPT-4 with Vision, DALL-E 3 as well as fine-tuning. We are seeing increased usage from AI-first start-ups like Moveworks, Perplexity, SymphonyAI as well as some of the world’s largest companies. Over half of the Fortune 500 use Azure OpenAI today, including Ally Financial, Coca-Cola and Rockwell Automation. For example, at CES this month, Walmart shared how it’s using Azure OpenAI Service along with its own proprietary data and models to streamline how more than 50,000 associates work and transform how its millions of customers shop.

Microsoft’s management is integrating AI across the company’s entire data stack; Cosmos DB, which has built-in vector search capabilities, is used by companies as a database for AI apps; KPMG, with the help of Cosmos DB, has seen up to a 50% increase in productivity for its consultants; Azure AI Search provides hybrid search that goes beyond vector search, and OpenAI is using it for ChatGPT

We are integrating the power of AI across the entire data stack. Our Microsoft Intelligent Data Platform brings together operational databases, analytics, governance and AI to help organizations simplify and consolidate their data estates. Cosmos DB is the go-to database to build AI-powered apps at any scale, powering workloads for companies in every industry from AXA and Kohl’s to Mitsubishi and TomTom. KPMG, for example, has used Cosmos DB, including its built-in native vector search capabilities, along with Azure OpenAI Service to power an AI assistant, which it credits with driving an up to 50% increase in productivity for its consultants… And for those organizations who want to go beyond in-database vector search, Azure AI Search offers the best hybrid search solution. OpenAI is using it for retrieval augmented generation as part of ChatGPT.
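The retrieval-augmented-generation pattern described above (vector search over a document store feeding context into an LLM) can be sketched with a toy in-memory retriever. The documents and four-dimensional embeddings below are entirely hypothetical; a real deployment would use Cosmos DB’s native vector search or Azure AI Search to produce the ranked results:

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

# Hypothetical 4-dimensional embeddings for three documents.
docs = ["Q3 revenue summary", "Audit checklist", "Travel policy"]
doc_vecs = np.array([[0.9, 0.1, 0.0, 0.1],
                     [0.1, 0.8, 0.1, 0.0],
                     [0.0, 0.1, 0.9, 0.2]])
query_vec = np.array([0.8, 0.2, 0.1, 0.0])  # toy embedding of "revenue last quarter"

top = cosine_top_k(query_vec, doc_vecs, k=1)
print(docs[top[0]])  # → "Q3 revenue summary", the context passed to the LLM prompt
```

The retrieved document is then prepended to the LLM prompt, which is the "retrieval augmented generation" step OpenAI refers to.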

There are now more than 1.3 million GitHub Copilot subscribers, up 30% sequentially; more than 50,000 organisations use GitHub Copilot Business and Accenture alone will roll out GitHub Copilot to 50,000 of its developers in 2024; Microsoft’s management thinks GitHub Copilot is a core product for anybody who is working in software development

GitHub revenue accelerated to over 40% year-over-year, driven by all our platform growth and adoption of GitHub Copilot, the world’s most widely deployed AI developer tool. We now have over 1.3 million paid GitHub Copilot subscribers, up 30% quarter-over-quarter. And more than 50,000 organizations use GitHub Copilot Business to supercharge the productivity of their developers from digital natives like Etsy and HelloFresh to leading enterprises like Autodesk, Dell Technologies and Goldman Sachs. Accenture alone will roll out GitHub Copilot to 50,000 of its developers this year…

…Everybody had talked it’s become — it is the 1 place where it’s becoming standard issue for any developer. It’s like if you take away spellcheck from Word, I’ll be unemployable. And similarly, it will be like I think GitHub Copilot becomes core to anybody who is doing software development…

To increase GitHub Copilot’s ARPU (average revenue per user), and the ARPUs of other Copilots for that matter, Microsoft’s management will lean on the improvement that the Copilots bring to a company’s operating leverage and ask for a greater share of value

Our ARPUs have been great but they’re pretty low. But frankly, even though we’ve had a lot of success, it’s not like we are a high-priced ARPU company. I think what you’re going to start finding is, whether it’s Sales Copilot or Service Copilot or GitHub Copilot or Security Copilot, they are going to fundamentally capture some of the value they drive in terms of the productivity of the OpEx, right? So it’s like 2 points, 3 points of OpEx leverage would go to some software spend. I think that’s a pretty straightforward value equation. And so that’s the first time. I mean, this is not something we’ve been able to make the case for before, whereas now I think we have that case.

Then even the horizontal Copilot is what Amy was talking about, which is at the Office 365 or Microsoft 365 level. Even there, you can make the same argument. Whatever ARPU we may have with E5, now you can say incrementally as a percentage of the OpEx, how much would you pay for a Copilot to give you more time savings, for example. And so yes, I think all up, I do see this as a new vector for us in what I’ll call the next phase of knowledge work and frontline work even and their productivity and how we participate.

And I think GitHub Copilot, I never thought of the tools business as fundamentally participating in the operating expenses of a company’s spend on, let’s say, development activity. And now you’re seeing that transition. It’s just not tools. It’s about productivity of your dev team.

Microsoft’s own research and external studies show that companies can see up to a 70% increase in productivity by using generative AI for specific tasks; early users of Copilot for Microsoft 365 became 29% faster in a number of tasks

Our own research as well as external studies show as much as 70% improvement in productivity using generative AI for specific work tasks. And overall, early Copilot for Microsoft 365 users were 29% faster in a series of tasks like searching, writing and summarizing.

Microsoft’s management believes that AI will become a first-class part of every personal computer (PC) in 2024

In 2024, AI will become first-class part of every PC. Windows PCs with built-in neural processing units were front and center at CES, unlocking new AI experiences to make what you do on your PC easier and faster, from searching for answers and summarizing e-mails to optimizing performance in battery efficiency. Copilot in Windows is already available on more than 75 million Windows 10 and Windows 11 PCs. And with our new Copilot Key, the first significant change to the Windows Keyboard in 30 years, providing one-click access.

Microsoft’s management thinks that AI is transforming the company’s search and browser experience; more than 5 billion images have been created and more than 5 billion chats conducted on Microsoft’s services to-date, with both doubling sequentially; Bing and Edge both took share in 2023 Q4

And more broadly, AI is transforming our search and browser experience. We are encouraged by the momentum. Earlier this month, we achieved a new milestone with 5 billion images created and 5 billion chats conducted to date, both doubling quarter-over-quarter and both Bing and Edge took share this quarter.

Microsoft’s management expects the company’s capital expenditure to increase materially in the next quarter because of cloud and AI infrastructure investments; management’s commitment to increase infrastructure investments is guided by customer demand and what they see as a substantial market opportunity; management feels good about where Microsoft is in terms of adding infrastructure capacity to meet AI computing demand

We expect capital expenditures to increase materially on a sequential basis, driven by investments in our cloud and AI infrastructure and the slip of a delivery date from Q2 to Q3 from a third-party provider noted earlier. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-out…

…Our commitment to scaling our cloud and AI investment is guided by customer demand and a substantial market opportunity. As we scale these investments, we remain focused on driving efficiencies across every layer of our tech stack and disciplined cost management across every team…

…I think we feel really good about where we have been in terms of adding capacity. You started to see the acceleration in our capital expense starting almost a year ago, and you’ve seen it scale through that process.

Microsoft’s management is seeing that most of the AI activity taking place on Azure is for inference

[Question] On AI, where are we in the journey from training driving most of the Azure AI usage to inferencing?

[Answer] What you’ve seen for most part is all inferencing. So none of the large model training stuff is in any of our either numbers at all. Small batch training, so somebody is doing fine-tuning or what have you, that will be there but that’s sort of a minor part. So most of what you see in the Azure number is broadly inferencing.

New AI workloads on Azure start with selecting a frontier model, then fine-tuning that model, then running inference

The new workload in AI obviously, in our case, it starts with 1 of the frontier — I mean, starts with the frontier model, Azure OpenAI. But it’s not just about just 1 model, right? So you — first, you take that model, you do all that jazz, you may do some fine-tuning. You do retrieval, which means you’re sort of either getting some storage meter or you’re eating some compute meters. And so — and by the way, there’s still a large model to a small model and that would be a training perhaps, but that’s a small batch training that uses essentially inference infrastructure. So I think that’s what’s happening. 

Microsoft’s management believes that generative AI will change the entire tech stack, down to the core computer architecture; one such change is to separate data storage from compute, as in the case of one of Microsoft’s newer services, Fabric

[Question] Cloud computing changed the tech stack in ways that we could not imagine 10 years back. The nature of the database layer, the operating system, every layer just changed dramatically. How do you foresee generative AI changing the tech stack as we know it?

[Answer] I think it’s going to have a very, very foundational impact. In fact, you could say the core compute architecture itself changes, everything from power density to the data center design to what used to be the accelerator now is that sort of the main CPU, so to speak, or the main compute unit. And so I think — and the network, the memory architecture, all of it. So the core computer architecture changes, I think every workload changes. And so yes, so there’s a full — like take our data layer.

The most exciting thing for me in the last year has been to see how our data layer has evolved to be built for AI, right? If you think about Fabric, one of the genius of Fabric is to be able to say, let’s separate out storage from the compute layer. In compute, we’ll have traditional SQLs, we’ll have spark. And by the way, you can have an Azure AI drop on top of the same data lake, so to speak, or the lake house pattern. And then the business model, you can combine all of those different computes. So that’s the type of compute architecture. So it’s sort of a — so that’s just 1 example…

… I do believe being in the cloud has been very helpful to build AI. But now AI is just redefining what it means to have — what the cloud looks like, both at the infrastructure level and the app model.

Microsoft’s management is seeing a few big use cases emerging within Microsoft 365 Copilot: Summarisation of meetings and documents; “chatting” with documents and texts of past communications; and creation and completion of documents

In terms of what we’re seeing, it’s actually interesting if you look at the data we have, summarization, that’s what it’s like, number one, like I’m doing summarizations of teams, meetings inside of teams during the meeting, after the meeting, Word documents, summarization. I get something in e-mail, I’m summarizing. So summarization has become a big deal. Drafts, right? You’re drafting e-mails, drafting documents. So anytime you want to start something, the blank page thing goes away and you start by prompting and drafting.

Chat. To me, the most powerful feature is now you have the most important database in your company, which happens to be the database of your documents and communications, is now query-able by natural language in a powerful way, right? I can go and say, what are all the things Amy said I should be watching out for next quarter? And it will come out with great detail. And so chat, summarization, draft.

Also, by the way, actions, one of the most used things is, here’s a Word document. Go complete — I mean, create a PowerPoint for me. So those are the stuff that’s also beginning.

Microsoft’s management is seeing strong engagement growth with Microsoft 365 Copilot that gives them optimism

And the other thing I would add, we always talk about in enterprise software, you sell software, then you wait and then it gets deployed. And then after deployment, you want to see usage. And in particular, what we’ve seen and you would expect this in some ways with Copilot, even in the early stages, obviously, deployment happens very quickly. But really what we’re seeing is engagement growth. To Satya’s point on how you learn and your behavior changes, you see engagement grow with time. And so I think those are — just to put a pin on that because it’s an important dynamic when we think about the optimism you hear from us.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that companies are starting to build the next generation of AI data centres; this next generation of AI data centres takes in raw data and transforms it into tokens, which are the outputs of AI models

At the same time, companies have started to build the next generation of modern Data Centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI…

…A whole new industry in the sense that for the very first time, a Data Center is not just about computing data and storing data and serving the employees of the company. We now have a new type of Data Center that is about AI generation, an AI generation factory, and you’ve heard me describe it as AI factories. But basically, it takes raw material, which is data. It transforms it with these AI supercomputers that NVIDIA built, and it turns them into incredibly valuable tokens. These tokens are what people experience on the amazing ChatGPT or Midjourney or search these days are augmented by that. All of your recommender systems are now augmented by that, the hyper-personalization that goes along with it. All of these incredible start-ups in digital biology generating proteins and generating chemicals and the list goes on. And so all of these tokens are generated in a very specialized type of Data Center. And this Data Center, we call it AI supercomputers and AI generation factories.

Nvidia’s management is seeing very strong demand for the company’s Hopper AI chips and expects demand to far outstrip supply

Demand for Hopper remains very strong. We expect our next generation products to be supply constrained as demand far exceeds supply…

…However, whenever we have new products, as you know, it ramps from 0 to a very large number, and you can’t do that overnight. Everything is ramped up. It doesn’t step up. And so whenever we have a new generation of products and right now, we are ramping H200s, there’s no way we can reasonably keep up on demand in the short term as we ramp. 

Nvidia’s outstanding 2023 Q4 growth in Data Center revenue was driven by both training and inference of AI models; management estimates that 40% of Nvidia’s Data Center revenue in 2023 was for AI inference; the 40% estimate might even be understated, because recommendation systems that were driven by CPU approaches are now being driven by GPUs

In the fourth quarter, Data Center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-on-year…

…Fourth quarter Data Center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our Data Center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year, approximately 40% of Data Center revenue was for AI inference…

…The estimate is probably understated and — but we estimated it, and let me tell you why. Whenever — a year ago, a year ago, the recommender systems that people are — when you run the Internet, the news, the videos, the music, the products that are being recommended to you because, as you know, the Internet has trillions — I don’t know how many trillions, but trillions of things out there, and your phone is 3 inches squared. And so the ability for them to fit all of that information down to something such a small real estate is through a system, an amazing system called recommender systems.

These recommender systems used to be all based on CPU approaches. But the recent migration to deep learning and now generative AI has really put these recommender systems now directly into the path of GPU acceleration. It needs GPU acceleration for the embeddings. It needs GPU acceleration for the nearest neighbor search. It needs GPU accelerating for reranking. And it needs GPU acceleration to generate the augmented information for you. So GPUs are in every single step of a recommender system now. And as you know, a recommender system is the single largest software engine on the planet. Almost every major company in the world has to run these large recommender systems. 
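The pipeline Huang describes (embeddings, nearest-neighbor search, reranking) is, at its core, a series of batched matrix operations, which is why it maps so naturally onto GPUs. A minimal NumPy sketch of the three stages, in which the item catalogue, the embedding dimension, and the "reranking model" are all toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: embeddings — each item and user is represented by a learned dense vector.
n_items, dim = 1000, 16
item_emb = rng.normal(size=(n_items, dim))
user_emb = rng.normal(size=dim)

# Stage 2: nearest-neighbor search — retrieve candidate items by dot-product score.
scores = item_emb @ user_emb
candidates = np.argpartition(scores, -50)[-50:]  # top-50 candidate items

# Stage 3: reranking — a second (here, toy) model rescores only the short list.
rerank_scores = scores[candidates] + rng.normal(scale=0.1, size=50)
ranked = candidates[np.argsort(rerank_scores)[::-1]]

print(ranked[:10])  # the 10 items that would be shown to the user
```

Every stage is a matrix multiply or a sort over a batch, and production systems run these over billions of items, which is the workload Huang says has migrated from CPUs to GPUs.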

Nvidia’s management is seeing that all industries are deploying AI solutions

Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale…

…One of the most notable trends over the past year is the significant adoption of AI by enterprises across the industry verticals such as Automotive, health care, and financial services.

Large cloud providers accounted for more than half of Nvidia’s Data Center revenue in 2023 Q4

In the fourth quarter, large cloud providers represented more than half of our Data Center revenue, supporting both internal workloads and external public cloud customers. 

Nvidia’s management is finding that consumer internet companies have been early adopters of AI and they are one of Nvidia’s largest customer categories; consumer internet companies are using AI (1) in content recommendation systems to boost user engagement and (2) to generate content for advertising and to help content creators

The consumer Internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates…

… In addition, consumer Internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.

Nvidia’s management is observing that enterprise software companies are using generative AI to help their customers with productivity and they are already seeing commercial success

Enterprise software companies are applying generative AI to help customers realize productivity gains. All the customers we’ve partnered with for both training and inference of generative AI are already seeing notable commercial success. ServiceNow’s generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP, and Snowflake.

Both enterprises and startups are building foundational large language models; these models serve specific languages, cultures, regions, and also industries

The field of foundation large language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.AI, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world’s many regions. And others are creating foundation models to address entirely different industries, like Recursion in pharmaceuticals and generative biomedicines for biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers.

Nvidia’s AI infrastructure is used for autonomous driving; the automotive vertical accounted for more than $1 billion of Nvidia’s Data Center revenue in 2023, and Nvidia’s management thinks the automotive vertical is a big growth opportunity for the company

We estimate that Data Center revenue contribution of the Automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions include systems and software for the development of autonomous driving, including data ingestion, curation, labeling, and AI training, plus validation through simulation. Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA’s AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. In effect, nearly every Automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA’s automotive Data Center processing demand to grow significantly…

…NVIDIA DRIVE Orin is the AI car computer of choice for software-defined AV fleet. Its successor, NVIDIA DRIVE Thor, designed for vision transformers offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring, and AI cockpit functionality and will be available next year. There were several automotive customer announcements this quarter. Li Auto, Great Wall Motor, ZEEKR, the premium EV subsidiary of Geely and Xiaomi EV all announced new vehicles built on NVIDIA.

Nvidia is developing AI solutions in the realm of healthcare

In health care, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices. We have built deep domain expertise in health care over the past decade, creating the NVIDIA Clara health care platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery. BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to the end-to-end drug discovery processes. We announced Recursion is making available for the proprietary AI model through BioNeMo for the drug discovery ecosystem.

Nvidia’s business in China is affected by the US government’s export restrictions concerning advanced AI chips; Nvidia has been building workarounds and have started shipping alternatives to China; Nvidia’s management expects China to remain a single-digit percentage of Data Center revenue in 2024 Q1; management thinks that while the US government wants to limit China’s access to leading-edge AI technology, it still wants to see Nvidia succeed in China

Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter…

…At the core, remember, the U.S. government wants to limit the latest capabilities of NVIDIA’s accelerated computing and AI to the Chinese market. And the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars, if you will, are the restrictions.

Nvidia’s management is seeing demand for AI infrastructure from countries become an additional growth-driver for the company

In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems…

…So we’re seeing sovereign AI infrastructure is being built in Japan, in Canada, in France, so many other regions. And so my expectation is that what is being experienced here in the United States, in the West will surely be replicated around the world. 

Nvidia is shipping its Hopper AI chips with InfiniBand networking; management believes that the combination of the company’s Hopper AI chips with InfiniBand has emerged as the de facto standard for AI infrastructure

The vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de facto standard for accelerated computing and AI infrastructure. 

Nvidia is on track to ramp shipments of the latest generation of its most advanced AI chips – the H200 – in 2024 Q2; the H200 has nearly double the inference performance of its predecessor, the H100

We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubled the inference performance of H100. 

Nvidia’s networking solutions have a revenue run-rate of more than $13 billion and the company’s Quantum InfiniBand solutions grew by more than five times in 2023 Q4 – but Nvidia is also working on its own Ethernet AI networking solution, Spectrum-X, which is purpose-built for AI and performs better than traditional Ethernet for AI workloads; Spectrum-X has attracted leading OEMs as partners, and Nvidia is on track to ship the solution in 2024 Q1; management still sees InfiniBand as the standard for AI-dedicated systems

Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year-on-year. NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructures. We are now entering the Ethernet networking space with the launch of our new Spectrum-X end-to-end offering designed for an AI-optimized networking for the Data Center. Spectrum-X introduces new technologies over Ethernet that are purpose-built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Leading OEMs, including Dell, HPE, Lenovo and Supermicro with their global sales channels are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter…

…InfiniBand is the standard for AI-dedicated systems. Ethernet with Spectrum-X, Ethernet is just not a very good scale-out system. But with Spectrum-X, we’ve augmented, layered on top of Ethernet, fundamental new capabilities like adaptive routing, congestion control, noise isolation or traffic isolation so that we could optimize Ethernet for AI. And so InfiniBand will be our AI-dedicated infrastructure, Spectrum-X will be our AI-optimized networking

Nvidia’s software and services offerings, which include the AI-training-as-a-service platform DGX Cloud, reached a $1 billion annualised revenue run rate in 2023 Q4; DGX Cloud is now available on all the major cloud service providers; Nvidia’s management believes that the company’s software business will become very significant over time, because of the importance of software when running AI-related hardware

We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. NVIDIA DGX Cloud will expand its list of partners to include Amazon’s AWS, joining Microsoft Azure, Google Cloud, and Oracle Cloud. DGX Cloud is used for NVIDIA’s own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners…

…And the way that we work with CSPs, that’s really easy. We have large teams that are working with their large teams. However, now that generative AI is enabling every enterprise and every enterprise software company to embrace accelerated computing, and when it is now essential to embrace accelerated computing because it is no longer possible, no longer likely anyhow, to sustain improved throughput through just general-purpose computing, all of these enterprise software companies and enterprise companies don’t have large engineering teams to be able to maintain and optimize their software stack to run across all of the world’s clouds and private clouds and on-prem.

So we are going to do the management, the optimization, the patching, the tuning, the installed base optimization for all of their software stacks. And we containerize them into our stack called NVIDIA AI Enterprise. And the way we go to market with it is think of that NVIDIA AI Enterprise now as a run time like an operating system. It’s an operating system for artificial intelligence. And we charge $4,500 per GPU per year. And my guess is that every enterprise in the world, every software enterprise company that are deploying software in all the clouds and private clouds and on-prem will run on NVIDIA AI Enterprise, especially obviously, for our GPUs. And so this is going to likely be a very significant business over time.

Nvidia’s gaming chips also have strong generative AI capabilities, leading to better gaming performance

At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start. NVIDIA AI Tensor Cores and the GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creating and everyday productivity. The rich software stack we offer with our RTX DPUs further accelerates AI. With our DLSS technologies, 7 out of 8 pixels can be AI-generated, resulting up to 4x faster ray tracing and better image quality. And with the TensorRT LLM for Windows, our open-source library that accelerates inference performance for the latest large language models, generative AI can run up to 5x faster on RTX AI PCs.

Nvidia has announced new gaming AI laptops from every major laptop manufacturer; Nvidia has more than 100 million RTX PCs in its installed base, and management thinks the company is in a good position to lead the next wave of generative AI applications that are coming to the personal computer

At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin and light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best performing AI PC platforms…

…NVIDIA is fueling the next wave of generative AI applications coming to the PC. With over 100 million RTX PCs in the installed base and over 500 AI-enabled PC applications and games, we are on our way.

Nvidia has a service that allows software developers to build state-of-the-art generative AI avatars

At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards. NVIDIA has an end-to-end platform for building and deploying generative AI applications for RTX PCs and workstations. This includes libraries, SDKs, tools and services developers can incorporate into their generative AI workloads.

Nvidia’s management believes that generative AI cannot be done on traditional general-purpose computing – it has to be done on an accelerated computing framework

With accelerated computing, you can dramatically improve your energy efficiency. You can dramatically improve your cost in data processing by 20:1, huge numbers. And of course, the speed. That speed is so incredible that we enabled a second industry-wide transition called generative AI. In generative AI, I’m sure we’re going to talk plenty about it during the call. But remember, generative AI is a new application. It is enabling a new way of doing software, new types of software being created. It is a new way of computing. You can’t do generative AI on traditional general-purpose computing. You have to accelerate it.

The hardware supply chain for Nvidia’s GPUs is improving; the components that go into an Nvidia GPU are really complex

Our supply is improving. Overall, our supply chain is just doing an incredible job for us. Everything from, of course, the wafers, the packaging, the memories, all of the power regulators to transceivers and networking and cables, and you name it, the list of components that we ship. As you know, people think that NVIDIA GPUs is like a chip, but the NVIDIA Hopper GPU has 35,000 parts. It weighs 70 pounds. These things are really complicated things we’ve built. People call it an AI supercomputer for good reason. If you ever look at the back of the Data Center, the systems, the cabling system is mind-boggling. It is the most dense, complex cabling system for networking the world has ever seen. Our InfiniBand business grew 5x year-over-year. The supply chain is really doing fantastic supporting us. And so overall, the supply is improving.

Nvidia’s management is allocating chips fairly to all of the company’s customers

CSPs have a very clear view of our product road map and transitions. And that transparency with our CSPs gives them the confidence of which products to place and where and when. And so they know the timing to the best of our ability, and they know quantities and, of course, allocation. We allocate fairly. We allocate fairly, do the best of our — best we can to allocate fairly and to avoid allocating unnecessarily.

Nvidia’s management is seeing a lot of activity emerging from robotics companies

There’s just a giant suite of robotics companies that are emerging. There are warehouse robotics to surgical robotics to humanoid robotics, all kinds of really interesting robotics companies, agriculture robotics companies.

Nvidia’s installed base of hardware has been able to support every single innovation in AI technology because it is programmable

NVIDIA is the only architecture that has gone from the very, very beginning, literally at the very beginning when CNNs and Alex Krizhevsky and Ilya Sutskever and Geoff Hinton first revealed AlexNet, all the way through RNNs to LSTMs to every RLs to deep RLs to transformers to every single version and every species that have come along, vision transformers, multi-modality transformers that every single — and now time sequence stuff. And every single variation, every single species of AI that has come along, we’ve been able to support it, optimize our stack for it and deploy it into our installed base…

… We simultaneously have this ability to bring software to the installed base and keep making it better and better and better. So our customers’ installed base is enriched over time with our new software…

…Don’t be surprised if in our future generation, all of a sudden, amazing breakthroughs in large language models were made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, we make giant breakthroughs on the other hand.

A big difference between accelerated computing and general-purpose computing is the importance of software in the former

As you know, accelerated computing is very different than general-purpose computing. You’re not starting from a program like C++. You compile it and things run on all your CPUs. The stacks of software necessary for every domain from data processing, SQL versus SQL structured data versus all the images and text and PDF, which is unstructured, to classical machine learning to computer vision to speech to large language models, all — recommender systems. All of these things require different software stacks. That’s the reason why NVIDIA has hundreds of libraries. If you don’t have software, you can’t open new markets. If you don’t have software, you can’t open and enable new applications. Software is fundamentally necessary for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. And now people understand that software is really key.

Nvidia’s management believes that generative AI has kicked off a massive new investment cycle for AI infrastructure

Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure of AI generation factories. We believe these two trends will drive a doubling of the world data center infrastructure installed base in the next 5 years and will represent an annual market opportunity in the hundreds of billions.

PayPal (NASDAQ: PYPL)

PayPal’s management will soon launch a new PayPal app that will utilise AI to personalise the shopping experience for consumers; management hopes to drive engagement with the app

This year, we’re launching and evolving a new PayPal app to create a situation. We will also leverage our merchant relationships and the power of AI to make the entire shopping experience personalized for consumers while giving them control over their data…

…The new checkout and app experiences we are rolling out this year will also create an engagement loop that will drive higher awareness of the various products we offer and drive higher adoption of our portfolio over time.

Shopify (NASDAQ: SHOP)

Shopify’s management launched nearly a dozen AI-powered tools through the Shopify Magic product suite in 2023, including tools for AI-generated product descriptions and an AI commerce assistant; in recent weeks, management launched AI product-image creation and editing tools within Shopify Magic; management will be introducing new modalities and text-to-image capabilities later this year

In 2023, we brought nearly a dozen AI-enabled tools through our Shopify Magic product suite. We’re one of the first platforms to bring AI-generated product descriptions to market and made solid progress towards building Sidekick, a first of its kind AI-enabled commerce assistant. As part of our winter edition a few weeks ago, we introduced new features to our Shopify Magic suite of AI tools. These new generative AI tools simplify and enhance product image editing directly within the product image editor in the Shopify admin. With Shopify Magic, merchants can now leverage AI to create stunning images and professional edits with just a few clicks or keywords, saving on cost and time. And given the significant advancements in AI in 2023, we plan to seize this enormous opportunity ahead of us and are excited to introduce new modalities and text to image capabilities to Shopify in 2024.

Shopify’s marketing paybacks have improved by over 30% with the help of AI

In terms of marketing, the 2 areas, in particular, where we are leaning in this quarter are performance marketing and point-of-sale. Within performance marketing, our team has unlocked some opportunities to reach potential customers at highly attractive LTV to CAC and paybacks. In fact, tactics that we’ve implemented on some channels earlier this year including through the enhanced use of AI and automation have improved paybacks by over 30%, enabling us to invest more into these channels while still maintaining our operating discipline on the underlying unit economics. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management has increased the company’s capital expenditure materially over the last few years to capture the growth opportunities associated with AI

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years. In the past few years, we have sharply increased our CapEx spending in preparation to capture or harvest the growth opportunities from HPC, AI and 5G megatrends.

TSMC’s management expects 2024 to be a healthy growth-year for the company with revenue growth in the low-to-mid 20s percentage range, driven by its 3nm technologies, 5nm technologies, and AI

Entering 2024, we forecast fabless semiconductor inventory to have returned to a [ handsome ] level exiting 2023. However, macroeconomic weakness and geopolitical uncertainties persist, potentially further weighing on consumer sentiment and the market demand. Having said that, our business has bottomed out on a year-over-year basis, and we expect 2024 to be a healthy growth year for TSMC, supported by continued strong ramp of our industry-leading 3-nanometer technologies, strong demand for the 5-nanometer technologies and robust AI-related demand.

TSMC’s management sees 2023 as the year that generative AI became important for the semiconductor industry, with TSMC as a key enabler; management thinks that the surge in AI-related demand in 2023 will drive an acceleration in structural demand for energy-efficient computing, and that AI will need to be supported by more powerful semiconductors – these are TSMC’s strengths

2023 was a challenging year for the global semiconductor industry, but we also witnessed the rising emergence of generative AI-related applications with TSMC as a key enabler…

…Despite the near-term challenges, our technology leadership enable TSMC to outperform the foundry industry in 2023, while we are positioning us to capture the future AI and high-performance computing-related growth opportunities…

…The surge in AI-related demand in 2023 supports our already strong conviction that the structural demand for energy-efficient computing will accelerate in an intelligent and connected world. TSMC is a key enabler of AI applications. No matter which approach is taken, AI technology is evolving to use more complex AI models as the amount of computation required for training and inference is increasing. As a result, AI models need to be supported by more powerful semiconductor hardware, which requires use of the most advanced semiconductor process technologies. Thus, the value of TSMC technology position is increasing, and we are all well positioned to capture the major portion of the market in terms of semiconductor component in AI. To address insatiable AI-related demand for energy-efficient computing power, customers rely on TSMC to provide the most leading edge processing technology at scale with a dependable and predictable cadence of technology offering.

Almost everyone important in AI is working with TSMC on its 2nm technologies

As process technology complexity increase, the engagement lead time with customers also started much earlier. Thus, almost all the AI innovators are working with TSMC, and we are observing a much higher level of customer interest and engagement at N2 as compared with N3 at a similar stage from both HPC and the smartphone applications.

TSMC’s management believes that the world has seen only the tip of the iceberg with AI

But on the other hand, AI is only in its nascent stage. Only last November, the first large language data is announced, ChatGPT announced. We only see the tip of the iceberg. 

TSMC’s management believes that the use of AI could accelerate scientific innovation in the field of semiconductor manufacturing

So I want to give the industry an optimistic note that even though 1 nanometer or sub 1 nanometer could be challenging, but we have a new technology capability using AI to accelerate the innovation in science.

TSMC’s management still believes that its narrowly-defined AI business will grow at 50% annually; management also sees AI application process chips making up a high-teens weightage of TSMC’s revenue by 2027, up from a low-teens weightage mentioned in the 2023 second-quarter earnings call, because of a sudden increase in demand

But for TSMC, we look at ours here, the AI’s a CAGR, that’s the growth rate every year, it’s about 50%. And we are confident that we can capture more opportunities in the future. So that’s what we said that up to 2027, we are going to have high teens of the revenue from a very narrow, we defined the AI application process, not to mention about the networking, not to mention about all others, okay?…

…[Question] You mentioned that we have a very narrow definition, we call server AI processor contribution and that you said it can be high teens in 5 years’ time because the last time, we said low teens.

[Answer] The demand suddenly being increased since last — I think, last year, the first quarter up to March or April, when ChatGPT become popular, so customers respond quickly and asked TSMC to prepare the capacity, both in front end and the back end. And that’s why we have confidence that this AI’s revenue will increase. We only narrowed down to the AI application process, by the way. So we look at ours here, that we prepare the technology and the capacities in both our front end and also back end. And so we — it’s in the early stage so far today. We already see the increase, the momentum. And we expect — if you guys continue to track this one, the number will increase. I have confidence to say that, although I don’t know how much.

TSMC’s management is seeing AI chips being placed in edge devices such as smartphones and PCs

And to further extend our value, actually, all the edge device, including smartphone, including the PC, they start to put the AI’s application inside. They have some kind of a neural process, for example, so the silicon content will be greatly increased. 

Tesla (NASDAQ: TSLA)

Tesla has released version 12 of its FSD (Full Self-Driving) software, which is powered end-to-end by AI (artificial intelligence); Tesla will soon release it to over 400,000 vehicles in North America; FSD v12 is the first time AI has been used for pathfinding and vehicle controls, and within it, neural nets replaced 330,000 lines of C++ code

For full self-driving, we’ve released version 12, which is a complete architectural rewrite compared to prior versions. This is end-to-end artificial intelligence. So [ nothing but ] nets basically, photons in and controls out. And it really is quite a profound difference. This is currently just with employees and a few customers, but we will be rolling out to all who — all those customers in the U.S. who request full self-driving in the weeks to come. That’s over 400,000 vehicles in North America. So this is the first time AI has been used not just for object perception but for pathfinding and vehicle controls. We replaced 330,000 lines of C++ code with neural nets. It’s really quite remarkable.

Tesla’s management believes that Tesla is the world’s most efficient company at AI inference because the company, out of necessity, has had to wring the most performance out of its Hardware 3 computer, which is several years old

I think Tesla is probably the most efficient company in the world for AI inference. Out of necessity, we’ve actually had to be extremely good at getting the most out of hardware because hardware 3 at this point is several years old. So I don’t — I think we’re quite far ahead of any other company in the world in terms of AI inference efficiency, which is going to be a very important metric in the future in many arenas.

Tesla’s management thinks that the AI technologies the company has developed for vehicles translate well to a humanoid robot (Optimus); Tesla’s vehicles and Optimus both have the same inference computers

And the technologies that we — the AI technologies we’ve developed for the car translate quite well to a humanoid robot because the car is just a robot on 4 wheels. Tesla is arguably already the biggest robot maker in the world. It’s just a 4-wheeled robot. So Optimus is a robot with — a humanoid robot with arms and legs, just by far the most sophisticated humanoid robot that’s being developed anywhere in the world…

…As we improve the technology in the car, we improve the technology in Optimus at the same time. It runs the same AI inference computer that’s on the car, same training technology. I mean we’re really building the future. I mean the Optimus lab looks like the set of Westworld, but admittedly, that was not a super utopian situation.

Tesla’s management is hedging their bets for the company’s FSD-related chips with Nvidia’s GPUs while also pursuing Dojo (Tesla’s own AI chip design)

[Question] As a follow-up, your release does not mention Dojo, so if you could just provide us an update on where Dojo stands and at what point do you expect Dojo to be a resource in improving FSD. Or do you think that you now have sufficient supply of NVIDIA GPUs needed for the training of the system?

[Answer] I mean the AI part of your question is — that is a deep one. So we’re obviously hedging our bets here with significant orders of NVIDIA GPUs…

…And we’re pursuing the dual path of NVIDIA and Dojo.

Tesla’s management believes that Tesla’s progress in self-driving is limited by training and that in AI, the more training is done on the model, the less resources are required for inference

A lot of our progress in self-driving is training limited. Something that’s important with training, it’s much like a human. The more effort you put into training, the less effort you need in inference. So just like a person, if you train in a subject, sort of class, 10,000 hours, the less mental effort it takes to do something. If you remember when you first started to drive how much of your mental capacity it took to drive, it was — you had to be focused completely on driving. And after you’ve been driving for many years, it only takes a little bit of your mind to drive, and you can think about other things and still drive safely. So the more training you do, the more efficient it is at the inference level. So we do need a lot of training. And we’re pursuing the dual path of NVIDIA and Dojo.

Tesla’s management thinks that Dojo is a long shot – it has potential, but may not work out

But I would think of Dojo as a long shot. It’s a long shot worth taking because the payoff is potentially very high but it’s not something that is a high probability. It’s not like a sure thing at all. It’s a high risk, high payoff program. Dojo is working, and it is doing training jobs, so — and we are scaling it up. And we have plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot. So I think it’s got potential. I can’t emphasize enough, high risk, high payoff.

Tesla’s management thinks that the AI-inference hardware in Tesla’s vehicles may, at some point in the future, give the company more compute for generalised AI tasks than everyone else combined

There’s also our inference hardware in the car, so we’re now on what’s called Hardware 4, but it’s actually version 2 of the Tesla-designed AI inference chip. And we’re about to complete design of — the terminology is a bit confusing. About to complete design of Hardware 5, which is actually version 3 of the Tesla-designed chip because the version 1 was Mobileye. Version 2 was NVIDIA, and then version 3 was Tesla. So — and we’re making gigantic improvements from 1 — from Hardware 3 to 4 to 5. I mean there’s a potentially interesting play where when cars are not in use in the future, that the in-car computer can do generalized AI tasks, can run a sort of GPT4 or 3 or something like that. If you’ve got tens of millions of vehicles out there, even in a robotaxi scenario, whether in heavy use, maybe they’re used 50 out of 168 hours, that still leaves well over 100 hours of time available — of compute hours. Like it’s possible with the right architectural decisions that Tesla may, in the future, have more compute than everyone else combined.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management believes that in a post-cookie world, advertisers will have to depend on authentication, new approaches to identity, first-party data, and AI-driven relevance tools – Trade Desk’s tools help create the best outcome in this world

The post-cookie world is one that will combine authentication, new approaches to identity, first-party data activation and advanced AI-driven relevance tools, all to create a new identity fabric for the Internet that is so much more effective than cookies ever were. The Internet is being replumbed and our product offerings create the best outcome for all of the open Internet. 

AI optimisations are distributed across Kokai, which is Trade Desk’s new platform that recently went live; Kokai helps advertisers understand and score every ad impression, and allows advertisers to use an audience-first approach in campaigns

In particular, Kokai represents a completely new way to understand and score the relevance of every ad impression across all channels. It allows advertisers to use an audience-first approach to their campaigns, targeting their audiences wherever they are on the open Internet. Our AI optimizations, which are now distributed across the platform, help optimize every element of the ad purchase process. Kokai is now live, and similar to Next Wave and Solimar, it will scale over the next year.

Based on Trade Desk’s management’s interactions with customers, the use of AI to forecast the impacts that advertisers’ decisions will have on their ad spending is a part of Kokai that customers love

A big part of what they love, to answer your question about what are they most excited about, is we have streamlined our reporting. We’ve made it way faster. There are some reports that you just have to wait multiple minutes for it because they’re just so robust, and we found ways to accelerate that. We’ve also added AI throughout the platform, especially in forecasting. So it’s a little bit like if you were to make a hypothetical trade in a trading platform for equity and then us tell you what we think is going to happen to the price action in the next 10 minutes. So we’re showing them what the effects of their changes are going to be before they even make them so that they don’t make mistakes. Because sometimes what happens is people put out a campaign. They’ll put tight restrictions on it. They’ll hope that it spends, then they come back a day or 2 or even 3 later and then realize they made it so difficult with their combination of targeting and pricing for us to buy anything that they didn’t spend much money. Or the opposite because they spent more and it wasn’t as effective as they wanted. So helping them see all of that before they do anything helped.

Trade Desk’s management believes that the company is reinforcing itself as the adtech AI leader; Trade Desk has been using AI in its platform since 2016

We are reinforcing our position as the adtech AI leader. We’ve been embedding AI into our platform since 2016, so it’s nothing new to us. But now it’s being distributed across our platform so our clients can make even better choices among the 15 million ad impression opportunities a second and understand which of those ads are most relevant to their audience segments at any given time.

Wix (NASDAQ: WIX)

Wix’s management added new AI features in 2023 to help users create content more easily; the key AI features introduced include a chat bot, code assistant, and text and image creators

This year, we meaningfully extended an already impressive toolkit of AI capabilities to include new AI-powered features that will help Wix users create visual and written web content more easily, optimized design and content layout, right code and manage their website and businesses more efficiently. The key AI product introduced in the last year include an AI chat experience for businesses, responsive AI design, AI code assistant, AI Meta Tag Creators and AI text and image creators among several other AI design tools. 

Wix’s management recently released an AI site generator that can create a full-blown, tailored, ready-to-publish website based on user prompts; management believes that Wix is the first to launch such an AI site generator; the site generator has received fantastic feedback so far, and is a good starting point for creating a new website, but it is only at Version 1

We also recently released our AI site generator and have heard fantastic feedback so far. I believe this will be the first AI tool on the market that creates a full-blown, tailored and ready-to-publish website integrated with relevant business application based on user prompt…

… So we released what I would call version 1. It’s a great way for people to start with the website, meaning that you come in and you say, I’m a Spa in New York City and I specialize in some specific things. And we’ll — and AI will interview you on the — what makes your business unique, where are you located? How many people? Tell us about those people and the staff members. And as a result, we generate a website for you that is — has all the great content, right? And the content will be text and images. The other thing that then will actually get you to this experience where you can choose how you want to have the design look like. And the AI will generate different designs for you. So you can tell why I like this thing, I want a variation on that, I don’t like the colors, please change the colors or I want colors that are more professionals or I want color that are blue and yellow. And there I will do it for you.

On the other hand, you can also say, well, I don’t really like the design, can you generate something very different or generate a small variation of that, in many ways, a bit similar to Midjourney, what Midjourney is doing with the images, we are doing with a full-blown website. The result of that is something that is probably 70% of the website that you need to have on average, right, sometime it’s 95%, but sometimes it’s less than that. So it gives you an amazing way to start your website and shortened the amount of work that you need to do by about 70% to 80%. I think it’s fantastic and very exciting.

Wix’s management is seeing that the majority of the company’s new users today have adopted at least one AI tool and this has been a positive for Wix’s business

In fact, the majority of new users today are using at least 1 AI tool on the web creation journey. This has resulted in reduced friction and enhanced the creation experience for our users as well as increased conversion and improve monetization. 

Wix’s management expects AI to be a driver of Wix’s growth in 2024 and beyond

We expect our AI technology to be a significant driver of growth in 2024 and beyond…

…Third, as Avishai mentioned, uptick of the milestone AI initiatives of 2023 has been incredible, and we expect to see ramping conversion and monetization benefits from our entire AI toolkit for both self-creators and partners this year…

…But then again, also 2025 will be much better than 2024. I think that the first reason is definitely the launching new products. At the end of the day, we are a technology, a product company, and this is how we drive our growth, mostly from new features, some new products. And this is what we did in the past, and we will continue also to do in the future. So definitely, it’s coming from the partners business with launching Studio. It was a great launch for us. We see the traction in the market. We see the demand. We see how our agencies use it. I think, as you know, we mentioned a few times about the number of new accounts with more than 50% are new. I think that it’s — for us, it’s a great proxy to the fact that we are going to see much more that it would be significantly the major growth driver for us in the next few years. The second one is everything that we’ve done with AI, we see a tremendous results out of it, which we believe that we will continue into the next year. And as you know, as always, the third one is about trying to optimize our pricing strategy. And this is what we’ve done in the past, we’ll continue to do in the future. [indiscernible] both mentioned like a fourth reason, which is the overall demand that we see on a macro basis.

Wix’s management has been driving the company to use AI for internal processes; the internal AI tools include an open internal AI development platform that everyone at Wix can contribute to, and a generative AI conversational assistant for product teams in Wix; the internal AI tools have also helped Wix to save costs and improve its gross margin

We also leverage AI to improve many of our internal processes at Wix, especially research and development velocity. This includes an open internal AI deployment platform that allows everyone at Wix to contribute to building AI-driven user features in tandem. We also have a Gen AI-based platform dedicated to conversational assistants, which allows any product team at Wix to develop their own assistant tailored to specific user needs without having to start from scratch. With these platforms, we are able to develop and release high-quality AI-based features and tools efficiently and at scale…

…We ended 2023 with a total gross margin of 68%, an improvement of nearly 500 basis points compared to 2022. Throughout the year, we benefited from improved efficiencies in hosting and infrastructure costs and optimization of support costs, partially aided by integrating AI into our workflows. Creative Subscriptions gross margin expanded to 82% in 2023. And Business Solutions gross margin grew to 29% for the full year as we continue to benefit from improving margin and new [indiscernible].

Wix’s management believes that there can be double-digit growth for the company’s self-creators business in the long run partly because of AI products

And we mentioned that for self-creators in the long run, we believe that it will be double-digit growth just because of that, because it is the most affected by the macro environment, which we have already started to see improving. But then again, also the new products and AI are one of the examples of how we can bring increased conversion and also increase the growth of self-creators.

Zoom Video Communications (NASDAQ: ZM)

Zoom’s management launched Zoom AI Companion, a generative AI assistant, five months ago and it has been expanded to six Zoom products, all included at no extra cost to users; Zoom AI Companion now has 510,000 accounts enabled and has created 7.2 million meeting summaries

Zoom AI Companion, our generative AI assistant, empowers customers and employees with enhanced productivity, team effectiveness and skills. Since its launch only five months ago, we expanded AI Companion to six Zoom products, all included at no additional cost to licensed users…

…Zoom AI Companion has grown tremendously in just five months, with over 510,000 accounts enabled and 7.2 million meeting summaries created as of the close of FY ’24.

Zoom’s future roadmap for AI is guided by driving customer value

Our future roadmap for AI is 100% guided by driving customer value. We are hard at work developing new AI capabilities to help customers achieve their unique business objectives and we’ll have more to share in a month at Enterprise Connect.

Zoom’s Contact Center suite is an AI-first solution that includes AI Companion; Contact Center suite is winning in head-to-head competition against legacy incumbents

Our expanding Contact Center suite is a unified, AI-first solution that offers tremendous value to companies of all sizes seeking to strengthen customer relationships and deliver better outcomes. The base product includes AI Companion and our newly launched tiered pricing allows customers to add specialized CX capabilities such as AI Expert Assist, workforce management, quality management, virtual agent, and omnichannel support. Boosted by its expanding features, our contact center suite is beginning to win in head-to-head competition with the legacy incumbents.

Zoom Revenue Accelerator gained recognition from Forrester as an AI-powered tool for sales teams

Zoom Revenue Accelerator was recognized as a “Strong Performer” in The Forrester Wave™ in its first year of being covered – an amazing testament to its value as a powerful AI-enabled tool driving value for sales teams.

A financial services company, Convera, was attracted to Zoom’s products because of AI Companion

Finally, let me thank Convera, the World’s FX payments leader. Zoom Phone was the foundation of their Zoom engagement and from there they adopted the wider Zoom One platform in less than two years. Seeing the benefits of the tight integration of our products underpinned by AI Companion, they recently began to deeply leverage Zoom Team Chat in order to streamline their pre, during and post meeting communication all within the Zoom Platform.

Zoom is monetising AI on many fronts

We are monetizing AI on many fronts. You look at our Zoom AI Companion, right? So first of all, for our existing customers, because they all like the value we created, right, to generate meeting summary, meeting [indiscernible] and so on and so forth, because of that, we really do not — because customers, they’re also trying to reduce the cost. That’s why we do not charge the customers for those features. However, a lot of areas we can monetize. Take our AI Companion, for example. Enterprise customers, how to lever enterprise customer directionally, source data and also to build a tailored — the Zoom AI Companion for those customers, sort of like a customized Zoom AI Companion, we can monetize. And also look at all the services. Maybe I’ll just take Contact Center, for example. We are offering Zoom Virtual Agent, that’s one we can monetize. And recently, we announced 3 tiers of Zoom Contact Center product. The last one is per agent per month, we charge $149. The reason why, there are a few features. One of the features is Zoom Expert Assist, right? All those features are empowered by AI features.

Zoom’s AI-powered Virtual Agent was deployed internally and has saved Zoom 400,000 agent hours per month, and handled more than 90% of inbound inquiries; Zoom’s management believes that Zoom’s AI features help improve agent efficiency in companies’ contact centers

Zoom, we — internally, we deployed our Virtual Agent. Guess what? Every month, we saved 400,000 agent hours. And more than 90% of inbound inquiries can be done by our Virtual Agent driven by the AI technology…

…If you look at our Zoom Meeting product, right, customer discovered that Zoom AI Companion to help you with the meeting summary. And after they discovered that feature and they would like to adopt that, right? Contact Center, exact same thing. And like Virtual Agent, Zoom Expert Assist, right, leverage those AI features. Manager kind of knows what’s going on in real time and also — and the agent while can have the AI, to get a real-time in order base and any update about these customers. All those AI features can dramatically improve the agent efficiency, right? That’s the reason why it’s kind of — will not take a much longer time for those agents to realize the value of the AI features because it’s kind of very easy to use. And I think that in terms of adoption rate, I feel like Contact Center AI adoption rate even probably faster than the other — the core features, so — core services.

Zoom’s management is seeing that having AI features at no additional cost to customers helps the company to attract users to Zoom Team Chat

[Question] And for Eric, what’s causing customers to move over to the Zoom chat function and off your main competitor like Teams? Just further consolidation onto one platform? Or is it AI Companion playing a larger role here, especially as you guys are including it as opposed to $30, $35 a month?

[Answer] Customers, they see — using their chat solution, they want to use AI, right? I send you — James, I send you a message. I want to leverage AI, send a long message. However, if you use other solutions, sometimes, other solutions itself, even without AI, it’s not free, right? And in our case, not only do we have core functionalities, but also AI Companion built in also at no additional cost. I can use — for any users, customers, you already have a Meeting license, Zoom Team Chat already built in, right? All the core features, you can use the Zoom AI Companion in order to leverage AI — write a chat message and so on and so forth. It works so well at no additional cost. The total cost of ownership of the Zoom Team Chat is much better than any other team chat solutions.


 Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

Insights From Warren Buffett’s 2023 Shareholder’s Letter

There’s much to learn from Warren Buffett’s latest letter, including his thoughts on oil & gas companies and the electric utility industry.

One document I always look forward to reading around this time of the year is Warren Buffett’s annual Berkshire Hathaway shareholder’s letter. Over the weekend, Buffett published the 2023 edition. This letter is especially poignant because Buffett’s long-time right-hand man, the great Charlie Munger, passed away last November. Besides containing a touching eulogy from Buffett to Munger, the letter also had some fascinating insights from Buffett that I wish to document and share. 

Without further ado (emphases are Buffett’s)…

The actions of a wonderful partner 

Charlie never sought to take credit for his role as creator but instead let me take the bows and receive the accolades. In a way his relationship with me was part older brother, part loving father. Even when he knew he was right, he gave me the reins, and when I blundered he never – never – reminded me of my mistake. 

It’s hard to tell a good business from a bad one

Within capitalism, some businesses will flourish for a very long time while others will prove to be sinkholes. It’s harder than you would think to predict which will be the winners and losers. And those who tell you they know the answer are usually either self-delusional or snake-oil salesmen. 

Holding onto a great business – one that can deploy additional capital at high returns – for a long time is a recipe for building a great fortune

At Berkshire, we particularly favor the rare enterprise that can deploy additional capital at high returns in the future. Owning only one of these companies – and simply sitting tight – can deliver wealth almost beyond measure. Even heirs to such a holding can – ugh! – sometimes live a lifetime of leisure…

…You may be thinking that she put all of her money in Berkshire and then simply sat on it. But that’s not true. After starting a family in 1956, Bertie was active financially for 20 years: holding bonds, putting 1⁄3 of her funds in a publicly-held mutual fund and trading stocks with some frequency. Her potential remained unnoticed. 

Then, in 1980, when 46, and independent of any urgings from her brother, Bertie decided to make her move. Retaining only the mutual fund and Berkshire, she made no new trades during the next 43 years. During that period, she became very rich, even after making large philanthropic gifts (think nine figures). 

Berkshire’s size is now a heavy anchor on the company’s future growth rates

This combination of the two necessities I’ve described for acquiring businesses has for long been our goal in purchases and, for a while, we had an abundance of candidates to evaluate. If I missed one – and I missed plenty – another always came along.

Those days are long behind us; size did us in, though increased competition for purchases was also a factor.

Berkshire now has – by far – the largest GAAP net worth recorded by any American business. Record operating income and a strong stock market led to a yearend figure of $561 billion. The total GAAP net worth for the other 499 S&P companies – a who’s who of American business – was $8.9 trillion in 2022. (The 2023 number for the S&P has not yet been tallied but is unlikely to materially exceed $9.5 trillion.) 

By this measure, Berkshire now occupies nearly 6% of the universe in which it operates. Doubling our huge base is simply not possible within, say, a five-year period, particularly because we are highly averse to issuing shares (an act that immediately juices net worth)…

…All in all, we have no possibility of eye-popping performance…

…Our Japanese purchases began on July 4, 2019. Given Berkshire’s present size, building positions through open-market purchases takes a lot of patience and an extended period of “friendly” prices. The process is like turning a battleship. That is an important disadvantage which we did not face in our early days at Berkshire.  

Are there a dearth of large, great businesses outside of the USA? 

There remain only a handful of companies in this country capable of truly moving the needle at Berkshire, and they have been endlessly picked over by us and by others. Some we can value; some we can’t. And, if we can, they have to be attractively priced. Outside the U.S., there are essentially no candidates that are meaningful options for capital deployment at Berkshire.

Markets can occasionally throw up massive bargains because of external shocks

Occasionally, markets and/or the economy will cause stocks and bonds of some large and fundamentally good businesses to be strikingly mispriced. Indeed, markets can – and will – unpredictably seize up or even vanish as they did for four months in 1914 and for a few days in 2001.

Stock market participants today exhibit even more gambling-like behaviour than in the past

Though the stock market is massively larger than it was in our early years, today’s active participants are neither more emotionally stable nor better taught than when I was in school. For whatever reasons, markets now exhibit far more casino-like behavior than they did when I was young. The casino now resides in many homes and daily tempts the occupants.

Stock buybacks are only sensible if they are done at a discount to business-value

All stock repurchases should be price-dependent. What is sensible at a discount to business-value becomes stupid if done at a premium.

Does Occidental Petroleum play a strategic role in the long-term economic security of the USA?

At yearend, Berkshire owned 27.8% of Occidental Petroleum’s common shares and also owned warrants that, for more than five years, give us the option to materially increase our ownership at a fixed price. Though we very much like our ownership, as well as the option, Berkshire has no interest in purchasing or managing Occidental. We particularly like its vast oil and gas holdings in the United States, as well as its leadership in carbon-capture initiatives, though the economic feasibility of this technique has yet to be proven. Both of these activities are very much in our country’s interest.

Not so long ago, the U.S. was woefully dependent on foreign oil, and carbon capture had no meaningful constituency. Indeed, in 1975, U.S. production was eight million barrels of oil-equivalent per day (“BOEPD”), a level far short of the country’s needs. From the favorable energy position that facilitated the U.S. mobilization in World War II, the country had retreated to become heavily dependent on foreign – potentially unstable – suppliers. Further declines in oil production were predicted along with future increases in usage. 

For a long time, the pessimism appeared to be correct, with production falling to five million BOEPD by 2007. Meanwhile, the U.S. government created a Strategic Petroleum Reserve (“SPR”) in 1975 to alleviate – though not come close to eliminating – this erosion of American self-sufficiency.

And then – Hallelujah! – shale economics became feasible in 2011, and our energy dependency ended. Now, U.S. production is more than 13 million BOEPD, and OPEC no longer has the upper hand. Occidental itself has annual U.S. oil production that each year comes close to matching the entire inventory of the SPR. Our country would be very – very – nervous today if domestic production had remained at five million BOEPD, and it found itself hugely dependent on non-U.S. sources. At that level, the SPR would have been emptied within months if foreign oil became unavailable.

Under Vicki Hollub’s leadership, Occidental is doing the right things for both its country and its owners. 

Nobody knows what the price of oil would do in the short-term and the long-term

No one knows what oil prices will do over the next month, year, or decade.

Nobody can predict the movement of major currencies

Neither Greg nor I believe we can forecast market prices of major currencies. We also don’t believe we can hire anyone with this ability. Therefore, Berkshire has financed most of its Japanese position with the proceeds from ¥1.3 trillion of bonds.

Rail is a very cost-efficient way to move products around America, and railroads should continue to be an important asset for the USA for a long time to come

Rail is essential to America’s economic future. It is clearly the most efficient way – measured by cost, fuel usage and carbon intensity – of moving heavy materials to distant destinations. Trucking wins for short hauls, but many goods that Americans need must travel to customers many hundreds or even several thousands of miles away…

…A century from now, BNSF will continue to be a major asset of the country and of Berkshire. You can count on that.

Railroad companies gobble up capital, such that its owners have to spend way more on annual maintenance capital expenditure than depreciation – but this trait allowed Berkshire to acquire BNSF for far less than its replacement value

BNSF is the largest of six major rail systems that blanket North America. Our railroad carries its 23,759 miles of main track, 99 tunnels, 13,495 bridges, 7,521 locomotives and assorted other fixed assets at $70 billion on its balance sheet. But my guess is that it would cost at least $500 billion to replicate those assets and decades to complete the job.

BNSF must annually spend more than its depreciation charge to simply maintain its present level of business. This reality is bad for owners, whatever the industry in which they have invested, but it is particularly disadvantageous in capital-intensive industries.

At BNSF, the outlays in excess of GAAP depreciation charges since our purchase 14 years ago have totaled a staggering $22 billion or more than $1½ billion annually. Ouch! That sort of gap means BNSF dividends paid to Berkshire, its owner, will regularly fall considerably short of BNSF’s reported earnings unless we regularly increase the railroad’s debt. And that we do not intend to do.

Consequently, Berkshire is receiving an acceptable return on its purchase price, though less than it might appear, and also a pittance on the replacement value of the property. That’s no surprise to me or Berkshire’s board of directors. It explains why we could buy BNSF in 2010 at a small fraction of its replacement value.

Railroad companies are having trouble with hiring because of tough working conditions

An evolving problem is that a growing percentage of Americans are not looking for the difficult, and often lonely, employment conditions inherent in some rail operations. Engineers must deal with the fact that among an American population of 335 million, some forlorn or mentally-disturbed Americans are going to elect suicide by lying in front of a 100-car, extraordinarily heavy train that can’t be stopped in less than a mile or more. Would you like to be the helpless engineer? This trauma happens about once a day in North America; it is far more common in Europe and will always be with us.

American railroad companies are at times at the mercy of the US government when it comes to employees’ wages, and they are also required to carry products they would rather not

Wage negotiations in the rail industry can end up in the hands of the President and Congress. Additionally, American railroads are required to carry many dangerous products every day that the industry would much rather avoid. The words “common carrier” define railroad responsibilities.

Last year BNSF’s earnings declined more than I expected, as revenues fell. Though fuel costs also fell, wage increases, promulgated in Washington, were far beyond the country’s inflation goals. This differential may recur in future negotiations.

Has the electric utility industry in the USA become uninvestable because of a change in the authorities’ stance toward electric utilities?

For more than a century, electric utilities raised huge sums to finance their growth through a state-by-state promise of a fixed return on equity (sometimes with a small bonus for superior performance). With this approach, massive investments were made for capacity that would likely be required a few years down the road. That forward-looking regulation reflected the reality that utilities build generating and transmission assets that often take many years to construct. BHE’s extensive multi-state transmission project in the West was initiated in 2006 and remains some years from completion. Eventually, it will serve 10 states comprising 30% of the acreage in the continental United States. 

With this model employed by both private and public-power systems, the lights stayed on, even if population growth or industrial demand exceeded expectations. The “margin of safety” approach seemed sensible to regulators, investors and the public. Now, the fixed-but-satisfactory-return pact has been broken in a few states, and investors are becoming apprehensive that such ruptures may spread. Climate change adds to their worries. Underground transmission may be required but who, a few decades ago, wanted to pay the staggering costs for such construction?

At Berkshire, we have made a best estimate for the amount of losses that have occurred. These costs arose from forest fires, whose frequency and intensity have increased – and will likely continue to increase – if convective storms become more frequent.

It will be many years until we know the final tally from BHE’s forest-fire losses and can intelligently make decisions about the desirability of future investments in vulnerable western states. It remains to be seen whether the regulatory environment will change elsewhere.

Other electric utilities may face survival problems resembling those of Pacific Gas and Electric and Hawaiian Electric. A confiscatory resolution of our present problems would obviously be a negative for BHE, but both that company and Berkshire itself are structured to survive negative surprises. We regularly get these in our insurance business, where our basic product is risk assumption, and they will occur elsewhere. Berkshire can sustain financial surprises but we will not knowingly throw good money after bad.

Whatever the case at Berkshire, the final result for the utility industry may be ominous: Certain utilities might no longer attract the savings of American citizens and will be forced to adopt the public-power model. Nebraska made this choice in the 1930s and there are many public-power operations throughout the country. Eventually, voters, taxpayers and users will decide which model they prefer. 

When the dust settles, America’s power needs and the consequent capital expenditure will be staggering. I did not anticipate or even consider the adverse developments in regulatory returns and, along with Berkshire’s two partners at BHE, I made a costly mistake in not doing so. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.