The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

The way I see it, artificial intelligence, or AI, really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the third quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management recently introduced a personalised welcome tour of the Airbnb app for first-time users; management sees this personalisation feature as the beginning of a more personalised Airbnb

We also introduced 50 upgrades for guests that make Airbnb a more intuitive and personalized app. Some of the features include a personalized welcome tour of the app for first-time guests; suggested destinations when guests tap the search bar, where we’ll recommend locations based on their search and booking history; and personalized listing highlights. So when a guest views a listing, we will highlight the details that are relevant to their search, and there are dozens of new features just like these. This is quite literally the beginning of a more personalized Airbnb.

Airbnb’s management is seeing great progress on AI-powered customer service; management sees 3 phases to the deployment of AI-powered customer service, where Phase 1 is Airbnb using AI to answer basic questions from customers, Phase 2 is the AI answering questions from customers in a personalised way, and Phase 3 is the AI taking personalised actions on behalf of customers; management thinks that Airbnb has hired some of the best AI talent to develop AI-powered customer service

We are seeing some really great progress on AI-powered customer service. The way we think about customer service, powered by AI is in 3 phases…

…Phase 1 is just answer basic general questions. We’re rolling out a pilot that can answer basic general questions. Phase 2 is personalization, be able to personalize the questions. Phase 3 is to take action…

…So this is where we think customer service can go enabled by AI, and we’ve hired some of the best people in the world to work on this.

Airbnb is currently in Phase 1 of deploying AI-powered customer service; management thinks that the vast majority of customer chats that are received by Airbnb will be handled directly by AI agents in the future

Phase 1 is the phase we’re in right now. First of all, with most of our customer contacts — we get over 10 million contacts a year — most of the contacts that we anticipate getting in the coming years aren’t going to be phone calls. They’re going to be chatting through the app. I really personally don’t like calling customer service and having to dial them. I want to be able to chat, and AI can intercept the chat. And so we think in the future, the vast majority of our chats are going to be intercepted and handled directly by the AI agent.

An example of the 3rd phase of Airbnb’s AI-powered customer service that management has in mind: An AI agent can help customers to cancel bookings and even make rebookings 

So I’ll give you an example. Let me just give you 1 example. Let’s say I were to contact customer service and I say, “how do I cancel a reservation?” In Phase 1, what we’re doing now, the AI agent will answer, probably even better than the average customer service agent, how to cancel a reservation. So we’ll take you through how to cancel a reservation step by step. In Phase 2, personalization, it’ll say, “hey, Brian, I see you have a reservation coming up in Los Angeles next week. Here’s how you cancel that reservation.” And Phase 3 is taking action. It would say, “hey, Brian, I see you have a reservation coming up in Los Angeles. Would you like me to cancel it for you? Just tell me, yes, and I’ll do it for you. I can even handle rebooking.”

Alphabet (NASDAQ: GOOG)

Alphabet’s management thinks Alphabet is positioned to lead in AI because of the company’s full-stack approach of a robust AI infrastructure, world-class research team, and broad user-reach

We are uniquely positioned to lead in the era of AI because of our differentiated full stack approach to AI innovation, and we are now seeing this operate at scale. There are 3 components: first, a robust AI infrastructure that includes data centers, chips and a global fiber network; second, world-class research teams who are advancing our work with deep technical AI research and who are also building the models that power our efforts; and third, a broad global reach through products and platforms that touch billions of people and customers around the world, creating a virtuous cycle.

Alphabet signed the world’s first corporate agreement for energy from multiple small modular nuclear reactors; the reactors will deliver 500 megawatts of carbon-free power 24/7

We are also making bold clean energy investments, including the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors, which will enable up to 500 megawatts of new 24/7 carbon-free power.

Since Alphabet began testing AI overviews 18 months ago, the company has reduced the cost to deliver queries by 90% while doubling the size of its Gemini foundation AI model; AI overviews have led to users coming to Search more often; AI overviews were recently rolled out to 100 new countries and territories and will reach more than 1 billion users on a monthly basis; there’s strong engagement with AI overviews, leading to higher overall search usage and user satisfaction, and users are asking longer questions and exploring more websites; the growth driven by AI overviews is increasing over time; the integration of advertising with AI overviews is performing well; Alphabet is now showing search and shopping ads within AI overviews for mobile users in the USA; management finds that users find ads within AI overviews to be helpful; management expects Search to evolve significantly in 2025, driven by advances in AI; management is seeing the monetisation rate of ads within AI overviews to be approximately the same as in the broader Search; reminder from management that Google already introduced an answer-machine 10 years ago and management is aware of changing trends in user behaviours in Search

Since we first began testing AI overviews, we have lowered machine cost per query significantly. In 18 months, we reduced cost by more than 90% for these queries through hardware, engineering and technical breakthroughs while doubling the size of our custom Gemini model…

…In Search, recent advancements, including AI overviews, Circle to Search and new features in Lens are transforming the user experience, expanding what people can search for and how they search for it. This leads to users coming to search more often for more of their information needs, driving additional search queries. Just this week, AI overviews started rolling out to more than 100 new countries and territories. It will now reach more than 1 billion users on a monthly basis. We are seeing strong engagement, which is increasing overall search usage and user satisfaction. People are asking longer and more complex questions and exploring a wide range of websites. What’s particularly exciting is that this growth actually increases over time as people learn that Google can answer more of their questions.

The integration of ads within AI overviews is also performing well, helping people connect with businesses as they search…

…AI overviews, where we have now started showing search and shopping ads within the overview for mobile users in the U.S. As you remember, we’ve already been running ads above and below AI overviews. We’re now seeing that people find ads directly within AI overviews helpful because they can quickly connect with relevant businesses, products and services to take the next step at the exact moment they need…

… So I expect Search to continue to evolve significantly in 2025, both in the search product and in Gemini…

…We recently launched ads within AI overviews on mobile in the U.S. And this really builds on our previous rollout of ads above and below the AI overviews. So overall, for AI overviews, we see monetization at approximately the same rate, which gives us a really strong base on which we can innovate even more…

…[Question] Why doesn’t it make sense to have 2 completely different search experiences: one, an agent-like answers engine; and two, a link-based, more traditional search engine?

[Answer] In this moment, people are using a lot of buzzwords like answer engines and all that stuff. I mean, Google started answering questions about 10 years ago in our search product with featured snippets. So look, I think, ultimately, you are serving users. User expectations are constantly evolving, and we work hard to anticipate and stay a step ahead.

Alphabet uses and offers customers both its own TPUs (tensor processing units) and Nvidia GPUs; Alphabet is now on the 6th generation of its TPUs, known as Trillium; LG AI Research reduced its inference processing time by 50% and operating costs by 72% using Google Cloud’s TPUs and GPUs; Alphabet will be one of the first companies to provide Nvidia’s GB 200s at scale; management thinks TPUs have very attractive pricing for its capability

We use and offer our customers a range of AI accelerator options, including multiple classes of NVIDIA GPUs and our own custom-built TPUs. We are now on the sixth generation of TPUs, known as Trillium, and continue to drive efficiencies and better performance with them…

…Using a combination of our TPUs and GPUs, LG AI research reduced inference processing time for its multimodal model by more than 50% and operating costs by 72%…

…We have a wonderful partnership with NVIDIA. We are excited for the GB 200s and will be one of the first to provide it at scale…

…On your first part of the question on the TPUs. If you look at the Flash pricing we’ve been able to deliver externally, I think you can see how much more attractive it is compared to other models of that capability.

Usage of Alphabet’s Gemini foundation AI model is in a period of dramatic growth by any measure; improvements to Gemini will soon come; all 7 of Alphabet’s products that have more than 2 billion monthly users each use Gemini models; Gemini is now available on GitHub Copilot; Gemini API calls were up 14x in a 6-month period; Snap saw a 2.5 times increase in engagement with its MyAI chatbot after choosing Gemini to power the chatbot’s user experiences; Gemini’s integration with Android is improving Android; the latest Samsung Galaxy devices’ Android operating system has Gemini Live for users to converse with the Gemini model; Alphabet’s latest Pixel 9 devices have Gemini Nano within; development of the third generation of the Gemini model is progressing well; see Point 23 for how Gemini is helping advertisers

By any measure, whether token volume, API calls, consumer usage or business adoption, usage of the Gemini models is in a period of dramatic growth, and our teams are actively working on performance improvements and new capabilities for our range of models. Stay tuned…

… Today, all 7 of our products and platforms with more than 2 billion monthly users use Gemini models; that includes the latest product to surpass the 2 billion user milestone, Google Maps…

…Today, we shared that Gemini is now available on GitHub Copilot with more to come…

…Gemini API calls have grown nearly 14x in a 6-month period. When Snap was looking to power more innovative experiences within their MyAI chatbot, they chose Gemini’s strong multimodal capabilities. Since then, Snap saw over 2.5x as much engagement with MyAI in the United States…

… Gemini’s deep integration is improving Android. For example, Gemini Live lets you have free-flowing conversations with Gemini. People love it. It’s available on Android, including Samsung Galaxy devices. We continue to work closely with them to deliver innovations across their newest devices with much more to come. At Made by Google, we unveiled our latest Pixel 9 series of devices featuring advanced AI models, including Gemini Nano. We have seen strong demand for these devices, and they’ve already received multiple awards…

…We’ve had 2 generations of the Gemini model. We are working on the third generation, which is progressing well.

Alphabet’s Project Astra will allow AI to see and reason about the physical world around users, and management aims to ship it as early as 2025

We’re building out experiences where AI can see and reason about the world around you; Project Astra is a glimpse of that future. We are working to ship experiences like this as early as 2025.

Alphabet is using AI internally to improve coding productivity and efficiency; a quarter of new code at Google is now generated by AI

We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than 1/4 of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.

Circle to Search is now available on more than 150 million Android devices; a third of users who have tried Circle to Search now use it weekly; Circle to Search has higher engagement with younger users

Circle to Search is now available on over 150 million Android devices with people using it to shop, translate text and learn more about the world around them. 1/3 of the people who have tried Circle to Search now use it weekly, a testament to its helpfulness and potential…

… For example, with Circle to Search, where we see higher engagement from users aged 18 to 24.  

Lens is now used in over 20 billion visual searches per month; Lens is one of the fastest-growing query types management has seen on Search; management started testing product search on Lens in October and found that shoppers are more likely to engage; management is seeing users use Lens for complex multimodal queries; Alphabet has rolled out shopping ads with Lens visual search results to better connect consumers and businesses.

Lens is now used for over 20 billion visual searches per month. Lens is one of the fastest-growing query types we see on search because of its ability to answer complex multimodal questions and help in product discovery and shopping…

…In early October, we announced product search on Google Lens, and in testing this feature, we found that shoppers are more likely to engage with content in this new format. We’re also seeing that people are turning to Lens more often to run complex multimodal queries, voicing a question or inputting text in addition to a visual. Given these new user behaviors, earlier this month we announced the rollout of shopping ads above and alongside relevant Lens visual search results to help better connect consumers and businesses.

Customers are using Google Cloud’s AI products in 5 different ways: (1) for AI hardware and software infrastructure; (2) for building and customising AI models with Vertex; (3) for combining Google Cloud’s AI platform with its data platform; (4) for AI-powered cybersecurity solutions; and (5) for building AI agents to improve customer engagement

Customers are using our products in 5 different ways. First, our AI infrastructure, which we differentiate with leading performance driven by storage, compute and software advances, as well as leading reliability and a leading number of accelerators…

…Second, our enterprise AI platform, Vertex is used to build and customize the best foundation models from Google and the industry…

…Third, customers use our AI platform together with our data platform, BigQuery, because we analyze multimodal data no matter where it is stored, with ultra-low-latency access to Gemini…

…Fourth, our AI-powered cybersecurity solutions, Google Threat Intelligence and Security Operations, are helping customers like BBVA and Deloitte prevent, detect and respond to cybersecurity threats much faster…

… Fifth, in Q3, we broadened our applications portfolio with the introduction of our new customer engagement suite. It’s designed to improve the customer experience online and in mobile apps as well as in call centers, retail stores and more. 

Waymo is the biggest part of Alphabet’s Other Bets portfolio; Alphabet’s management thinks Waymo is the clear technical leader in autonomous vehicles; Waymo is now serving 150,000 paid rides weekly and driving 1 million fully autonomous miles, and is the first autonomous vehicle company to reach these milestones; Waymo is partnering with Uber and Hyundai to deliver autonomous vehicles to more consumers; Waymo is now on its sixth generation system

I want to highlight Waymo, the biggest part of our portfolio. Waymo is now a clear technical leader within the autonomous vehicle industry and creating a growing commercial opportunity. Over the years, Waymo has been infusing cutting edge AI into its work. Now each week, Waymo is driving more than 1 million fully autonomous miles and serves over 150,000 paid rides, the first time any AV company has reached this kind of mainstream use. Through its expanded network and operations partnership with Uber in Austin and Atlanta, plus a new multiyear partnership with Hyundai, Waymo will bring fully autonomous driving to more people and places. By developing a universal driver, Waymo has multiple paths to market. And with its sixth generation system, Waymo significantly reduced unit costs without compromising safety.

Alphabet’s management finds that AI helps Alphabet better understand consumer-intent and connect consumers with advertisers

AI is expanding our ability to understand intent and connect it to our advertisers. This allows us to connect highly relevant users with the most helpful ad and deliver business impact to our customers.

Advertisers are using Gemini to build and test more creatives at scale; Audi worked with Gemini tools to increase website visits by 80% and increase clicks by 2.7 times

Advertisers now use our Gemini-powered tools to build and test a larger variety of relevant creatives at scale. Audi used our AI tools to generate multiple video, image and text assets in different lengths and orientations out of existing long-form videos. They then fed the newly generated creatives into Demand Gen to drive reach, traffic and bookings to their driving experience. The campaign increased website visits by 80% and increased clicks by 2.7x, delivering a lift in their sales.

Alphabet is offering AI-powered campaigns to help advertisers achieve faster feedback on what is working; DoorDash saw a 15x higher conversion rate at a 50% more efficient cost per action

AI-powered campaigns help advertisers get faster feedback on which creatives work where and redirect the media buying. Using Demand Gen, DoorDash tested a mix of image and video assets to drive more impact across Google and YouTube’s visually immersive surfaces. They saw a 15x higher conversion rate at a 50% more efficient cost per action when compared to video action campaigns alone.

Alphabet is using AI to help advertisers better measure their advertising results

This quarter, we extended the availability of our open-source marketing mix model, Meridian, to more customers, helping to scale measurement of cross-channel budgets to drive better business outcomes.

Alphabet’s big jump in capex in 2024 Q3 (capex was $8.1 billion in 2023 Q3) was mostly for technical infrastructure, in the form of servers and data centers; management expects Alphabet’s 2024 Q4 capex to be similar to what was seen in 2024 Q3; Alphabet announced more than $7 billion in planned data center investments in 2024 Q3, with $6 billion in the USA; management expects further growth in capex in 2025, but not at the same percentage increase seen from 2023 to 2024; the use of TPUs at Alphabet helps to drive efficiencies

With respect to CapEx, our reported CapEx in the third quarter was $13 billion, reflecting investment in our technical infrastructure with the largest component being investment in servers, followed by data centers and networking equipment. Looking ahead, we expect quarterly CapEx in the fourth quarter to be at similar levels to Q3…

…In the third quarter alone, we made announcements of over $7 billion in planned data center investments with nearly $6 billion of that in the U.S…

…As you saw in the quarter, we invested $13 billion in CapEx across the company. And as you think about it, it really is divided into 2 categories. One is our technical infrastructure, and that’s the majority of that $13 billion. And the other one goes into areas such as facilities, the bets and other areas across the company. Within technical infrastructure, we have investments in servers, which includes both TPUs and GPUs. The second category is data centers and networking equipment. This quarter, approximately 60% of the investment in technical infrastructure went towards servers and about 40% towards data centers and networking equipment…

…And as you think about the next quarter and going into next year, as I mentioned in my prepared remarks, we will be investing in Q4 at approximately the same level of what we’ve invested in Q3, approximately $13 billion. And as we think into 2025, we do see an increase coming in 2025, and we will provide more color on that on the Q4 call, likely not the same percent step-up that we saw between ’23 and ’24, but additional increase…

…On your first part of the question on the TPUs. If you look at the Flash pricing we’ve been able to deliver externally, I think you can see how much more attractive it is compared to other models of that capability. I think that probably gives a good sense of the efficiencies we can generate from our architecture. And we are doing the same for internal use as well. The models for Search, while they keep going up in capability, we’ve been able to really optimize them for the underlying architecture, and that’s where we are seeing a lot of efficiencies as well.

Amazon (NASDAQ: AMZN)

Amazon’s management believes that AI will be a big piece of the company’s robotics efforts in its fulfilment network

We continue to innovate in robotics to speed delivery, lower cost to serve, and further improve safety in our fulfillment network…

…We really do believe that AI is going to be a big piece of what we do in our robotics network. We have a number of efforts going on there. We just hired a number of people from an incredibly strong robotics AI organization. And I think that will be a very central part of what we do moving forward, too.

Amazon’s management sees customers focused on new cloud computing efforts again, and the modernisation of their infrastructure, by migrating to the cloud, is important if they want to work on generative AI at scale

Companies are focused on new efforts again, spending energy on modernizing their infrastructure from on-premises to the cloud. This modernization enables companies to save money, innovate more quickly, and get more productivity from their scarce engineering resources. However, it also allows them to organize their data in the right architecture and environment to do generative AI at scale. It’s much harder to be successful and competitive in generative AI if your data is not in the cloud.

AWS has released nearly twice as many AI features in the last 18 months as other leading cloud providers combined; AWS’s AI business is growing at a triple digit rate at a multi-billion revenue run rate; AWS’s AI business is currently growing more than 3x faster than AWS itself grew when AWS was at a similar stage; management sees AI as an unusually large opportunity

In the last 18 months, AWS has released nearly twice as many machine learning and gen AI features as the other leading cloud providers combined. AWS’ AI business is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage and is growing more than 3x faster at this stage of its evolution than AWS itself grew, and we felt like AWS grew pretty quickly…

…It is a really unusually large, maybe once-in-a-lifetime type of opportunity. And I think our customers, the business, and our shareholders will feel good about this long term that we’re aggressively pursuing it.

Amazon has a good relationship with NVIDIA, but management have heard from customers that they want better price performance on their AI workloads, and so AWS developed its own AI chips for training and inference; AWS’s second version of its AI chip for model-training, Trainium 2, will ramp up in the next few weeks; management thinks Trainium 2 have very compelling price performance; management is seeing significant interest in Trainium 2, to the extent they have to increase manufacturing orders much more than originally planned

While we have a deep partnership with NVIDIA, we’ve also heard from customers that they want better price performance on their AI workloads. As customers approach higher scale in their implementations, they realize quickly that AI can get costly. It’s why we’ve invested in our own custom silicon in Trainium for training and Inferentia for inference. The second version of Trainium, Trainium2, is starting to ramp up in the next few weeks and will be very compelling for customers on price performance. We’re seeing significant interest in these chips, and we’ve gone back to our manufacturing partners multiple times to produce much more than we’d originally planned…

…We have a very deep partnership with NVIDIA, we tend to be their lead partner on most of their new chips. We were the first to offer H200s in EC2 instances. And I expect us to have a partnership for a very long time that matters.

Amazon’s management is seeing more model builders standardise on SageMaker, AWS’s fully-managed AI service; SageMaker’s hyperpod capability helps save model-training time by up to 40%

We also continue to see increasingly more model builders standardize on Amazon SageMaker, our service that makes it much easier to manage your AI data, build models, experiment, and deploy to production. This team continues to add features at a rapid clip punctuated by SageMaker’s unique hyperpod capability, which automatically splits training workloads across more than 1,000 AI accelerators, prevents interruptions by periodically saving checkpoints, and automatically repairing faulty instances from their last saved checkpoint, and saving training time by up to 40%.
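The checkpoint-and-resume idea behind the hyperpod capability described above can be sketched in a few lines. This is an illustrative toy, not SageMaker's actual API: the JSON file format, the `CHECKPOINT_EVERY` setting, and the stand-in "training" update are all assumptions for demonstration. The point is simply that after a fault, work restarts from the last saved checkpoint rather than from scratch.

```python
import json
import os
import tempfile

# Hypothetical setting: persist training state every N steps.
CHECKPOINT_EVERY = 10

def save_checkpoint(path, step, state):
    with open(path, "w") as f:
        json.dump({"step": step, "state": state}, f)

def load_checkpoint(path):
    # Fresh start if no checkpoint exists yet.
    if not os.path.exists(path):
        return 0, 0.0
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

def train(total_steps, ckpt_path, fail_at=None):
    """Run (or resume) a toy training loop; optionally 'fail' mid-run."""
    step, state = load_checkpoint(ckpt_path)
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated hardware fault")
        state += 1.0  # stand-in for one gradient update
        step += 1
        if step % CHECKPOINT_EVERY == 0:
            save_checkpoint(ckpt_path, step, state)
    return step, state

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
try:
    train(100, ckpt, fail_at=57)  # fault at step 57...
except RuntimeError:
    pass
# ...so the resumed run replays only steps 50-100, not 0-100.
resumed_step, _ = load_checkpoint(ckpt)
print(resumed_step)  # 50, the last saved checkpoint before the fault
final_step, _ = train(100, ckpt)
print(final_step)  # 100
```

The quoted 40% training-time saving comes from exactly this trade-off at cluster scale: frequent checkpoints bound how much work a fault can destroy, at the cost of some I/O during training.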

Amazon’s management believes Amazon Bedrock, AWS’s AI-models-as-a-service offering for companies that want to leverage existing foundation models for customisation, has the broadest selection of leading foundation models; Bedrock recently added Anthropic’s Claude 3.5 Sonnet model, Meta’s Llama 3.2 models, and more; management is seeing companies use models from different providers within the same application and Bedrock makes it easy to orchestrate the disparate models; Bedrock also helps companies with model-access, prompt engineering, and lowering inference costs

At the middle layer where teams want to leverage an existing foundation model, customized with their data, and then have features to deploy high-quality generative AI applications, Amazon Bedrock has the broadest selection of leading foundation models and the most compelling modules for key capabilities like model evaluation, guardrails, RAG and agents. Recently, we’ve added Anthropic’s Claude 3.5 Sonnet model, Meta’s Llama 3.2 models, Mistral’s Large 2 models and multiple Stability AI models. We also continue to see teams use multiple model types from different model providers and multiple model sizes in the same application. There’s a lot of orchestration required to make this happen, and part of what makes Bedrock so appealing to customers, and why it has so much traction, is that Bedrock makes this much easier. Customers have many other requests: access to even more models, making prompt management easier, further optimizing inference costs. And our Bedrock team is hard at work making this happen.

Amazon’s management continues to see strong adoption of Amazon Q, Amazon’s generative AI assistant for software development; Amazon Q has the highest reported code acceptance rates in the industry; reminder that Amazon saved $260 million and 4,500 developer years when performing a large Java Development Kit migration through the use of Amazon Q

We’re continuing to see strong adoption of Amazon Q, the most capable generative AI-powered assistant for software development and for leveraging your own data. Q has the highest reported code acceptance rates in the industry for multiline code suggestions. The team has added all sorts of capabilities in the last few months, but a very practical use case recently shared was Q Transform saving Amazon’s teams $260 million and 4,500 developer years in migrating over 30,000 applications to new versions of the Java JDK. This excited developers and prompted them to ask how else we could help them with tedious and painful transformations.

Amazon is using generative AI pervasively across its businesses, with hundreds of apps in use or in development; Rufus is a generative AI-powered shopping assistant available in parts of Europe, North America, and India; Amazon is using generative AI to improve personalisation and product-search for consumers when shopping; Project Amelia is an AI system offering tailored business insights to Amazon sellers; Alexa, Amazon’s virtual assistant technology, is being rearchitected with new foundation AI models; the new Kindle Scribe has a built-in AI-powered notebook 

We’re also using generative AI pervasively across Amazon’s other businesses with hundreds of apps in development or launched.

For consumers, we’ve expanded Rufus, our generative AI-powered expert shopping assistant, to the U.K., India, Germany, France, Italy, Spain, and Canada. And in the U.S., we’ve added more personalization, the ability to better narrow customer intent and real-time pricing and deal information. We’ve recently debuted AI shopping guides for consumers, which simplifies product research by using generative AI to pair key factors to consider in a product category with Amazon’s wide selection, making it easier for customers to find the right product for their needs. 

For sellers, we’ve recently launched Project Amelia, an AI system that offers tailored business insights to boost productivity and drive seller growth.

We continue to rearchitect the brain of Alexa with a new set of foundation models that we’ll share with customers in the near future, and we’re increasingly adding more AI into all of our devices. Take the new Kindle Scribe we just announced. The note-taking experience is much more powerful with the new built-in AI-powered notebook, which enables you to quickly summarize pages of notes into concise bullets in a script font that can easily be shared.

Amazon’s management expects capital expenditures of $75 billion for the whole of 2024; most of the capex will be for AWS infrastructure to support demand for AI services; the capex also includes investments in Amazon’s fulfilment and transportation network; management expects capex in 2025 to increase from 2024’s level, with most of the capex for AWS, specifically generative AI; reminder that the faster AWS grows, the faster Amazon needs to invest capital for hardware; many of the assets AWS’s capex is invested in have long, useful lives; management expects to deliver high returns on invested capital with AWS’s generative AI investments; management has a lot of experience, accumulated over the years, in predicting just the right amount of compute capacity to provide for AWS before the generative AI era, and they believe they can do so again for generative AI

Year-to-date capital investments were $51.9 billion. We expect to spend approximately $75 billion in CapEx in 2024. The majority of the spend is to support the growing need for technology infrastructure. This primarily relates to AWS as we invest to support demand for our AI services while also including technology infrastructure to support our North America and international segments. Additionally, we’re continuing to invest in our fulfillment and transportation network to support the growth of the business, improve delivery speeds and lower our cost to serve. This includes investments in same-day delivery facilities, in our inbound network and as well in robotics and automation…

… I’ll take the CapEx part of that. As Brian said in his opening comments, we expect to spend about $75 billion in 2024. I suspect we’ll spend more than that in 2025. And the majority of it is for AWS, and specifically, the increased bumps here are really driven by generative AI…

…The thing to remember about the AWS business is the cash life cycle is such that the faster we grow demand, the faster we have to invest capital in data centers and networking gear and hardware. And of course, in the hardware of AI, the accelerators or the chips are more expensive than the CPU hardware. And so we invest in all of that upfront in advance of when we can monetize it with customers using the resources…

…A lot of these assets are many-year useful life assets. Data centers, for instance, are useful assets for 20 to 30 years…

…I think we’ve proven over time that we can drive enough operating income and free cash flow to make this very successful return on invested capital business. And we expect the same thing will happen here with generative AI…

…One of the least understood parts about AWS over time is that it is a massive logistics challenge. If you think about we have 35-or-so regions around the world, which is an area of the world where we have multiple data centers, and then probably about 130 availability zones, or data centers, and then we have thousands of SKUs we have to land in all those facilities. And if you land too little of them, you end up with shortages, which end up in outages for customers. So most don’t end up with too little, they end up with too much. And if you end up with too much, the economics are woefully inefficient. And I think you can see from our economics that we’ve done a pretty good job over time at managing those types of logistics and capacity. And it’s meant that we’ve had to develop very sophisticated models in anticipating how much capacity we need, where, in which SKUs and units.

And so I think that the AI space is, for sure, earlier stage, more fluid and dynamic than our non-AI part of AWS. But it’s also true that people aren’t showing up for 30,000 chips in a day. They’re planning in advance. So we have very significant demand signals giving us an idea about how much we need…

…There are some similarities in the early days here of AI, where the offerings are new and people are very excited about it. It’s moving very quickly and the margins are lower than what I think they will be over time. The same was true with AWS. If you looked at our margins around the time you were citing, in 2010, they were pretty different than they are now. I think as the market matures over time, there are going to be very healthy margins here in the generative AI space.

There are around 500 million Alexa devices out there, with a couple of hundred million active endpoints; management had an initial vision of Alexa being the world’s best personal assistant and they believe now that Alexa’s re-architecture can give it a shot at fulfilling the initial vision

I think we have a really broad number of Alexa devices all over people’s homes and offices and automobiles and hospitality suites. We’ve about 0.5 billion devices out there with a couple of hundred million active end points. And when we first were pursuing Alexa, we had this vision of it being the world’s best personal assistant and people thought that was kind of a crazy idea. And I think if you look at what’s happened in generative AI over the last couple of years, I think you’re kind of missing the boat if you don’t believe that’s going to happen. It absolutely is going to happen. So we have a really broad footprint where we believe if we rearchitect the brains of Alexa with next-generation foundational models, which we’re in the process of doing, we have an opportunity to be the leader in that space.

Amazon’s management believes that AWS’s demand substantially outweighs capacity today; management believes AWS’s rate of growth can improve over time as capacity grows

[Question] On the cloud, are you at all capacity constrained, and will the new Trainium or NVIDIA chips maybe even drive sales growth faster?

[Answer] I believe we have more demand that we could fulfill if we had even more capacity today. I think pretty much everyone today has less capacity than they have demand for, and it’s really primarily chips that are the area where companies could use more supply…

…We’re growing at a very rapid rate and have grown a pretty big business here in the AI space. And it’s early days, but I actually believe that the rate of growth there has a chance to improve over time as we have bigger and bigger capacity.

Apple (NASDAQ: AAPL)

Apple announced Apple Intelligence in June 2024; Apple Intelligence redefines privacy in AI; Apple recently released the first set of Apple Intelligence features in US English for iPhone, iPad, and Mac users, and they include writing tools, an improved version of Siri, a more intelligent Photos app, and notification summaries and priority messages; more Apple Intelligence features will be released in December 2024 and early developer feedback is great; the adoption rate of iOS 18.1 in its first three days was twice as fast as that of iOS 17.1, suggesting interest in Apple Intelligence; Apple will release support for additional languages in Apple Intelligence in April 2025

In June, we announced Apple Intelligence, a remarkable personal intelligent system that combines the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant. Apple Intelligence marks the beginning of a new chapter for Apple innovation and redefines privacy in AI by extending our groundbreaking approach to privacy into the cloud with private cloud compute. Earlier this week, we made the first set of Apple Intelligence features available in U.S. English for iPhone, iPad and Mac users with system-wide writing tools that help you refine your writing, a more natural and conversational Siri, a more intelligent Photos app, including the ability to create movies simply by typing a description, and new ways to prioritize and stay in the moment with notification summaries and priority messages.

And we look forward to additional intelligence features in December with even more powerful writing tools, a new visual intelligence experience that builds on Apple Intelligence and ChatGPT integration as well as localized English in several countries, including the U.K., Australia and Canada. These features have already been provided to developers, and we’re getting great feedback. More features will be rolling out in the coming months as well as support for more languages, and this is just the beginning…

…[Question] I was wondering if you could just expand a little bit on some of the early feedback to Apple Intelligence, both for iOS 18.1 but also the developer beta so far and whether you would attribute Apple Intelligence to any of the strong iPhone performance that we’ve seen to date.

[Answer] We’re getting a lot of positive feedback from developers and customers. And in fact, if you just look at the first 3 days, which is all we have obviously from Monday, the 18.1 adoption is twice as fast as the 17.1 adoption was in the year ago quarter. And so there’s definitely interest out there for Apple Intelligence…

…We started in the — with U.S. English. That started on Monday. There’s another release coming that adds additional features that I had referenced in December in not only U.S. English but also localized for U.K., Australia, Canada, Ireland and New Zealand. And then we will add more languages in April. We haven’t set the specifics yet in terms of the languages, but we’ll add more in April and then more as we step through the year. And so we’re moving just as fast as possible while ensuring quality.

Apple’s management is building the infrastructure to deliver Apple Intelligence, but it does not seem like Apple will need to significantly increase its capex budget from historical norms; management also does not see any significant change to the intensity of research & development (R&D) spending that Apple needs to invest in AI

[Question] Could you just talk a little bit about the CapEx outlook and whether investments in things like private cloud compute could change the historical CapEx range of roughly $10 billion a year?

[Answer] We are rolling out these features, Apple Intelligence features already now. And so we are making all the capacity that is needed available for these features. You will see in our 10-K the amount of CapEx that we’ve incurred during the course of fiscal ’24. And we will — in fiscal ’25, we will continue to make all the investments that are necessary, and of course, the investments in AI-related CapEx will be made…

…[Question] Given how much your tech peers are spending on AI, does this new era of Apple Intelligence actually require Apple to invest more in R&D beyond your current 7% to 8% of sales to capture this opportunity? 

[Answer] We’ve been investing heavily in R&D over the last several years. Our R&D growth has been significant during the last several years. And obviously, as we move through the course of fiscal ’24, we’ve also reallocated some of the existing resources to this new technology, to AI. And so the level of intensity that we’re putting into AI has increased a lot, and you maybe don’t see the full extent of it because we’ve also had some internal reallocation of the base of engineering resources that we have within the company.

Apple’s management thinks the introduction of Apple Intelligence will benefit the entire Apple ecosystem

[Question] I understand Apple Intelligence is a feature on the phone today. But do you think that in the future it could potentially have or benefit the services growth business? Or is that too — are those too bifurcated to even make a call on the — this early in the cycle?

[Answer] Keep in mind that we have released a lot of APIs, and developers will be taking advantage of those APIs. That release has occurred as well, and of course, more are coming. And so I definitely believe that a lot of developers will be taking advantage of Apple Intelligence in a big way. And what that does to services, I’ll not forecast, but I would say that from an ecosystem point of view, I think it will be great for the user and the user experience.

Arista Networks (NYSE: ANET)

Arista Networks’ management is seeing networking for AI gaining a lot of traction; trials that took place in 2023 are becoming pilots in 2024; management expects more production in 2025 and 2026

Networking for AI is gaining a lot of traction as we move from trials in 2023 to more pilots in 2024, connecting to thousands of GPUs, and we expect more production in 2025 and 2026.

AI data traffic is very different from traditional cloud workloads, and smooth, consistent data flow is a crucial factor in AI networking

AI traffic differs greatly from cloud workloads in terms of diversity, duration and size of flow. The fidelity of AI traffic flows, where the slowest flow matters and one slow flow could slow down the entire job completion time, is a crucial factor in networking.
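The slowest-flow point can be illustrated with a toy calculation (the flow timings below are hypothetical, not from the call): in synchronized AI training, a step completes only when its slowest flow does, so a single straggler sets the job completion time.

```python
def job_completion_time_ms(flow_times_ms):
    """A synchronized training step finishes only when every flow has
    finished, so its completion time is that of the slowest flow."""
    return max(flow_times_ms)

# Five well-behaved flows: the step takes 10.5 ms.
flows = [10.2, 10.5, 9.8, 10.1, 10.3]
print(job_completion_time_ms(flows))           # 10.5

# Add one straggler and the whole step slows to 25.0 ms,
# even though the other flows are unchanged.
print(job_completion_time_ms(flows + [25.0]))  # 25.0
```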

Arista Networks’ management sees the company becoming a pioneer in scale-out Ethernet accelerated networking for large AI workloads; Arista Networks’ new Etherlink portfolio scales well to networks with over 100,000 GPUs and can potentially even handle 1 million GPU clusters; Arista Networks’ latest 7700R4 DES platform was launched in close collaboration with Meta Platforms

Our AI centers connect seamlessly from the back end to the front end of compute, storage, WAN and classic cloud networks. Arista is emerging as a pioneer in scale-out Ethernet accelerated networking for large-scale training and AI workloads. Our new Etherlink portfolio with wire speed 800-gig throughput and non-blocking performance, scales from single tier to efficient 2-tier networks for over 100,000 GPUs, potentially even 1 million AI accelerators with multiple tiers. Our accelerated AI networking portfolio consists of 3 families with over 20 switching products and not just one point switch. At the recent OCP in mid-October 2024, we officially launched a very unique platform, the distributed Etherlink 7700, to build 2-tier networks for up to 10,000 GPU clusters. The 7700R4 DES platform was developed in close collaboration with Meta. And while it may physically look like, and be cabled like, a 2-tier leaf-spine network, DES provides single-stage forwarding with a highly efficient spine fabric, eliminating the need for tuning and ensuring fast failover for large AI accelerator-based clusters.

Arista Networks’ management believes the company has the broadest set of 800 gigabit per second Ethernet products for AI networks

I’m pleased to report Arista 7700R4 distributed Etherlink switch, the 7800R4 Spine, along with the 7060X6 AI leaf that we announced in June have entered into production, providing our customers the broadest set of 800 gigabit per second Ethernet products for their AI networks. Together with 800 gigabit per second parallel optics, our customers are able to connect two 400 gigabit per second GPUs to each port, increasing the deployment density over current switching solutions. This broad range of Ethernet platforms allows our customers to optimize density and minimize tiers to best match the requirements of their AI workload.

New AI clusters require high-speed connections to existing backbones

New AI clusters require new high-speed port connections into the existing backbone. These new clusters also increased bandwidth on the backbone to access training data, capture snapshots and deliver results generated by the cluster. This trend is providing increased demand for 7800R3 400-gigabit solution.

Arista Networks’ management sees next-generation AI data centres needing significantly more power while doubling network performance

Next-generation data centers integrating AI will contend with significant increases in power consumption while looking to double network performance.

Arista Networks’ management thinks the adoption of AI networking will rest on specifications that the Ultra Ethernet Consortium (UEC) is expected to soon release; the UEC now has over 97 members and Arista Networks is a founding member

Critical to the rapid adoption of AI networking is the Ultra Ethernet consortium specification expected imminently with Arista’s key contributions as a founding member. The UEC ecosystem for AI has evolved to over 97 members.

Arista Networks’ management thinks Ethernet is the only option for open standards-based AI networking

In our view, Ethernet is the only long-term viable direction for open standards-based AI networking.

Arista Networks’ business growth in 2024 was achieved partly with the help of AI; management is now projecting even more growth in 2025 and is confident of achieving its AI back-end revenue target of US$750 million; the adoption of Arista Networks’ AI back-end products influences the adoption of its front-end AI networking products too; management also expects Arista Networks’ front-end AI networking products to generate around US$750 million in revenue in 2025, though this can be hard to track; the US$750 million in AI back-end revenue that management expects is brand new for the company

We’ve experienced some pretty amazing growth years with 33.8% growth in ’23 and 2024 appears to be heading at least to 18%, exceeding our prior predictions of 10% to 12%. This is quite a jump in 2024, influenced by faster AI pilots. We are now projecting an annual growth of 15% to 17% next year, translating to approximately $8 billion in 2025 revenue with a healthy expectation of operating margin. Within that $8 billion revenue target, we are quite confident in achieving our campus and AI back-end networking targets of $750 million each in 2025 that we set way back 1 or 2 years ago. It’s important to recognize though that the back end of AI will influence the front-end AI network and its ratios. This ratio can be anywhere from 30% to 100% and sometimes, we’ve seen it as high as 200% of the back-end network depending on the training requirements. Our comprehensive AI center networking number is therefore likely to be double of our back-end target of $750 million, now aiming for approximately $1.5 billion in 2025…

… I would expect in the back end, any share Arista gets, including that $750 million is incremental. It’s brand new to us. We were never there before…

…I think it all depends on their approach to AI. If they just want to build a back-end cluster and prove something out, then they just look for the highest job training completion and intense training models. And it’s a very narrow use case. But what we’re starting to see more and more, especially with the top 5, like I said, is for every dollar spent in the back end, you could spend 30% more, 100% more, and we’ve even seen a 200% more scenario, which is why our $750 million will carry over to, we believe, next year, another $750 million on front-end traffic that will include AI, but it will include other things as well. It won’t be unique to AI. So I wouldn’t be surprised if that number is anywhere between 30% and 100%, so the average is 100%, which is 2x our back-end number. So feeling pretty good about that. Don’t know how to exactly count that as pure AI, which is why I qualify it by saying increasingly, if you start having inference, training, front end, storage, WAN, classic cloud all come together, the AI — the pure AI number becomes difficult to track.
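The ratio arithmetic above can be made concrete with a quick sketch (a toy calculation; the $750 million back-end target and the 30%-to-200% front-end range come from management’s comments):

```python
def total_ai_networking_musd(back_end_musd, front_end_ratio):
    """Total AI networking revenue = back end + front end, where the front
    end runs at some ratio (30% to 200%, per management) of the back end."""
    return back_end_musd * (1 + front_end_ratio)

BACK_END = 750  # Arista's 2025 AI back-end target, in US$ millions
for ratio in (0.3, 1.0, 2.0):
    total = total_ai_networking_musd(BACK_END, ratio)
    print(f"front end at {ratio:.0%}: total ${total:,.0f}M")
# At the ~100% average ratio, the total is $1,500M, i.e. the ~$1.5 billion
# comprehensive AI center networking figure management cites for 2025.
```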

Arista Networks’ management is stocking up inventory in preparation for a rapid deployment of AI networking products

On the cash front, while we have experienced significant increases in operating cash over the last couple of quarters, we anticipate an increase in working capital requirements in Q4. This is primarily driven by increased inventory in order to respond to the rapid deployment of AI networks and to reduce overall lead times as we move into 2025.

Arista Networks’ management has been surprised by the acceleration of AI pilots by its customers in 2024; management would not be surprised going forward if its AI business grows faster than its classic data center and cloud business (in other words, management would not be surprised if the company’s customers cannibalise some of their classic data center and cloud buildouts for AI)

We were pleasantly surprised with the faster acceleration of AI pilots in 2024. So we definitely see that our large cloud customers are continuing to refresh on the cloud, but are pivoting very aggressively to AI. So it wouldn’t surprise me if we grow faster in AI and faster in campus in the new center markets and slower in our classic markets called that data center and cloud. 

The 4 major AI trials Arista Networks discussed in the 2024 Q1 earnings call have now become 5 trials; 3 of the 5 customers are progressing well and are transitioning from trials to pilots, and they will each grow to 50,000-to-100,000 GPU clusters in 2025; the customer for the new trial that was started has historically been very focused on Infiniband so management is happy to have won the trial, and management hopes the trial will enter pilot and production in 2025; the last remaining customer is moving slower than management expected with delays in their data center buildout; management has good revenue visibility for 3 of the 5 trials for the next 6-12 months and Arista Networks’ revenue guide for 2025 does not depend on the remaining 2 trials; a majority of the trials are currently on Arista Networks’ 400-gig products because the customers are waiting for the ecosystem to develop on the 800-gig products, but management expects more adoption of the 800-gig products in 2025; Arista Networks is participating in other smaller AI trials too, but the difference is that management expects the 5 major ones to scale to at least 100,000 GPU clusters

Arista now believes we’re actually 5 out of 5, not 4 out of 5. We are progressing very well in 4 out of the 5 clusters. 3 of the customers are moving from trials to pilots this year, and we’re expecting those 3 to become 50,000 to 100,000 GPU clusters in 2025. We’re also pleased with the new Ethernet trial in 2024 with our fifth customer. This customer was historically very, very InfiniBand driven. And we are now moving in that particular fifth customer, we are largely in a trial mode in 2024, and we hope to go to pilots and production in 2025. There is one customer who — so 3 are going well. One is starting. The fifth customer is moving slower than we expected. They may get back on their feet. In 2025, they’re awaiting new GPUs, and they’ve got some challenges on power cooling, et cetera. So 3, I would give an A. The fourth one, we’re really glad we won, and we’re getting started and the fifth one, I’d say, steady-state, not quite as great as we would expect them — have expected them to be…

…[Question] I wanted to ask a little bit more about the $750 million in AI for next year. Has your visibility on that improved over the last few months? I wanted to reconcile your comment around the fifth customer not going slower than expected. And it sounds like you’re now in 5 of 5, but wondering if that fifth customer going slower is limiting upside or limiting your visibility there?

[Answer] I think on 3 out of the 5, we have good visibility, at least for the next 6 months, maybe even 12…

…On the fourth one, we are in early trials. We’ve got some improving to do. So let’s see, but we’re not looking for 2025 to be the bang up year on the fourth one. It’s probably 2026. And on the fifth one, we’re a little bit stalled, which may be why we’re being careful about predicting how they’ll do. They may step in nicely in the second half of ’25, in which case, we’ll let you know. But if they don’t, we’re still feeling good about our guide for ’25…

…A majority of the trials and pilots are on 400 because people are still waiting for the ecosystem at 800, including the NICs and the UEC and the packet spraying capabilities, et cetera. So while we’re in some early trials on 800, majority of 2024 is 400 gig. I expect as we go into 2025, we will see a better split between 400 and 800…

… So we’re not saying these 5 are the be-all, end-all, but these are the 5 we predict can go to 100,000 GPUs and more. That’s the way to look at this. So there are the largest AI Titans, if you will. And they can be in the cloud, hyperscaler Titan group, they could be in the Tier 2 as well, by the way, very rarely would they be in a classic enterprise. By the way, we do have at least 10 to 15 trials going on in the classic enterprise too, but they’re much smaller GPU counts, so we don’t talk about it.

Arista Networks’ management sees NVIDIA both as a partner and a competitor in the AI networking market; Arista Networks does see NVIDIA’s Infiniband as a competing solution, but rarely sees NVIDIA’s own Ethernet solution competing; management thinks customers, ranging from those building large GPU clusters to smaller ones, all see Arista Networks as the expert when it comes to AI networking

We view NVIDIA as a good partner. If we didn’t have the ability to connect to their GPUs, we wouldn’t have all this AI networking demand. So thank you, NVIDIA. Thank you, Jensen, for the partnership. Now as you know, NVIDIA sells the full stack and most of the time, it’s with InfiniBand, and with the Mellanox acquisition, they do have some Ethernet capability. We personally do not run into the Ethernet capability very much. We run into it, maybe in 1 or 2 customers. And so generally speaking, Arista is looked upon as the expert there. We have a full portfolio. We have full software. And whether it’s the large scale-out Ethernet networking customers like the Titans or even the smaller enterprises, we’re seeing a lot of smaller GPU clusters at the enterprise, Arista is looked upon as the expert there. But that’s not to say we’re going to win 100%. We certainly welcome NVIDIA as a partner on the GPU side and a fierce competitor, and we look to compete with them on the Ethernet switching.

The AI back-end market is where Arista Networks natively connects with GPUs and where NVIDIA’s Infiniband is the market leader, but Arista Networks’ Ethernet solution is aiming to be the gold standard; in the AI front-end market, Arista Networks’ solutions are the gold standard, and management has seen cases where customers’ AI applications fail to run on competing solutions and the customers want to swap them for Arista Networks’ solutions

So since you asked me specifically about AI as opposed to cloud, let me parse this problem into 2 halves, the back end and the front end, right? At the back end, we’re natively connecting to GPUs. And there can be many times, we just don’t say it because somebody just bundles it in the GPU in particular, NVIDIA. And you may remember a year ago, I was saying we’re outside looking in because most of the bundling is happening with InfiniBand…

…So we’ll take all we can get, but we are not claiming to be a market leader there. We’re, in fact, claiming that there are many incumbents there with InfiniBand and smaller versions of Ethernet that Arista is looking to gain more credibility and experience and become the gold standard for the back end.

On the front end, in many ways, we are viewed as the gold standard. So competitively, it’s a much more complex network. You have to build a leaf-spine architecture. John alluded to this, there’s a tremendous amount of scale with L2, L3, EVPN, VXLAN, visibility, telemetry, automation, routing at scale, encryption at scale. And this, what I would call accelerated networking portfolio complements NVIDIA’s accelerated compute portfolio. And compared to all the peers you mentioned, we have the absolute best portfolio of 20 switches and 3 families and the capability and the competitive differentiation is bar none. In fact, I am specifically aware of a couple of situations where the AI applications aren’t even running on some of the industry peers you talked about, and they want to swap theirs for ours. So feeling extremely bullish with the 7800 flagship product, the newly introduced 7700 that we worked closely with Meta, the 7060, this product line running today mostly at 400 gig because a lot of the NIC and the ecosystem isn’t there for 800. But moving forward into 800, this is why John and the team are building the supply chain to get ready for it.

ASML (NASDAQ: ASML)

While ASML’s management has seen the strong performance of AI continue – and expects the performance to continue for some time – other market segments have taken longer to recover than management expected; in the Memory segment, management is seeing limited capacity additions among customers, apart from AI, as the customers embark on technology transition to HBM and DDR5

There have been quite some market dynamics in the past couple of months. Very clearly, the strong performance of AI clearly continues and I think it continues to come with quite some upside. We will also see that in other market segments, it takes longer to recover. Recovery is there, but it’s more gradual than what we anticipated before and it will continue in 2025. That does lead to some customer cautiousness…

…If you look at the Memory business, this customer cautiousness that I talked about, leads to limited capacity additions. While at the same time, we do see a lot of focus and strong demand when it comes to technology transitions and particularly as it is related to High Bandwidth Memory and to DDR5. So again, there anything related to AI is strong, but other than that there are limited capacity additions.

The AI growth driver is very strong over the long term, and ASML’s management sees AI taking an increasing share of ASML’s customers’ business

If you look at the long-term outlook, I believe the growth drivers are still very much intact. The secular growth drivers are clear and they are strong. I think if you look at AI, very, very strong, very clear and undisputed. Taking an increasing share in the business of our customers. So I think that is going very strongly.

ASML’s management is seeing upside on AI because the overall demand for AI applications continues to increase, which has driven a recovery in server demand, but management does not yet have a complete understanding of how the AI market will play out

We also mentioned some upside on AI, because we still believe that the overall demand for those application is there, continue to increase. So if we look at the server demand, we see there a very nice recovery. A lot of that has to do with AI application. So we talk about upside, which also means that the overall dynamic of the market is still playing. And we felt the need to provide an update for next year based on some of the development we have seen. I think in no way we are also saying that there is a complete understanding of how the entire market will continue to play out in the next few months. So I think on the second part of your question, I would say maybe this has not played out fully yet…

…[Question] You would expect to happen then, I guess, to — at some point will happen?

[Answer] Well, I think if everyone — and I think a lot of us still believe in a strong AI demand in the coming years, I think that demand has to be fulfilled. Therefore, yes, I will say mostly, we will see some development also on that front in the coming months.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing next-gen AI customers want to obtain visibility into their AI usage as they continue experimenting with the technology; around 3,000 customers used at least one of Datadog’s AI integrations at the end of 2024 Q3; management is starting to see Datadog’s LLM (large language model) observability products gain traction as AI experiments start becoming production applications; hundreds of customers are already using LLM observability, and some customers have reduced time spent on investigating LLM issues from days or hours to just minutes; management is seeing customers wanting to use APM (Application Performance Monitoring) alongside LLM observability 

In the next-gen AI space, customers continue to experiment with new AI technologies. And as they do, they want to get visibility into their AI use. At the end of Q3, about 3,000 customers used one or more Datadog AI integrations to send us data about their AI, machine learning and LLM usage. As some of these experiments start turning into production AI applications, we are seeing initial signs of traction for our LLM observability products.

Today, hundreds of customers are using LLM observability with more exploring it every day. And some of our first paying customers have told us that they have cut the time spent investigating LLM latency, errors and quality from days or hours to just minutes. Our customers not only want to understand the performance and cost of their LLM applications, they also want to understand the LLM model performance within the context of their entire application. So they are using APM alongside LLM observability to get fully integrated end-to-end visibility across all their applications and tech stacks

AI-native customers accounted for more than 6% of Datadog’s ARR in 2024 Q3 (up from more than 4% in 2024 Q2 and around 2.5% a year ago); AI-native customers contributed around 4 percentage points to Datadog’s year-on-year growth in 2024 Q3, compared to around 2 percentage points in 2023 Q3; management has seen a very rapid ramp in usage of Datadog among large customers in the AI-native cohort, and management thinks these customers may optimise cloud and observability usage in the future while increasing their commitments to Datadog on better terms; management is seeing Datadog’s production-minded LLM observability products being used by real paying customers with real volumes in real production workloads; AI-native companies are model providers or AI infrastructure providers that serve as a proxy for the AI industry

AI native customers who this quarter represented more than 6% of our Q3 ARR, up from more than 4% in Q2 and about 2.5% of our ARR in the year ago quarter. AI native customers contributed about 4 percentage points of year-over-year growth in Q3 versus about 2 percentage points in the year ago quarter. While we believe that adoption of AI will continue to benefit Datadog in the long term, we are mindful that some of the large customers in this cohort have ramped extremely rapidly and that these customers may optimize cloud and observability usage and increase their commitments to us over time with better terms. This may create volatility in our revenue growth in future quarters on the backdrop of long-term volume growth…

…We are seeing our production-minded LLM observability products, for example, being used by real paying customers with real volumes and real applications in real production workloads. So that’s exciting and healthy. I think it’s a great trend for the future…

… We have that group of AI, like smaller — relatively small number of AI companies or AI native companies. Many of them are model providers or infrastructure providers for AI that serve the rest of the industry and they are really a proxy for the future growth of the rest of the industry in AI.

Datadog signed a 7-figure expansion deal with a hyperscaler delivering next-gen AI models; the hyperscaler has its homegrown observability solution, but the solution needs time-consuming customisation and manual configuration; the hyperscaler chose Datadog because Datadog’s platform can scale flexibly

We signed a 7-figure annualized expansion with a division of a hyperscaler delivering next-gen AI models. This customer is very technically capable and already has a homegrown observability solution, which requires time-consuming customization and manual configuration. They will be launching new features for their large language models soon and need a platform that can scale flexibly while supporting proactive incident detection. By expanding the use of Datadog, they expect to efficiently onboard new teams and environments and support the rapidly increasing adoption of the LLMs.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business

Overall, we continue to see no change to the multiyear trend towards digital transformation and cloud migration, which we continue to believe are still in early days. We are seeing continued experimentation with new advances such as next-gen AI, and we believe this is one of the many factors that will drive greater use of the cloud and other modern technologies.

Datadog’s management is starting to see more inference AI workloads, but they are still concentrated among API-driven providers and it’s still very early days in terms of customers putting their next-gen AI applications into production; management expects more diversification to occur in the future as more companies enter production with their applications and customise their models 

In terms of the workloads, you’re right that we’re starting to see more inference workloads, but they still tend to be more concentrated across a number of API-driven providers. So there are a few others, both on LLMs and other kinds of models. So this is where I think most of the usage in production at least is today. We expect that to diversify more over time as companies get further into production with their applications and they start to be customizing more on their models…

…We are excited to see what’s happening with the AI innovation as it gets further down the pipe and away from testing and experimenting and more into production applications. And we have some signs that it’s starting to happen. Again, we see that with our LLM observability product. We see that also with some of the workloads we monitor from our customers on the infrastructure side. But I would say it’s still very early days in terms of customers being in production with their next-gen AI applications.

Datadog’s management is seeing a small portion of companies’ cloud workloads being cannibalised by their AI initiatives

You’re right that the — where the workloads could have grown maybe instead of growing 20%, they could grow 25%, maybe some of those 5% instead are being invested both in terms of infrastructure budget or innovation — time innovation budget. All that is going into AI, and that’s largely right now in experimentation and model training and that sort of thing. 

Datadog’s management is working with customers with large inference workloads on how Datadog can be helpful on the GPU profiling side of inference; management is also experimenting with how Datadog can be helpful on the training side; management thinks that in a steady state, 60% of AI workloads will be inference and 40% will be training, so there’s still a lot of value to be found if Datadog can be useful on the training side too

Right now, we’re working with a number of customers that have real-world large inference workloads on how we can help on the GPU profiling side for inference. We’re doing less on the training side, mostly because the training jobs tend to be more bespoke and temporary, and there’s less of an application that’s attached to those that these are just very large clusters of GPUs. So it’s closer to HPC in a way than it is to traditional applications, though we are also experimenting with what we can do there. There is a world where maybe in a durable fashion, 60% of workloads are inference and 40% are training. And if that’s the case, there’s going to be a lot of value to be had by having repeatable training and repeatable tooling for that. So we are also looking into that.

Datadog is not monetising GPU instances as well as CPU instances today, but management thinks that could change in the future

As of today, we really don’t monetize GPU instances all that well compared to the other CPU instances. So GPU instance is many times the cost of a CPU instance, and we charge the same amount for it. That doesn’t have to be the case in the future. If we do things that are particularly interesting and it’s going to have a real impact on — and deliver value and how customers use and make the best of their GPUs and in the end, save money. 

Datadog’s management is seeing Datadog’s AI-native cohort grow faster than its cloud-native cohorts did in the late 2010s and early 2020s

What we’ve seen with cloud native in the late ’10s and early ’20s, where we had these numbers of cloud-native consumer companies that were growing very fast, with 2 differences. The first one is that the AI cohort is growing faster and there are larger individual ACVs [annual contract value] for these customers.

Datadog’s management thinks that workloads on Datadog’s platform could really accelerate when non-AI-native companies start bringing AI applications into production

In terms of the growth of workloads, look, I mean, as we said, we see growth across the customer base pretty much. We see growth of classical workloads in the cloud. We see large growth — very large growth on the AI native side. We think that the one big catalyst for future acceleration will be those AI native applications or those AI applications, I should say, going into production for non-AI native companies for a much broader set of customers than the customers that are deploying these kind of applications to their — in production. And as they do, they will also look less like just large cluster of GPUs and more like traditional applications because the GPU needs a database, it needs [ core ] application in front of it, it needs layers to secure it and authorize it and all the other things. So it’s going to look a lot more like a normal application with some additional more concentrated compute and GPUs.

Datadog’s management does not expect Datadog to make outsized investments in GPU clusters for compute

Unlike many others, we don’t expect at this point to have outsized investments in compute. We’re not building absolutely large GPU clusters.

dLocal (NASDAQ: DLO)

dLocal launched its Smart Requests functionality in 2024 Q3; Smart Requests improves conversion rates for merchants by 1.22 percentage points on average, which equates to a 1.2% increase in revenue for merchants; Smart Requests relies on localised, per-country machine learning models to maximise authorisation rates for merchants

During the quarter, we launched our smart requests functionality, boosting our transaction performance and therefore, improving conversion rates by an average of 1.22 percentage points across the board. It may sound minor, but it isn’t. It actually represents, in practical terms, 1.2% additional revenue to our merchants. Smart requests rely on per country machine learning models that optimize routing and chaining so as to maximize authorization rates for our merchants.
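To see why a lift measured in percentage points translates into roughly the same relative revenue gain, here is a minimal sketch of the arithmetic. The 1.22-percentage-point figure is from dLocal’s quote above; the baseline authorisation rate is an assumption for illustration only.

```python
# Hypothetical illustration: how a percentage-point lift in authorization
# rate maps to a relative revenue increase for a merchant, assuming
# revenue scales linearly with the share of approved transactions.

def revenue_lift(baseline_rate: float, lift_pp: float) -> float:
    """Relative revenue increase from a conversion-rate lift."""
    return lift_pp / baseline_rate

baseline = 0.95   # assumed baseline authorization rate (not from dLocal)
lift = 0.0122     # the 1.22-percentage-point lift cited by dLocal
print(f"{revenue_lift(baseline, lift):.2%}")  # ~1.28% with this assumed baseline
```

With baseline authorisation rates close to 100%, the relative revenue gain lands close to the quoted 1.2%.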

Fiverr (NYSE: FVRR)

Fiverr’s management believes that Fiverr’s next generation of products must empower its community to fully leverage AI, and that the best work will be done in the future by a combination of humans and AI

One thing that became clearer to me in the last year is that with the emergence of GenAI and the promise of AGI, the next generation of products we build must empower our community to fully leverage artificial intelligence. It also became clear to me that in the future, the best work will be done by humans and AI technology together, not humans alone or AI alone.

Fiverr’s management is providing Fiverr’s customers with an AI assistant to help them navigate the company’s platform 

This means that every business that comes to Fiverr will have a world-class AI assistant to help them get things done, from ideation, scoping and briefing to project management and workflow automation. It means that they can seamlessly leverage both human talent and machine intelligence to create the most beautiful results.

Fiverr’s management is building a new search experience on the Fiverr platform for buyers which incorporates Neo, its AI-powered smart matching tool; Fiverr has launched Dynamic Matching to allow buyers to put together project briefs with an AI assistant and get matched with the most relevant freelancers; these new features have received an enthusiastic reception in just a few weeks since launch; projects that use these new features are several times larger than the typical project on Fiverr

On the buyer side, we are building a new search experience that not only includes more dynamic catalogs but also incorporates Neo, an AI-powered smart matching tool, to help customers match with more contextual information. We launched Dynamic Matching to allow buyers to put together comprehensive briefs with a powerful AI assistant and then get matched with the most relevant freelancer with a tailored proposal…

…Even in the few weeks since we launched these products, we have already seen an enthusiastic reception from our community and promising performance. The projects that come through these products are several times larger than a typical project on Fiverr, and we believe it has a lot more potential down the road as the awareness and trust of these products grow on the platform.

Mastercard (NYSE: MA)

Mastercard acquired Brighterion in 2017 to use AI capabilities for decision intelligence; after boosting the product with generative AI, Mastercard has seen up to a 20% lift in the product’s performance

One of the more recent ones that we talked about that we invested heavily in using our Brighterion acquisition from back in 2017 to use our AI capabilities is decision intelligence. We’ve now boosted the product with Gen AI and the outcome that we see is tremendous. This is up to a 20% lift that we see.

Meta Platforms (NASDAQ: META)

Meta’s management is seeing rapid adoption of Meta AI and Llama; Meta AI now has more than 500 million monthly actives; Llama token usage has grown exponentially in 2024 so far; Meta released Llama 3.2 in 2024 Q3; the public sector is adopting Llama; management is seeing higher usage of Meta AI as the models improve; Meta AI is built on Llama 3.2; voice functions for Meta AI are now available in English in the USA, Australia, Canada, and New Zealand; image editing through simple text prompts, and the ability to learn about images, are now available in Meta AI in the USA; Meta AI remains on track to be the most-used AI assistant in the world by end-2024; early use cases for Meta AI are for information gathering, help with how-to tasks, explore interests, look for content, and generate images

We’re seeing rapid adoption of Meta AI and Llama, which is quickly becoming a standard across the industry…

…Meta AI now has more than 500 million monthly actives…

…Llama token usage has grown exponentially this year and the more widely that Llama gets adopted and becomes the industry standard the more that the improvements to its quality and efficiency will flow back to all of our products. This quarter, we released Llama 3.2, including the leading small models that run on device and open source multimodal models…

…We’re also working with the public sector to adopt Llama across the U.S. government…

…We’re seeing lifts in usage as we improve our models and have introduced a number of enhancements in recent months to make Meta AI more helpful and engaging. Last month, we began introducing voice, so you can speak with Meta AI more naturally, and it’s now fully available in English to people in the U.S., Australia, Canada and New Zealand. In the U.S., people can now also upload photos to Meta AI to learn more about them, write captions for posts and add, remove or change things about their images with a simple text prompt. These are all built with our first multimodal foundation model, Llama 3.2…

…We’re excited about the progress of Meta AI. It’s obviously very early in its journey, but it continues to be on track to be the most used AI assistant in the world by end of year…

… Number of the frequent use cases we’re seeing include information gathering, help with how-to tasks, which is the largest use case. But we also see people using it to go deeper on interests, to look for content on our services, for image generation, that’s also been another pretty popular use case so far.

Meta’s management is seeing AI have a positive impact on nearly all aspects of Meta; improvements to Meta’s AI-driven feed and video recommendations have driven increases in time spent on Facebook this year by 8% and on Instagram by 6%; more than 1 million advertisers are using Meta’s Gen AI tools and advertisers using image generation are enjoying a 7% increase in conversions; management sees plenty of new opportunities for new AI advances to accelerate Meta’s core business, so they want to invest more there

We’re seeing AI have a positive impact on nearly all aspects of our work from our core business engagement and monetization to our long-term road maps for new services and computing platforms…

…Improvements to our AI-driven feed and video recommendations have led to an 8% increase in time spent on Facebook and a 6% increase on Instagram this year alone. More than 1 million advertisers used our Gen AI tools to create more than 15 million ads in the last month. And we estimate that businesses using image generation are seeing a 7% increase in conversions and we believe that there’s a lot more upside here…

…It’s clear that there are a lot of new opportunities to use new AI advances to accelerate our core business that should have strong ROI over the next few years. So I think we should invest more there.

The development of Llama 4 is progressing well; Llama 4 is being trained on more than 100,000 H100s, the biggest training cluster that management is aware of; management expects the smaller Llama 4 models to be ready in early-2025; management thinks Llama 4 will be much faster and will have new modalities, stronger capabilities, and better reasoning

I’m even more excited about Llama 4, which is now well into its development. We’re training the Llama 4 models on a cluster that is bigger than 100,000 H100s or bigger than anything that I’ve seen reported for what others are doing. I expect that the smaller Llama 4 models will be ready first, and they’ll be ready — we expect sometime early next year. And I think that there are going to be a big deal on several fronts, new modalities, capabilities, stronger reasoning and much faster. 

Meta’s management remains convinced that open source is the way to go for AI development; the more developers use Llama, the more Llama improves in both quality and efficiency; in terms of efficiency, with higher adoption of Llama, management is seeing NVIDIA and AMD optimise their chip designs to better run Llama

It seems pretty clear to me that open source will be the most cost-effective, customizable, trustworthy performance and easiest to use option that is available to developers. And I am proud that Llama is leading the way on this…

…[Question] You said something along the lines of the more standardized Llama becomes the more improvements will flow back to the core meta business. And I guess, could you just dig in a little bit more on that?

[Answer] The improvements to Llama, I’d say come in a couple of flavors. There’s sort of the quality flavor and the efficiency flavor. There are a lot of researchers and independent developers who do work and because Llama is available, they do the work on Llama and they make improvements and then they publish it and it becomes — it’s very easy for us to then incorporate that both back into Llama and into our Meta products like Meta AI or AI Studio or Business AIs because the work — the examples that are being shown are people doing it on our stack.

Perhaps more importantly, is just the efficiency and cost. I mean this stuff is obviously very expensive. When someone figures out a way to run this better if that — if they can run it 20% more effectively, then that will save us a huge amount of money. And that was sort of the experience that we had with open compute and why — part of why we are leaning so much into open source here in the first place, is that we found counterintuitively with open compute that by publishing and sharing the architectures and designs that we had for our compute, the industry standardized around it a bit more. We got some suggestions also that helped us save costs and that just ended up being really valuable for us. Here, one of the big costs is chips — a lot of the infrastructure there. What we’re seeing is that as Llama gets adopted more, you’re seeing folks like NVIDIA and AMD optimize their chips more to run Llama specifically well, which clearly benefits us. 

Meta’s management expects to continue investing seriously into AI infrastructure

Our AI investments continue to require serious infrastructure, and I expect to continue investing significantly there too. We haven’t decided on the final budget yet, but those are some of the directional trends that I’m seeing.

Meta’s management thinks the integration of Meta AI into the Meta Ray-Ban glasses is what truly makes the glasses special; the Meta Ray-Ban glasses can answer questions throughout the day, help wearers remember things, give suggestions to wearers in real-time using multi-modal AI, and translate languages directly into the ear of wearers; management continues to think glasses are the ideal form-factor for AI because glasses let AI see what you see and hear what you hear; demand for the Meta Ray-Ban glasses continues to be really strong; a recent release of the glasses was sold out almost immediately; Meta has deepened its partnership with EssilorLuxottica to build future generations of the glasses; Meta recently showcased Orion, its first full holographic AR glasses

This quarter, we also had several milestones around Reality Labs and the integration of AI and wearables. Ray-Ban meta glasses are the prime example here. They’re great-looking glasses that let you take photos and videos, listen to music and take calls. But what makes them really special is the Meta AI integration. With our new updates, it will be able to not only answer your questions throughout the day, but also help you remember things, give you suggestions as you’re doing things using real-time multi-modal AI and even translate other languages right in your ear for you. I continue to think that glasses are the ideal form factor for AI because you can let your AI see what you see, hear what you hear and talk to you.

Demand for the glasses continues to be very strong. The new clear edition that we released at Connect sold out almost immediately and has been trading online for over $1,000. We’ve deepened our partnership with EssilorLuxottica to build future generations of smart eyewear that deliver both cutting-edge technology and style.

At Connect, we also showed Orion, our first full holographic AR glasses. We’ve been working on this one for about a decade, and it gives you a sense of where this is all going. We’re not too far off from being able to deliver great-looking glasses to let you seamlessly blend the physical and digital worlds so you can feel present with anyone no matter where they are. And we’re starting to see the next computing platform come together and it’s pretty exciting.

Newer scaling laws seen with Meta’s large language models inspired management to develop new ranking model architectures that can learn more effectively from significantly larger data sets; the new ranking model architectures have been deployed to Facebook’s video ranking models, helping to deliver more relevant recommendations; management is exploring the use of the new ranking model architectures on other services and the introduction of cross-surface data to the models, with the view that these moves will unlock more relevant recommendations and lead to better engineering efficiency

Previously, we operated separate ranking and recommendation systems for each of our products because we found that performance did not scale if we expanded the model size and compute power beyond a certain point. However, inspired by the scaling laws we were observing with our large language models, last year, we developed new ranking model architectures capable of learning more effectively from significantly larger data sets.

To start, we have been deploying these new architectures to our Facebook video ranking models, which has enabled us to deliver more relevant recommendations and unlock meaningful gains in watch time. Now we’re exploring whether these new models can unlock similar improvements to recommendations on other services. After that, we will look to introduce cross-surface data to these models, so our systems can learn from what is interesting to someone on one surface of our apps and use it to improve their recommendations on another. This will take time to execute and there are other explorations that we will pursue in parallel. However, over time, we are optimistic that this will unlock more relevant recommendations while also leading to higher engineering efficiency as we operate a smaller number of recommendation systems.

Meta’s management is using new approaches to AI modelling to allow Meta’s ad systems to consider a person’s sequence of actions before and after seeing an ad, which allow the systems to better predict a person’s response to specific ads; the new approaches to AI modelling have delivered a 2%-4% increase in conversions in tests; Meta is seeing strong user-retention with its generative AI tools for image expansion, background generation, and text generation; Meta has started testing its first generative AI tools for video expansion and image animation and plans to roll them out broadly by early-2025

The second part of improving monetization efficiency is enhancing marketing performance. Similar to organic content ranking, we are finding opportunities to achieve meaningful ads performance gains by adopting new approaches to modeling. For example, we recently deployed new learning and modeling techniques that enable our ad systems to consider the sequence of actions a person takes before and after seeing an ad. Previously, our ad system could only aggregate those actions together without mapping the sequence. This new approach allows our systems to better anticipate how audiences will respond to specific ads. Since we adopted the new models in the first half of this year, we’ve already seen a 2% to 4% increase in conversions based on testing within selected segments…

…Finally, there is continued momentum with our Advantage+ solutions, including our ad creative tools. We’re seeing strong retention with advertisers using our Generative AI-powered image expansion, background generation and text generation tools, and they’re already driving improved performance for advertisers even at this early stage. Earlier this month, we began testing our first video generation features, video expansion and image animation. We expect to make them more broadly available by early next year.
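The quote above notes that Meta’s ad systems previously aggregated a person’s actions without mapping their sequence. A toy sketch (with invented action names, not Meta’s actual features or system) of why preserving order carries extra signal:

```python
from collections import Counter

# A toy illustration of the modeling change described above: an aggregate,
# bag-of-actions view cannot distinguish two users whose actions differ
# only in order, while a sequence view can. Action names are invented.

user_a = ["view_ad", "visit_site", "add_to_cart", "purchase"]
user_b = ["add_to_cart", "view_ad", "purchase", "visit_site"]

# Aggregation throws away order: both users look identical.
print(Counter(user_a) == Counter(user_b))  # True
# A sequence representation preserves order: the users differ.
print(user_a == user_b)                    # False
```

A model fed the ordered sequence can, in principle, learn that seeing an ad *before* a purchase is more predictive than the same actions in another order, which the aggregate view cannot express.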

Meta’s management expects to significantly increase Meta’s infrastructure for generative AI while prioritising fungibility

Given the lead time of our longer-term investments, we also continue to maximize our flexibility so that we can react to market developments. Within Reality Labs, this has benefited us as we’ve evolved our road map to respond to the earlier-than-expected success of smart glasses. Within Generative AI, we expect significantly scaling up our infrastructure capacity now while also prioritizing its fungibility will similarly position us well to respond to how the technology and market develop in the years ahead.

Meta’s management continues to develop tools for individuals and businesses to create AI agents easily; management thinks that Meta’s progress with AI agent tools is currently at where Meta was with Meta AI a year ago; management wants the AI agent tools to be widely used in 2025

There are also other new products like that, things around AI Studio. This year, we really focused on rolling out Meta AI as kind of our single assistant that people can ask any question to, but I think there’s a lot of opportunities that I think we’ll see ramp more over the next year in terms of both consumer and business use cases, for people interacting with a wide variety of different AI agents, consumer ones with AI Studio around whether it’s different creators or kind of different agents that people create for entertainment. Or on the business side, we do want to continue making progress on this vision of making it so any small business or any business over time can, with a few clicks, stand up an AI agent that can help do customer service and sell things to all of their customers around the world, and I think that’s a huge opportunity. So it’s very broad…

…But I’d say that we’re — today, with AI Studio and business AIs about where we were with Meta AI about a year ago. So I think in the next year, our goal around that is going to be to try to make those pretty widespread use cases, even though there’s going to be a multiyear path to getting kind of the depth of usage and the business results around that we want. 

Meta’s management is not currently sharing quantitative metrics on productivity improvements with the internal use of AI, but management is excited about the internal adoption they are seeing and the future opportunities for doing so

On the use of AI and employee productivity, it’s certainly something that we’re very excited about. I don’t know that we have anything particularly quantitative that we’re sharing right now. I think there are different efficiency opportunities with AI that we’ve been focused on in terms of where we can reduce costs over time and generate savings through increasing internal productivity in areas like coding. For example, it’s early, but we’re seeing a lot of adoption internally of our internal assistant and coding agent, and we continue to make Llama more effective at coding, which should also make this use case increasingly valuable to developers over time.

There are also places where we hope over time that we’ll be able to deploy these tools against a lot of our content moderation efforts to help make the big body of content moderation work that we undertake, to help it make it more efficient and effective for us to do so. And there are lots of other places around the company where I would say we’re relatively early in exploring the way that we can use LLM based tools to make different types of work streams more efficient.

It appears that Meta has achieved more than management expected in terms of building out its AI infrastructure capacity

So I think part of what we’re seeing this year is the infra team is executing quite well. And I think that’s, why over the course of the year, we’ve been able to build out more capacity. I mean going into the year, we had a range for what we thought we could potentially do. And we have been able to do, I think, more than, I think, we’d kind of hoped and expected at the beginning of the year. And while that reflects as higher expenses, it’s actually something that I’m quite happy that the team is executing well on. And I think that will — so that execution makes me somewhat more optimistic that we’re going to be able to keep on building this out at a good pace but that’s part of the whole thing. 

Meta’s management is starting to test the addition of AI-generated or AI-augmented content to users of Instagram and Facebook; management has high confidence that AI-generated and/or AI-augmented content will be an important trend in the future

I think we’re going to add a whole new category of content, which is AI generated or AI summarized content or kind of existing content pulled together by AI in some way. And I think that, that’s going to be just very exciting for the — for Facebook and Instagram and maybe Threads or other kind of feed experiences over time. It’s something that we’re starting to test different things around this. I don’t know if we know exactly what’s going to work really well yet. Some things are promising. I don’t know that this isn’t going to be a big impact on the business in ’25 would be my guess. But I think that there is I have high confidence that over the next several years, this is going to be an important trend and one of the important applications.

Meta’s management is currently focused on the engagement and user-experience of Meta AI; the monetisation of Meta AI will come later

Right now, we’re really focused on making Meta AI as engaging and valuable a consumer experience as possible. Over time, we think there will be a broadening set of queries that people use it for. And I think that the monetization opportunities will exist when over time as we get there. But right now, I would say we are really focused on the consumer experience above all and this is sort of a playbook for us with products that we put out in the world where we really dial in the consumer experience before we focus on what the monetization could look like.

Microsoft (NASDAQ: MSFT)

Microsoft’s AI business is on track to exceed $10 billion in annual revenue run rate in 2024 Q4, just 2.5 years after its inception; it will be the fastest business in the company’s history to do so; Microsoft’s AI business is nearly all inference (see Point 32 for more)

All up, our AI business is on track to surpass an annual revenue run rate of $10 billion next quarter, which will make it the fastest business in our history to reach this milestone…

…We’re excited that only 2.5 years in, our AI business is on track to surpass $10 billion of annual revenue run rate in Q2…

…If you sort of think about the point we even made that this is going to be the fastest growth to $10 billion of any business in our history, it’s all inference, right? 

Azure took share in 2024 Q3 (FY2025 Q1), driven by AI; Azure grew revenue by 33% in 2024 Q3 (was 29% in 2024 Q2), with 12 points of growth from AI services (was 8 points in 2024 Q2); Azure’s AI business has higher demand than available capacity

Azure took share this quarter…. 

… Azure and other cloud services revenue grew 33% and 34% in constant currency, with healthy consumption trends that were in line with expectations. The better-than-expected result was due to the small benefit from in-period revenue recognition noted earlier. Azure growth included roughly 12 points from AI services similar to last quarter. Demand continues to be higher than our available capacity. 

Microsoft’s management thinks Azure offers the broadest selection of AI chips, from Microsoft’s own Maia 100 chip to AMD and NVIDIA’s latest GPUs; Azure is the first cloud provider to offer NVIDIA’s GB200 chips

We are building out our next-generation AI infrastructure, innovating across the full stack to optimize our fleet for AI workloads. We offer the broadest selection of AI accelerators, including our first-party accelerator, Maia 100 as well as the latest GPUs from AMD and NVIDIA. In fact, we are the first cloud to bring up NVIDIA’s Blackwell system with GB200-powered AI servers.

Azure OpenAI usage more than doubled in the past 6 months, as both startups and enterprises move apps from test to production; GE Aerospace used Azure OpenAI to build a digital assistant for its 52,000 employees and in 3 months, the assistant has processed 500,000 internal queries and 200,000 documents; Azure recently added support for OpenAI’s newest o1 family of AI models; Azure AI is offering industry-specific models, including multi-modal models for medical imaging; Azure AI is increasingly an on-ramp for Azure’s data and analytics services, driving acceleration of Azure Cosmos DB and Azure SQL DB hyperscale usage

More broadly with Azure AI, we are building an end-to-end app platform to help customers build their own copilots and agents. Azure OpenAI usage more than doubled over the past 6 months as both digital natives like Grammarly and Harvey as well as established enterprises like Bajaj Finance, Hitachi, KT and LG move apps from test to production. GE Aerospace, for example, used Azure OpenAI to build a new digital assistant for all 52,000 of its employees. In just 3 months, it has been used to conduct over 500,000 internal queries and process more than 200,000 documents…

…This quarter, we added support for OpenAI’s newest model family, o1. We’re also bringing industry-specific models through Azure AI, including a collection of best-in-class multimodal models for medical imaging…

…Azure AI is also increasingly an on-ramp to our data and analytics services. As developers build new AI apps on Azure, we have seen an acceleration of Azure Cosmos DB and Azure SQL DB hyperscale usage as customers like Air India, Novo Nordisk, Telefonica, Toyota Motor North America and Uniper take advantage of capabilities purpose built for AI applications. 

Azure is offering its full catalog of AI models directly within the GitHub developer workflow; GitHub Copilot enterprise customers grew 55% sequentially in 2024 Q3; GitHub Copilot now has agentic workflows, such as Copilot Autofix, which helps users fix code 3x faster than it would take them on their own

And with the GitHub models, we now provide access to our full model catalog directly within the GitHub developer workflow…

… GitHub Copilot is changing the way the world builds software. Copilot enterprise customers increased 55% quarter-over-quarter as companies like AMD and Flutter Entertainment tailor Copilot to their own code base. And we are introducing the next phase of AI code generation, making GitHub Copilot agentic across the developer workflow. GitHub Copilot Workspace is a developer environment, which leverages agents from start to finish so developers can go from spec to plan to code all in natural language. Copilot Autofix is an AI agent that helps developers at companies like Asurion and Auto Group fix vulnerabilities in their code over 3x faster than it would take them on their own. We’re also continuing to build on GitHub’s open platform ethos by making more models available via GitHub Copilot. And we are expanding the reach of GitHub to a new segment of developers introducing GitHub Spark, which enables anyone to build apps in natural language.

Microsoft 365 Copilot has a new Pages feature, which management thinks is the first new digital artefact for the AI age; Pages helps users brainstorm with AI and collaborate with other users; Microsoft 365 Copilot responses are now 2x faster and response quality is nearly 3x better; daily users of Microsoft 365 have more than doubled sequentially; Microsoft 365 Copilot saves Vodafone employees 3 hours per person per week, and will be rolled out to 68,000 employees; nearly 70% of the Fortune 500 now use Microsoft 365 Copilot; Microsoft 365 Copilot is being adopted at a faster rate than any other new Microsoft 365 suite; with Copilot Studio, organisations can build autonomous agents to connect with Microsoft 365 Copilot; more than 100,000 organisations have used Copilot Studio, up 2x sequentially; monthly active users of Copilot across Microsoft’s CRM and ERP portfolio grew 60% sequentially

We launched the next wave of Microsoft 365 Copilot innovation last month, bringing together web, work, and Pages as the new design system for knowledge work. Pages is the first new digital artifact for the AI age, and it’s designed to help you ideate with AI and collaborate with other people. We’ve also made Microsoft 365 Copilot responses 2x faster and improved response quality by nearly 3x. This innovation is driving accelerated usage, and the number of people using Microsoft 365 daily more than doubled quarter-over-quarter. We are also seeing increased adoption from customers in every industry as they use Microsoft 365 Copilot to drive real business value. Vodafone, for example, will roll out Microsoft 365 Copilot to 68,000 employees after a trial showed that, on average, they save 3 hours per person per week. And UBS will deploy 50,000 seats in our largest finserve deal to date. And we continue to see enterprise customers coming back to buy more seats. All up, nearly 70% of the Fortune 500 now use Microsoft 365 Copilot, and customers continue to adopt it at a faster rate than any other new Microsoft 365 suite…

…With Copilot Studio, organizations can build and connect Microsoft 365 Copilot to autonomous agents, which then delegate to Copilot when there is an exception. More than 100,000 organizations from Nsure, Standard Bank and Thomson Reuters to Virgin Money and Zurich Insurance have used Copilot Studio to date, up over 2x quarter-over-quarter…

…Monthly active users of Copilot across our CRM and ERP portfolio increased over 60% quarter-over-quarter. 

Microsoft is bringing AI to industry-specific workflows; DAX Copilot is used in over 500 healthcare organisations to document more than 1.3 million physician-patient encounters each month; DAX Copilot is growing revenue faster than GitHub Copilot did in its first year

We’re also bringing AI to industry-specific workflows. One year in, DAX Copilot is now documenting over 1.3 million physician-patient encounters each month at over 500 health care organizations like Baptist Medical Group, Baylor Scott & White, Greater Baltimore Medical Center, Novant Health and Overlake Medical Center. It is showing faster revenue growth than GitHub Copilot did in its first year. And new features extend DAX beyond notes, helping physicians automatically draft referrals, after-visit instructions and diagnostic evidence.

LinkedIn’s AI tools help hirers find qualified candidates faster, and hirers who use AI assistant messages see a 44% higher acceptance rate

LinkedIn’s first agent, Hiring Assistant, will help hirers find qualified candidates faster by tackling the most time-consuming task. Already, hirers who use AI-assisted messages see a 44% higher acceptance rate compared to those who don’t. And our hiring business continues to take share.

In October 2024, Microsoft introduced a new AI companion experience – powered by Copilot – that includes voice and vision capabilities, allowing users to browse and converse with Copilot simultaneously

With Copilot, we are seeing the first step towards creating a new AI companion for everyone. The new Copilot experience we introduced earlier this month includes a refreshed design and tone along with improved speed and fluency across the web and mobile. And it includes advanced capabilities like voice and vision that make it more delightful and useful and feel more natural. You can both browse and converse with Copilot simultaneously because Copilot sees what you see.

Roughly half of Microsoft’s cloud and AI-related capex in 2024 Q3 (FY2025 Q1) is for long-lived assets that will support monetisation over the next 15 years and more, while the other half is for CPUs and GPUs; the capex spend on CPUs and GPUs is made based on demand signals; management will look at inference demand to govern the level of AI capex for training; management expects that growth in capex will eventually slow while revenue growth increases, with the speed of that transition depending on the pace of AI adoption; the capex that Microsoft has been committing is a sign of management’s commitment to grow together with OpenAI, and to grow Azure beyond OpenAI; Microsoft is currently not interested in selling raw GPUs for companies to train AI models and has turned such business away, and this gives management conviction in the company’s AI-related capex

Capital expenditures including finance leases were $20 billion, in line with expectations, and cash paid for PP&E was $14.9 billion. Roughly half of our cloud and AI-related spend continues to be for long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend is primarily for servers, both CPUs and GPUs, to serve customers based on demand signals…

…The inference demand ultimately will govern how much we invest in training because that’s, I think, at the end of the day, you’re all subject to ultimately demand…

…I think in some ways, it’s helpful to go back to the cloud transition that we worked on over a decade ago, I think, in the early stages. And what you did see and you’ll see us do in the same time is you have to build to meet demand. Unlike the cloud transition, we’re doing it on a global basis in parallel as opposed to sequential given the nature of the demand. And then as long as we continue to see that demand grow, you’re right, the growth in CapEx will slow and the revenue growth will increase. And those 2 things, to your point, get closer and closer together over time. The pace of that entirely depends really on the pace of adoption…

…[Question] How does Microsoft manage the demands on CapEx from helping OpenAI with its scaling ambitions?

[Answer] I’m thrilled with their success and need for supply from Azure and infrastructure and really what it’s meant in terms of being able to also serve other customers for us. It’s important that we continue to invest capital to meet not only their demand signal and needs for compute but also from our broader customers. That’s partially why you’ve seen us committing the amount of capital we’ve seen over the past few quarters, is our commitment to both grow together and for us to continue to grow the Azure platform for customers beyond them…

…One of the things that may not be as evident is that we’re not actually selling raw GPUs for other people to train. In fact, that’s sort of a business we turn away because we have so much demand on inference that we are not taking what I would — in fact, there’s a huge adverse selection problem today where people — it’s just a bunch of tech companies still using VC money to buy a bunch of GPUs. We kind of really are not even participating in most of that because we are literally going to the real demand, which is in the enterprise space or our own products like GitHub Copilot or M365 Copilot. So I feel the quality of our revenue is also pretty superior in that context. And that’s what gives us even the conviction, to even Amy’s answers previously, about our capital spend, is if this was just all about sort of a bunch of people training large models and that was all we got, then that would be ultimately still waiting, to your point, for someone to actually have demand, which is real. And in our case, the good news here is we have a diversified portfolio. We’re seeing real demand across all of that portfolio.

Microsoft’s management continues to expect Azure’s growth to accelerate in FY2025 H2, driven by increase in AI capacity to meet growing demand

In H2, we still expect Azure growth to accelerate from H1 as our capital investments create an increase in available AI capacity to serve more of the growing demand.

Microsoft’s management thinks that the level of supply and demand for AI compute will match up in FY2025 H2

But I feel pretty good that going into the second half of even this fiscal year, that some of that supply/demand will match up…

…I do, as you heard, have confidence, as we get a good influx of supply across the second half of the year particularly on the AI side, that we’ll be better able to do some supply-demand matching and hence, while we’re talking about acceleration in the back half.

Microsoft’s management sees Microsoft’s partnership with OpenAI as having been super beneficial to both parties; Microsoft provides the infrastructure for OpenAI to innovate on models; Microsoft takes OpenAI’s models and innovates further, through post-training of the models, building smaller models, and building products on top of the models; management developed conviction on the OpenAI partnership after seeing products such as GitHub Copilot and DAX Copilot get built; management feels very good about Microsoft’s investment in OpenAI; Microsoft accounts for OpenAI’s financials under the equity method

The partnership for both sides, that’s OpenAI and Microsoft, has been super beneficial. After all, we effectively sponsored what is one of the highest-valued private companies today when we invested in them and really took a bet on them and their innovation 4, 5 years ago. And that has led to great success for Microsoft. That’s led to great success for OpenAI. And we continue to build on it, right? So we serve them with world-class infrastructure on which they do their innovation in terms of models, on top of which we innovate on both the model layer with some of the post-training stuff we do as well as some of the small models we build and then, of course, all of the product innovation, right? One of the things that my own sort of conviction of OpenAI and what they were doing came about when I started seeing something like GitHub Copilot as a product get built or DAX Copilot get built or M365 Copilot get built…

… And the same also, I would say, we are investors. We feel very, very good about sort of our investment stake in OpenAI…

…  I would say, just a reminder, this is under the equity method, which means we just take our percentage of losses every quarter. And those losses, of course, are capped by the amount of investment we make in total, which we did talk about in the Q this quarter as being $13 billion. And so over time, that’s just the constraint, and it’s a bit of a mechanical entry. And so I don’t really think about managing that. That’s the investment and acceleration that OpenAI is making in themselves, and we take a percentage of that.

Microsoft’s management sees Copilot as the UI layer for humans to interact with AI; Copilot Studio is used to build AI agents that connect Copilot to other systems of the user’s choice; Copilot Studio can also be used to create autonomous AI agents, but these agents are not fully autonomous because at some point they will need to notify a human or require an input, and that is where Copilot comes in again

The system we have built is Copilot, Copilot Studio, agents and autonomous agents. You should think of that as the spectrum of things, right? So ultimately, the way we think about how this all comes together is you need humans to be able to interface with AI. So the UI layer for AI is Copilot. You can then use Copilot Studio to extend Copilot. For example, you want to connect it to your CRM system, to your office system, to your HR system. You do that through Copilot Studio by building agents effectively.

You also build autonomous agents. So you can use even — that’s the announcement we made a couple of weeks ago, is you can even use Copilot Studio to build autonomous agents. Now these autonomous agents are working independently, but from time to time, they need to raise an exception, right? So autonomous agents are not fully autonomous because, at some point, they need to either notify someone or have someone input something. And when they need to do that, they need a UI layer, and that’s where, again, it’s Copilot.

So Copilot, Copilot agents built-in Copilot Studio, autonomous agents built in Copilot Studio, that’s the full system, we think, that comes together.
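The split described above — autonomous agents working independently, then escalating to a human-facing UI layer when they hit an exception — is essentially the human-in-the-loop pattern. A minimal sketch of that control flow, assuming nothing about Copilot Studio's actual API (every class, function, and field name below is a hypothetical stand-in):

```python
# Illustrative sketch of the agent / UI-layer split: the agent handles
# routine work on its own and raises an exception when it needs a person.
# None of these names come from Copilot Studio; all are hypothetical.

class NeedsHumanInput(Exception):
    """Raised when an autonomous agent cannot proceed without a person."""
    def __init__(self, prompt: str):
        super().__init__(prompt)
        self.prompt = prompt

def refund_agent(order: dict) -> str:
    """An 'autonomous' agent: resolves routine cases, escalates the rest."""
    if order["amount"] <= 100:
        return f"auto-approved refund for order {order['id']}"
    # Exception path: delegate to the human-facing UI layer (the 'Copilot' role).
    raise NeedsHumanInput(f"Approve refund of ${order['amount']} for order {order['id']}?")

def run_with_escalation(order: dict, ask_human) -> str:
    """The UI layer: surfaces the exception to a person, feeds the answer back."""
    try:
        return refund_agent(order)
    except NeedsHumanInput as exc:
        decision = ask_human(exc.prompt)  # e.g. a chat prompt surfaced in the copilot UI
        return f"human decided '{decision}' for order {order['id']}"

print(run_with_escalation({"id": "A1", "amount": 40}, lambda q: "approve"))   # handled autonomously
print(run_with_escalation({"id": "A2", "amount": 500}, lambda q: "approve"))  # escalated to a human
```

The design point matches the quote: the agent's loop never talks to a user directly; the UI layer is the single place where a human is notified or asked for input.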

Netflix (NASDAQ: NFLX)

Within entertainment, Netflix’s management thinks the most important question for AI is whether it can help creators produce even better content; the ability of AI to reduce costs in content creation is of secondary importance

 Lots of hype, good and bad, about how AI is going to impact or transform the entertainment industry. I think that the history has been that entertainment and technology have worked hand-in-hand throughout the history of time. And it’s very important, I think, for creators to be very curious about what these new tools are and what they could do. But AI needs to pass a very important test. Actually, can it help make better shows and better films? That is the test and that’s what they got to figure out. But I’ve said this before and I will say it again. We benefit greatly from improving the quality of the movies and the shows much more so than we do from making them a little cheaper. So any tool that can go to enhance the quality, making them better is something that is going to actually help the industry a great deal.

Paycom Software (NYSE: PAYC)

Paycom’s management developed an AI agent internally for the company’s service team to help the team provide even better service; the AI agent improved Paycom’s immediate response rates by 25% without any additional human interaction; the AI agent was built in house; Paycom is using AI in other areas, such as in several existing and upcoming products

Internally, we developed and deployed an AI agent for our service team. This technology utilizes our own knowledge-based semantic search model and enables us to provide service to help our clients more quickly and consistently than ever before. The AI agent continually improves over time and is having an impact on helping our clients achieve even more value out of their relationship with Paycom. By utilizing our own AI agent, we were able to connect our clients to the right solution faster, improving our immediate response rates by 25% without any additional human interaction…

…[Question] Interesting to hear about using AI in the customer service organization. I’m curious if that’s technology that Paycom has built or if you’re using a third party.

[Answer] So that’s internal. We built it ourselves, and we’ve been using it. And so it gets better and better as we mentioned on the call. It’s sped up our process by 25% as far as being able to connect clients to the solution quicker, whether that be a configuration question, a tax question or what have you. And so that’s really been helpful to us, and it continues to do more and more from that perspective…

…[Question] A follow-up on the AI agent or the AI technology that you’re developing. Do you see an opportunity in the future to productize what you’re developing internally, maybe like in your — in future versions of your recruiting product or other products in your platform?

[Answer] I would say this isn’t the only area in which we’re using AI. We have it in several products that we both have released and will be releasing. And so there’s definitely opportunities to monetize AI. As far as this particular solution, it’s really helping us on the back end and helping our client as well. So I think we’re going to see results and benefits from that in other areas of efficiency across the board within our own organization.
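Paycom has not disclosed how its "knowledge-based semantic search model" works. As a generic illustration of the idea — routing a client question to the most relevant knowledge-base topic by vector similarity — here is a toy sketch; the topics, helper names, and bag-of-words "embedding" are all hypothetical stand-ins for a learned embedding model:

```python
# Toy sketch of knowledge-base semantic search: route a client question
# to the nearest help topic by cosine similarity. Real systems use learned
# embeddings; the term-frequency vectors here are purely illustrative.
import math
from collections import Counter

KNOWLEDGE_BASE = {
    "configuration": "how to configure payroll schedules and approval settings",
    "tax": "state and federal tax withholding filing questions",
    "benefits": "enrolling employees in health benefits plans",
}

def embed(text: str) -> Counter:
    """Toy 'embedding': term-frequency vector over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing tokens
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def route(query: str) -> str:
    """Return the knowledge-base topic most similar to the query."""
    q = embed(query)
    return max(KNOWLEDGE_BASE, key=lambda topic: cosine(q, embed(KNOWLEDGE_BASE[topic])))

print(route("question about federal tax withholding"))  # -> tax
```

The point of the pattern is the one in the quote: the query is matched against a curated knowledge base so the client lands on the right answer immediately, without a human in the loop.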

Shopify (NASDAQ: SHOP)

Shopify recently enhanced Shopify Flow, a low-code workflow automation app, with a new admin API connector that provides an additional 304 new automation actions

Let’s start with Shopify Flow, a low-code workflow automation app that empowers merchants to build custom automations and helps them run their businesses more efficiently. This includes a new automation trigger based on the merchant’s custom data and a newly completed admin API connector that provides an additional 304 new actions to use in their automations. And as a result, Flow has become a much more powerful tool, enabling merchants to update products, process customer form submissions, edit orders and so much more.

The Shopify Inbox feature now uses AI to suggest personalised replies for merchants to respond to customer inquiries; half of merchants’ responses are now using the AI-suggested replies; fast customer response helps lift conversion rates for merchants; the replies feature may not seem like a big deal, but it actually helps free up a lot of time for merchants to focus on building products

Within Shopify Inbox, this product now uses AI to suggest replies based on each merchant’s unique store information, making it super easy for merchants to respond quickly and accurately to customer inquiries. In fact, on average, merchants are using the Suggested Replies for about half of their responses, edited or not, showing just how effective this feature has become. Replying quickly can boost conversion rates, which means more sales for our merchants and, in turn, for Shopify…

…I mentioned suggest replies in Shopify Inbox, which may not seem like a big deal, but it’s a huge deal because it means merchants can spend more of their time focused on the things that they need to be focused on like building our products.  

The Shop App has a new merchant-focused home feed that is powered by machine learning models to increase shopper engagement; the new home feed has led to an 18% increase in sessions where a buyer engaged with a recommendation; management thinks the combination of search with AI will make the search function on the Shop App a lot more relevant and personalised

This quarter, the Shop App launched a new merchant-focused home feed, showcasing the diversity and the richness of brands on Shop. The experience uses new machine learning models to help buyers keep up with the brands they love and discover new brands based on their preferences. These changes have already led to early success with an 18% increase in sessions where a buyer engaged with a recommendation…

…We also think Search and AI together makes the Shop search way more relevant, way more personalized. That is also very compelling.

Essentially every Shopify internal department is using AI to be more productive

Support engineering, sales, finance, just about every department internally is using AI in some way to get more efficient, more productive.

Shopify’s management thinks the integration of AI in search will change how consumers find merchants and products, but Shopify has helped merchants navigate many similar changes before, and Shopify will continue to help merchants navigate the AI-related changes

In terms of where consumers find merchants or find products, yes, AI and search is going to change. But to be clear, this entire flow and discovery process has been changing for many years. It’s the reason that you saw us integrate with places like YouTube or more recently, Roblox or TikTok or Instagram…

…You can rest assured that when consumers shift their buying preferences, their discovery preferences, their search preferences, and they’re looking for great products from great brands, Shopify will ensure that our merchants are able to do so. And that’s the reason for even some of the more nuanced ones — as you know, Shopify has an integration with Spotify. Why? Because some merchants who also have very large followings as musicians have massive followings on their artist profile. The fact that you can now show Shopify products on your artist profile means that, for that particular segment of merchants, they now have a new surface area in which to conduct business. And that’s the same thing when it comes to AI and search.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management expects TSMC’s business in 2024 Q4 to be supported by strong AI-related demand; management sees very strong AI demand in 2024 H2, leading to a higher capacity utilisation rate for TSMC’s leading-edge 3nm and 5nm process technologies; management now expects server AI processors to account for a mid-teens percentage of TSMC’s total revenue in 2024 (previous expectation was for a low-teens percentage)

Moving into fourth quarter. We expect our business to continue to be supported by strong demand for our leading-edge process technologies. We continue to observe extremely robust AI-related demand from our customers throughout the second half of 2024, leading to increasing overall capacity utilization rate for our leading-edge 3-nanometer and 5-nanometer process technologies…

…We now forecast the revenue contribution from server AI processors to more than triple this year and account for mid-teens percentage of our total revenue in 2024.

TSMC’s management defines server AI processors as GPUs, AI accelerators, and CPUs for training and inference

At TSMC, we define server AI processors as GPUs, AI accelerators and CPUs performing training and inference functions, and do not include networking, edge or on-device AI.

TSMC’s management thinks AI demand is real, based on TSMC’s own experience of using AI and machine learning in its operations; a 1% productivity gain for TSMC is equal to a tangible NT$1 billion return on investment (ROI); management thinks TSMC is not the only company that has benefitted from AI applications

Whether this AI demand is real or not — and my judgment is it’s real — we have talked to our customers all the time, including hyperscaler customers who are building their own chips, and almost every AI innovator is working with TSMC. And so we probably get the deepest and widest look of anyone in this industry. And why do I say it’s real? Because we have our own real experience. We have been using AI and machine learning in our fab and R&D operations. By using AI, we are able to create more value by driving greater productivity, efficiency, speed and quality. And think about it: a 1% productivity gain is almost equal to about TWD 1 billion to TSMC. And this is a tangible ROI benefit. And I believe we cannot be the only company that has benefited from this AI application. So I believe a lot of companies right now are using AI for improving their own productivity and efficiency.

TSMC’s management thinks AI demand is just at the beginning

[Question] Talk a little bit about what you think about the duration of this current semiconductor up-cycle? Do you think it will continue into the next couple of years? Or are we getting closer to the peak of the cycle?

[Answer] The demand is real and I believe it’s just the beginning of this demand, all right? So one of my key customers said, the demand right now is insane, that it’s just the beginning. It’s [ a form of scientific ] to be engineering, okay? And it will continue for many years.

When TSMC builds fabs to meet AI demand, management has a picture in mind of what the long-term demand picture looks like

[Question] Keen to understand how TSMC gets comfortable with customer demand for AI beyond 2025. And I ask this because it takes a couple of years before you can build a fab, so you need to be taking early — an early view on what does AI look like in 2026, 2027. So how are you specifically cooperating on long-term plans for capacity with these AI customers? And what commitments are these customers giving you?

[Answer] Let me say again that we did talk to a lot of our customers. Almost every AI innovator is working with us, and that includes the hyperscalers. So if you look at the long-term structure and market demand profile, I think we have some picture in our mind and we make some judgment, of course, and we work with them on a rolling basis. So how do we prepare our capacity? Actually, just like Wendell said, we have a disciplined and [ a rollout ] system to plan the appropriate level of capacity to support our customers’ needs and also to maximize our shareholders’ value. That’s what we always keep in mind.

There’s more AI content that goes into the chips in PCs (personal computers) and smartphones; management expects the PC and smartphone business of TSMC to be healthy in the next few years because of AI-related applications 

The unit growth of PC and smartphone is still in the low single digits. But more importantly is the content. We now put more AI content into the chip, and so the silicon area increases faster than the unit growth. So again, I would like to say that this PC and smartphone business is gradually increasing, and we expect it to be healthy in the next few years because of AI-related applications.

Advanced packaging is currently a high single-digit percentage of TSMC’s revenue and management expects it to grow faster than TSMC’s overall business over the next 5 years; the margins of advanced packaging are improving, but it’s not at the corporate average level yet

Advanced packaging in the next several years, let’s say, 5 years, will be growing faster than the corporate average. This year, it accounts for about high single digit of our revenue. In terms of margins, yes, it is also improving. However, it’s still — it’s approaching corporate, but not there yet.

Demand for TSMC’s CoWoS (advanced packaging) continues to far exceed supply, even though TSMC has doubled CoWoS capacity compared to a year ago and will double it again

Let me share with you today’s situation is our customer’s demand far exceed our ability to supply. So even we work very hard and increase the capacity by about more than twice, more than 2x as of this year compared with last year and probably double again, but still not enough. And — but anyway, we are working very hard to meet the customers’ requirement.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasingly seeing tangible benefits from deploying AI across the company’s business; management wants to continue investing in AI; the most significant benefits are in content recommendation and ad targeting, which directly benefit Tencent’s business and advertising revenue; management also sees AI as a productivity tool, as Tencent’s Copilot is being used by Tencent’s software engineers frequently and is helping them generate efficiency gains; management is trying to incorporate AI into a lot of Tencent’s products, but thinks it will take a few more quarters before real use cases show up at scale

We are increasingly seeing a tangible benefit of deploying AI across our products and operations, including marketing services and cloud. And we’ll continue investing in AI technology, tools and solutions that assist users and partners…

…I think that the most significant one right now is actually around content recommendation and ad targeting, because the AI engine in those two use cases is generating a significant amount of additional user time and, at the same time, a higher incremental targeting rate and response rate for our apps. Both of them are direct benefits to the business and direct benefits to ad revenue, and both video accounts and our performance ad revenue are actually at scale…

… It’s actually a productivity tool that everybody is using on a frequent basis. For example, our Copilot is being used by our engineers across the board on a very frequent basis, and it’s actually generating efficiency gains for our business. And across different businesses, a lot of our products are actually testing our Hunyuan and trying to incorporate AI into either the production process, so that they would gain efficiency, or in the user experience use case, so that it can actually make their user experience better. So I would say, right now, we are seeing more and more adoption among all our different products and services. It would take probably a few more quarters for us to see some real use cases at scale.

Tencent’s management used the company’s foundation AI model, Tencent Hunyuan, to facilitate tagging and categorisation of content and advertising materials; Tencent also upgraded its machine learning platforms to deliver better advertising targeting; marketing services revenue from video accounts was up 60% year-on-year; Mini Programs marketing services revenue had robust growth; Tencent used large language models (LLMs) to improve the relevance of Weixin Search results, leading to higher commercial queries and click-through rates, and consequently, an increase in search revenue of more than 100%

Our Marketing Services revenue grew 17% year-on-year. Strength in games and e-commerce categories outweighed weakness in real estate and food and beverage. The Paris Olympics somewhat cushioned industry-wide weakness in brand ad revenue during the third quarter but this positive factor will be absent in the fourth quarter. We leveraged our foundation model, Tencent Hunyuan to facilitate tagging and categorization of content and ad materials. And we upgraded our machine learning platforms to deliver more accurate ad targeting.

By property, video accounts marketing services revenue increased over 60% year-on-year. As we systematically strengthen transaction capabilities in Weixin, advertisers increasingly utilize our marketing tools to boost their exposure and drive sales conversion. Mini Programs marketing services revenue grew robustly year-on-year as our Mini Games and Mini Dramas provided high-value rewarded video ad inventory and generated incremental closed-loop demand. And for Weixin Search, we utilized large language model capabilities to facilitate understanding of complex queries and content, enhancing the relevance of search results. Commercial queries increased and click-through rate improved, and our search revenue more than doubled year-on-year.

Tencent enjoyed swift year-on-year growth in GPU-focused cloud revenue and this revenue stream is now a teens percentage of Tencent’s infrastructure-as-a-service revenue; Tencent has released Tencent Hunyuan Turbo, the new generation of its foundation AI model, which uses a heterogeneous mixture of experts architecture; compared to the previous generation, Hunyuan Turbo’s training and inference efficiency has doubled while its inference cost has halved; Hunyuan Turbo is ranked first for general capabilities among foundation AI models in China; Tencent has open-sourced Hunyuan models; management sees Tencent’s AI revenue being less than that of US cloud companies because China does not have large enterprise, SaaS, and startup markets for AI services

Our cloud revenue from GPUs primarily used for AI grew swiftly year-on-year and now represents a teens percentage of our infrastructure-as-a-service revenue. We released Tencent Hunyuan Turbo, which utilizes a heterogeneous mixture of experts architecture, doubling our training and inference efficiency and halving inference cost versus its predecessor Hunyuan Pro. SuperCLUE ranked Hunyuan Turbo first for general capabilities among domestic peers. Last week, we made the Hunyuan large model and the Hunyuan 3D generation models available on an open-source basis. Our international cloud revenue increased significantly year-on-year. We leveraged domain expertise in areas such as games and live streaming, and competitive pricing, to win international customers…

…The IaaS revenue generated by AI is now in the teens percentage-wise. But having said that, we think the amount of AI revenue is actually less than at U.S. cloud companies. And the main reason is because, number one, China doesn’t really have a very big enterprise market. If you look at the U.S., a lot of enterprises are actually experimenting with AI and testing out what AI can do for their business, so they’re actually buying a lot of compute, which is not happening in China yet. Secondly, there’s a very big SaaS ecosystem in the U.S., in which everybody is actually trying to add AI to their functionality and thus charge customers more, and that SaaS ecosystem is not really that vibrant in China. And thirdly, there are also fewer AI start-ups in China which are buying a lot of compute. So as a result, the AI revenue in China on the cloud side is somewhat at scale for us, but I think it will not be exploding like in the U.S.

Tencent’s management does not want to embed commercial search results into the company’s AI chatbot, YongBao right now; the current focus for YongBao is on growing usage, not monetisation

[Question] Will you ramp up the Gen AI chatbot, would that eventually embed with the commercial sponsor answer as well?

[Answer] In terms of whether YongBao will embed commercial search results, the answer is no. For the current time, we’re focused on making YongBao as appealing and attractive to users as it can be, and we’re not focused on premature monetization.

Tencent’s management plans to invest in capex for AI, but the amount of investment will be small compared to the companies in the USA

If you look at CapEx, right, we believe we have a progressive CapEx plan, especially given that the development of a cloud business and the advent of AI, but at the same time, it’s measured compared to a lot of the U.S. companies. 

Tencent’s management sees the company’s advertising business being driven by 3 factors, namely consumer spending, Tencent’s ability to utilise AI to continue boosting click-through rates from currently low levels, and deployment of more inventory

In terms of the drivers for 2025, the overall macro environment would obviously be an important accelerator, decelerator, or neutral force for the aggregate advertising market. And that, in turn, will be a function primarily of consumer confidence and consumer spending behavior. Now within that overall environment, our relative performance will be a function of, first of all, our advertising technology and our ability to utilize GPUs and neural networks to continue boosting click-through rates from the current very low levels to higher levels, which mechanically translates into more revenue. And then secondly, our deployment of specific inventories, in particular video accounts and Weixin Search.

Tesla (NASDAQ: TSLA)

Tesla’s management released FSD v12.5 in 2024 Q3, which has increased data and training compute, and a 5x increase in parameter count; Tesla also released Actually Smart Summon (your vehicle will autonomously drive to you in parking lots) and FSD for Cybertruck, which includes end-to-end neural nets for highway driving for the first time; version 13 of FSD is coming soon and it is expected to have a 5-6 fold improvement in miles between interventions compared to version 12.5; over the course of 2024, FSD’s improvement in miles between interventions has been at least 3 orders of magnitude; management expects FSD to become safer than human in 2025 Q2; Tesla vehicles on autopilot have 1 crash per 7 million miles, compared to 1 crash per 700,000 miles for the US average; Tesla earned $326 million in revenue in 2024 Q3 from the release of FSD for Cybertruck and Actually Smart Summon

In Q3, we released the 12.5 series of FSD (Supervised)1 with improved safety and comfort thanks to increased data and training compute, a 5x increase in parameter count, and other architectural choices that we plan to continue scaling in Q4. We released Actually Smart Summon, which enables your vehicle to autonomously drive to you in parking lots, and FSD (Supervised) to Cybertruck customers, including end-to-end neural nets for highway driving for the first time…

…Version 13 of FSD is going out soon… We expect to see roughly a 5- or 6-fold improvement in miles between interventions compared to 12.5. And actually, looking at the year as a whole, the improvement in miles between interventions, we think, will be at least 3 orders of magnitude. So that’s a very dramatic improvement in the course of the year, and we expect that trend to continue next year. The current internal expectation for the Tesla FSD having longer miles between interventions [indecipherable] is the second quarter of next year, which means it may end up being in the third quarter, but it seems extremely likely to be next year…

…miles between critical interventions, as mentioned by Elon, already made a 100x improvement with 12.5 from the start of this year, and then with the v13 release, we expect to be 1,000x from the beginning, from January of this year, on production software. And this came because of technology improvements: going to end-to-end, having higher frame rate, partly also helped by more hardware capabilities, and so on. And we hope that we continue to scale the neural network, the data, the training compute, et cetera. By Q2 next year, we should cross over the average, even in miles per critical intervention [indiscernible] in that case…

…Our internal estimate is Q2 of next year to be safer than human and then to continue with rapid improvements thereafter…

… So we published Q3 vehicle safety report, which shows 1 crash for every 7 million miles on autopilot that compares with the U.S. average of crash roughly every 700,000 miles. So it’s currently showing a 10x safety improvement relative to the U.S. average…

…We released FSD for Cybertruck and other features like actually small [indiscernible] like Elon talked about in North America, which contributed $326 million of revenues in the quarter. 

Tesla has deployed a 29,000 H100 cluster and expects to have a 50,000 H100 cluster by the end of October 2024, to support FSD and Optimus; Tesla is not training compute-constrained; Tesla’s AI has gotten so good that it now takes a long time to decide which version of the software is better because mistakes happen so infrequently and that is the big bottleneck to Tesla’s AI development; management is being very careful with AI-spending

We deployed and are training ahead of schedule on a 29k H100 cluster at Gigafactory Texas – where we expect to have 50k H100 capacity by the end of October…

…We continue to expand our AI training capacity to accommodate the needs of both FSD and Optimus. We are currently not training compute-constrained. [indiscernible] Probably the big limiting factor is that FSD is actually getting so good that it takes us a while to actually find mistakes. And when you start getting to where it can take 10,000 miles to find a mistake, it takes a while to actually figure out which it is: is software A better than software B? It actually takes a while to figure out because neither one of them makes the mistakes; it would take a long time to make mistakes. So actually the single biggest limiting factor is how long it takes us to figure out which version is better. Sort of a high-class problem…

… One thing which I’d like to elaborate is that we’re being really judicious on our AI compute spend, and seeing how best we can utilize the existing infrastructure before making further investments…

…We still have to pick which models are performing better. So there is the validation network for picking the models, because, as mentioned, the miles between interventions is pretty large, and we have to drive a lot of miles to get close to it. We do have simulation and other ways to get those metrics, and those two help, but in the end, that’s a big bottleneck. That’s why we’re not training-compute constrained alone.

In the 10 October 2024 “We, Robot” event by Tesla, the company had showcased 50 autonomous vehicles, including 20 Cybercabs; the Cybercabs had no steering wheel, brake, or accelerator pedals, so they were truly autonomous

On October 10, we laid out a vision for an autonomous future that I think is very compelling. The Tesla team did a phenomenal job there with actually giving people an option to experience the future, where you have humanoid robots walking among the crowd, not with a canned video and a presentation or anything, but walking among the crowd, serving drinks and whatnot. And we had 50 autonomous vehicles. There were 20 Cybercabs, but there were an additional 30 Model Ys operating fully autonomously the entire night, carrying thousands of people with no incidents the entire night…

…Worth emphasizing that the Cybercab had no steering wheel or brake or accelerator pedals, meaning there was no way for anyone to intervene manually even if they wanted to, and the whole night went very smoothly.

Tesla is already offering autonomous ridehailing for Tesla employees in the Bay Area; the ridehailing service currently has a safety driver; Tesla has been testing autonomous ridehailing for some time; Elon Musk expects ridehailing to be rolled out to the public in California and Texas in 2025, and maybe other states in the USA; California has a lot of regulations around ridehailing, but there’s still a regulatory pathway; Tesla’s vehicles already meet federal regulations, but autonomous-vehicle deployment is controlled at the state level, which is where the hurdles lie

We have for Tesla employees in the Bay Area, we already are offering ridehailing capabilities. So you can actually, with the development app, you can request a ride and it will take you anywhere in the Bay Area. We do have a safety driver for now but it’s not required to do that…

… We’ve been testing it for the good part of the year. And the building blocks that we needed in order to build this functionality and deliver it to production, we’ve been thinking about working on for years…

…So it’s not like we’re just starting to think about this stuff right now while we’re building out the early stages of our ridehailing network. We’ve been thinking about this for quite a long time, and we’re excited to get the functionality out there…

…We do expect to roll out ridehailing in California and Texas next year to the public. Now, California is somewhere where there’s quite a long regulatory approval process. I think we should get approval next year, but it’s contingent upon regulatory approval. Texas is a lot faster, so we’ll definitely have it available in Texas, and probably have it available in California, subject to regulatory approval. And then maybe some other states actually next year as well, but at least California and Texas…

…[Question] Elon mentioned unsupervised FSD in California and Texas next year. Does that mean regulators have agreed to it in the entire state for existing hardware 3 and 4 vehicles?

[Answer] As I said earlier, California loves regulation… there’s a pathway. Obviously, Waymo operates in California, so there are just a lot of forms and a lot of approvals that are required. I mean, I’d be shocked if we don’t get approved next year, but it’s just not something we totally control. But I think we will get approval next year in California and Texas. And then we’ll branch out beyond California and Texas…

…I think it’s important to reiterate this: certifying a vehicle at the federal level in the U.S. is done by meeting FMVSS regulations. Our vehicles produced today are capable of meeting all those regulations, including the Cybercab. And so there is no limitation on deploying the vehicle to the road; the limitation is, as you said, at the state level, where they control autonomous vehicle deployment. Some states are relatively easy, as you mentioned, like Texas. Others, like California, may take a little longer. And other states haven’t set anything up yet.

Tesla’s management acknowledges that there’s a chance that Tesla vehicles with Hardware Version 3 may not support unsupervised full self-driving; if that turns out to be the case, Tesla will upgrade those vehicles to Hardware Version 4 for free

By some measure, Hardware 4 has really several times the capability of Hardware 3. It’s easier to get things to work with, whereas it takes a lot of effort to squeeze that functionality into Hardware 3. And there is some chance that Hardware 3 does not achieve the safety level that allows for unsupervised FSD. There is some chance of that. And if that turns out to be the case, we will upgrade those who bought Hardware 3 FSD for free. And we have designed the system to be upgradeable, so it’s really just switching out the computer; the cameras are capable. But we don’t actually know the answer to that. But if it does turn out that way, we’ll make sure we take care of those who are affected.

Tesla’s management thinks real-world AI in self-driving cars is different from LLMs (large language models) in that (1) real-world AI requires massive amounts of context that needs to be processed with a small amount of compute power and the way around this limitation is to do massive amounts of training so that the amount of inference that needs to be done is tiny, and (2) it’s difficult to sort out what data coming in from the video feed is important for the training

The nature of real-world AI is different from LLMs in that you have a massive amount of context. You’ve got the case of Tesla cameras that [indiscernible] if you include the tunnel camera, so you’ve got some context. And that is then distilled down into a small number of control outputs, whereas it’s very rare to have gigabytes of context; in fact, I’m not sure any LLM out there can do gigabytes of context. And then you’ve got to process that in the car with a very small amount of compute power. It’s all doable and it’s happening, but it is a different problem than what, say, a Gemini or OpenAI is doing.

And now part of the way you can make up for the fact that the inference computer is quite small is by spending a lot of effort on training. Just like a human, the more you train on something, the less mental workload it takes when you do it. Like the first time you drive, driving absorbs your whole mind. But as you train more and more on driving, driving becomes a background task; it only absorbs a small amount of your mental capacity because you have a lot of training. So we can make up for the fact that the inference computer is tiny compared to a 10-kilowatt bank of GPUs (you’ve got a few hundred watts of inference compute) with heavy training.

And then there are also vast amounts of data; the actual petabytes of data coming in are tremendous. And then sorting out, of the vast amounts of video data coming in the feed, what is actually most important for training, that’s also quite difficult.

Tesla’s management thinks Elon Musk’s xAI AI-startup has been helpful to Tesla, but the 2 companies are focused on very different kinds of AI problems 

Well, I should say that xAI has been helpful to Tesla AI quite a few times in terms of things like scaling it, like training, just even like recently in the last week or so, improvements in training, where if you’re doing a big training run and it fails, being able to continue training and to recover from a training run, has been pretty helpful. But there are different problems. xAI actually is working on artificial general intelligence or artificial super intelligence. Tesla is autonomous cars and autonomous robots. There are different problems…

…Yes, Tesla is focused on real-world AI. As I was saying earlier, it is quite a bit different from LLMs. You have massive context in the form of video and some amount of audio that has to be distilled by extremely efficient inference compute. I do think Tesla is the most efficient in the world in terms of inference compute, because out of necessity we have to be very good at efficient inference. We can’t put 10 kilowatts of GPUs in a car; we’ve got a couple of hundred watts. And it’s a pretty well designed Tesla AI chip, but it’s still a couple hundred watts. But they are different problems. I mean, the stuff at xAI is running inference, answering questions, on a 10-kilowatt rack. You can’t put that in a car. It’s a different problem.

Elon Musk created xAI because he thought there wasn’t a truth-seeking AI company being built

xAI is because I felt there wasn’t a truth-seeking digital superintelligence company out there; that’s what it came down to. There needed to be a truth-seeking AI company that is very [indiscernible] about being truthful. I’m not saying xAI is perfect, but that is at least the explicit aspiration: even if something is politically incorrect, it would still be truthful. I think this is very important for AI safety. So I think xAI has been helpful to Tesla and will continue to be helpful to Tesla, but they are very different problems.

No other car company has a world-class AI and chip-design team like Tesla does

And like, what other car company has a world-class chip design team? Zero. What other car company has a world-class AI team like Tesla does? Zero. Those were all startups that were created from scratch.

The Trade Desk (NASDAQ: TTD)

The incorporation of AI into Kokai, Trade Desk’s ad-buying platform, is encouraging adoption of Trade Desk by CFOs and CMOs

While there has been a lot of macro focus on the reduction in inflation rates, historic highs for stock market indices and growing indications of a soft landing, that’s not necessarily translating to consumer confidence, which is why CMOs are becoming much more closely aligned with their CFOs. CFOs want more evidence than ever that marketing is working. And for CFOs that doesn’t just mean traditional marketing KPIs. It means growing the top line business. All of our AI and data science injection into Kokai, our latest product release, is encouraging CMOs and CFOs to lean more and more on TTD to deliver real, measured growth…

…When CMOs faced pressure to achieve more with less, they turn to platforms like ours for flexibility, precision and measurable results.

Companies need an AI strategy, and Trade Desk’s AI product, Koa, is a great copilot for advertising traders; Trade Desk has plenty of opportunities in an AI-world because of the data assets it has, and management wants to improve all aspects of the company through AI

Every company needs an AI strategy. Our AI product, Koa, is a great copilot for traders. But this is only the beginning. There are endless possibilities for us, as we have one of the best data assets on the Internet: the learnings that come from buying the global open Internet outside of walled gardens. To win in this new frontier, we’re looking across our entire suite of products, algorithms and features and asking how they all can be advanced by AI.

Visa (NYSE: V)

For Risk and Identity Solutions within value-added services, Visa wants to acquire Featurespace, an AI payments protection tech company that will enable Visa to enhance fraud prevention tools to clients and protect consumers in real time; Worldline, a Visa partner, will be using Decision Manager to provide businesses with AI-based e-commerce fraud detection abilities; Featurespace is a world leader in providing AI solutions to fight fraud

In Risk and Identity Solutions, we recently announced our intent to acquire Featurespace, a developer of real-time artificial intelligence payments protection technology. It will enable Visa to provide enhanced fraud prevention tools to our clients and protect consumers in real-time across various payment methods.  And Worldline, already a Visa partner and leading European acquirer, will soon be launching an optimized fraud management solution, utilizing Decision Manager to provide businesses with AI-based e-commerce fraud detection capabilities…

…Featurespace is a world leader in providing AI-driven solutions to combat that fraud, to reduce that fraud, to enable our clients and partners to continue to serve their customers in a safe way.

Visa’s management sees AI as being a driver of productivity across multiple functions in the company, and as a differentiator in its products and services

[Question] I just wanted to ask how you see AI playing into the business model. Do you see it more as driving VAS or incremental business model, uplift revenue or cost improvement? Or is it more of a competitive differentiator that will just keep you ahead of your competition?

[Answer] As it relates more broadly to especially generative AI at Visa, I see it really in 2 different buckets. The first is we are adopting it aggressively across our company to drive productivity. And we’ve seen some great results from everywhere to our engineering teams, to our accounting teams, to our sales teams, our client service teams. And we’re still in the early stages of, I think, the very significant impact this will have on the productivity of our business. I also see it as a real differentiator to the products and services that we’re putting in market. You’ve heard me talk about some of the new risk capabilities, risk management capabilities, for example, that we’ve deployed in the account-to-account space, which are all enabled with generative AI. You mentioned Featurespace. We’ve had some really good success in other parts of both our value-added services business and the broader consumer payments business as well. And we’ve got a product pipeline that is very heavily tilted towards some, we think, very exciting generative AI capabilities that hopefully you’ll hear more from us on soon.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Fiverr, Mastercard, Meta Platforms, Microsoft, Netflix, Shopify, TSMC, Tesla, and Visa. Holdings are subject to change at any time.
