More Of The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q3). In it, I shared commentary from earnings conference calls for the third quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industries and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2024’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series; the older commentary can be found in the earlier articles.

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management introduced multiple generative AI models in the Firefly family in 2024 and now has a generative video model; Adobe’s generative AI models are designed to be safe for commercial usage; the Firefly models are integrated across Adobe’s software products, which brings value to creative professionals across the world; Firefly has powered 16 billion generations (12 billion in 2024 Q2) since its launch in March 2023 and each month in 2024 Q3 set a new record in generations; the new Firefly video model is in limited beta, but has already gathered massive customer interest (the model has driven a 70% increase in Premiere Pro beta users since its introduction) and will be generally available in early-2025; recent improvements to the Firefly models include 4x faster image generation; enterprises such as Tapestry and Pepsi are using Firefly Services to scale content production; Firefly is the foundation of Adobe’s AI-related innovation; management is using Firefly to drive top-of-funnel user-acquisition for Adobe

2024 was also a transformative year of product innovation, where we delivered foundational technology platforms. We introduced multiple generative AI models in the Adobe Firefly family, including imaging, vector design and, most recently, video. Adobe now has a comprehensive set of generative AI models designed to be commercially safe for creative content, offering unprecedented levels of output quality and user control in our applications…

…The deep integration of Firefly across our flagship applications in Creative Cloud, Document Cloud, and Experience Cloud is driving record customer adoption and usage. Firefly-powered generations across our tools surpassed 16 billion, with every month this past quarter setting a new record…

…We have made major strides with our generative AI models with the introduction of Firefly Image Model 3, enhancements to our vector models, richer design models, and the all-new Firefly Video Model. These models are incredibly powerful on their own, and their deep integration into our tools like Lightroom, Photoshop, Premiere, InDesign and Express has brought incredible value to millions of creative professionals around the world…

…The launch of the Firefly Video Model and its unique integration in Premiere Pro in limited public beta garnered massive customer interest, and we look forward to making it more broadly available in early 2025. This feature drove a 70% increase in the number of Premiere Pro beta users since it was introduced at MAX. Enhancements to Firefly image, vector, and design models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Premiere Pro and Adobe Express…

…Firefly Services adoption continued to ramp as enterprises such as Pepsi and Tapestry use it to scale content production, given the robust APIs and ease of creating custom models that are designed to be commercially safe…

…This year, we introduced Firefly Services. That’s off to a great start. We have a lot of customers that are using that. A couple we talked about on the call include Tapestry. They’re using it for scaled content production. Pepsi, for their Gatorade brand, is enabling their customers to personalize the merchandise that they’re buying, starting with Gatorade bottles. And these have been very, very productive for them, and we are seeing this leveraged by a host of other companies for everything from localization at scale to personalization at scale to user engagement or just raw content production at scale as well…

…You’re exactly right in terms of Firefly being a platform and a foundation that we’re leveraging across many different products. As we talked about, everything from Express and Lightroom and even Acrobat on mobile for a broad base of users, but then also in our core Creative products, Photoshop, Illustrator, Premiere. And as we’ve alluded to a number of times on this call, with the introduction of video, even a stand-alone offer for Firefly that we think will be more valuable from a tiering perspective there. And then into Firefly Services through APIs in connection to GenStudio. So we are looking at leveraging the power of this AI foundation across all these activities…

…We see that when we invest in mobile and web, we are getting some very positive signals in terms of user adoption and user conversion rate. So we’re using Firefly very actively to do that.

Adobe’s management has combined content and data in Adobe GenStudio to integrate content creation with marketing, leading to an end-to-end content supply chain solution; the Adobe GenStudio portfolio has a new addition in Adobe GenStudio for Performance Marketing, which has seen strong customer demand since becoming generally available recently; management is expanding the go-to-market teams to sell GenStudio solutions that cut across the Digital Media and Digital Experience segments and early success has been found, with management expecting acceleration in this pipeline throughout FY2025 and beyond

We set the stage to drive an AI content revolution by bringing content and data together in Adobe GenStudio, integrating high-velocity creative expression with enterprise activation. The release of Adobe GenStudio for Performance Marketing integrates Creative Cloud, Express, and Experience Cloud and extends our end-to-end content supply chain solution, empowering freelancers, agencies, and enterprises to accelerate the delivery of content, advertising and marketing campaigns…

…We have brought our Creative and Experience Clouds together through the introduction of Firefly Services and GenStudio, addressing the growing need for scaled content production in enterprises…

… GenStudio enables agencies and enterprises to unlock new levels of creativity and efficiency across content creation and production, workflow and planning, asset management, delivery and activation, and reporting and insights.

Adobe GenStudio for Performance Marketing is a great addition to the GenStudio portfolio, offering an integrated application to create paid social ads, display ads, banners, and marketing e-mails by leveraging preapproved on-brand content. It brings together creative teams that define the foundational requirements of a brand, including guidelines around brand voice, channels, and images with marketing teams that need to deliver numerous content variations with speed and agility. We are seeing strong customer demand for Adobe GenStudio for Performance Marketing since its general availability at MAX…

… We’re expanding our enterprise go-to-market teams to sell these integrated solutions that cut across Digital Media and Digital Experience globally under the new GenStudio umbrella. We have seen early success for this strategy that included Express and Firefly Services in Q4. As we enable our worldwide field organization in Q1, we anticipate acceleration of this pipeline throughout the rest of the year and beyond.

Adobe’s management introduced AI Assistant in Acrobat and Reader in FY2024; users of AI Assistant completed their document-related tasks 4x faster on average; AI Assistant is now available across desktop, web, and mobile; management introduced specialised AI for specific document types and tasks in 2024 Q3 (FY2024 Q4); management saw AI Assistant conversations double sequentially in 2024 Q3; AI Assistant is off to an incredibly strong start and management sees it continuing to accelerate; AI Assistant allows users to have conversations with multiple documents, some of which are not even PDFs, and it turns Acrobat into a general-purpose productivity platform; the rollout of AI Assistant in more languages and documents gives Acrobat’s growth more durability

We took a major step forward in FY ’24 with the introduction of AI Assistant in Acrobat and Reader. AI Assistant and other AI features like Liquid Mode and Firefly are accelerating productivity through faster insights, smarter document editing and integrated image generation. A recent productivity study found that users leveraging AI Assistant completed their document-related tasks 4x faster on average. AI Assistant is now available in Acrobat across desktop, web, and mobile and integrated into our Edge, Chrome, and Microsoft Teams extensions. In Q4, we continued to extend its value with specialized AI for contracts and scanned documents, support for additional languages, and the ability to analyze larger documents…

… We saw AI Assistant conversations double quarter-over-quarter, driving deeper customer value…

… AI Assistant for Acrobat is off to an incredibly strong start and we see it continuing to accelerate…

…One of the big things that I think has been unlocked this year is moving beyond just looking at a PDF that you happen to be viewing, to being able to look at and have a conversation with multiple documents, some of which don’t even have to be PDFs. So that transition gives us the ability to really take Acrobat and make it more of a general-purpose productivity platform…

…The thing I’ll add to that is the durability of that, to your point: as we roll it out in more languages, as we roll it out across multiple documents, and as we roll it out in enterprises and B2B specifically. So again, significant headroom in terms of the innovation agenda of how Acrobat can be made even more meaningful as a knowledge tool within the enterprise.

Adobe’s management will soon introduce a new higher-priced Firefly offering that includes the video models; management thinks the higher-priced Firefly offering will help to increase ARPU (average revenue per user); management sees video generation as a high-value activity, which gives Adobe the ability to introduce higher subscription tiers that come with video generation; management sees consumption of AI services adding to Adobe’s ARR (annual recurring revenue) in 2 ways in FY2025, namely, (1) pure consumption-based pricing, and (2) consumption leading to a higher pricing-tier; management has learnt from pricing experiments for AI services and found that the right model for Adobe is a combination of access to features and usage-limits

We will soon introduce a new higher-priced Firefly offering that includes our video models as a comprehensive AI solution for creative professionals. This will allow us to monetize new users, provide additional value to existing customers, and increase ARPU…

…Video generation is a much higher-value activity than image generation. And as a result, it gives us the ability to start to tier Creative Cloud more actively there…

…You’re going to see “consumption” add to ARR in 2 or maybe 3 ways, more so in ’25 than in ’24. The first, and David alluded to this, is if you have a video offering, that will have a pure consumption pricing associated with it. I think the second is in GenStudio and for enterprises, with respect to Firefly Services, which, again, I think David touched on how much momentum we are seeing in that business. That is, in effect, a consumption business as it relates to the enterprise, so I think that will also continue to increase. And then I think you’ll see us with perhaps more premium price offerings. So the intention is that consumption is what’s driving the increased ARR, but it may be as a result of a tier in the pricing rather than a consumption model where people actually have to monitor it. So it’s just another way, much like AI Assistant is, of monetizing it, but it’s not like we’re going to be tracking every single generation for the user, it will just be at a different tier…

… What we’ve done over the last year, there’s been a bit of experimentation, obviously, in the core Creative applications. We’ve done the generative credits model. What we saw with Acrobat was this idea of a separate package and a separate SKU that created a tier that people were able to access the feature through. And as we learn from all of these, we think, as Shantanu had mentioned earlier, that the right tiering model for us is going to be a combination of access to certain features and usage limits. So the higher the tier, the more features you get and the more usage you get.

The Adobe Experience Platform (AEP) AI Assistant helps marketers automate tasks and generate new audiences and journeys

Adobe Experience Platform AI Assistant empowers marketers to automate tasks and generate new audiences and journeys. Adobe Experience Manager’s Generate Variations provides dynamic and personalized content creation natively through AEM, enabling customers to deliver more compelling and engaging experiences on their websites.

Adobe’s management thinks there are 3 foundational differences in the company’s AI models and what the rest are doing, namely, (1) commercially safe models, (2) incredible control of the models, and (3) the integration of the models into products

The foundational difference between what we do and what everyone else does in the market really comes down to 3 things: one is commercially safe, the way we train the models; two is the incredible control we bake into the model; and three is the integration that we make with these models into our products, increasingly, of course, in our CC flagship applications but also in Express and Lightroom and these kinds of applications, but also in Anil’s DX products as well. So that set of things is a critical part of the foundation and a durable differentiator for us as we go forward.

Adobe’s management is seeing that users are onboarded to products faster when using generative AI capabilities; management is seeing that users who use generative AI features have higher retention rates

We are seeing in the core Creative business, when people try something like Photoshop, the onboarding experience is faster to success because of the use of generative AI and generative capabilities. So you’ll start to see us continuing to drive more proliferation of those capabilities earlier in the user journeys, and that has proven very productive. But we also noticed that, for people who use generative AI (and we’ve always had good retention rates), the more they use generative AI, the longer they retain as well.

MongoDB (NASDAQ: MDB)

MongoDB’s management is seeing a lot of large customers want to run workloads, even AI workloads, in on-premise format

We definitely see lots of large customers who are very, very committed to running workloads on-prem. We even see some customers who want to run AI workloads on-prem…

… I think you have some customers who are very committed to running a big part of the estate on-prem. So by definition, then if they’re going to build an AI workload, it has to be run on-prem, which means that they also need access to GPUs, and they’re doing that. And then other customers are leveraging basically renting GPUs from the cloud providers and building their own AI workloads.    

MongoDB’s initiative to accelerate legacy app modernisation with AI (Relational Migrator) has seen a 50% reduction in the cost to modernise in its early days; customer interest in this initiative is exceeding management’s expectations; management expects modernisation projects to include large services engagements and MongoDB is increasing its professional services delivery capabilities; management is building new tools to accelerate future monetisation of service engagements; management has growing confidence that the monetisation of modernisation capabilities will be a significant growth driver for MongoDB in the long term; there is a confluence of events, including the emergence of generative AI to significantly reduce the time needed for migration of databases, that makes the modernisation opportunity attractive for MongoDB; the buildout of MongoDB’s professional services capabilities will impact the company’s gross margin

We are optimistic about the opportunity to accelerate legacy app modernization using AI and are investing more in this area. As you recall, we ran a few successful pilots earlier this year, demonstrating that AI tooling, combined with professional services and our Relational Migrator product, can significantly reduce the time, cost and risk of migrating legacy applications onto MongoDB. While it’s early days, we have observed a more than 50% reduction in the cost to modernize. On the back of these strong early results, additional customer interest is exceeding our expectations.

Large enterprises in every industry and geography are experiencing acute pain from their legacy infrastructure and are eager for more agile performance and cost-effective solutions. Not only are our customers excited to engage with us, they also want to focus on some of the most important applications in their enterprise, further demonstrating the level of interest and size of the long-term opportunity.

As relational applications encompass a wide variety of database types, programming languages, versions and other customer-specific variables, we expect modernization projects to continue to include meaningful services engagements in the short and medium term. Consequently, we are increasing our professional services delivery capabilities, both directly and through partners. In the long run, we expect to automate and simplify large parts of the modernization process. To that end, we are leveraging the learnings from early service engagements to develop new tools to accelerate future monetization efforts. Although it’s early days and scaling our legacy app monetization capabilities will take time, we have increased conviction that this motion will significantly add to our growth in the long term…

…We’re so excited about the opportunity to go after legacy applications because there seems to be a confluence of events happening. One is that the increasing cost and tax of supporting and managing these legacy apps just keep going up. Second, for many customers who are in regulated industries, the regulators are calling the fact that they’re running on these legacy apps a systemic risk, so they can no longer kick the can down the road. Third, also because they can no longer kick the can down the road, some vendors are going end-of-life, so they have to make a decision to migrate those applications to a more modern tech stack. Fourth, because GenAI is so predicated on data, and to build a competitive advantage you need to leverage your proprietary data, people want to access that data and be able to do so easily. And so that’s another reason for them to want to modernize…

…we always could help them very easily move the data and map the schema from a relational schema to a document schema. The hardest part was essentially rewriting the application. Now with the advent of GenAI, you can significantly reduce the time. One, you can use GenAI to analyze the existing code. Two, you can use GenAI to reverse engineer tests to test what the code does. And then three, you can use GenAI to build new code and then use these tests to ensure that the new code produces the same results as the old code. And so all that time and effort is suddenly cut in a meaningful way…

…We’re really building out that capacity in order to meet the demand that we’re seeing relative to the opportunity. We’re calling it out in particular because it has a gross margin impact, as that’s where it will typically show up.
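
To make the three-step GenAI process described above concrete, here’s a minimal sketch in Python of what such a modernisation loop could look like. This is purely illustrative and not MongoDB’s actual tooling; the `complete` callable stands in for whatever LLM API is in use, and all names are hypothetical:

```python
# Hypothetical 3-step modernization loop: analyze legacy code, reverse-
# engineer characterization tests, generate new code, then verify equivalence.
import subprocess
import tempfile
from pathlib import Path
from typing import Callable

def analyze(complete: Callable[[str], str], legacy_src: str) -> str:
    # Step 1: ask the model to describe what the legacy code actually does.
    return complete(f"Summarize the behavior of this legacy code:\n\n{legacy_src}")

def derive_tests(complete: Callable[[str], str], legacy_src: str, summary: str) -> str:
    # Step 2: reverse-engineer pytest tests that pin down the old behavior.
    return complete(
        "Write pytest tests for the behavior described below, importing the "
        f"module under test as `modern`.\n\nBehavior:\n{summary}\n\nCode:\n{legacy_src}"
    )

def rewrite(complete: Callable[[str], str], legacy_src: str, summary: str) -> str:
    # Step 3: generate a modern implementation intended to pass the same tests.
    return complete(
        "Rewrite the following as an idiomatic Python module named `modern`, "
        f"preserving the behavior described.\n\nBehavior:\n{summary}\n\nCode:\n{legacy_src}"
    )

def behaviorally_equivalent(new_src: str, test_src: str) -> bool:
    # Run the generated tests against the generated code; a passing suite is
    # the (imperfect) signal that old and new code produce the same results.
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "modern.py").write_text(new_src)
        Path(workdir, "test_modern.py").write_text(test_src)
        result = subprocess.run(["python", "-m", "pytest", workdir, "-q"],
                                capture_output=True)
    return result.returncode == 0
```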

MongoDB’s management thinks that the company’s database is uniquely suited for the query-rich and complex data structures commonly found in AI applications; AI-powered recommendation systems have to consider complex data structures, beyond a customer’s purchase history; MongoDB’s database unifies source data, metadata, operational data and vector data all in one platform, providing a better developer experience; management thinks MongoDB is well-positioned for AI agents because AI agents that perform tasks need to interact with complex data structures, and MongoDB’s database is well-suited for this

MongoDB is uniquely equipped to query rich and complex data structures typical of AI applications. The ability of a database to query rich and complex data structures is crucial because AI applications often rely on highly detailed, interrelated and nuanced data to make accurate predictions and decisions. For example, a recommendation system doesn’t just analyze a single customer’s purchase but also considers their browsing history, peer group behavior and product categories, requiring a database that can query and traverse these complex data structures. In addition, MongoDB’s architecture unifies source data, metadata, operational data and vector data all in one platform, obviating the need for multiple database systems and complex back-end architectures. This enables a more compelling developer experience than any other alternative…

…When you think about agents, there are jobs, there are projects, and then there are tasks. Right now, the agents that are being rolled out are really focused on tasks, like, say, something from Sierra or some other companies that are rolling out agents. But you’re right, what they need to be able to do is deal with rich and complex data structures.

Now why is this important in AI? AI models don’t just look at isolated data points; they need to understand relationships, hierarchies and patterns within the data. They need to be able to essentially get real-time insights. For example, if you have a chatbot where a customer is trying to get an update on the order they placed 5 minutes ago because they may not have gotten any confirmation, your chatbot needs to be able to deal with real-time information. You also need to be able to handle very advanced use cases: to do things like fraud detection, or to understand behaviors in a supply chain, you need to understand intricate data relationships. All these things are consistent with what MongoDB offers. And so we believe that at the end of the day, we are well positioned to handle this.

And the other thing that I would say is that we’ve embedded, in a very natural way, search and vector search. So we’re not just an OLTP [online transaction processing] database. We do text search and vector search, and that’s all one experience, and no other platform offers that; we think we have a real advantage.
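
As an illustration of what unifying operational data and vector data in one platform looks like in practice, here’s a minimal sketch using MongoDB’s `$vectorSearch` aggregation stage; the cluster URI, database, collections, index, and field names are hypothetical:

```python
# One aggregation pipeline that mixes semantic (vector) retrieval with a join
# against live operational data -- no separate vector database required.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
db = client["shop"]  # hypothetical database

def similar_products_with_stock(query_vector: list[float]):
    return list(db.products.aggregate([
        {"$vectorSearch": {                  # Atlas Vector Search stage
            "index": "product_vectors",      # hypothetical index name
            "path": "embedding",             # field holding the vectors
            "queryVector": query_vector,
            "numCandidates": 200,
            "limit": 10,
        }},
        {"$lookup": {                        # join hits against operational data
            "from": "inventory",
            "localField": "_id",
            "foreignField": "product_id",
            "as": "stock",
        }},
        {"$project": {
            "name": 1,
            "stock.on_hand": 1,
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]))
```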

In the AI market, MongoDB’s management is seeing most customers still being in the experimental stage, but the number of AI apps in production is increasing; MongoDB has thousands of AI apps on its platform, but only a small number have achieved enterprise-scale; there’s one AI app on MongoDB’s platform that has grown 10x since the start of 2024 and is a 7-figure workload today; management believes that as AI technology matures, there will be more AI apps that attain product-market fit but it’s difficult to predict when this will happen; management remains confident that MongoDB will capture its share of successful AI applications, as MongoDB is popular with developers building sophisticated AI apps; there are no compelling AI models for smartphones at the moment because phones do not have sufficient computing power

From what we see in the AI market today, most customers are still in the experimental stage as they work to understand the effectiveness of the underlying tech stack and build early proof-of-concept applications. However, we are seeing an increasing number of AI apps in production. Today, we have thousands of AI apps on our platform.  What we don’t yet see is many of these apps actually achieving meaningful product-market fit and therefore, significant traction. In fact, as you take a step back and look at the entire universe of AI apps, a very small percentage of them have achieved the type of scale that we commonly see with enterprise-specific applications. We do have some AI apps that are growing quickly, including one that is already a 7-figure workload that has grown 10x since the beginning of the year.

Similar to prior platform shifts, as the usefulness of AI tech improves and becomes more cost-effective, we will see the emergence of many more AI apps that do nail product-market fit, but it’s difficult to predict when that will happen more broadly. We remain confident that we will capture our fair share of these successful AI applications as we see that our platform is popular with developers building more sophisticated AI use cases…

…Today, we don’t have a very compelling model designed for our phones, right? Because today, the phones don’t have the computing horsepower to run complex models. So you don’t see a ton of very, very successful consumer apps besides, say, ChatGPT or Claude.

MongoDB’s management is building enterprise-grade Atlas Vector Search functionality into the company’s platform so that MongoDB will be in an even better position to win AI opportunities; management is bringing vector search into MongoDB’s community and EA (Enterprise Advance, which is the company’s non-Atlas business) offerings

We continue investing in our product capabilities, including enterprise-grade Atlas Vector Search functionality, to build on this momentum and even better position MongoDB to capture the AI opportunity. In addition, as previously announced, we are bringing search and vector search to our community and EA offerings, leveraging our run-anywhere competitive advantage in the world of AI…

…We are investing in what we call our EA business. First, we’re starting by investing with Search and Vector Search in the community product. That does a couple of things for us. One, whenever anyone starts with MongoDB with the open-source product, they get all the benefits of that complete and highly integrated platform. Two, those capabilities will then migrate to EA. So EA for us is an investment strategy.

MongoDB’s management is expanding the MongoDB AI Applications Program (MAAP); the MAAP has signed on new partners, including with Meta; management expects more of the MAAP workloads to happen on Atlas initially

We are expanding our MongoDB AI Applications program, or MAAP, which helps enterprise customers build and bring AI applications into production by providing them with reference architectures, integrations with leading tech providers and coordinated services and support. Last week, we announced a new cohort of partners, including McKinsey, Confluent, Capgemini and Instructure, as well as a collaboration with Meta to enable developers to build AI-enriched applications on MongoDB using Llama…

…[Question] On the MAAP program, are most of those workloads going to wind up in Atlas? Or will that be a healthy combination of EA and Atlas?

[Answer] I think it’s, again, early days. I would say — I would probably say more on the side of Atlas than EA in the early days. I think once we introduce Search and Vector Search into the EA product, you’ll see more of that on-prem. Obviously, people can use MongoDB for AI workloads using other technologies as well in conjunction with MongoDB for on-prem AI use cases. But I would say you’re probably going to see that happen first in Atlas.

Tealbook consolidated from Postgres, pgvector, and Elasticsearch to MongoDB; Tealbook has seen cost efficiencies and increased scalability with Atlas Vector Search for its application that uses generative AI to collect, verify and enrich supplier data across various sources

Tealbook, a supplier intelligence platform, migrated from Postgres, pgvector and Elasticsearch to MongoDB to eliminate technical debt and consolidate its tech stack. The company experienced workload isolation and scalability issues in pgvector and was concerned with search index inconsistencies, which were all resolved with the migration to MongoDB. With Atlas Vector Search and dedicated Search Nodes, Tealbook has realized improved cost efficiency and increased scalability for its supplier data platform, an application that uses GenAI to collect, verify and enrich supplier data across various sources.

MongoDB’s partnerships with all 3 major cloud providers – AWS, Azure, and GCP – for AI workloads are going well; management expects the cloud providers to bundle their own AI-focused database offerings with their other AI offerings, but management also thinks the cloud providers realise that MongoDB has a better offering and it’s better to partner with the company

With AWS, as you said, they just had their re:Invent last week. The relationship remains very, very strong. We closed a ton of deals this past quarter, some of them very, very large deals. We’re doing integrations to some of the new products like Q and Bedrock, and the engagement in the field has been really strong.

On Azure, as I’ve shared in the past, we started off with a little bit of a slower start. But in the words of the person who runs their partner leadership, the Azure-MongoDB relationship has never been stronger. We closed a large number of deals, we’re part of what’s called the Azure Native ISV Service program, and we have a bunch of deep integrations with Azure, including Fabric, Power BI, Visual Studio, Semantic Kernel and Azure OpenAI Studio. And we’re also one of Azure’s largest marketplace partners.

And on GCP, we’ve actually seen some uptick in terms of co-sales that we’ve done this past quarter. GCP made some comp changes that were favorable to working with MongoDB, and we saw some results in the field, and we’re focused on closing a handful of large deals with GCP in Q4. So in general, I would say things are going quite well.

And then, in terms of what I guess your question was implying, whether the hyperscalers are potentially bundling things along with their AI offerings: candidly, since day 1, the hyperscalers have been bundling their database offerings with every offering that they have. And that’s been their predominant strategy. And I think we’ve executed well against that strategy because databases are not like a by-the-way decision. It’s an important decision. And I think the hyperscalers are seeing our performance and realize it’s better to partner with us. And as I said, customers understand the importance of the data layer, especially for AI applications. And so the partnership across all 3 hyperscalers is strong.

A new MongoDB AI-related capability called Atlas Search Nodes is seeing very high demand; Atlas Search is being used by one of the world’s largest banks to provide a Google-like Search experience on payments data for customers; an AI-powered accounting software provider is using Atlas Search to allow end-users to perform ad-hoc analysis

On search, we introduced a new capability called Atlas Search Nodes, where you can asymmetrically scale your search nodes: if you have a search-intensive use case, you don’t have to scale all your nodes, which can become quite expensive. And we’ve seen this groundbreaking capability be really well received. The demand is quite high, because customers like that they can tune the configuration to the unique needs of their search requirements.

One of the world’s largest banks is using Atlas Search to provide a Google-like search experience on payments data for massive corporate customers. It’s a customer-facing application, so performance and scalability are critical. A leading provider of AI-powered accounting software uses Atlas Search to power its invoice analytics feature, which allows end users on finance teams to perform ad hoc analysis and easily find past-due invoices and invoices that contain errors.
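
For a sense of how such a “Google-like” search experience is typically built on Atlas Search, here’s a minimal sketch using the `$search` aggregation stage with fuzzy matching; the cluster URI, index, and field names are hypothetical:

```python
# Fuzzy full-text search over payment records via the Atlas Search $search
# stage -- the kind of "Google-like" experience described in the quote.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
payments = client["bank"]["payments"]  # hypothetical collection

def search_payments(query: str):
    return list(payments.aggregate([
        {"$search": {
            "index": "payments_search",            # hypothetical index name
            "text": {
                "query": query,
                "path": ["payee", "memo", "reference"],
                "fuzzy": {"maxEdits": 1},          # tolerate typos, like a search engine
            },
        }},
        {"$limit": 20},
        {"$project": {
            "payee": 1, "amount": 1, "date": 1,
            "score": {"$meta": "searchScore"},     # relevance score for ranking
        }},
    ]))
```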

Vector Search is only in its first full year of being generally available; uptake of Vector Search has been very high; MongoDB released a feature on Atlas Vector Search in 2024 Q3 that reduces memory requirements by up to 96% and this helps Atlas Vector Search support larger vector workloads at a better price-performance ratio; a multinational news organisation used Vector Search to create a generative AI tool to help producers and journalists sift through vast quantities of information; a security firm is using Vector Search for AI fraud detection; a global media company replaced Elasticsearch with Vector Search for a user-recommendation engine

On Vector Search, it’s been our first full year since going generally available, and product uptake has been very, very high. In Q3, we released quantization for Atlas Vector Search, which reduces memory requirements by up to 96%, allowing us to support larger vector workloads with vastly improved price performance.

For example, a multinational news organization created a GenAI-powered tool designed to help producers and journalists efficiently search, summarize and verify information from vast and varied data sources. A leading security firm is using Atlas Vector Search for AI fraud detection, and a leading global media company replaced Elasticsearch with a hybrid search and vector search use case for a user-recommendation engine built to suggest articles to end users.
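
The “up to 96%” figure is consistent with binary quantization, which stores each 32-bit float vector component as a single bit, a 32x reduction. Here’s a quick back-of-envelope calculation; the vector counts and dimensions below are illustrative, not MongoDB’s:

```python
# Memory for a vector index at full precision vs. binary quantization.
FLOAT32_BITS = 32  # bits per dimension at full precision
BINARY_BITS = 1    # bits per dimension after binary quantization

def index_megabytes(num_vectors: int, dims: int, bits_per_dim: int) -> float:
    return num_vectors * dims * bits_per_dim / 8 / 1_000_000

full = index_megabytes(10_000_000, 1536, FLOAT32_BITS)       # ~61,440 MB
quantized = index_megabytes(10_000_000, 1536, BINARY_BITS)   # ~1,920 MB
print(f"savings: {1 - quantized / full:.1%}")                # -> 96.9%
```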

MongoDB’s management thinks the industry is still in the very early days of shifting towards AI applications

I do think we’re in the very, very early days. Customers are still learning and experimenting… I think as people get more sophisticated with AI, as the AI technology matures and becomes more and more useful, you’ll start seeing these applications take off. I kind of chuckle that today, I see more senior leaders bragging about the chips they are using versus the apps they’re building. So it just tells you that we’re still in the very, very early days of this big platform shift.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue again had incredibly strong growth in 2024 Q3, driven by demand for the Hopper GPU computing platform; Nvidia’s H200 sales achieved the fastest ramp in the company’s history

Another record was achieved in Data Center. Revenue of $30.8 billion, up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company’s history.

Nvidia’s H200 product has 2x faster inference speed, and 50% lower total cost of ownership (TCO)

The H200 delivers up to 2x faster inference performance and up to 50% improved TCO. 

Cloud service providers (CSPs) were half of Nvidia’s Data Centre revenue in 2024 Q3, and up more than 2x year-on-year; CSPs are installing tens of thousands of GPUs to meet rising demand for AI training and inference; Nvidia Cloud Instances with H200s are now available, or soon-to-be-available, in the major CSPs

Cloud service providers were approximately half of our Data Center sales with revenue increasing more than 2x year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon.

North America, India, and Asia Pacific regions are ramping up Nvidia Cloud Instances and sovereign clouds; management is seeing an increase in momentum of sovereign AI initiatives; India’s CSPs are building data centers containing tens of thousands of GPUs and increasing GPU deployments by 10x in 2024 compared to a year ago; Softbank is building Japan’s most powerful AI supercomputer with Nvidia’s hardware 

Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs…

…Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India’s leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x…

…In Japan, SoftBank is building the nation’s most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA.

Nvidia’s revenue from consumer internet companies more than doubled year-on-year in 2024 Q3

Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. 

Nvidia’s management sees Nvidia as the largest inference platform in the world; Nvidia’s management is seeing inference really starting to scale up for the company; models that are trained on previous generations of Nvidia chips inference really well on those chips; management thinks that as Blackwell proliferates in the AI industry, it will leave behind a large installed base of infrastructure for inference; management’s dream is that plenty of AI inference happens across the world; management thinks that inference is hard because it needs high accuracy, high throughput, and low latency

NVIDIA’s Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements…

…We’re seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind it a large installed base of extraordinary infrastructure for inference. And so we’re seeing inference demand go up…

… Our hopes and dreams are that someday, the world does a ton of inference. And that’s when AI has really succeeded: when every single company is doing inference inside their companies, for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding, of course. And so we hope that every company is doing inference 24/7…

…Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible, but you also need the latency to be low. And computers that are high-throughput as well as low latency is incredibly hard to build. 

Nvidia’s management has driven a 5x improvement in Hopper inference throughput in 1 year via advancements in the company’s software; Hopper’s inference performance is set to increase by a further 2.4x shortly because of NIM (Nvidia Inference Microservices)

Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in 1 year and cut time to first token by 5x. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x. 

Nvidia’s Blackwell family of chips is now in full production; Nvidia shipped 13,000 Blackwell samples to customers in 2024 Q3; the Blackwell family comes with a wide variety of customisable configurations; management sees all Nvidia customers wanting to be first to market with the Blackwell family; management sees staggering demand for Blackwell, with Oracle announcing the world’s first zetta-scale cluster with more than 131,000 Blackwell GPUs, and Microsoft being the first CSP to offer private-preview Blackwell instances; Blackwell is dominating GPU benchmarks; Blackwell performs 2.2x better than Hopper and is also 4x cheaper; Blackwell with NVLink Switch delivered up to a 30x improvement in inference speed; Nvidia’s management expects the company’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; Blackwell’s production is in full-steam ahead and Nvidia will deliver more Blackwells in 2024 Q4 than expected; demand for Blackwell exceeds supply

Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full stack, full infrastructure, AI data center scale system with customizable configurations needed to address a diverse and growing AI market from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches, and NVLink and from liquid cooled to air cooled. 

Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world’s first zetta-scale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.

Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, a 4x reduction in cost. The NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling, throughput and response time that is excellent for running new reasoning inference applications like OpenAI’s o1 model…

…As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s…

… Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated…

…It is the case that demand exceeds our supply. And that’s expected as we’re in the beginnings of this generative AI revolution as we all know…

…In terms of how much Blackwell total systems will ship this quarter, which is measured in billions, the ramp is incredible…

…[Question] Do you think it’s a fair assumption to think NVIDIA could recover to kind of mid-70s gross margin in the back half of calendar ’25?

[Answer] Yes, I think it is a reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.  

Nvidia’s management is seeing that hundreds of AI-native companies are already delivering AI services and thousands of AI-native startups are building new services

Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor and Abridge are seeing great success, while thousands of AI-native start-ups are building new services.

Nvidia’s management is seeing large enterprises build copilots and AI agents with Nvidia AI; management sees the potential for billions of AI agents being deployed in the years ahead; Accenture has an internal AI agent use case that reduces steps in marketing campaigns by 25%-35%

Industry leaders are using NVIDIA AI to build Copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years…

… Accenture, with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including one use case that cuts manual steps in marketing campaigns by 25% to 35%.

Nearly 1,000 companies are using NIM (Nvidia Inference Microservices); management expects the Nvidia AI Enterprise platform’s revenue in 2024 to be double that from 2023; Nvidia’s software, service, and support revenue now has an annualised revenue run rate of $1.5 billion and management expects the run rate to end 2024 at more than $2 billion

Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full-year revenue to increase over 2x from last year, and our pipeline continues to build. Overall, our software, service and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.

Nvidia’s management is seeing an acceleration in industrial AI and robotics; Foxconn is using Nvidia Omniverse to improve the performance of its factories, and Foxconn’s management expects a reduction of over 30% in annual kilowatt hour usage in Foxconn’s Mexico facility

Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world, like NVIDIA NeMo for enterprise AI agents. We built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics…

…Foxconn, the world’s largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

Nvidia saw sequential growth in Data Center revenue in China because of export of compliant Hopper products; management expects the Chinese market to be very competitive

Our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries…

…We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.

Nvidia’s networking revenue declined sequentially, but there was sequential growth in Infiniband and Ethernet switches, Smart NICs (network interface controllers), and BlueField DPUs; management expects sequential growth in networking revenue in 2024 Q4; management is seeing CSPs adopting Infiniband for Hopper clusters; Nvidia’s Spectrum-X Ethernet for AI revenue was up 3x year-on-year in 2024 Q3; xAI used Spectrum-X for its 100,000 Hopper GPU cluster and achieved zero application latency degradation and maintained 95% data throughput, compared to 60% for Ethernet

Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters.

NVIDIA Spectrum-X Ethernet for AI revenue increased over 3x year-on-year. And our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI’s Colossus 100,000 Hopper supercomputer experienced 0 application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet…

…Our ability to sell our networking with many of the systems that we are doing in data center continues to grow and do quite well. So this quarter is just a slight dip down, and we’re going to be right back up in terms of growing. We’re getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems we are providing to them.

Nvidia has begun shipping new GeForce RTX AI PCs

We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft’s Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation, and coding.

Nvidia’s Automotive revenue had strong growth year-on-year and sequentially in 2024 Q3, driven by self-driving ramps of Nvidia Orin; Volvo’s electric SUV will be powered by Nvidia Orin

Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving ramps of NVIDIA Orin and robust end-market demand for NAVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.

Nvidia’s management thinks pre-training scaling of foundation AI models is intact, but it’s not enough; two other ways of scaling have emerged, namely post-training scaling and inference-time scaling; management thinks that the new ways of scaling have resulted in great demand for Nvidia’s chips, but for now, most of Nvidia’s chips are used in pre-training

Foundation model pretraining scaling is intact and it’s continuing. As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough; we’ve now discovered 2 other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning from human feedback, but now we have reinforcement learning from AI feedback and all forms of synthetically generated data that assist in post-training scaling. And one of the biggest events and one of the most exciting developments is Strawberry, OpenAI’s o1, which does inference-time scaling, what’s called test-time scaling. The longer it thinks, the better and higher-quality answer it produces. And it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect, and so on and so forth…

… We now have 3 ways of scaling and we’re seeing all 3 ways of scaling. And as a result of that, the demand for our infrastructure is really great. You see now that the tail end of the last generation of foundation models was at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pretraining scaling, post-training scaling, and now, very importantly, inference-time scaling…

…[Question] Today, how much of the compute goes into each of these buckets? How much for the pretraining? How much for the reinforcement? And how much into inference today?

[Answer] Today, it’s vastly in pretraining a foundation model because, as you know, post-training, the new technologies are just coming online. And whatever you could do in pretraining and post-training, you would try to do, so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori. And so you’ll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all 3 are scaling is actually very sensible based on where we are. And in the area of foundation models, we now have multimodal foundation models, and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we’re going to be scaling pretraining, post-training as well as inference-time scaling, and that is the reason why I think we’re going to need more and more compute.

Nvidia’s management thinks the company generates the greatest possible revenue for its customers because its products has much better performance per watt

Most data centers are now 100 megawatts to several hundred megawatts, and we’re planning on gigawatt data centers, it doesn’t really matter how large the data centers are. The power is limited. And when you’re in the power-limited data center, the best — the highest performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual road map reduces cost. But on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues. 

Nvidia’s management sees Hopper demand continuing through 2025

Hopper demand will continue through next year, surely the first several quarters of the next year. 

Nvidia’s management sees 2 fundamental shifts in computing happening today: (1) the movement from code that runs on CPUs to neural networks that run on GPUs and (2) the production of AI from data centres; the fundamental shifts will drive a $1 trillion modernisation of data centres globally

We are really at the beginnings of 2 fundamental shifts in computing that is really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that runs on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that’s happening is $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.

On the other hand, secondarily, I guess, is that on top of these systems are going to be — we’re going to be creating a new type of capability called AI. And when we say generative AI, we’re essentially saying that these data centers are really AI factories. They’re generating something. Just like we generate electricity, we’re now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we’re going to see this new type of system come online, and I call it an AI factory because that’s really as close to what it is. It’s unlike a data center of the past.

Nvidia’s management does not see any digestion happening for GPUs until the world’s data centre infrastructure is modernised

[Question] My main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase? Or is it just too premature to discuss that because you’re just at the start of Blackwell?

[Answer] I believe that there will be no digestion until we modernize $1 trillion worth of data centers.

Okta (NASDAQ: OKTA)

Okta AI is really starting to help newer Okta products

Second thing is that we have Okta AI, which we talked a lot about a couple of years ago, and we continue to work on that. And it’s really starting to help these new products, like Identity Threat Protection with Okta AI. The model inside of Identity Threat Protection and how that works is one where AI is a big part of the product functionality.

Okta’s management sees the need for authentication for AI agents and has a product called Auth for Gen AI; management thinks authentication of AI agents could be a new area of growth for Okta; management sees the pricing for Auth for Gen AI as driven by a fee per monthly active machine

Some really interesting new areas are we have something we talked about at Oktane called Auth for Gen AI, which is basically an authentication platform for agents. Everyone is very excited about agents, as they should be. I mean, we used to call them bots, right? 4, 5 years ago, they were called bots. Now they’re called agents, like what’s the big deal? How different is it? Well, you can interact with them in natural language and they can do a lot more with these models. So now it’s like bots are real in real time. But the problem is all of these bots and all of these platforms to build bots, they have the equivalent of the monitor sticky notes with passwords on them inside the bot. So there’s no protocol for single sign-on for bots. They have stored passwords in the bot. And if that bot gets hacked, guess what? You signed up for that bot and it has access to your calendar and access to your travel booking and access to your company e-mail and your company data; that’s gone, because the hacker is going to get all those passwords out there. So Auth for Gen AI automates that and makes sure you can have a secure protocol to build a bot around. And so that’s a really interesting area. It’s very new. We just announced it and all these agent frameworks and so forth are new…

… Auth for GenAI, it’s basically like — think about it as a form of machine authentication. So every time — we have this feature called machine-to-machine, which does a similar thing today, and you pay basically by the monthly active machine.
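The “sticky notes with passwords” problem McKinnon describes is what the standard OAuth 2.0 client-credentials grant already solves for machines: the agent holds its own revocable credentials and exchanges them for short-lived, scoped tokens instead of storing user passwords. A minimal sketch of that flow; the token endpoint, client ID, and secret below are placeholders, not Okta’s actual Auth for Gen AI API:

```python
import requests

TOKEN_URL = "https://idp.example.com/oauth2/v1/token"  # placeholder endpoint
CLIENT_ID = "agent-client-id"          # issued to the agent itself, revocable
CLIENT_SECRET = "agent-client-secret"  # never a user's stored password

def get_agent_token(scope: str = "calendar.read") -> str:
    """Exchange the agent's own credentials for a short-lived access token,
    so a compromised agent leaks a revocable, scoped token rather than a
    pile of user passwords."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": scope,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Pricing by “monthly active machine”, as described above, then just means metering how many distinct agent clients requested tokens in a month.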

Salesforce (NYSE: CRM)

Salesforce’s management thinks Salesforce is at the edge of the rise of digital labour, which are autonomous AI agents; management thinks the TAM (total addressable market) for digital labour is much larger than the data management market that Salesforce was previously in; management thinks Salesforce is the largest supplier of digital labour right from the get-go; Salesforce’s AgentForce service went into production in 2024 Q3 and Salesforce has already delivered 200 AgentForce deals with more to come; management has never seen anything like AgentForce; management sees AgentForce as the next evolution of Salesforce; management thinks AgentForce will help companies scale productivity independent of workforce growth; management sees AgentForce AI agents manifesting as robots that will supplement human labour; management sees AgentForce, together with robots, as a driving force for future global economic growth even with a stagnant labour force; AgentForce is already delivering tangible value to customers; Salesforce’s customers recently built 10,000 AI agents with AgentForce in 3 days, and thousands more AI agents have been built since then; large enterprises across various industries are building AI agents with AgentForce; management sees AgentForce unlocking a whole new level of operational efficiency; management will be delivering AgentForce 2.0 in December this year

We’re really at the edge of a revolutionary transformation. This is really the rise of digital labor. Now for the last — I would say for the last 25 years at Salesforce, we’ve been helping companies to manage and share their information…

…But now we’ve really created a whole new market, a new TAM, a TAM that is so much bigger and so much more exciting than the data management market that it’s hard to get our head completely around. This is the market for digital labor. And Salesforce has become, right out of the gate here, the largest supplier of digital labor and this is just the beginning. And it’s all powered by these autonomous AI agents…

…With Salesforce Agentforce, we’re not just imagining this future. We’re already delivering it. And you know that in the last week of the quarter, Agentforce went into production. We delivered 200 deals, and our pipeline is incredible for future transactions. We can talk about that with you on the call, but we’ve never seen anything like it. We don’t know how to characterize it. This is really a moment where productivity is no longer tied to workforce growth but to this intelligent technology that can be scaled without limits. And Agentforce represents this next evolution of Salesforce: Salesforce as a platform where AI agents work alongside humans in a digital workforce that amplifies and augments human capabilities and delivers with unrivaled speed…

…On top of the agentic layer, we’ll soon see a robotic layer as well where these agents will manifest into robots…

…These agents are not tools. They are becoming collaborators. They’re working 24/7 to analyze data, make decisions, take action, and we can all start to picture this enterprise managing millions of customer interactions daily with Agentforce seamlessly resolving issues, processing transactions, anticipating customer needs, freeing up humans to focus on the strategic initiatives and building meaningful relationships. And this is going to evolve with customers that we have, whether it’s a large hospital or a large hotel, where not only are the agents working 24/7, but robots are also working side-by-side with humans, robots as manifestations of agents. This is all happening before our eyes, and it isn’t just some far-off future. It’s happening right now…

…For decades, economic growth depended on expanding the human workforce. It was all about getting more labor. But with the labor force stagnating globally, Agentforce is unlocking a new path forward. It’s a new level of growth for the world and for our GDP, and businesses no longer need to choose between scale and efficiency; with agents, they can achieve both…

…Our customers are already experiencing this transformation. Agentforce is deflecting service cases and resolving issues, processing and qualifying leads, helping close more deals, and creating and optimizing marketing campaigns, all at an unprecedented scale, 24/7…

…What was remarkable was the huge thirst that our customers had for this and how they built more than 10,000 agents in 3 days. And I think you know that we then unleashed a world tour of that program, and we have now built thousands and thousands of more agents in these world tours all over the world…

…So companies like FedEx, [indiscernible], Accenture, Ace Hardware, IBM, RBC Wealth Management and many more are now building their digital labor forces on the Salesforce platform with Agentforce. So the largest and most important companies in the world, across all geographies and across all industries, are now building and delivering agents…

…While these legacy chatbots have handled these basic tasks like password resets and other basic mundane things, Agentforce is really unlocking an entirely new level of digital intelligence and operational efficiency at this incredible scale…

…I want to invite all of you to join us for the launch of Agentforce 2.0. It is incredible what you are going to see: the advancements in the technology are already amazing in accuracy and the ability to deliver additional value. And we hope that you’re going to join us in San Francisco. This is going to happen on December 17. You’ll see Agentforce 2.0 for the first time.

Salesforce is customer-zero for AgentForce and the service is live on Salesforce’s help-website; AgentForce is handling 60 million sessions and 2 million support cases annually on the help-website; the introduction of AgentForce in Salesforce’s help-website has allowed management to rebalance headcount into growth-areas; users of Salesforce’s help-website will experience very high levels of accuracy because AgentForce is grounded with the huge repository of internal and customer data that Salesforce has; management sees Salesforce’s data as a huge competitive advantage for AgentForce; AgentForce can today quickly deliver personalised insights to users of Salesforce’s help-website and hand off users to support engineers for further help; management thinks AgentForce will deflect between a quarter and half of annual case volume; Salesforce is also using AgentForce internally to engage prospects and hand off prospects to SDR (sales development representative) team

We pride ourselves on being customer zero for all of our products, and Agentforce is no exception. We’re excited to share that Agentforce is now live on help.salesforce.com…

… Our help portal, help.salesforce.com, is now live. This portal is our primary support mechanism for our customers. It lets them authenticate in; it then becomes grounded with the agent, and that help portal is already handling 60 million sessions and more than 2 million support cases every year. Now that is 100% on Agentforce…

…From a human resource point of view, we can really start to look at how we are going to rebalance our headcount out of areas that are now fully automated and into areas that are critical for us to grow, like distribution…

…Now when you use help.salesforce.com, especially as authenticated users, as I mentioned, you’re going to see this incredible level of accuracy and responsiveness, and you’re going to see remarkably low hallucinogenic performance, whether for solving simple queries or navigating complex service issues, because Agentforce is not just grounded in our Salesforce data and metadata, including the repository of 740,000 documents in 17 languages; it’s also grounded in each customer’s data, their purchases, their returns. It’s that 200 to 300 petabytes of Salesforce data that we have that gives us this kind of, I would say, almost unfair advantage with Agentforce, because our agents are going to be the most accurate and the least hallucinogenic of any, because they have access to this incredible capability. And Agentforce can instantly reason over these vast amounts of data, deliver precise, personalized [indiscernible] with citations in seconds, and Agentforce can seamlessly hand off to support engineers, delivering them a complete summary and recommendation as well. And you can all try this today. This isn’t some fantasy-land future idea; this is today’s reality…

…We expect that our own transformation with Agentforce, on help.salesforce.com and in many other areas of our company, is going to deflect between a quarter and half of annual case volume, and in optimistic cases, probably much, much more than that…

…We’re also deploying Agentforce to engage our prospects on salesforce.com, answering their questions 24/7 as well as handing them off to our SDR team. You can see it for yourself and test it out on our home page. We’ll use our new Agentforce SDR agent to further automate top-of-funnel activities like gathering lead data, providing education, qualifying prospects and booking meetings.
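What Benioff calls “grounding” is, mechanically, retrieval-augmented generation: fetch the relevant documents from the customer’s own data, answer only from them with citations, and hand off to a human when retrieval comes up empty. A minimal sketch with stand-in retrieval and model functions (none of this is Salesforce’s actual Agentforce implementation; the document id and functions are hypothetical):

```python
def retrieve(query: str, top_k: int = 3) -> list[dict]:
    # Stand-in for a search over the customer's documents and metadata.
    return [{"id": "doc-0001", "text": "Refunds are processed within 5 days."}]

def llm(prompt: str) -> str:
    # Stand-in for a grounded LLM call.
    return "Refunds are processed within 5 days [doc-0001]."

def handoff_to_engineer(question: str) -> str:
    # Stand-in for escalating to a human with a complete summary.
    return f"Escalated to a support engineer with a summary of: {question}"

def grounded_answer(question: str) -> str:
    docs = retrieve(question)
    if not docs:
        return handoff_to_engineer(question)  # escalate rather than guess
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below and cite their ids.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

print(grounded_answer("When are refunds processed?"))
```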

Salesforce’s management thinks AgentForce is much better than Microsoft’s AI Copilots

I just want to compare and contrast that against other companies who say they are doing enterprise AI. You can look at even Microsoft. We all know about Copilot; it’s been out, it’s been touted now for a couple of years. We’ve heard about Copilot. We’ve seen the demo. In many ways, it’s just repackaged ChatGPT. You can really see the difference where Salesforce now can operate its company on our platform. And I don’t think you’re going to find that on Microsoft’s website, are you?

Vivint is using AgentForce for customer support and for technician scheduling, payment requests, and more; Adecco is using AgentForce to improve the handling of job applicants (Adecco receives 300 million job applications annually); Wiley is resolving cases 40% faster with AgentForce; Heathrow Airport is using AgentForce to respond to thousands of travelers instantly, accurately, and simultaneously; SharkNinja is using AgentForce for personalised 24/7 customer support in 28 geographies; Accenture is using AgentForce to improve deal quality and boost bid coverage by 75%

One of them is the smart home security provider, Vivint. They’ve struggled with a high volume of support calls and a high churn rate for service reps. It’s a common story. But now, using Agentforce, Vivint has created a digital support staff to autonomously provide support through their app and their website, troubleshooting a broad variety of issues across all their customer touch points. And in addition, Vivint is planning to utilize Agentforce to further automate technician scheduling, payment requests, proactive issue resolution and the use of device telemetry, because Agentforce works across the entire Salesforce product line, including Slack…

…Another great customer example, and it’s already incredible the work they’ve done to get this running and going in their company, is Adecco, the world’s leading provider of talent solutions, handling 300 million job applications annually. Historically, they have just not been able to go through or respond in a timely way, of course, to the vast majority of the applications that they’re getting. But now Agentforce is going to operate at an incredible scale, sorting through the millions of resumes 24/7, matching candidates to opportunities, proactively prequalifying them for recruiters. And in addition, Agentforce can also assess candidates, helping them to refine their resumes and giving them a better chance of qualifying for a role…

…Wiley, an early adopter, is resolving cases over 40% faster with Agentforce than with their previous chatbot. Heathrow Airport, one of the busiest airports in the world, will be able to respond to thousands of travelers’ inquiries instantly, accurately and simultaneously. SharkNinja, a new logo in the quarter, chose Agentforce and Commerce Cloud to deliver 24/7 personalized support for customers across 28 international markets while unifying its service operations…

…Accenture chose Agentforce to streamline sales operations and enhance bid management for its 52,000 global sellers. By integrating sales coach and custom AI agents, Agentforce is improving deal quality and targeting a 75% boost in bid coverage. 

College Possible is using AgentForce to build virtual college counsellors as there’s a shortage of labour (for example, California has just 1 counsellor for every 500 students); College Possible built its virtual counsellors with AgentForce in under a week – basically like flipping a switch – because it has been accumulating all its data in Salesforce for years

Another powerful example is a nonprofit, College Possible. College Possible matches eligible students with counselors to help them navigate and become ready for college. And in California, for example, the statewide average stands at slightly over 1 counselor for every 500 students. It just isn’t enough. Where are we going to get all that labor…

…We’re going to get it from Agentforce. This means the vast majority of students are not getting the help they need, and now they are going to get the help they need.

College Possible created a virtual counselor built on Agentforce in under a week. They already had all the data. They have the metadata. They already knew the students. They already had all of the capabilities built into their whole Salesforce application. It was just a flip of a switch…

…  But why? It’s because all of the work and the data and the capability that College Possible has put into Salesforce over the years and years that they had it. It’s not the week that it took to get them to turn it on. They have done a lot of work.

Salesforce’s management’s initiative to have all of the company’s apps be rewritten into a single core platform is called More Core; the More Core initiative also involves Salesforce’s Data Cloud, which is important for AI to work; Salesforce is now layering the AI agent layer on top of More Core, and management sees this combination as a complete AI system for enterprises that also differentiates Salesforce’s AgentForce product

Over the last few years, we’ve really aggressively invested in integrating all of our apps on a single core platform with shared services for security, workflow, user interfaces and more. We’ve been rewriting all of our acquisitions into that common area. We’re really looking at how we take all of our applications and all of our acquisitions, everything, and deliver it as one consistent platform; we call that More Core internally inside Salesforce. And when you look at that More Core initiative, I don’t think there’s anyone who delivers this comprehensive a platform, sales, service, marketing, commerce, analytics, Slack, all of it, as one piece of code. And now deeply integrated in that one piece of code is also our Data Cloud. That is a key part of our strategy, which continues to have phenomenal momentum as well, to help customers unify and federate with zero-copy data access across all their data and metadata, which is crucial for AI to work.

And now that third layer is really opening up for us, which is this agentic layer. We have built this agentic layer that takes advantage of all the investments our customers have made in Salesforce, and made it in our platform. It’s really these 3 layers, and it’s these 3 layers that form a complete AI system for enterprises and really uniquely differentiate Salesforce, uniquely differentiate Agentforce, from every other AI platform: this is one piece of code. This isn’t 3 systems. It’s not a bunch of different apps all running independently. This is all one piece of code. That’s why it works so well, by the way, because it is one platform.

Salesforce’s management thinks jobs and roles within Salesforce will change because of AI, especially AI agents

The transformation is not without challenges. Jobs are going to evolve, roles are going to shift and businesses will need to adapt. And listen, at Salesforce, jobs are going to evolve, roles will shift and the business will need to adapt as well. We’re all going to need to rebalance our workforce as agents take on more of the work.

Salesforce’s management is hearing that a large customer of Salesforce is targeting 25% more efficiency with AI

This morning, I was on the phone with one of our large customers, and they were telling me how they’re targeting inside their company, 25% more efficiency with artificial intelligence.

Salesforce signed more than 2,000 AI deals in 2024 Q3 (FY2025 Q3), and the number of AI deals that are over $1 million more than tripled year-on-year; 75% of Salesforce’s AgentForce deals, and 9 of Salesforce’s top 10 deals, in 2024 Q3 involved Salesforce’s global partners; more than 80,000 system integrators have completed AgentForce training; hundreds of ISVs (independent software vendors) and partners are building and selling AI agents; Salesforce has a new AgentForce partner network that allows customers to deploy customised AI agents using trusted 3rd-party extensions from Salesforce App Exchange; Salesforce’s partnership with AWS Marketplace is progressing well as transactions doubled sequentially in 2024 Q3, with 10 deals exceeding $1 million

In Q3, the number of wins greater than $1 million with AI more than tripled year-over-year, and we signed more than 2,000 AI deals, including the more than 200 Agentforce wins that Marc shared…

…We’re also seeing amazing Agentforce energy across the ecosystem with our global partners involved in 75% of our Q3 Agentforce deals and 9 of our top 10 wins in the quarter. Over 80,000 system integrators have completed Agentforce training and hundreds of ISVs and technology partners are building and selling agents…

… We continue to unlock customer spend through new channels, including the Agentforce partner network that launched at Dreamforce, which allows customers to customize and deploy specialized agents using trusted third-party extensions from Salesforce App Exchange. And AWS Marketplace continues to be a growth driver. Our Q3 transactions doubled quarter-over-quarter with 10 deals exceeding $1 million. 

Veeva Systems (NYSE: VEEV)

Veeva Vault CRM has a number of new innovations coming, including two AI capabilities that will be available in late-2025 at no additional charge; one of the AI capabilities leverages Apple Intelligence; Vault CRM’s CRM Bot AI application will see Vault CRM be hooked onto customers’ own large language models, and Veeva will not be incurring compute costs

We just had our European Commercial Summit in Madrid where we announced a number of new innovations coming in Vault CRM, including two new AI capabilities – CRM Bot and Voice Control. CRM Bot is a GenAI assistant in Vault CRM. Voice Control is a voice interface for Vault CRM, leveraging Apple Intelligence. Both are included in Vault CRM for no additional charge and are planned for availability in late 2025…

…For the CRM Bot, that’s where we will hook our CRM system into the customers’ own large language model that they’re running. And that’s one we will not charge for, and we will not incur compute costs…

Veeva has a new AI application, MLR Bot, for Vault PromoMats within Commercial Cloud; MLR Bot helps perform checks on content with a Veeva-hosted large language model (LLM); MLR Bot will be available in late-2025 and will be charged separately; management has been thinking about MLR Bot for some time; management is seeing a lot of excitement over MLR Bot; management is still working through the details of the monetisation of MLR Bot; MLR Bot’s LLM will be from one of the big tech providers but it will be Veeva who’s the one paying for the compute 

We also announced MLR Bot, an AI application in Vault PromoMats to perform content quality and compliance checks with a Veeva-hosted large language model. Planned for availability in late 2025, MLR Bot will require a separate license…

… So I was at our Europe Summit event where we announced MLR Bot, something we’ve been thinking about and evaluating for some time…

…So there’s a lot of excitement. This is a really core process for life sciences companies. So a lot of excitement there…

…In terms of sizing and the monetization, we’re still working through the details on that, but there’s a ton of excitement from our existing customers. We look forward to getting some early customers started on that as we go into next year…

…MLR Bot, we will charge for, and that’s where we will host and run a large language model. Not our own large language model, right? We’ll use one from the big tech providers, but we will be paying for the compute power for that, and so we’ll be charging for that.

CRM Bot, Voice Control, and MLR Bot are part of Veeva’s management’s overall AI strategy to provide AI applications with tangible value; another part of the AI strategy involves opening up data for customers to power all forms of AI; management’s current thinking is to charge for AI applications if Veeva is responsible for paying compute costs

These innovations are part of our overall AI strategy to deliver specific AI applications that provide tangible value and enable customers and partners with the AI Partner Program, as well as the Vault Direct Data API, for the data needed to power all forms of AI…

… So where we have to use significant compute power, we will most likely charge. And where we don’t, we most likely won’t.
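Wirth’s pricing heuristic is simple enough to express directly: charge a separate license where Veeva hosts the model and pays for compute (MLR Bot), and include the feature at no charge where it calls the customer’s own LLM (CRM Bot). A hypothetical illustration of that rule (the class and field names are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class AIFeature:
    name: str
    vendor_pays_compute: bool  # who foots the GPU bill?

    @property
    def charged_separately(self) -> bool:
        # Management's stated heuristic: charge where Veeva pays for compute.
        return self.vendor_pays_compute

crm_bot = AIFeature("CRM Bot", vendor_pays_compute=False)  # customer's own LLM
mlr_bot = AIFeature("MLR Bot", vendor_pays_compute=True)   # Veeva-hosted LLM

for feature in (crm_bot, mlr_bot):
    label = "separate license" if feature.charged_separately else "included"
    print(f"{feature.name}: {label}")
```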

Wix (NASDAQ: WIX)

More than 50% of new Wix users are using the company’s AI-powered onboarding process, which was launched nearly a year ago; paid subscriptions originating from the AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix; the AI-powered onboarding process is leading to a 13% uplift in conversion rate for the most recent Self-Creator cohort; the AI website builder is free but it helps with conversions to paid subscribers

Almost one year ago, we launched our AI website builder, which is now available in 20 languages and has been a game changer in our user onboarding strategy. Today, more than 50% of new users are choosing to create their online presence through our AI-powered onboarding process. The tool is resonating particularly well with small businesses and entrepreneurs, as paid subscriptions originating from this AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix, by streamlining the website building process while offering a powerful and tailored commerce-enablement solution…

…Our most recent Self-Creator cohort showed a 13% uplift in conversion rate from our AI onboarding tool…

…[Question] A lot of the commentary seems that today, AI Website Builder is helping on conversion. I wanted to ask about specifically, is there an opportunity to directly monetize the AI products within the kind of core website design funnel?

[Answer] So I think that the way we monetize, of course, during the buildup phase of the website, is by making it easier. And when our customers are happy with their websites, of course, we convert better. So I don’t think there is any better way to monetize than that, right? The more users finish the website, the better the website, the higher the conversion and the higher the monetization.
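The monetisation logic in Abrahami’s answer is plain funnel arithmetic: a relative uplift in conversion flows straight through to paid subscriptions. A toy calculation (only the 13% uplift comes from the call; the cohort size and baseline conversion rate below are invented):

```python
new_users = 100_000          # hypothetical cohort size
baseline_conversion = 0.02   # assumed baseline paid-conversion rate
uplift = 0.13                # relative uplift reported for the AI-onboarding cohort

without_ai = new_users * baseline_conversion               # 2,000 paid subs
with_ai = new_users * baseline_conversion * (1 + uplift)   # 2,260 paid subs
print(f"extra paid subscriptions per cohort: {with_ai - without_ai:.0f}")  # 260
```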

Wix now has 29 AI assistants to support users

Earlier this year, we spoke about our plan to embed AI assistants across our platform, and we’re continuing to push that initiative forward. We now have a total of 29 assistants, spanning a wide range of use cases to support users and to service customers throughout their online journeys.

Wix has a number of AI products that are launching in the next few months that are unlike anything in the market and they will be the first AI products that Wix will be monetising directly

We have a number of AI products coming in the next few months that are unlike anything in the market today. These products will transform the way merchants manage their businesses, redefine how users interact with their customers and enhance the content creation experience. Importantly, these will also be the first AI products we plan to monetize directly. We are on the edge of unforeseen innovation, and I’m looking forward to the positive impact it will have on our users.

Zoom Communications (NASDAQ: ZM)

Zoom’s management has a new vision for Zoom, the AI-first Work Platform for Human Connection

In early October, we hosted Zoomtopia, our annual customer and innovation event, and it was an amazing opportunity to showcase all that we have been working on for our customers. We had record-breaking virtual attendance and unveiled our new vision, AI-first Work Platform for Human Connection. This update marks an exciting milestone as we extend our strength as a unified communication and collaboration platform into becoming an AI-first work platform. Our goal is to empower customers to navigate today’s work challenges by streamlining information, prioritizing tasks and making smarter use of their time.

Management has released AI Companion 2.0, which is an agentic AI technology; AI Companion 2.0 is able to see a broader window of context and gather information from internal and external sources; Zoom AI Companion monthly active users grew 59% sequentially in 2024 Q3; Zoom has over 4 million accounts that have enabled AI Companion; management thinks customers really like Zoom AI Companion; customer feedback for AI Companion has been extremely positive; management does not intend to charge customers for AI Companion

At Zoomtopia, we took meaningful steps towards that vision with the release of AI Companion 2.0…

…This release builds upon the awesome quality of Zoom AI Companion 1.0 across features like Meeting Summary, Meeting Query and Smart Compose, and brings it together in a way that evolves beyond task-specific AI towards agentic AI. This major update allows the AI Companion to see a broader window of context, synthesize the information from internal and external sources, and orchestrate action across the platform. AI Companion 2.0 raises the bar for AI and demonstrates to customers that we understand their needs…

…We saw progress towards our AI-first vision with Zoom AI Companion monthly active users growing 59% quarter-over-quarter…

…At Zoomtopia, we mentioned that there are over 4 million accounts that have already enabled AI Companion. Given the quality, ease of use and no additional cost, customers really like Zoom AI Companion…

…Feedback from our customers at Zoomtopia on Zoom AI Companion 2.0 was extremely positive because, first of all, they look at our innovation, the speed, right? And a lot of features are built into AI Companion 2.0, again, at no additional cost, right? At the same time, Enterprise customers also want to have some flexibility. That’s why we also introduced the Customized AI Companion and also AI Companion Studio. That will be available in the first half of next year, and we can also monetize…

…We are not going to charge customers for AI Companion; it comes at no additional cost.

Zscaler is using Zoom AI Companion to improve productivity across the whole company; large enterprises such as HSBC and Exxon Mobil are also using Zoom AI Companion

Praniti Lakhwara, CIO of Zscaler, provided a great example of how Zoom AI Companion helped democratize AI and enhance productivity across the organization, without sacrificing security and privacy. And it wasn’t just Zscaler. The RealReal, HSBC, ExxonMobil and Lake Flato Architects shared similar stories about Zoom’s secure, easy-to-use solutions helping them thrive in the age of AI and flexible work.

Zoom’s management recently introduced a road map of AI products that expands Zoom’s market opportunity; Custom AI Companion add-on, including paid add-ons for healthcare and education, will be released in 2025 H1; management built the monetisable parts of AI Companion after gathering customer feedback 

Building on our vision for democratizing AI, we introduced a road map of TAM-expanding AI products that create additional business value through customization, personalization and alignment to specific industries or use cases. 

 Custom AI Companion add-on, which will be released in the first half of next year, aims to meet our customers where they are in their AI journey by plugging into knowledge bases, integrating with third-party apps and personalizing experiences like custom AI avatars and AI coaching. Additionally, we announced that we’ll also have Custom AI Companion paid add-ons for health care and education available as early as the first quarter of next year…

…The reason why we introduced the Customized AI Companion or AI Companion Studio is because, a few quarters ago, we talked to many Enterprise customers. They shared their feedback with us, right? So they like AI Companion. Also, they want to make sure, hey, some customers, they already built their own AI large language model; how to [ federate ] that into our federated AI approach? And some customers, they have very large content, like a knowledge base; how to connect with that? Some customers, they have other business systems, right, like ServiceNow, Atlassian and Workday, a lot of Box and HubSpot; how to connect those data sources, right? And also, even from an employee perspective, right, they want to have a customized avatar, like an AI personal coach as well. Meaning those customers, they have customized requirements. To support those customer requirements, we need to make sure we have the AI infrastructure and technology ready, right? That’s the reason why we introduced the Customized AI Companion. The goal is really working together with customers to tailor it for each Enterprise customer. That’s the reason why it’s not free.

I think the feedback from Zoomtopia is very positive because, again, those features are not built by just several product managers and engineers thinking, let’s build that. We already solicited feedback from our Enterprise customers beforehand, so those features, I think, can truly satisfy their needs.
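Yuan’s description of the Customized AI Companion is essentially a connector-configuration problem: federate the customer’s own LLM, knowledge bases, and third-party systems behind a single assistant. A hypothetical sketch of what such a configuration could look like (the schema and endpoint are invented; ServiceNow, Atlassian, Workday, Box and HubSpot are simply the systems named on the call):

```python
# Hypothetical connector config; not Zoom's actual API or schema.
custom_companion_config = {
    "llm": {
        "mode": "federated",  # the customer brings their own large language model
        "endpoint": "https://llm.customer.example/v1",  # placeholder URL
    },
    "knowledge_bases": ["internal-kb", "support-articles"],  # hypothetical names
    "integrations": ["servicenow", "atlassian", "workday", "box", "hubspot"],
    "avatar": {"custom": True},  # the customized avatar mentioned on the call
}

def route_query(query: str, config: dict) -> str:
    # Stand-in: ground the query in the connected sources, then call the
    # customer's own federated LLM endpoint.
    sources = config["knowledge_bases"] + config["integrations"]
    return f"answered {query!r} using {len(sources)} connected sources"

print(route_query("Summarize my open tickets", custom_companion_config))
```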

Zoom’s management thinks that Zoom is very well-positioned because it is providing AI-powered tools to customers at no additional cost, unlike other competitors

Given our strength on quality plus no additional cost, Zoom is much better positioned. In particular, customers look at all the vendors when they try to consolidate — again, the AI cost is not small, right? You look at some of the competitors: per user per month, $30, right? And look at Zoom: better quality at no additional cost. That’s the reason why it comes down to total cost of ownership. Customers look at Zoom as, I think, much better positioned…

…Again, almost every business subscribes to multiple software services. If each software service vendor is going to charge the customer for AI, guess what, every business has to spend more. That’s the reason why they trust Zoom, and I think we are much better positioned.

Zoom’s management is seeing some customers find new budgets to invest in AI, whereas some customers are reallocating budgets from other areas towards AI

Every company, I think, is now thinking about where they should allocate the budget, right? Where should they get more money or funds, right, to support AI? I think every company is different. Some customers, they have a new budget. Some customers, they consolidated into a few vendors. And some customers, they just want to say, hey, maybe actually save money from other areas and shift the budget towards embracing AI.

Zoom’s management thinks Zoom will need to continue investing in AI, but they are not worried about the costs because the AI features will be monetised

Look at AI, right? So we have to invest more, right? And I think in a few areas, right? One is, look at our Zoom Workplace platform, right? We have to [ invent ] more talent, deploy more GPUs and also use more of the cloud, basically GPUs, as we keep improving the AI quality and innovating on AI features. That’s for Workplace. And at the same time, we are going to introduce the Customized AI Companion and also AI Companion Studio next year. Not only do we offer the free service for AI Companion, but those Enterprise customizations certainly can help us in terms of monetization. At the same time, we leverage the technology we built for Workplace and apply that to the Contact Center, like Zoom Virtual Agent, right, and also some other Contact Center features. We can share the same AI infrastructure, and a lot of technology components can also be shared with Zoom Contact Center.

While AI Companion is free, the Contact Center is different, right? We also can monetize. Essentially, we build the same common AI infrastructure and architecture. Workplace — the Customized AI Companion, we can monetize. Contact Center, also, we can monetize. I think more and more — like today, you see we keep investing more and more, and soon, we can also monetize more as well. That’s why I think we do not worry about the cost in the long run at all, I mean, the AI investment, because the monetization coming in certainly can help us more. So, so far, we feel very comfortable.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google and GCP), Amazon (parent of AWS), Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Veeva Systems, Wix, and Zoom Video Communications. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the third quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management recently introduced a personalised welcome tour of the Airbnb app for first-time users; management sees this personalisation feature as the beginning of a more personalised Airbnb

We also introduced 50 upgrades for guests that make Airbnb a more intuitive and personalized app. Some of the features include a personalized welcome tour of the app for first-time guests, suggested destinations when guests tap the search bar (we’ll recommend locations based on their search and booking history), and personalized listing highlights. So when a guest views a listing, we will highlight the details that are relevant to their search, and there are dozens of new features just like these. This is quite literally the beginning of a more personalized Airbnb.

Airbnb’s management is seeing great progress on AI-powered customer service; management sees 3 phases to the deployment of AI-powered customer service, where Phase 1 is Airbnb using AI to answer basic questions from customers, Phase 2 is the AI answering questions from customers in a personalised way, and Phase 3 is the AI taking personalised actions on behalf of customers; management thinks that Airbnb has hired some of the best AI talent to develop AI-powered customer service

We are seeing some really great progress on AI-powered customer service. The way we think about customer service, powered by AI is in 3 phases…

…Phase 1 is just answer basic general questions. We’re rolling out a pilot that can answer basic general questions. Phase 2 is personalization, be able to personalize the questions. Phase 3 is to take action…

…So this is where we think customer service can go enabled by AI, and we’ve hired some of the best people in the world to work on this.

Airbnb is currently in Phase 1 of deploying AI-powered customer service; management thinks that the vast majority of customer chats that are received by Airbnb will be handled directly by AI agents in the future

Phase 1 is the phase we’re in right now. First of all, on our customer contacts: we get over 10 million contacts a year. Most of the contacts that we anticipate getting in the coming years aren’t going to be phone calls. They’re going to be chats through the app. I really personally don’t like calling customer service and having to dial them. I want to be able to chat, and the chat AI can intercept. And so we think in the future, the vast majority of our chats are going to be intercepted and handled directly by the AI agent.

An example of the 3rd phase of Airbnb’s AI-powered customer service that management has in mind: An AI agent can help customers to cancel bookings and even make rebookings 

So I’ll give you an example. Let me just give you 1 example. Let’s say I were to contact customer service and I say, “how do I cancel a reservation?” In Phase 1, what we’re doing now, the AI agent will answer, probably even better than the average customer service agent, how to cancel a reservation. So we’ll take you through how to cancel a reservation step by step. In Phase 2, personalization, they’ll say, “hey, Brian, I see you have a reservation coming up in Los Angeles next week. Here’s how you cancel that reservation.” And Phase 3 is taking action. It would say, “hey, Brian, I see you have a reservation coming up in Los Angeles. Would you like me to cancel it for you? Just tell me, yes, and I’ll do it for you. I can even handle rebooking.”
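Chesky’s three phases map neatly onto an escalating agent design: answer generically, then personalise with the guest’s own context, then take action through tools. A minimal sketch with stand-in functions (this is an illustration under my own assumptions, not Airbnb’s implementation):

```python
def answer_generic(question: str) -> str:
    # Phase 1: a general answer with no personal context.
    return "To cancel: open Trips, choose the booking, and tap Cancel."

def personalize(question: str, user: dict) -> str:
    # Phase 2: the same answer, grounded in the guest's own data.
    return (f"Hi {user['name']}, I see you have a reservation in "
            f"{user['upcoming_trip']}. Here's how to cancel it step by step.")

def cancel_reservation(trip: str) -> None:
    # Stand-in for a real booking-system API call.
    print(f"[tool call] cancelling reservation: {trip}")

def take_action(question: str, user: dict, confirmed: bool) -> str:
    # Phase 3: the agent acts on the guest's behalf, with confirmation.
    if not confirmed:
        return "Would you like me to cancel it for you? Just tell me yes."
    cancel_reservation(user["upcoming_trip"])
    return "Done. I can even handle rebooking."

guest = {"name": "Brian", "upcoming_trip": "Los Angeles"}
print(personalize("how do I cancel a reservation?", guest))
print(take_action("cancel it", guest, confirmed=True))
```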

Alphabet (NASDAQ: GOOG)

Alphabet’s management thinks Alphabet is positioned to lead in AI because of the company’s full-stack approach of a robust AI infrastructure, world-class research team, and broad user-reach

We are uniquely positioned to lead in the era of AI because of our differentiated full stack approach to AI innovation, and we are now seeing this operate at scale. There’s 3 components: first, a robust AI infrastructure that includes data centers, chips and a global fiber network; second, world-class research teams who are advancing our work with deep technical AI research and who are also building the models that power our efforts. And third, a broad global reach through products and platforms that touch billions of people and customers around the world, creating a virtuous cycle.

Alphabet signed the world’s first corporate agreement for energy from multiple small modular nuclear reactors; the reactors will deliver 500 megawatts of carbon-free power 24/7

We are also making bold clean energy investments, including the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors, which will enable up to 500 megawatts of new 24/7 carbon-free power.

Since Alphabet began testing AI overviews 18 months ago, the company has reduced the cost to deliver queries by 90% while doubling the size of its Gemini foundation AI model; AI overviews have led to users coming to Search more often; AI overviews were recently rolled out to 100 new countries and territories and will reach more than 1 billion users on a monthly basis; there’s strong engagement with AI overviews, leading to higher overall search usage and user satisfaction, and users are asking longer questions and exploring more websites; the growth driven by AI overviews is increasing over time; the integration of advertising with AI overviews is performing well; Alphabet is now showing search and shopping ads within AI overviews for mobile users in the USA; management finds that users consider ads within AI overviews to be helpful; management expects Search to evolve significantly in 2025, driven by advances in AI; management is seeing the monetisation rate of ads within AI overviews to be approximately the same as that of the broader Search; reminder from management that Google started answering questions in Search 10 years ago with featured snippets and management is aware of changing trends in user behaviours in Search

Since we first began testing AI overviews, we have lowered machine cost per query significantly. In 18 months, we reduced cost by more than 90% for these queries through hardware, engineering and technical breakthroughs while doubling the size of our custom Gemini model…

…In Search, recent advancements, including AI overviews, Circle to Search and new features in Lens, are transforming the user experience, expanding what people can search for and how they search for it. This leads to users coming to Search more often for more of their information needs, driving additional search queries. Just this week, AI overviews started rolling out to more than 100 new countries and territories. It will now reach more than 1 billion users on a monthly basis. We are seeing strong engagement, which is increasing overall search usage and user satisfaction. People are asking longer and more complex questions and exploring a wide range of websites. What’s particularly exciting is that this growth actually increases over time as people learn that Google can answer more of their questions.

The integration of ads within AI overviews is also performing well, helping people connect with businesses as they search…

…AI overviews, where we have now started showing search and shopping ads within the overview for mobile users in the U.S. As you remember, we’ve already been running ads above and below AI overviews. We’re now seeing that people find ads directly within AI overviews helpful because they can quickly connect with relevant businesses, products and services to take the next step at the exact moment they need…

… So I expect Search to continue to evolve significantly in 2025, both in the search product and in Gemini…

…We recently launched ads within AI overviews on mobile in the U.S. And this really builds on our previous rollout of ads above and below the AI overviews. So overall, for AI overviews, we see monetization at approximately the same rate, which gives us a really strong base on which we can innovate even more…

…[Question] Why doesn’t it make sense to have 2 completely different search experiences one an agent like answers engine; and then two, a link-based more traditional search engine? 

[Answer] In this moment, people are using a lot of buzzwords like answer engines and all that stuff. I mean, Google started answering questions about 10 years ago in our search product with featured snippets. So look, I think, ultimately, you are serving users. User expectations are constantly evolving, and we work hard to anticipate and stay a step ahead.
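Pichai’s cost claim two quotes above is worth running through: a 90% cost reduction combined with a 2x larger model compounds into roughly a 20x improvement in cost per unit of model capability. A quick check of the arithmetic (only the 90% and 2x figures come from the call; the starting cost is an arbitrary unit, not a real Google figure):

```python
initial_cost_per_query = 1.00  # arbitrary unit, not a real Google figure
cost_reduction = 0.90          # "reduced cost by more than 90%"
model_size_multiplier = 2      # "while doubling the size of our custom Gemini model"

new_cost_per_query = initial_cost_per_query * (1 - cost_reduction)
cost_per_query_per_model_size = new_cost_per_query / model_size_multiplier

print(new_cost_per_query)             # 0.10 -> 10x cheaper per query
print(cost_per_query_per_model_size)  # 0.05 -> ~20x cheaper per unit of model size
```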

Alphabet uses and offers customers both its own TPUs (tensor processing units) and Nvidia GPUs; Alphabet is now on the 6th generation of its TPUs, known as Trillium; LG AI Research reduced its inference processing time by 50% and operating costs by 72% using Google Cloud’s TPUs and GPUs; Alphabet will be one of the first companies to provide Nvidia’s GB 200s at scale; management thinks TPUs have very attractive pricing for its capability

We use and offer our customers a range of AI accelerator options, including multiple classes of NVIDIA GPUs and our own custom-built TPUs. We are now on the sixth generation of TPUs, known as Trillium, and continue to drive efficiencies and better performance with them…

…Using a combination of our TPUs and GPUs, LG AI research reduced inference processing time for its multimodal model by more than 50% and operating costs by 72%…

…We have a wonderful partnership with NVIDIA. We are excited for the GB 200s and will be one of the first to provide it at scale…

…On your first part of the question on the TPUs: if you look at the Flash pricing we’ve been able to deliver externally, I think you can see how much more attractive it is compared to other models of that capability.

Usage of Alphabet’s Gemini foundation AI model is in a period of dramatic growth by any measure; improvements to Gemini will soon come; all 7 of Alphabet’s products that have more than 2 billion monthly users each use Gemini models; Gemini is now available on GitHub Copilot; Gemini API calls were up 14x in a 6-month period; Snap saw a 2.5 times increase in engagement with its MyAI chatbot after choosing Gemini to power the chatbot’s user experiences; Gemini’s integration with Android is improving Android; the latest Samsung Galaxy devices’ Android operating system has Gemini Live for users to converse with the Gemini model; Alphabet’s latest Pixel 9 devices have Gemini Nano within; development of the third generation of the Gemini model is progressing well; see Point 23 for how Gemini is helping advertisers

By any measure, token volume, API calls, consumer usage, business adoption, usage of the Gemini models is in a period of dramatic growth, and our teams are actively working on performance improvements and new capabilities for our range of models. Stay tuned…

… Today, all 7 of our products and platforms with more than 2 billion monthly users use Gemini models; that includes the latest product to surpass the 2 billion user milestone, Google Maps…

…Today, we shared that Gemini is now available on GitHub Copilot with more to come…

…Gemini API calls have grown nearly 14x in a 6-month period. When Snap was looking to power more innovative experiences within their MyAI chatbot, they chose Gemini’s strong multimodal capabilities. Since then, Snap saw over 2.5x as much engagement with MyAI in the United States…

… Gemini’s deep integration is improving Android. For example, Gemini Live lets you have free-flowing conversations with Gemini. People love it. It’s available on Android, including Samsung Galaxy devices. We continue to work closely with them to deliver innovations across their newest devices with much more to come. At Made by Google, we unveiled our latest Pixel 9 series of devices featuring advanced AI models, including Gemini Nano. We have seen strong demand for these devices, and they’ve already received multiple awards…

…We’ve had 2 generations of Gemini model. We are working on the third generation, which is progressing well.

Alphabet’s Project Astra will allow AI to see and reason about the physical world around users, and management aims to ship it as early as 2025

We’re building out experiences where AI can see and reason about the world around you; Project Astra is a glimpse of that future. We are working to ship experiences like this as early as 2025.

Alphabet is using AI internally to improve coding productivity and efficiency; a quarter of new code at Google is now generated by AI

We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than 1/4 of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster. 

Circle to Search is now available on more than 150 million Android devices; a third of users who have tried Circle to Search now use it weekly; Circle to Search has higher engagement with younger users

Circle to Search is now available on over 150 million Android devices with people using it to shop, translate text and learn more about the world around them. 1/3 of the people who have tried Circle to Search now use it weekly, a testament to its helpfulness and potential…

… For example, with Circle to Search, where we see higher engagement from users aged 18 to 24.  

Lens is now used in over 20 billion visual searches per month; Lens is one of the fastest-growing query types management has seen on Search; management started testing product search on Lens in October and found that shoppers are more likely to engage; management is seeing users use Lens for complex multimodal queries; Alphabet has rolled out shopping ads with Lens visual search results to better connect consumers and businesses.

Lens is now used for over 20 billion visual searches per month. Lens is one of the fastest-growing query types we see on search because of its ability to answer complex multimodal questions and help in product discovery and shopping…

…In early October, we announced product search on Google Lens, and in testing this feature, we found that shoppers are more likely to engage with content in this new format. We’re also seeing that people are turning to Lens more often to run complex multimodal queries, voicing a question or inputting text in addition to a visual. Given these new user behaviors, earlier this month we announced the rollout of shopping ads above and alongside relevant Lens visual search results to help better connect consumers and businesses. 

Customers are using Google Cloud’s AI products in 5 different ways: (1) for AI hardware and software infrastructure; (2) for building and customising AI models with Vertex; (3) for combining Google Cloud’s AI platform with its data platform; (4) for AI-powered cybersecurity solutions; and (5) for building AI agents to improve customer engagement

Customers are using our products in 5 different ways. First, our AI infrastructure. which we differentiate with leading performance driven by storage, compute and software advances as well as leading reliability and a leading number of accelerators…

…Second, our enterprise AI platform, Vertex is used to build and customize the best foundation models from Google and the industry…

…Third, customers use our AI platform together with our data platform, BigQuery, because we analyze multimodal data no matter where it is stored, with ultra-low-latency access to Gemini…

…Fourth, our AI-powered cybersecurity solutions, Google Threat Intelligence and Security Operations, are helping customers like BBVA and Deloitte prevent, detect and respond to cybersecurity threats much faster…

… Fifth, in Q3, we broadened our applications portfolio with the introduction of our new customer engagement suite. It’s designed to improve the customer experience online and in mobile apps as well as in call centers, retail stores and more. 

Waymo is the biggest part of Alphabet’s Other Bets portfolio; Alphabet’s management thinks Waymo is the clear technical leader in autonomous vehicles; Waymo is now serving 150,000 paid rides weekly and driving 1 million fully autonomous miles, and is the first autonomous vehicle company to reach these milestones; Waymo is partnering with Uber and Hyundai to deliver autonomous vehicles to more consumers; Waymo is now on its sixth generation system

I want to highlight Waymo, the biggest part of our portfolio. Waymo is now a clear technical leader within the autonomous vehicle industry and creating a growing commercial opportunity. Over the years, Waymo has been infusing cutting edge AI into its work. Now each week, Waymo is driving more than 1 million fully autonomous miles and serves over 150,000 paid rides, the first time any AV company has reached this kind of mainstream use. Through its expanded network and operations partnership with Uber in Austin and Atlanta, plus a new multiyear partnership with Hyundai, Waymo will bring fully autonomous driving to more people and places. By developing a universal driver, Waymo has multiple paths to market. And with its sixth generation system, Waymo significantly reduced unit costs without compromising safety.

Alphabet’s management finds that AI helps Alphabet better understand consumer-intent and connect consumers with advertisers

AI is expanding our ability to understand intent and connect it to our advertisers. This allows us to connect highly relevant users with the most helpful ad and deliver business impact to our customers.

Advertisers are using Gemini to build and test more creatives at scale; Audi worked with Gemini tools to increase website visits by 80% and increase clicks by 2.7 times

Advertisers now use our Gemini-powered tools to build and test a larger variety of relevant creatives at scale. Audi used our AI tools to generate multiple video, image and text assets in different lengths and orientations out of existing long-form videos. They then fed the newly generated creatives into Demand Gen to drive reach, traffic and bookings for their driving experience. The campaign increased website visits by 80% and increased clicks by 2.7x, delivering a lift in their sales. 

Alphabet is offering AI-powered campaigns to help advertisers achieve faster feedback on what is working; DoorDash saw a 15x higher conversion rate at a 50% more efficient cost per action

AI-powered campaigns help advertisers get faster feedback on what creatives work where and redirect the media buying. Using Demand Gen, DoorDash tested a mix of image and video assets to drive more impact across Google and YouTube’s visually immersive surfaces. They saw a 15x higher conversion rate at a 50% more efficient cost per action when compared to video action campaigns alone. 

Alphabet is using AI to help advertisers better measure their advertising results

This quarter, we extended the availability of our open source marketing mix model, Meridian to more customers, helping to scale measurement of cross-channel budgets to drive better business outcomes.

Alphabet’s big jump in capex in 2024 Q3 (it was $8.1 billion in 2023 Q3) was mostly for technical infrastructure, in the form of servers and data centers; management expects Alphabet’s 2024 Q4 capex to be similar to what was seen in 2024 Q3; Alphabet announced more than $7 billion in planned data center investments in 2024 Q3, with $6 billion in the USA; management expects further growth in capex in 2025, but not at the same percentage increase seen from 2023 to 2024; the use of TPUs at Alphabet helps to drive efficiencies

With respect to CapEx, our reported CapEx in the third quarter was $13 billion, reflecting investment in our technical infrastructure with the largest component being investment in servers, followed by data centers and networking equipment. Looking ahead, we expect quarterly CapEx in the fourth quarter to be at similar levels to Q3…

…In the third quarter alone, we made announcements of over $7 billion in planned data center investments with nearly $6 billion of that in the U.S…

…As you saw in the quarter, we invested $13 billion in CapEx across the company. And as you think about it, it really is divided into 2 categories. One is our technical infrastructure, and that’s the majority of that $13 billion. And the other one goes into areas such as facilities, the Bets and other areas across the company. Within TI, we have investments in servers, which includes both TPUs and GPUs. And then the second category is data centers and networking equipment. This quarter, approximately 60% of that investment in technical infrastructure went towards servers and about 40% towards data centers and networking equipment…

…And as you think about the next quarter and going into next year, as I mentioned in my prepared remarks, we will be investing in Q4 at approximately the same level of what we’ve invested in Q3, approximately $13 billion. And as we think into 2025, we do see an increase coming in 2025, and we will provide more color on that on the Q4 call, likely not the same percent step-up that we saw between ’23 and ’24, but additional increase…

…On your first part of the question on the TPUs. If you look at the Flash pricing we’ve been able to deliver externally, and how much more attractive it is compared to other models of that capability, I think that probably gives a good sense of the efficiencies we can generate from our architecture. And we are doing the same for internal use as well. The models for search, while they keep going up in capability, we’ve been able to really optimize them for the underlying architecture, and that’s where we are seeing a lot of efficiencies as well.

Amazon (NASDAQ: AMZN)

Amazon’s management believes that AI will be a big piece of the company’s robotics efforts in its fulfilment network

We continue to innovate in robotics to speed delivery, lower cost to serve, and further improve safety in our fulfillment network…

…We really do believe that AI is going to be a big piece of what we do in our robotics network. We had a number of efforts going on there. We just hired a number of people from an incredibly strong robotics AI organization. And I think that will be a very central part of what we do moving forward, too. 

Amazon’s management sees customers focused on new cloud computing efforts again, and the modernisation of their infrastructure, by migrating to the cloud, is important if they want to work on generative AI at scale

Companies are focused on new efforts again, spending energy on modernizing their infrastructure from on-premises to the cloud. This modernization enables companies to save money, innovate more quickly, and get more productivity from their scarce engineering resources. However, it also allows them to organize their data in the right architecture and environment to do generative AI at scale. It’s much harder to be successful and competitive in generative AI if your data is not in the cloud.

AWS has released nearly twice as many AI features in the last 18 months as other leading cloud providers combined; AWS’s AI business is growing at a triple digit rate at a multi-billion revenue run rate; AWS’s AI business is currently growing more than 3x faster than AWS itself grew when AWS was at a similar stage; management sees AI as an unusually large opportunity

In the last 18 months, AWS has released nearly twice as many machine learning and gen AI features as the other leading cloud providers combined. AWS’ AI business is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage and is growing more than 3x faster at this stage of its evolution than AWS itself grew, and we felt like AWS grew pretty quickly…

…It is a really unusually large, maybe once-in-a-lifetime type of opportunity. And I think our customers, the business, and our shareholders will feel good about this long term that we’re aggressively pursuing it.

Amazon has a good relationship with NVIDIA, but management has heard from customers that they want better price performance on their AI workloads, and so AWS developed its own AI chips for training and inference; AWS’s second version of its AI chip for model-training, Trainium 2, will ramp up in the next few weeks; management thinks Trainium 2 has very compelling price performance; management is seeing significant interest in Trainium 2, to the extent that they have had to increase manufacturing orders much more than originally planned

While we have a deep partnership with NVIDIA, we’ve also heard from customers that they want better price performance on their AI workloads. As customers approach higher scale in their implementations, they realize quickly that AI can get costly. It’s why we’ve invested in our own custom silicon in Trainium for training and Inferentia for inference. The second version of Trainium, Trainium2, is starting to ramp up in the next few weeks and will be very compelling for customers on price performance. We’re seeing significant interest in these chips, and we’ve gone back to our manufacturing partners multiple times to produce much more than we’d originally planned…

…We have a very deep partnership with NVIDIA, we tend to be their lead partner on most of their new chips. We were the first to offer H200s in EC2 instances. And I expect us to have a partnership for a very long time that matters.

Amazon’s management is seeing more model builders standardise on SageMaker,  AWS’s fully-managed AI service; SageMaker’s hyperpod capability helps save model-training time by up to 40%

We also continue to see increasingly more model builders standardize on Amazon SageMaker, our service that makes it much easier to manage your AI data, build models, experiment, and deploy to production. This team continues to add features at a rapid clip, punctuated by SageMaker’s unique HyperPod capability, which automatically splits training workloads across more than 1,000 AI accelerators, prevents interruptions by periodically saving checkpoints, automatically repairs faulty instances from their last saved checkpoint, and saves training time by up to 40%.
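
The checkpoint-and-repair loop described above is easy to picture in miniature. Below is a minimal, hypothetical Python sketch of the pattern (periodically persist training state, and on restart resume from the last checkpoint instead of step zero); the file path, interval, and loop are my own illustration, not SageMaker’s actual API.

```python
# Minimal sketch of checkpoint-and-resume; illustrative only.
# HyperPod automates this across 1,000+ accelerators and also
# swaps out faulty instances automatically.
import os
import pickle

CKPT_PATH = "checkpoint.pkl"  # hypothetical path
CKPT_EVERY = 100              # hypothetical interval, in steps
TOTAL_STEPS = 10_000

def load_checkpoint() -> dict:
    """Resume from the last saved step, or start fresh."""
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0}

def save_checkpoint(state: dict) -> None:
    """Persist state so a replacement node restarts from here, not step 0."""
    with open(CKPT_PATH, "wb") as f:
        pickle.dump(state, f)

state = load_checkpoint()
for step in range(state["step"], TOTAL_STEPS):
    # ... one training step would run here ...
    if step % CKPT_EVERY == 0:
        state["step"] = step
        save_checkpoint(state)
```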

Amazon’s management believes Amazon Bedrock, AWS’s AI-models-as-a-service offering for companies that want to leverage existing foundation models for customisation, has the broadest selection of leading foundation models; Bedrock recently added Anthropic’s Claude 3.5 Sonnet model, Meta’s Llama 3.2 models, and more; management is seeing companies use models from different providers within the same application and Bedrock makes it easy to orchestrate the disparate models; Bedrock also helps companies with model-access, prompt engineering, and lowering inference costs

At the middle layer, where teams want to leverage an existing foundation model, customize it with their data, and then have features to deploy high-quality generative AI applications, Amazon Bedrock has the broadest selection of leading foundation models and the most compelling modules for key capabilities like model evaluation, guardrails, RAG and agents. Recently, we’ve added Anthropic’s Claude 3.5 Sonnet model, Meta’s Llama 3.2 models, Mistral’s Large 2 models and multiple Stability AI models. We also continue to see teams use multiple model types from different model providers and multiple model sizes in the same application. There’s a lot of orchestration required to make this happen, and part of what makes Bedrock so appealing to customers and why it has so much traction is that Bedrock makes this much easier. Customers have many other requests: access to even more models, making prompt management easier, further optimizing inference costs. And our Bedrock team is hard at work making this happen.
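
As a rough illustration of the multi-model, multi-provider pattern the quote describes, here is a small sketch using boto3’s Bedrock runtime client to send one prompt to models from two providers through a single interface; the model IDs and region are assumptions for illustration, and real code would need AWS credentials and error handling.

```python
# Sketch: calling two different providers' models through one API.
# Model IDs are illustrative; check Bedrock's model catalog for real ones.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

MODELS = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # Anthropic (example ID)
    "meta.llama3-2-90b-instruct-v1:0",            # Meta (example ID)
]

def ask(model_id: str, prompt: str) -> str:
    # The Converse API normalizes request/response shapes across providers,
    # which is part of the orchestration burden the quote refers to.
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

for model_id in MODELS:
    print(model_id, "->", ask(model_id, "Summarize our Q3 results in one line."))
```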

Amazon’s management continues to see strong adoption of Amazon Q, Amazon’s generative AI assistant for software development; Amazon Q has the highest reported code acceptance rates in the industry; reminder that Amazon saved $260 million and 4,500 developer years when performing a large Java Development Kit migration through the use of Amazon Q

We’re continuing to see strong adoption of Amazon Q, the most capable generative AI-powered assistant for software development and to leverage your own data. Q has the highest reported code acceptance rates in the industry for multiline code suggestions. The team has added all sorts of capabilities in the last few months, but a very practical use case recently shared was Q Transform saving Amazon’s teams $260 million and 4,500 developer years in migrating over 30,000 applications to new versions of the Java JDK. That has excited developers and prompted them to ask how else we could help them with tedious and painful transformations.

Amazon is using generative AI pervasively across its businesses, with hundreds of apps in use or in development; Rufus is a generative AI-powered shopping assistant available in parts of Europe, North America, and India; Amazon is using generative AI to improve personalisation and product-search for consumers when shopping; Project Amelia is an AI system offering tailored business insights to Amazon sellers; Alexa, Amazon’s virtual assistant technology, is being rearchitected with new foundation AI models; the new Kindle Scribe has a built-in AI-powered notebook 

We’re also using generative AI pervasively across Amazon’s other businesses with hundreds of apps in development or launched.

For consumers, we’ve expanded Rufus, our generative AI-powered expert shopping assistant, to the U.K., India, Germany, France, Italy, Spain, and Canada. And in the U.S., we’ve added more personalization, the ability to better narrow customer intent and real-time pricing and deal information. We’ve recently debuted AI shopping guides for consumers, which simplifies product research by using generative AI to pair key factors to consider in a product category with Amazon’s wide selection, making it easier for customers to find the right product for their needs. 

For sellers, we’ve recently launched Project Amelia, an AI system that offers tailored business insights to boost productivity and drive seller growth.

We continue to rearchitect the brain of Alexa with a new set of foundation models that we’ll share with customers in the near future, and we’re increasingly adding more AI into all of our devices. Take the new Kindle Scribe we just announced. The note-taking experience is much more powerful with the new built-in AI-powered notebook, which enables you to quickly summarize pages of notes into concise bullets in a script font that can easily be shared.

Amazon’s management expects capital expenditures of $75 billion for the whole of 2024; most of the capex will be for AWS infrastructure to support demand for AI services; the capex also includes investments in Amazon’s fulfilment and transportation network; management expects capex in 2025 to increase from 2024’s level, with most of the capex for AWS, specifically generative AI; reminder that the faster AWS grows, the faster Amazon needs to invest capital for hardware; many of the assets AWS’s capex is invested in have long, useful lives; management expects to deliver high returns on invested capital with AWS’s generative AI investments; management has a lot of experience, accumulated over the years, in predicting just the right amount of compute capacity to provide for AWS before the generative AI era, and they believe they can do so again for generative AI

Year-to-date capital investments were $51.9 billion. We expect to spend approximately $75 billion in CapEx in 2024. The majority of the spend is to support the growing need for technology infrastructure. This primarily relates to AWS as we invest to support demand for our AI services while also including technology infrastructure to support our North America and international segments. Additionally, we’re continuing to invest in our fulfillment and transportation network to support the growth of the business, improve delivery speeds and lower our cost to serve. This includes investments in same-day delivery facilities, in our inbound network and as well in robotics and automation…

… I’ll take the CapEx part of that. As Brian said in his opening comments, we expect to spend about $75 billion in 2024. I suspect we’ll spend more than that in 2025. And the majority of it is for AWS, and specifically, the increased bumps here are really driven by generative AI…

…The thing to remember about the AWS business is the cash life cycle is such that the faster we grow demand, the faster we have to invest capital in data centers and networking gear and hardware. And of course, in the hardware of AI, the accelerators or the chips are more expensive than the CPU hardware. And so we invest in all of that upfront in advance of when we can monetize it with customers using the resources…

…A lot of these assets are many-year useful life assets. Data centers, for instance, are useful assets for 20 to 30 years…

…I think we’ve proven over time that we can drive enough operating income and free cash flow to make this very successful return on invested capital business. And we expect the same thing will happen here with generative AI…

…One of the least understood parts about AWS over time is that it is a massive logistics challenge. If you think about it, we have 35-or-so regions around the world, which are areas of the world where we have multiple data centers, and then probably about 130 availability zones, or data centers, and then we have thousands of SKUs we have to land in all those facilities. And if you land too little of them, you end up with shortages, which end up in outages for customers. So most don’t end up with too little, they end up with too much. And if you end up with too much, the economics are woefully inefficient. And I think you can see from our economics that we’ve done a pretty good job over time at managing those types of logistics and capacity. And it’s meant that we’ve had to develop very sophisticated models in anticipating how much capacity we need, where, in which SKUs and units.

And so I think that the AI space is, for sure, earlier stage, more fluid and dynamic than our non-AI part of AWS. But it’s also true that people aren’t showing up asking for 30,000 chips in a day. They’re planning in advance. So we have very significant demand signals giving us an idea about how much we need…

…There are some similarities in the early days here of AI, where the offerings are new and people are very excited about it. It’s moving very quickly and the margins are lower than what I think they will be over time. The same was true with AWS. If you looked at our margins around the time you were citing, in 2010, they were pretty different than they are now. I think as the market matures over time, there are going to be very healthy margins here in the generative AI space.

There are a few hundred million active Alexa devices; management had an initial vision of Alexa being the world’s best personal assistant and they believe now that Alexa’s re-architecture can give it a shot at fulfilling the initial vision

I think we have a really broad number of Alexa devices all over people’s homes and offices and automobiles and hospitality suites. We have about 0.5 billion devices out there with a couple of hundred million active end points. And when we first were pursuing Alexa, we had this vision of it being the world’s best personal assistant and people thought that was kind of a crazy idea. And I think if you look at what’s happened in generative AI over the last couple of years, I think you’re kind of missing the boat if you don’t believe that’s going to happen. It absolutely is going to happen. So we have a really broad footprint where we believe, if we rearchitect the brains of Alexa with next-generation foundational models, which we’re in the process of doing, we have an opportunity to be the leader in that space.

Amazon’s management believes that AWS’s demand substantially outweighs capacity today; management believes AWS’s rate of growth can improve over time as capacity grows

[Question] On the cloud, are you at all capacity constrained, and will the new Trainium or NVIDIA chips maybe even drive sales growth faster?

[Answer] I believe we have more demand than we could fulfill if we had even more capacity today. I think pretty much everyone today has less capacity than they have demand for, and it’s really primarily chips that are the area where companies could use more supply…

…We’re growing at a very rapid rate and have grown a pretty big business here in the AI space. And it’s early days, but I actually believe that the rate of growth there has a chance to improve over time as we have bigger and bigger capacity.

Apple (NASDAQ: AAPL)

Apple announced Apple Intelligence in June 2024; Apple Intelligence redefines privacy in AI; Apple recently released the first set of Apple Intelligence features in US English for iPhone, iPad, and Mac users, and they include writing tools, an improved version of Siri, a more intelligent Photos App, and notification summaries and priority messages; more Apple Intelligence features will be released in December 2024 and early developer feedback is great; the adoption rate of iOS18 in its first three days is twice as fast as for iOS17, suggesting interest for Apple Intelligence; Apple will release support for additional languages in Apple Intelligence in April 2025

In June, we announced Apple Intelligence, a remarkable personal intelligent system that combines the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant. Apple Intelligence marks the beginning of a new chapter for Apple Innovation and redefines privacy and AI by extending our groundbreaking approach to privacy into the cloud with private cloud compute. Earlier this week, we made the first set of Apple Intelligence features available in U.S. English for iPhone, iPad and Mac users with system-wide writing tools that help you refine your writing, a more natural and conversational Siri, a more intelligent Photos app, including the ability to create movies simply by typing a description, and new ways to prioritize and stay in the moment with notification summaries and priority messages.

And we look forward to additional intelligence features in December with even more powerful writing tools, a new visual intelligence experience that builds on Apple Intelligence and ChatGPT integration as well as localized English in several countries, including the U.K., Australia and Canada. These features have already been provided to developers, and we’re getting great feedback. More features will be rolling out in the coming months as well as support for more languages, and this is just the beginning…

…[Question] I was wondering if you could just expand a little bit on some of the early feedback to Apple Intelligence, both for iOS 18.1 but also the developer beta so far and whether you would attribute Apple Intelligence to any of the strong iPhone performance that we’ve seen to date.

[Answer] We’re getting a lot of positive feedback from developers and customers. And in fact, if you just look at the first 3 days, which is all we have obviously from Monday, the 18.1 adoption is twice as fast as the 17.1 adoption was in the year ago quarter. And so there’s definitely interest out there for Apple Intelligence…

…We started in the — with U.S. English. That started on Monday. There’s another release coming that adds additional features that I had referenced in December in not only U.S. English but also localized for U.K., Australia, Canada, Ireland and New Zealand. And then we will add more languages in April. We haven’t set the specifics yet in terms of the languages, but we’ll add more in April and then more as we step through the year. And so we’re moving just as fast as possible while ensuring quality.

Apple’s management is building the infrastructure to deliver Apple Intelligence, but it does not seem like Apple will need to significantly increase its capex budget from historical norms; management also does not see AI requiring any significant change to the intensity of Apple’s research & development (R&D) spending

[Question] Could you just talk a little bit about the CapEx outlook and whether investments in things like private cloud compute could change the historical CapEx range of roughly $10 billion a year?

[Answer] We are rolling out these features, Apple Intelligence features, already now. And so we are making all the capacity that is needed available for these features. You will see in our 10-K the amount of CapEx that we’ve incurred during the course of fiscal ’24. And in fiscal ’25, we will continue to make all the investments that are necessary, and of course, the investments in AI-related CapEx will be made…

…[Question] Given how much your tech peers are spending on AI, does this new era of Apple Intelligence actually require Apple to invest more in R&D beyond your current 7% to 8% of sales to capture this opportunity? 

[Answer] We’ve been investing heavily in R&D over the last several years. Our R&D growth has been significant during the last several years. And obviously, as we move through the course of fiscal ’24, we’ve also reallocated some of the existing resources to this new technology, to AI. And so the level of intensity that we’re putting into AI has increased a lot, and you maybe don’t see the full extent of it because we’ve also had some internal reallocation of the base of engineering resources that we have within the company.

Apple’s management thinks the introduction of Apple Intelligence will benefit the entire Apple ecosystem

[Question] I understand Apple Intelligence is a feature on the phone today. But do you think that in the future it could potentially have or benefit the services growth business? Or is that too — are those too bifurcated to even make a call on the — this early in the cycle?

[Answer] Keep in mind that we have released a lot of APIs, and developers will be taking advantage of those APIs. That release has occurred as well, and of course, more are coming. And so I definitely believe that a lot of developers will be taking advantage of Apple Intelligence in a big way. And what that does to services, I’ll not forecast, but I would say that from an ecosystem point of view, I think it will be great for the user and the user experience.

Arista Networks (NYSE: ANET)

Arista Networks’ management is seeing networking for AI gaining a lot of traction; trials that took place in 2023 are becoming pilots in 2024; management expects more production in 2025 and 2026

Networking for AI is gaining a lot of traction as we move from trials in 2023 to more pilots in 2024, connecting to thousands of GPUs, and we expect more production in 2025 and 2026.

AI data traffic is very different from traditional cloud workloads and smooth and consistent data flow is a crucial factor in AI networking

AI traffic differs greatly from cloud workloads in terms of diversity, duration and size of flow. The fidelity of AI traffic flows, where the slowest flow matters and one slow flow could slow down the entire job completion time, is a crucial factor in networking.
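
The slowest-flow point rewards a concrete example: in synchronized training, a step finishes only when its last flow finishes, so job completion time tracks the maximum, not the average. A toy calculation with hypothetical flow times:

```python
# Toy illustration: one slow flow gates the whole training step.
flow_times_ms = [10, 11, 10, 12, 10, 95]  # hypothetical per-flow completion times

average = sum(flow_times_ms) / len(flow_times_ms)
job_completion = max(flow_times_ms)  # synchronized steps wait for the slowest flow

print(f"average flow time: {average:.1f} ms")       # ~24.7 ms
print(f"job completion time: {job_completion} ms")  # 95 ms
```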

Arista Networks’ management sees the company becoming a pioneer in scale-out Ethernet accelerated networking for large AI workloads; Arista Networks’ new Etherlink portfolio scales well to networks with over 100,000 GPUs and can potentially even handle 1 million GPU clusters; Arista Networks’ latest 7700R4 Distributed Etherlink Switch (DES) platform was launched in close collaboration with Meta Platforms

Our AI centers connect seamlessly from the back end to the front end of compute, storage, WAN and classic cloud networks. Arista is emerging as a pioneer in scale-out Ethernet accelerated networking for large-scale training and AI workloads. Our new Etherlink portfolio, with wire speed 800-gig throughput and non-blocking performance, scales from single tier to efficient 2-tier networks for over 100,000 GPUs, potentially even 1 million AI accelerators with multiple tiers. Our accelerated AI networking portfolio consists of 3 families with over 20 switching products, and not just one point switch. At the recent OCP in mid-October 2024, we officially launched a very unique platform, the Distributed Etherlink 7700, to build 2-tier networks for up to 10,000 GPU clusters. The 7700R4 DES platform was developed in close collaboration with Meta. And while it may physically look like and be cabled like a 2-tier leaf-spine network, DES provides single-stage forwarding with a highly efficient spine fabric, eliminating the need for tuning and encouraging fast failover for large AI accelerator-based clusters.
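
For context on how a 2-tier design reaches those GPU counts, a generic non-blocking leaf-spine fabric built from identical switches with r ports supports roughly r²/2 endpoints: half of each leaf’s ports face hosts and half face spines. The quick sketch below uses assumed radix values for illustration, not the port counts of any specific Arista product.

```python
# Back-of-envelope: host ports in a non-blocking two-tier leaf-spine fabric
# built from identical switches with `radix` ports each. Radix values here
# are illustrative assumptions, not product specs.
def two_tier_endpoints(radix: int) -> int:
    leaves = radix               # each spine port reaches one leaf
    hosts_per_leaf = radix // 2  # half the leaf ports face hosts
    return leaves * hosts_per_leaf  # = radix**2 / 2

for radix in (64, 256, 512):
    print(f"radix {radix}: up to {two_tier_endpoints(radix):,} endpoints")
# radix 512 gives 131,072 endpoints, in line with "over 100,000 GPUs"
# from a two-tier design; adding tiers pushes toward the 1 million mark.
```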

Arista Networks’ management believes the company has the broadest set of 800 gigabit per second Ethernet products for AI networks

I’m pleased to report that the Arista 7700R4 Distributed Etherlink Switch and the 7800R4 Spine, along with the 7060X6 AI leaf that we announced in June, have entered into production, providing our customers the broadest set of 800 gigabit per second Ethernet products for their AI networks. Together with 800 gigabit per second parallel optics, our customers are able to connect two 400 gigabit per second GPUs to each port, increasing the deployment density over current switching solutions. This broad range of Ethernet platforms allows our customers to optimize density and minimize tiers to best match the requirements of their AI workload.

New AI clusters require high-speed connections to existing backbones

New AI clusters require new high-speed port connections into the existing backbone. These new clusters also increase bandwidth on the backbone to access training data, capture snapshots and deliver results generated by the cluster. This trend is providing increased demand for our 7800R3 400-gigabit solution.

Arista Networks’ management sees next-generation AI data centres needing significantly more power while doubling network performance

Next-generation data centers integrating AI will contend with significant increases in power consumption while looking to double network performance.

Arista Networks’ management thinks the adoption of AI networking will rest on specifications that the Ultra Ethernet Consortium (UEC) is expected to soon release; the UEC now has 97 members and Arista Networks is a founding member

Critical to the rapid adoption of AI networking is the Ultra Ethernet consortium specification expected imminently with Arista’s key contributions as a founding member. The UEC ecosystem for AI has evolved to over 97 members.

Arista Networks’ management thinks Ethernet is the only option for open standards-based AI networking

In our view, Ethernet is the only long-term viable direction for open standards-based AI networking.

Arista Networks’ business growth in 2024 was achieved partly with the help of AI; management is now projecting even more growth in 2025 and is confident of achieving its AI back-end revenue target of US$750 million; the adoption of Arista Networks’ AI back-end products influences the adoption of its front-end AI networking products too; management also expects Arista Networks’ front-end AI networking products to generate around US$750 million in revenue in 2025, but sometimes this gets hard to track; the US$750 million in AI back-end revenue that management expects are brand new for the company

We’ve experienced some pretty amazing growth years, with 33.8% growth in ’23, and 2024 appears to be heading to at least 18%, exceeding our prior predictions of 10% to 12%. This is quite a jump in 2024, influenced by faster AI pilots. We are now projecting an annual growth of 15% to 17% next year, translating to approximately $8 billion in 2025 revenue with a healthy expectation of operating margin. Within that $8 billion revenue target, we are quite confident in achieving our campus and AI back-end networking targets of $750 million each in 2025 that we set way back 1 or 2 years ago. It’s important to recognize, though, that the back end of AI will influence the front-end AI network and its ratios. This ratio can be anywhere from 30% to 100%, and sometimes we’ve seen it as high as 200% of the back-end network, depending on the training requirements. Our comprehensive AI center networking number is therefore likely to be double our back-end target of $750 million, now aiming for approximately $1.5 billion in 2025…

… I would expect in the back end, any share Arista gets, including that $750 million is incremental. It’s brand new to us. We were never there before…

…I think it all depends on their approach to AI. If they just want to build a back-end cluster and prove something out, then they just look for the best job completion times on intense training models. And it’s a very narrow use case. But what we’re starting to see more and more, especially with the top 5, like I said, is that for every dollar spent in the back end, you could spend 30% more, 100% more, and we’ve even seen a 200% more scenario, which is why our $750 million will carry over to, we believe, another $750 million next year on front-end traffic that will include AI, but it will include other things as well. It won’t be unique to AI. So I wouldn’t be surprised if that number is anywhere between 30% and 100%; call it 100%, which takes us to 2x our back-end number. So feeling pretty good about that. Don’t know how to exactly count that as pure AI, which is why I qualify it by saying, increasingly, if you start having inference, training, front end, storage, WAN, classic cloud all come together, the pure AI number becomes difficult to track.

Arista Networks’ management is stocking up inventory in preparation for a rapid deployment of AI networking products

On the cash front, while we have experienced significant increases in operating cash over the last couple of quarters, we anticipate an increase in working capital requirements in Q4. This is primarily driven by increased inventory in order to respond to the rapid deployment of AI networks and to reduce overall lead times as we move into 2025.

Arista Networks’ management has been surprised by the acceleration of AI pilots by its customers in 2024; management would not be surprised going forward if its AI business grows faster than its classic data center and cloud business (in other words, management would not be surprised if the company’s customers cannibalise some of their classic data center and cloud buildouts for AI)

We were pleasantly surprised with the faster acceleration of AI pilots in 2024. So we definitely see that our large cloud customers are continuing to refresh on the cloud, but are pivoting very aggressively to AI. So it wouldn’t surprise me if we grow faster in AI and faster in campus in the new center markets and slower in our classic markets called that data center and cloud. 

The 4 major AI trials Arista Networks discussed in the 2024 Q1 earnings call have now become 5 trials; 3 of the 5 customers are progressing well and are transitioning from trials to pilots, and they will each grow to 50,000-100,000 GPU clusters in 2025; the customer for the new trial that was started has historically been very focused on InfiniBand, so management is happy to have won the trial, and management hopes the trial will enter pilot and production in 2025; the last remaining customer is moving slower than management expected, with delays in their data center buildout; management has good revenue visibility for 3 of the 5 trials for the next 6-12 months and Arista Networks’ revenue-guide for 2025 does not depend on the remaining 2 trials; a majority of the trials are currently on Arista Networks’ 400-gig products because the customers are waiting for the ecosystem to develop on the 800-gig products, but management expects more adoption of the 800-gig products in 2025; Arista Networks is participating in other smaller AI trials too, but the difference is that management expects the 5 major ones to scale to at least 100,000 GPU clusters

Arista now believes we’re actually 5 out of 5, not 4 out of 5. We are progressing very well in 4 out of the 5 clusters. 3 of the customers are moving from trials to pilots this year, and we’re expecting those 3 to become 50,000 to 100,000 GPU clusters in 2025. We’re also pleased with the new Ethernet trial in 2024 with our fifth customer. This customer was historically very, very InfiniBand driven. In that particular fifth customer, we are largely in a trial mode in 2024, and we hope to go to pilots and production in 2025. So 3 are going well and one is starting. The fifth customer is moving slower than we expected. They may get back on their feet in 2025; they’re awaiting new GPUs, and they’ve got some challenges on power, cooling, et cetera. So 3, I would give an A. The fourth one, we’re really glad we won, and we’re getting started. And the fifth one, I’d say, is steady-state, not quite as great as we would have expected them to be…

…[Question] I wanted to ask a little bit more about the $750 million in AI for next year. Has your visibility on that improved over the last few months? I wanted to reconcile your comment around the fifth customer not going slower than expected. And it sounds like you’re now in 5 of 5, but wondering if that fifth customer going slower is limiting upside or limiting your visibility there?

[Answer] I think on 3 out of the 5, we have good visibility, at least for the next 6 months, maybe even 12…

…On the fourth one, we are in early trials. We’ve got some improving to do. So let’s see, but we’re not looking for 2025 to be the bang up year on the fourth one. It’s probably 2026. And on the fifth one, we’re a little bit stalled, which may be why we’re being careful about predicting how they’ll do. They may step in nicely in the second half of ’25, in which case, we’ll let you know. But if they don’t, we’re still feeling good about our guide for ’25…

…A majority of the trials and pilots are on 400 gig because people are still waiting for the ecosystem at 800, including the NICs and the UEC and the packet spraying capabilities, et cetera. So while we’re in some early trials on 800, the majority of 2024 is 400 gig. I expect as we go into 2025, we will see a better split between 400 and 800…

…So we’re not saying these 5 are the be-all, end-all, but these are the 5 we predict can go to 100,000 GPUs and more. That’s the way to look at this. These are the largest AI Titans, if you will. And they can be in the cloud hyperscaler Titan group, or they could be in the Tier 2 as well, by the way; very rarely would they be in a classic enterprise. By the way, we do have at least 10 to 15 trials going on in the classic enterprise too, but they’re much smaller GPU counts, so we don’t talk about it.

Arista Networks’ management sees NVIDIA both as a partner and a competitor in the AI networking market; Arista Networks does see NVIDIA’s Infiniband as a competing solution, but rarely sees NVIDIA’s own Ethernet solution competing; management thinks customers, ranging from those building large GPU clusters to smaller ones, all see Arista Networks as the expert when it comes to AI networking

We view NVIDIA as a good partner. If we didn’t have the ability to connect to their GPUs, we wouldn’t have all this AI networking demand. So thank you, NVIDIA. Thank you, Jensen, for the partnership. Now as you know, NVIDIA sells the full stack and most of the time, it’s with InfiniBand, and with the Mellanox acquisition, they do have some Ethernet capability. We personally do not run into the Ethernet capability very much. We run into it, maybe in 1 or 2 customers. And so generally speaking, Arista is looked upon as the expert there. We have a full portfolio. We have full software. And whether it’s the large scale-out Ethernet networking customers like the Titans or even the smaller enterprises (we’re seeing a lot of smaller GPU clusters in the enterprise), Arista is looked upon as the expert there. But that’s not to say we’re going to win 100%. We certainly welcome NVIDIA as a partner on the GPU side and a fierce competitor, and we look to compete with them on the Ethernet switching.

The AI back-end market is where Arista Networks natively connects with GPU and where NVIDIA’s Infiniband is the market leader, but Arista Networks’ Ethernet solution is aiming to be the gold standard; for the AI front-end market, Arista Networks’ solutions are the gold standard and management is seeing some customers fail to run their AI application on competing solutions and want to replace them with Arista Networks’ solutions

So since you asked me specifically about AI as opposed to cloud, let me parse this problem into 2 halves, the back end and the front end, right? At the back end, we’re natively connecting to GPUs. And there can be many times we just don’t see it because somebody just bundles it in with the GPU, in particular NVIDIA. And you may remember a year ago, I was saying we’re outside looking in because most of the bundling is happening with InfiniBand…

…So we’ll take all we can get, but we are not claiming to be a market leader there. We’re, in fact, claiming that there are many incumbents there with InfiniBand and smaller versions of Ethernet that Arista is looking to gain more credibility and experience and become the gold standard for the back end.

On the front end, in many ways, we are viewed as the gold standard. So competitively, it’s a much more complex network. You have to build a leaf-spine architecture. John alluded to this, there’s a tremendous amount of scale with L2, L3, EVPN, VXLAN, visibility, telemetry, automation, routing at scale, encryption at scale. And this, what I would call accelerated networking portfolio complements NVIDIA’s accelerated compute portfolio. And compared to all the peers you mentioned, we have the absolute best portfolio of 20 switches and 3 families and the capability and the competitive differentiation is bar none. In fact, I am specifically aware of a couple of situations where the AI applications aren’t even running on some of the industry peers you talked about, and they want to swap theirs for ours. So feeling extremely bullish with the 7800 flagship product, the newly introduced 7700 that we worked closely with Meta, the 7060, this product line running today mostly at 400 gig because a lot of the NIC and the ecosystem isn’t there for 800. But moving forward into 800, this is why John and the team are building the supply chain to get ready for it.

ASML (NASDAQ: ASML)

While ASML’s management has seen the strong performance of AI continue – and expects the performance to continue for some time – other market segments have taken longer to recover than management expected; in the Memory segment, management is seeing limited capacity additions among customers, apart from AI, as the customers embark on technology transition to HBM and DDR5

There have been quite some market dynamics in the past couple of months. Very clearly, the strong performance of AI continues, and I think it continues to come with quite some upside. We also see that other market segments are taking longer to recover. Recovery is there, but it’s more gradual than what we anticipated before, and it will continue in 2025. That does lead to some customer cautiousness…

…If you look at the Memory business, this customer cautiousness that I talked about, leads to limited capacity additions. While at the same time, we do see a lot of focus and strong demand when it comes to technology transitions and particularly as it is related to High Bandwidth Memory and to DDR5. So again, there anything related to AI is strong, but other than that there are limited capacity additions.

The AI growth-driver is very strong over the long-term and ASML’s management sees that AI is increasing share in ASML’s customers’ business

If you look at the long-term outlook, I believe the growth drivers are still very much intact. The secular growth drivers are clear and they are strong. I think if you look at AI, it is very, very strong, very clear and undisputed, and it is taking an increasing share in the business of our customers. So I think that is going very strongly.

ASML’s management is seeing upside on AI because the overall demand for AI applications continues to increase, which has driven a recovery in server demand, but management does not have complete understanding on how the AI market will play out

We also mentioned some upside on AI because we still believe that the overall demand for those applications is there and continues to increase. So if we look at the server demand, we see a very nice recovery there. A lot of that has to do with AI applications. So we talk about upside, which also means that the overall dynamic of the market is still playing out. And we felt the need to provide an update for next year based on some of the developments we have seen. In no way are we saying that there is a complete understanding of how the entire market will continue to play out in the next few months. So I think on the second part of your question, I would say maybe this has not played out fully yet…

…[Question] You would expect to happen then, I guess, to — at some point will happen?

[Answer] Well, I think if everyone — and I think a lot of us still believe in a strong AI demand in the coming years, I think that demand has to be fulfilled. Therefore, yes, I will say mostly, we will see some development also on that front in the coming months.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing next-gen AI customers want to obtain visibility into their AI usage as they continue experimenting with the technology; around 3,000 customers used at least one of Datadog’s AI integrations at the end of 2024 Q3; management is starting to see Datadog’s LLM (large language model) observability products gain traction as AI experiments start becoming production applications; hundreds of customers are already using LLM observability, and some customers have reduced time spent on investigating LLM issues from days or hours to just minutes; management is seeing customers wanting to use APM (Application Performance Monitoring) alongside LLM observability 

In the next-gen AI space, customers continue to experiment with new AI technologies. And as they do, they want to get visibility into their AI use. At the end of Q3, about 3,000 customers used one or more Datadog AI integrations to send us data about their AI, machine learning and LLM usage. As some of these experiments start turning into production AI applications, we are seeing initial signs of traction for our LLM observability products.

Today, hundreds of customers are using LLM observability with more exploring it every day. And some of our first paying customers have told us that they have cut the time spent investigating LLM latency, errors and quality from days or hours to just minutes. Our customers not only want to understand the performance and cost of their LLM applications, they also want to understand the LLM model performance within the context of their entire application. So they are using APM alongside LLM observability to get fully integrated end-to-end visibility across all their applications and tech stacks
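
To make “APM alongside LLM observability” concrete, here is a minimal, hypothetical sketch using Datadog’s ddtrace tracer to nest an LLM call inside a parent application span, so model latency and errors land in the same trace as the rest of the request; the span names, tags, and stubbed model call are my own illustration rather than the LLM Observability product’s API.

```python
# Hypothetical sketch: tracing an LLM call inside a parent request span
# so latency and errors show up in the same trace as the rest of the app.
from ddtrace import tracer

def fake_llm_call(prompt: str) -> str:
    return "stubbed completion"  # stand-in for a real model call

def handle_request(user_prompt: str) -> str:
    with tracer.trace("web.request", service="my-app"):        # app-level span
        with tracer.trace("llm.completion", service="my-app") as span:
            span.set_tag("llm.prompt_chars", len(user_prompt))
            return fake_llm_call(user_prompt)

print(handle_request("Summarize this ticket"))
```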

AI-native customers accounted for more than 6% of Datadog’s ARR in 2024 Q3 (was more than 4% in 2024 Q2); AI-native customers contributed 4 percentage points to Datadog’s year-on-year growth in 2024 Q3, compared to 2 percentage points in 2023 Q3; management has seen a very rapid ramp in usage of Datadog among large customers in the AI-native cohort, and management thinks these customers will optimise cloud and observability usage in the future, while also asking for better terms; management is seeing Datadog’s production-minded LLM observability products being used by real paying customers with real volumes in real production workloads; AI-native companies are model providers or AI infrastructure providers that serve as a proxy for the AI industry

AI native customers this quarter represented more than 6% of our Q3 ARR, up from more than 4% in Q2 and about 2.5% of our ARR in the year-ago quarter. AI native customers contributed about 4 percentage points of year-over-year growth in Q3 versus about 2 percentage points in the year-ago quarter. While we believe that adoption of AI will continue to benefit Datadog in the long term, we are mindful that some of the large customers in this cohort have ramped extremely rapidly and that these customers may optimize cloud and observability usage and increase their commitments to us over time with better terms. This may create volatility in our revenue growth in future quarters on the backdrop of long-term volume growth…

…We are seeing our production-minded LLM observability products, for example, being used by real paying customers with real volumes and real applications in real production workloads. So that’s exciting and healthy. I think it’s a great trend for the future…

… We have that group of AI-native companies, a relatively small number of them. Many of them are model providers or infrastructure providers for AI that serve the rest of the industry, and they are really a proxy for the future growth of the rest of the industry in AI.

Datadog signed a 7-figure expansion deal with a hyperscaler delivering next-gen AI models; the hyperscaler has its homegrown observability solution, but the solution needs time-consuming customisation and manual configuration; the hyperscaler chose Datadog because Datadog’s platform can scale flexibly

We signed a 7-figure annualized expansion with a division of a hyperscaler delivering next-gen AI models. This customer is very technically capable and already has a homegrown observability solution, which requires time-consuming customization and manual configuration. They will be launching new features for their large language models soon and need a platform that can scale flexibly while supporting proactive incident detection. By expanding the use of Datadog, they expect to efficiently onboard new teams and environments and support the rapidly increasing adoption of the LLMs.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business

Overall, we continue to see no change to the multiyear trend towards digital transformation and cloud migration, which we continue to believe are still in early days. We are seeing continued experimentation with new advances such as next-gen AI, and we believe this is one of the many factors that will drive greater use of the cloud and other modern technologies.

Datadog’s management is starting to see more inference AI workloads, but they are still concentrated among API-driven providers and it’s still very early days in terms of customers putting their next-gen AI applications into production; management expects more diversification to occur in the future as more companies enter production with their applications and customise their models 

In terms of the workloads, you’re right that we’re starting to see more inference workloads, but they still tend to be more concentrated across a number of API-driven providers. So there are a few others, both on LLMs and other kinds of models. So this is where I think most of the usage in production at least is today. We expect that to diversify more over time as companies get further into production with their applications and they start to be customizing more on their models…

…We are excited to see what’s happening with the AI innovation as it gets further down the pipe and away from testing and experimenting and more into production applications. And we have some signs that it’s starting to happen. Again, we see that with our LLM observability product. We see that also with some of the workloads we monitor from our customers on the infrastructure side. But I would say it’s still very early days in terms of customers being in production with their next-gen AI applications.

Datadog’s management is seeing a small amount of cloud workloads of companies being cannibalised by their AI initiatives

You’re right that where the workloads could have grown, say, 25% instead of 20%, maybe those 5 percentage points instead are being invested, in terms of infrastructure budget or innovation time budget, into AI, and that’s largely right now in experimentation and model training and that sort of thing.

Datadog’s management is working with customers with large inference workloads on how Datadog can be helpful on the GPU profiling side of inference; management is also experimenting with how Datadog can be helpful on the training side; management thinks that in a steady state, 60% of AI workloads will be inference and 40% will be training, so there’s still a lot of value to be found if Datadog can be useful on the training side too

Right now, we’re working with a number of customers that have real-world large inference workloads on how we can help on the GPU profiling side for inference. We’re doing less on the training side, mostly because the training jobs tend to be more bespoke and temporary, and there’s less of an application attached to those; they are just very large clusters of GPUs. So it’s closer to HPC in a way than it is to traditional applications, though we are also experimenting with what we can do there. There is a world where, maybe in a durable fashion, 60% of workloads are inference and 40% are training. And if that’s the case, there’s going to be a lot of value to be had by having repeatable training and repeatable tooling for that. So we are also looking into that.

Datadog is not monetising GPU instances as well as CPU instances today, but management thinks that could change in the future

As of today, we really don’t monetize GPU instances all that well compared to other CPU instances. A GPU instance is many times the cost of a CPU instance, and we charge the same amount for it. That doesn’t have to be the case in the future, if we do things that are particularly interesting, have a real impact, and deliver value in how customers use and make the best of their GPUs and, in the end, save money.

Datadog’s management is seeing Datadog’s AI-native cohort grow faster than its cloud-native cohorts did in the late 2010s and early 2020s

This is similar to what we’ve seen with cloud native in the late ’10s and early ’20s, where we had a number of cloud-native consumer companies growing very fast, with 2 differences: the AI cohort is growing faster, and there are larger individual ACVs [annual contract values] for these customers.

Datadog’s management thinks that workloads on Datadog’s platform could really accelerate when non-AI-native companies start bringing AI applications into production

In terms of the growth of workloads, look, I mean, as we said, we see growth across the customer base pretty much. We see growth of classical workloads in the cloud. We see large growth — very large growth on the AI native side. We think that the one big catalyst for future acceleration will be those AI applications going into production for non-AI native companies, for a much broader set of customers than the customers that are deploying these kinds of applications in production today. And as they do, they will also look less like just large clusters of GPUs and more like traditional applications, because the GPU needs a database, it needs a [core] application in front of it, it needs layers to secure it and authorize it and all the other things. So it’s going to look a lot more like a normal application with some additional, more concentrated compute and GPUs.

Datadog’s management does not expect Datadog to make outsized investments in GPU clusters for compute

Unlike many others, we don’t expect at this point to have outsized investments in compute. We’re not building absolutely large GPU clusters.

dLocal (NASDAQ: DLO)

dLocal’s management launched the Smart Requests functionality in 2024 Q3 that improves conversion rates for merchants by 1.22 percentage points on average, which equates to a 1.2% increase in revenue for merchants; Smart Requests relies on localised machine learning models to maximise authorisation rates for merchant

During the quarter, we launched our Smart Requests functionality, boosting our transaction performance and therefore improving conversion rates by an average of 1.22 percentage points across the board. It may sound minor, but it isn’t: in practical terms, it represents 1.2% additional revenue to our merchants. Smart Requests rely on per-country machine learning models that optimize routing and chaining so as to maximize authorization rates for our merchants.
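
The mechanism described, per-country models that score payment routes by predicted authorization rate and chain fallbacks, can be sketched in a few lines. Everything below (acquirer names, rates) is hypothetical and for illustration only:

```python
# Hypothetical sketch of auth-rate-maximizing routing with fallback chaining.
# Acquirers and predicted rates are made up for illustration.
PREDICTED_AUTH_RATE = {
    ("BR", "acquirer_a"): 0.87,
    ("BR", "acquirer_b"): 0.91,
    ("MX", "acquirer_a"): 0.83,
    ("MX", "acquirer_b"): 0.79,
}

def route_order(country: str) -> list[str]:
    """Try acquirers in descending order of predicted authorization rate."""
    candidates = [
        (rate, acq)
        for (c, acq), rate in PREDICTED_AUTH_RATE.items()
        if c == country
    ]
    return [acq for rate, acq in sorted(candidates, reverse=True)]

print(route_order("BR"))  # ['acquirer_b', 'acquirer_a']
print(route_order("MX"))  # ['acquirer_a', 'acquirer_b']
```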

Fiverr (NYSE: FVRR)

Fiverr’s management believes that Fiverr’s next generation of products must empower its community to fully leverage AI, and that the best work will be done in the future by a combination of humans and AI

One thing that became clearer to me in the last year is that with the emergence of GenAI and the promise of AGI, the next generation of products we build must empower our community to fully leverage artificial intelligence. It also became clear to me that in the future, the best work will be done by humans and AI technology together, not humans alone or AI alone.

Fiverr’s management is providing Fiverr’s customers with an AI assistant to help them navigate the company’s platform 

This means that every business that comes to Fiverr will have a world-class AI assistant to help them get things done, from ideation, scoping and briefing to project management and workflow automation. It means that they can seamlessly leverage both human talent and machine intelligence to create the most beautiful results.

Fiverr’s management is building a new search experience on the Fiverr platform for buyers which incorporates Neo, its AI powered smart matching tool; Fiverr has launched Dynamic Matching to allow buyers to put together project briefs with an AI assistant to help them get matched to the most relevant freelancers; these new features have experienced enthusiastic reception in just a few weeks; projects that use these new features are bigger projects than the typical scope of projects on Fiverr

On the buyer side, we are building a new search experience that not only includes more dynamic catalogs but also incorporates Neo, an AI-powered smart matching tool, to help customers match with more contextual information. We launched Dynamic Matching to allow buyers to put together comprehensive briefs with a powerful AI assistant and then get matched with the most relevant freelancer with a tailored proposal…

…Even in the few weeks since we launched these products, we have already seen an enthusiastic reception from our community and promising performance. The projects that come through these products are several times larger than a typical project on Fiverr, and we believe it has a lot more potential down the road as the awareness and trust of these products grow on the platform.

Mastercard (NYSE: MA)

Mastercard acquired Brighterion in 2017 to use AI capabilities for decision intelligence; after boosting the product with generative AI, Mastercard has seen up to a 20% lift in the product

One of the more recent ones that we talked about, where we invested heavily using our Brighterion acquisition from back in 2017 to apply our AI capabilities, is Decision Intelligence. We’ve now boosted the product with Gen AI and the outcome that we see is tremendous. It is up to a 20% lift that we see.

Meta Platforms (NASDAQ: META)

Meta’s management is seeing rapid adoption of Meta AI and Llama; Meta AI now has more than 500 million monthly actives; Llama token usage has grown exponentially in 2024 so far; Meta released Llama 3.2 in 2024 Q3; the public sector is adopting Llama; management is seeing higher usage of Meta AI as the models improve; Meta AI is built on Llama 3.2; voice functions for Meta AI are now available in English in the USA, Australia, Canada, and New Zealand; image editing through simple text prompts, and the ability to learn about images, are now available in Meta AI in the USA; Meta AI remains on track to be the most-used AI assistant in the world by end-2024; early use cases for Meta AI are for information gathering, help with how-to tasks, explore interests, look for content, and generate images

We’re seeing rapid adoption of Meta AI and Llama, which is quickly becoming a standard across the industry…

…Meta AI now has more than 500 million monthly actives…

…Llama token usage has grown exponentially this year and the more widely that Llama gets adopted and becomes the industry standard the more that the improvements to its quality and efficiency will flow back to all of our products. This quarter, we released Llama 3.2, including the leading small models that run on device and open source multimodal models…

…We’re also working with the public sector to adopt Llama across the U.S. government…

…We’re seeing lifts in usage as we improve our models and have introduced a number of enhancements in recent months to make Meta AI more helpful and engaging. Last month, we began introducing voice, so you can speak with Meta AI more naturally, and it’s now fully available in English to people in the U.S., Australia, Canada and New Zealand. In the U.S., people can now also upload photos to Meta AI to learn more about them, write captions for posts and add, remove or change things about their images with a simple text prompt. These are all built with our first multimodal foundation model, Llama 3.2…

…We’re excited about the progress of Meta AI. It’s obviously very early in its journey, but it continues to be on track to be the most used AI assistant in the world by end of year…

… A number of the frequent use cases we’re seeing include information gathering, help with how-to tasks, which is the largest use case. But we also see people using it to go deeper on interests, to look for content on our services, and for image generation, which has also been a pretty popular use case so far.

Meta’s management is seeing AI have a positive impact on nearly all aspects of Meta; improvements to Meta’s AI-driven feed and video recommendations have driven increases in time spent on Facebook this year by 8% and on Instagram by 6%; more than 1 million advertisers are using Meta’s Gen AI tools and advertisers using image generation are enjoying a 7% increase in conversions; management sees plenty of opportunities to use new AI advances to accelerate Meta’s core business, so they want to invest more there

We’re seeing AI have a positive impact on nearly all aspects of our work from our core business engagement and monetization to our long-term road maps for new services and computing platforms…

…Improvements to our AI-driven feed and video recommendations have led to an 8% increase in time spent on Facebook and a 6% increase on Instagram this year alone. More than 1 million advertisers used our Gen AI tools to create more than 15 million ads in the last month. And we estimate that businesses using image generation are seeing a 7% increase in conversions and we believe that there’s a lot more upside here…

…It’s clear that there are a lot of new opportunities to use new AI advances to accelerate our core business that should have strong ROI over the next few years. So I think we should invest more there.

The development of Llama 4 is progressing well; Llama 4 is being trained on more than 100,000 H100s, which management believes is the biggest training cluster in the world; management expects the smaller Llama 4 models to be ready in early-2025; management thinks Llama 4 will be much faster and will have new modalities, stronger capabilities, and better reasoning

I’m even more excited about Llama 4, which is now well into its development. We’re training the Llama 4 models on a cluster that is bigger than 100,000 H100s or bigger than anything that I’ve seen reported for what others are doing. I expect that the smaller Llama 4 models will be ready first, and they’ll be ready — we expect sometime early next year. And I think that they are going to be a big deal on several fronts, new modalities, capabilities, stronger reasoning and much faster.

Meta’s management remains convinced that open source is the way to go for AI development; the more developers use Llama, the more Llama improves in both quality and efficiency; in terms of efficiency, with higher adoption of Llama, management is seeing NVIDIA and AMD optimise their chip designs to better run Llama

It seems pretty clear to me that open source will be the most cost-effective, customizable, trustworthy, performant and easiest-to-use option that is available to developers. And I am proud that Llama is leading the way on this…

…[Question] You said something along the lines of the more standardized Llama becomes the more improvements will flow back to the core meta business. And I guess, could you just dig in a little bit more on that?

[Answer] The improvements to Llama, I’d say come in a couple of flavors. There’s sort of the quality flavor and the efficiency flavor. There are a lot of researchers and independent developers who do work and because Llama is available, they do the work on Llama and they make improvements and then they publish it and it becomes — it’s very easy for us to then incorporate that both back into Llama and into our Meta products like Meta AI or AI Studio or Business AIs because the work — the examples that are being shown are people doing it on our stack.

Perhaps more importantly, is just the efficiency and cost. I mean this stuff is obviously very expensive. When someone figures out a way to run this better if that — if they can run it 20% more effectively, then that will save us a huge amount of money. And that was sort of the experience that we had with open compute and why — part of why we are leaning so much into open source here in the first place, is that we found counterintuitively with open compute that by publishing and sharing the architectures and designs that we had for our compute, the industry standardized around it a bit more. We got some suggestions also that helped us save costs and that just ended up being really valuable for us. Here, one of the big costs is chips — a lot of the infrastructure there. What we’re seeing is that as Llama gets adopted more, you’re seeing folks like NVIDIA and AMD optimize their chips more to run Llama specifically well, which clearly benefits us. 

Meta’s management expects to continue investing seriously into AI infrastructure

Our AI investments continue to require serious infrastructure, and I expect to continue investing significantly there too. We haven’t decided on the final budget yet, but those are some of the directional trends that I’m seeing.

Meta’s management thinks the integration of Meta AI into the Meta Ray-Ban glasses is what truly makes the glasses special; the Meta Ray-Ban glasses can answer questions throughout the day, help wearers remember things, give suggestions to wearers in real-time using multi-modal AI, and translate languages directly into the ear of wearers; management continues to think glasses are the ideal form-factor for AI because glasses lets AI see what you see and hear what you hear; demand for the Meta Ray-Ban glasses continues to be really strong; a recent release of the glasses was sold out almost immediately; Meta has deepened its partnership with EssilorLuxottica to build future generations of the glasses; Meta recently showcased Orion, its first full holographic AR glasses

This quarter, we also had several milestones around Reality Labs and the integration of AI and wearables. Ray-Ban Meta glasses are the prime example here. They’re great-looking glasses that let you take photos and videos, listen to music and take calls. But what makes them really special is the Meta AI integration. With our new updates, it will be able to not only answer your questions throughout the day, but also help you remember things, give you suggestions as you’re doing things using real-time multi-modal AI and even translate other languages right in your ear for you. I continue to think that glasses are the ideal form factor for AI because you can let your AI see what you see, hear what you hear and talk to you.

Demand for the glasses continues to be very strong. The new clear edition that we released at Connect sold out almost immediately and has been trading online for over $1,000. We’ve deepened our partnership with EssilorLuxottica to build future generations of smart eyewear that deliver both cutting-edge technology and style.

At Connect, we also showed Orion, our first full holographic AR glasses. We’ve been working on this one for about a decade, and it gives you a sense of where this is all going. We’re not too far off from being able to deliver great-looking glasses to let you seamlessly blend the physical and digital worlds so you can feel present with anyone no matter where they are. And we’re starting to see the next computing platform come together and it’s pretty exciting.

Newer scaling laws seen with Meta’s large language models inspired management to develop new ranking model architectures that can learn more effectively from significantly larger data sets; the new ranking model architectures have been deployed to Facebook’s video ranking models, helping to deliver more relevant recommendations; management is exploring the use of the new ranking model architectures on other services and the introduction of cross-surface data to the models, with the view that these moves will unlock more relevant recommendations and lead to better engineering efficiency

Previously, we operated separate ranking and recommendation systems for each of our products because we found that performance did not scale if we expanded the model size and compute power beyond a certain point. However, inspired by the scaling laws we were observing with our large language models, last year, we developed new ranking model architectures capable of learning more effectively from significantly larger data sets.

To start, we have been deploying these new architectures to our Facebook video ranking models, which has enabled us to deliver more relevant recommendations and unlock meaningful gains in watch time. Now we’re exploring whether these new models can unlock similar improvements to recommendations on other services. After that, we will look to introduce cross-surface data to these models, so our systems can learn from what is interesting to someone on one surface of our apps and use it to improve their recommendations on another. This will take time to execute and there are other explorations that we will pursue in parallel. However, over time, we are optimistic that this will unlock more relevant recommendations while also leading to higher engineering efficiency as we operate a smaller number of recommendation systems.
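
Meta has not published this architecture, so the sketch below is purely illustrative: all names and dimensions are invented. It shows the general shape of the idea in PyTorch, a single ranking model whose learned surface embedding lets the same parameters score items for different surfaces, which is what allows training signal from one surface to inform recommendations on another.

```python
import torch
import torch.nn as nn

class CrossSurfaceRanker(nn.Module):
    """One ranking model shared across surfaces (e.g. Facebook video,
    Instagram Reels) instead of a separate recommender per product."""
    def __init__(self, n_users=1000, n_items=5000, n_surfaces=4, d=32):
        super().__init__()
        self.user = nn.Embedding(n_users, d)
        self.item = nn.Embedding(n_items, d)
        self.surface = nn.Embedding(n_surfaces, d)  # which app surface is asking
        self.head = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, user_id, item_id, surface_id):
        x = torch.cat([self.user(user_id), self.item(item_id),
                       self.surface(surface_id)], dim=-1)
        return self.head(x).squeeze(-1)  # engagement score (a logit)

model = CrossSurfaceRanker()
users = torch.tensor([7, 7])          # the same user...
items = torch.tensor([42, 42])        # ...and the same item...
surfaces = torch.tensor([0, 1])       # ...scored on two different surfaces
print(model(users, items, surfaces))  # two scores from one set of parameters
```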

Meta’s management is using new approaches to AI modelling to allow Meta’s ad systems to consider a person’s sequence of actions before and after seeing an ad, which allow the systems to better predict a person’s response to specific ads; the new approaches to AI modelling have delivered a 2%-4% increase in conversions in tests; Meta is seeing strong user-retention with its generative AI tools for image expansion, background generation, and text generation; Meta has started testing its first generative AI tools for video expansion and image animation and plans to roll them out broadly by early-2025

The second part of improving monetization efficiency is enhancing marketing performance. Similar to organic content ranking, we are finding opportunities to achieve meaningful ads performance gains by adopting new approaches to modeling. For example, we recently deployed new learning and modeling techniques that enable our ad systems to consider the sequence of actions a person takes before and after seeing an ad. Previously, our ad system could only aggregate those actions together without mapping the sequence. This new approach allows our systems to better anticipate how audiences will respond to specific ads. Since we adopted the new models in the first half of this year, we’ve already seen a 2% to 4% increase in conversions based on testing within selected segments…

…Finally, there is continued momentum with our Advantage+ solutions, including our ad creative tools. We’re seeing strong retention with advertisers using our Generative AI-powered image expansion, background generation and text generation tools, and they’re already driving improved performance for advertisers even at this early stage. Earlier this month, we began testing our first video generation features, video expansion and image animation. We expect to make them more broadly available by early next year.
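
As a hedged illustration of the modeling change described (not Meta’s actual system; the action vocabulary and model are invented), the sketch below encodes a user’s ordered sequence of actions with a small recurrent network and outputs a conversion probability. Two users with identical actions in different orders get different scores, which is exactly what an order-blind aggregation of action counts cannot express.

```python
import torch
import torch.nn as nn

class ActionSequenceModel(nn.Module):
    """Predict P(conversion) from the ordered sequence of a user's actions."""
    def __init__(self, n_action_types=50, d=16):
        super().__init__()
        self.embed = nn.Embedding(n_action_types, d)  # view, click, add-to-cart, ...
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, 1)

    def forward(self, actions):            # actions: (batch, seq_len) of action ids
        h = self.embed(actions)
        _, last = self.rnn(h)              # final hidden state summarises the sequence
        return torch.sigmoid(self.out(last[-1])).squeeze(-1)

model = ActionSequenceModel()
a = torch.tensor([[1, 2, 3, 4]])           # same actions...
b = torch.tensor([[4, 3, 2, 1]])           # ...in the reverse order
print(model(a).item(), model(b).item())    # different scores; counts alone would tie
```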

Meta’s management expects to significantly increase Meta’s infrastructure for generative AI while prioritising fungibility

Given the lead time of our longer-term investments, we also continue to maximize our flexibility so that we can react to market developments. Within Reality Labs, this has benefited us as we’ve evolved our road map to respond to the earlier-than-expected success of smart glasses. Within Generative AI, we expect that significantly scaling up our infrastructure capacity now, while also prioritizing its fungibility, will similarly position us well to respond to how the technology and market develop in the years ahead.

Meta’s management continues to develop tools for individuals and businesses to create AI agents easily; management thinks that Meta’s progress with AI agent tools is currently at where Meta was with Meta AI a year ago; management wants the AI agent tools to be widely used in 2025

There are also other new products like that, things around AI Studio. This year, we really focused on rolling out Meta AI as kind of our single assistant that people can ask any question to, but I think there’s a lot of opportunities that I think we’ll see ramp more over the next year in terms of both consumer and business use cases, for people interacting with a wide variety of different AI agents, consumer ones with AI Studio around whether it’s different creators or kind of different agents that people create for entertainment. Or on the business side, we do want to continue making progress on this vision of making it so that any small business, or any business over time, can with a few clicks stand up an AI agent that can help do customer service and sell things to all of their customers around the world, and I think that’s a huge opportunity. So it’s very broad…

…But I’d say that we’re — today, with AI Studio and business AIs, about where we were with Meta AI about a year ago. So I think in the next year, our goal around that is going to be to try to make those pretty widespread use cases, even though there’s going to be a multiyear path to getting kind of the depth of usage and the business results that we want.

Meta’s management is not currently sharing quantitative metrics on productivity improvements with the internal use of AI, but management is excited about the internal adoption they are seeing and the future opportunities for doing so

On the use of AI and employee productivity, it’s certainly something that we’re very excited about. I don’t know that we have anything particularly quantitative that we’re sharing right now. I think there are different efficiency opportunities with AI that we’ve been focused on in terms of where we can reduce costs over time and generate savings through increasing internal productivity in areas like coding. For example, it’s early, but we’re seeing a lot of adoption internally of our internal assistant and coding agent, and we continue to make Llama more effective at coding, which should also make this use case increasingly valuable to developers over time.

There are also places where we hope over time that we’ll be able to deploy these tools against a lot of our content moderation efforts, to help make the big body of content moderation work that we undertake more efficient and effective. And there are lots of other places around the company where I would say we’re relatively early in exploring the way that we can use LLM based tools to make different types of work streams more efficient.

It appears that Meta has achieved more than management expected in terms of developing its own AI infrastructure (in other words, developing its own AI chips)

So I think part of what we’re seeing this year is the infra team is executing quite well. And I think that’s why, over the course of the year, we’ve been able to build out more capacity. I mean going into the year, we had a range for what we thought we could potentially do. And we have been able to do, I think, more than we’d kind of hoped and expected at the beginning of the year. And while that reflects as higher expenses, it’s actually something that I’m quite happy that the team is executing well on. And that execution makes me somewhat more optimistic that we’re going to be able to keep on building this out at a good pace, but that’s part of the whole thing.

Meta’s management is starting to test the addition of AI-generated or AI-augmented content to users of Instagram and Facebook; management has high confidence that AI-generated and/or AI-augmented content will be an important trend in the future

I think we’re going to add a whole new category of content, which is AI generated or AI summarized content, or kind of existing content pulled together by AI in some way. And I think that that’s going to be just very exciting for Facebook and Instagram and maybe Threads or other kind of feed experiences over time. We’re starting to test different things around this. I don’t know if we know exactly what’s going to work really well yet. Some things are promising. I don’t know that this is going to be a big impact on the business in ’25 would be my guess. But I have high confidence that over the next several years, this is going to be an important trend and one of the important applications.

Meta’s management is currently focused on the engagement and user-experience of Meta AI; the monetisation of Meta AI will come later

Right now, we’re really focused on making Meta AI as engaging and valuable a consumer experience as possible. Over time, we think there will be a broadening set of queries that people use it for. And I think that the monetization opportunities will exist over time as we get there. But right now, I would say we are really focused on the consumer experience above all and this is sort of a playbook for us with products that we put out in the world where we really dial in the consumer experience before we focus on what the monetization could look like.

Microsoft (NASDAQ: MSFT)

Microsoft’s AI business is on track to exceed a $10 billion annual revenue run rate in 2024 Q4, just 2.5 years after it started; it will be the fastest business in the company’s history to reach this milestone; Microsoft’s AI business is nearly all inference (see Point 32 for more)

All up, our AI business is on track to surpass an annual revenue run rate of $10 billion next quarter, which will make it the fastest business in our history to reach this milestone…

…We’re excited that only 2.5 years in, our AI business is on track to surpass $10 billion of annual revenue run rate in Q2…

…If you sort of think about the point we even made that this is going to be the fastest growth to $10 billion of any business in our history, it’s all inference, right? 

Azure took share in 2024 Q3 (FY2025 Q1), driven by AI; Azure grew revenue by 33% in 2024 Q3 (was 29% in 2024 Q2), with 12 points of growth from AI services (was 8 points in 2024 Q2); Azure’s AI business has higher demand than available capacity

Azure took share this quarter…. 

… Azure and other cloud services revenue grew 33% and 34% in constant currency, with healthy consumption trends that were in line with expectations. The better-than-expected result was due to the small benefit from in-period revenue recognition noted earlier. Azure growth included roughly 12 points from AI services similar to last quarter. Demand continues to be higher than our available capacity. 
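
The quoted figures decompose with simple arithmetic: of Azure’s 33 points of year-on-year growth, roughly 12 came from AI services, so AI contributed a bit over a third of the quarter’s growth.

```latex
% Back-of-envelope split of the quoted Azure growth figures
\[
\underbrace{33\,\text{pts}}_{\text{total Azure growth}}
\approx
\underbrace{12\,\text{pts}}_{\text{AI services}}
+
\underbrace{21\,\text{pts}}_{\text{rest of Azure}},
\qquad
\tfrac{12}{33} \approx 36\%
\]
```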

Microsoft’s management thinks Azure offers the broadest selection of AI chips, from Microsoft’s own Maia 100 chip to AMD and NVIDIA’s latest GPUs; Azure is the first cloud provider to offer NVIDIA’s GB200 chips

We are building out our next-generation AI infrastructure, innovating across the full stack to optimize our fleet for AI workloads. We offer the broadest selection of AI accelerators, including our first-party accelerator, Maia 100 as well as the latest GPUs from AMD and NVIDIA. In fact, we are the first cloud to bring up NVIDIA’s Blackwell system with GB200-powered AI servers.

Azure OpenAI usage more than doubled in the past 6 months, as both startups and enterprises move apps from test to production; GE Aerospace used Azure OpenAI to build a digital assistant for its 52,000 employees and in 3 months, the assistant has processed 500,000 internal queries and 200,000 documents; Azure recently added support for OpenAI’s newest o1 family of AI models; Azure AI is offering industry-specific models, including multi-modal models for medical imaging; Azure AI is increasingly an on-ramp for Azure’s data and analytics services, driving acceleration of Azure Cosmos DB and Azure SQL DB hyperscale usage

More broadly with Azure AI, we are building an end-to-end app platform to help customers build their own copilots and agents. Azure OpenAI usage more than doubled over the past 6 months as both digital natives like Grammarly and Harvey as well as established enterprises like Bajaj Finance, Hitachi, KT and LG move apps from test to production. GE Aerospace, for example, used Azure OpenAI to build a new digital assistant for all 52,000 of its employees. In just 3 months, it has been used to conduct over 500,000 internal queries and process more than 200,000 documents…

…This quarter, we added support for OpenAI’s newest model family, o1. We’re also bringing industry-specific models through Azure AI, including a collection of best-in-class multimodal models for medical imaging…

…Azure AI is also increasingly an on-ramp to our data and analytics services. As developers build new AI apps on Azure, we have seen an acceleration of Azure Cosmos DB and Azure SQL DB hyperscale usage as customers like Air India, Novo Nordisk, Telefonica, Toyota Motor North America and Uniper take advantage of capabilities purpose built for AI applications. 

Azure is offering its full catalog of AI models directly within the GitHub developer workflow; GitHub Copilot enterprise customers grew 55% sequentially in 2024 Q3; GitHub Copilot now has agentic workflows, such as Copilot Autofix, which helps users fix vulnerabilities in their code over 3x faster than they could on their own

And with the GitHub models, we now provide access to our full model catalog directly within the GitHub developer workflow…

… GitHub Copilot is changing the way the world builds software. Copilot enterprise customers increased 55% quarter-over-quarter as companies like AMD and Flutter Entertainment tailor Copilot to their own code base. And we are introducing the next phase of AI code generation, making GitHub Copilot agentic across the developer workflow. GitHub Copilot Workspace is a developer environment, which leverages agents from start to finish so developers can go from spec to plan to code all in natural language. Copilot Autofix is an AI agent that helps developers at companies like Asurion and Auto Group fix vulnerabilities in their code over 3x faster than it would take them on their own. We’re also continuing to build on GitHub’s open platform ethos by making more models available via GitHub Copilot. And we are expanding the reach of GitHub to a new segment of developers introducing GitHub Spark, which enables anyone to build apps in natural language.

Microsoft 365 Copilot has a new Pages feature, which management thinks is the first new digital artefact for the AI age; Pages helps users brainstorm with AI and collaborate with other users; Microsoft 365 Copilot responses are now 2x faster, with response quality improved by nearly 3x; daily users of Microsoft 365 Copilot have more than doubled sequentially; Microsoft 365 Copilot saves Vodafone employees 3 hours per person per week, and will be rolled out to 68,000 employees; nearly 70% of the Fortune 500 now use Microsoft 365 Copilot; Microsoft 365 Copilot is being adopted at a faster rate than any other new Microsoft 365 suite; with Copilot Studio, organisations can build autonomous agents to connect with Microsoft 365 Copilot; more than 100,000 organisations have used Copilot Studio, up 2x sequentially; monthly active users of Copilot across Microsoft’s CRM and ERP portfolio grew 60% sequentially

We launched the next wave of Microsoft 365 Copilot innovation last month, bringing together web, work, and Pages as the new design system for knowledge work. Pages is the first new digital artifact for the AI age, and it’s designed to help you ideate with AI and collaborate with other people. We’ve also made Microsoft 365 Copilot responses 2x faster and improved response quality by nearly 3x. This innovation is driving accelerated usage, and the number of people using Microsoft 365 Copilot daily more than doubled quarter-over-quarter. We are also seeing increased adoption from customers in every industry as they use Microsoft 365 Copilot to drive real business value. Vodafone, for example, will roll out Microsoft 365 Copilot to 68,000 employees after a trial showed that, on average, they save 3 hours per person per week. And UBS will deploy 50,000 seats in our largest finserve deal to date. And we continue to see enterprise customers coming back to buy more seats. All up, nearly 70% of the Fortune 500 now use Microsoft 365 Copilot, and customers continue to adopt it at a faster rate than any other new Microsoft 365 suite…

…With Copilot Studio, organizations can build and connect Microsoft 365 Copilot to autonomous agents, which then delegate to Copilot when there is an exception. More than 100,000 organizations from Nsure, Standard Bank and Thomson Reuters to Virgin Money and Zurich Insurance have used Copilot Studio to date, up over 2x quarter-over-quarter…

…Monthly active users of Copilot across our CRM and ERP portfolio increased over 60% quarter-over-quarter. 

Microsoft is bringing AI to industry-specific workflows; DAX Copilot is used in over 500 healthcare organisations to document more than 1.3 million physician-patient encounters each month; DAX Copilot is growing revenue faster than GitHub Copilot did in its first year

We’re also bringing AI to industry-specific workflows. One year in, DAX Copilot is now documenting over 1.3 million physician-patient encounters each month at over 500 health care organizations like Baptist Medical Group, Baylor Scott & White, Greater Baltimore Medical Center, Novant Health and Overlake Medical Center. It is showing faster revenue growth than GitHub Copilot did in its first year. And new features extend DAX beyond notes, helping physicians automatically draft referrals, after-visit instructions and diagnostic evidence.

LinkedIn’s AI tools help hirers find qualified candidates faster, and hirers who use AI-assisted messages see a 44% higher acceptance rate

LinkedIn’s first agent, Hiring Assistant, will help hirers find qualified candidates faster by tackling the most time-consuming task. Already, hirers who use AI-assisted messages see a 44% higher acceptance rate compared to those who don’t. And our hiring business continues to take share.

In October 2024, Microsoft introduced a new AI companion experience – powered by Copilot – that includes voice and vision capabilities, allowing users to browse and converse with Copilot simultaneously

With Copilot, we are seeing the first step towards creating a new AI companion for everyone. The new Copilot experience we introduced earlier this month includes a refreshed design and tone along with improved speed and fluency across the web and mobile. And it includes advanced capabilities like voice and vision that make it more delightful and useful and feel more natural. You can both browse and converse with Copilot simultaneously because Copilot sees what you see.

Roughly half of Microsoft’s cloud and AI-related capex in 2024 Q3 (FY2025 Q1) is for long-lived assets that will support monetisation over the next 15 years and more, while the other half is for CPUs and GPUs; the capex spend for CPUs and GPUs is made based on demand signals; management will be looking at inference demand to govern the level of AI capex for training; management sees that growth in capex will eventually slow and revenue growth will increase, but how fast that happens will depend on the pace of adoption of AI; the capex that Microsoft has been committing is a sign of management’s commitment to grow together with OpenAI, and to grow Azure beyond OpenAI; Microsoft is currently not interested at all in selling GPUs for companies to train AI models and has turned such business away, and this gives management conviction about the company’s AI-related capex

Capital expenditures including finance leases were $20 billion, in line with expectations, and cash paid for PP&E was $14.9 billion. Roughly half of our cloud and AI-related spend continues to be for long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend is primarily for servers, both CPUs and GPUs, to serve customers based on demand signals…

…The inference demand ultimately will govern how much we invest in training because that’s, I think, at the end of the day, you’re all subject to ultimately demand…

…I think in some ways, it’s helpful to go back to the cloud transition that we worked on over a decade ago, I think, in the early stages. And what you did see and you’ll see us do in the same time is you have to build to meet demand. Unlike the cloud transition, we’re doing it on a global basis in parallel as opposed to sequential given the nature of the demand. And then as long as we continue to see that demand grow, you’re right, the growth in CapEx will slow and the revenue growth will increase. And those 2 things, to your point, get closer and closer together over time. The pace of that entirely depends really on the pace of adoption…

…[Question] How does Microsoft manage the demands on CapEx from helping OpenAI with its scaling ambitions?

[Answer] I’m thrilled with their success and need for supply from Azure and infrastructure and really what it’s meant in terms of being able to also serve other customers for us. It’s important that we continue to invest capital to meet not only their demand signal and needs for compute but also from our broader customers. That’s partially why you’ve seen us committing the amount of capital we’ve seen over the past few quarters, is our commitment to both grow together and for us to continue to grow the Azure platform for customers beyond them…

…One of the things that may not be as evident is that we’re not actually selling raw GPUs for other people to train. In fact, that’s sort of a business we turn away because we have so much demand on inference that we are not taking what I would — in fact, there’s a huge adverse selection problem today where people — it’s just a bunch of tech companies still using VC money to buy a bunch of GPUs. We kind of really are not even participating in most of that because we are literally going to the real demand, which is in the enterprise space or our own products like GitHub Copilot or M365 Copilot. So I feel the quality of our revenue is also pretty superior in that context. And that’s what gives us even the conviction, to even Amy’s answers previously, about our capital spend, is if this was just all about sort of a bunch of people training large models and that was all we got, then that would be ultimately still waiting, to your point, for someone to actually have demand, which is real. And in our case, the good news here is we have a diversified portfolio. We’re seeing real demand across all of that portfolio.

Microsoft’s management continues to expect Azure’s growth to accelerate in FY2025 H2, driven by increase in AI capacity to meet growing demand

In H2, we still expect Azure growth to accelerate from H1 as our capital investments create an increase in available AI capacity to serve more of the growing demand.

Microsoft’s management thinks that the level of supply and demand for AI compute will match up in FY2025 H2

But I feel pretty good that going into the second half of even this fiscal year, that some of that supply/demand will match up…

…I do, as you heard, have confidence, as we get a good influx of supply across the second half of the year particularly on the AI side, that we’ll be better able to do some supply-demand matching and hence, while we’re talking about acceleration in the back half.

Microsoft’s management sees Microsoft’s partnership with OpenAI as having been super beneficial to both parties; Microsoft provides the infrastructure for OpenAI to innovate on models; Microsoft takes OpenAI’s models and innovates further, through post-training of the models, building smaller models, and building products on top of the models; management developed conviction on the OpenAI partnership after seeing products such as GitHub Copilot and DAX Copilot get built; management feels very good about Microsoft’s investment in OpenAI; Microsoft accounts for OpenAI’s financials under the equity method

The partnership for both sides, that’s OpenAI and Microsoft, has been super beneficial. After all, we were the — we effectively sponsored what is one of the most highest-valued private companies today when we invested in them and really took a bet on them and their innovation 4, 5 years ago. And that has led to great success for Microsoft. That’s led to great success for OpenAI. And we continue to build on it, right? So we serve them with world-class infrastructure on which they do their innovation in terms of models, on top of which we innovate on both the model layer with some of the post-training stuff we do as well as some of the small models we build and then, of course, all of the product innovation, right? One of the things that my own sort of conviction of OpenAI and what they were doing came about when I started seeing something like GitHub Copilot as a product get built or DAX Copilot get built or M365 Copilot get built…

… And the same also, I would say, we are investors. We feel very, very good about sort of our investment stake in OpenAI…

…  I would say, just a reminder, this is under the equity method, which means we just take our percentage of losses every quarter. And those losses, of course, are capped by the amount of investment we make in total, which we did talk about in the Q this quarter as being $13 billion. And so over time, that’s just the constraint, and it’s a bit of a mechanical entry. And so I don’t really think about managing that. That’s the investment and acceleration that OpenAI is making in themselves, and we take a percentage of that.

Microsoft’s management sees Copilot as the UI layer for humans to interact with AI; Copilot Studio is used to build AI agents to connect Copilot to other systems of the user’s choice; Copilot Studio can also be used to create autonomous AI agents but these AI agents are not fully autonomous because at some point, they will need to notify a human or require an input and that is where Copilot comes in again

The system we have built is Copilot, Copilot Studio, agents and autonomous agents. You should think of that as the spectrum of things, right? So ultimately, the way we think about how this all comes together is you need humans to be able to interface with AI. So the UI layer for AI is Copilot. You can then use Copilot Studio to extend Copilot. For example, you want to connect it to your CRM system, to your office system, to your HR system. You do that through Copilot Studio by building agents effectively.

You also build autonomous agents. So you can use even — that’s the announcement we made a couple of weeks ago, is you can even use Copilot Studio to build autonomous agents. Now these autonomous agents are working independently, but from time to time, they need to raise an exception, right? So autonomous agents are not fully autonomous because, at some point, they need to either notify someone or have someone input something. And when they need to do that, they need a UI layer, and that’s where, again, it’s Copilot.

So Copilot, Copilot agents built-in Copilot Studio, autonomous agents built in Copilot Studio, that’s the full system, we think, that comes together.
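
Microsoft has not published pseudocode for this, and none of the names below come from Copilot Studio; this is only a minimal sketch of the control flow Nadella describes, in which an agent works autonomously until it hits an exception, then raises it to a human through a chat UI (the Copilot role) and resumes with the human’s input.

```python
class NeedsHuman(Exception):
    """Raised when the agent cannot proceed autonomously."""

def handle_task(task: dict) -> str:
    # Hypothetical policy boundary: large refunds need human approval.
    if task.get("amount", 0) > 1000:
        raise NeedsHuman(f"approval required for {task['id']}")
    return f"{task['id']}: resolved automatically"

def notify_human(message: str) -> str:
    # Stand-in for surfacing the exception in a chat UI and awaiting input.
    print(f"[copilot UI] {message}")
    return "approved"

def run_agent(tasks):
    for task in tasks:
        try:
            print(handle_task(task))
        except NeedsHuman as exc:
            decision = notify_human(str(exc))   # hand off to the human...
            print(f"{task['id']}: human said {decision!r}, resuming")  # ...then resume

run_agent([{"id": "refund-1", "amount": 40},
           {"id": "refund-2", "amount": 5000}])
```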

Netflix (NASDAQ: NFLX)

Within entertainment, Netflix’s management thinks the most important question for AI is whether it can help creators produce even better content; the ability of AI to reduce costs in content creation is of secondary importance

 Lots of hype, good and bad, about how AI is going to impact or transform the entertainment industry. I think that the history has been that entertainment and technology have worked hand-in-hand throughout the history of time. And it’s very important, I think, for creators to be very curious about what these new tools are and what they could do. But AI needs to pass a very important test. Actually, can it help make better shows and better films? That is the test and that’s what they got to figure out. But I’ve said this before and I will say it again. We benefit greatly from improving the quality of the movies and the shows much more so than we do from making them a little cheaper. So any tool that can go to enhance the quality, making them better is something that is going to actually help the industry a great deal.

Paycom Software (NYSE: PAYC)

Paycom’s management developed an AI agent internally for the company’s service team to help the team provide even better service; the AI agent improved Paycom’s immediate response rates by 25% without any additional human interaction; the AI agent was built in house; Paycom is using AI in other areas, such as in several existing and upcoming products

Internally, we developed and deployed an AI agent for our service team. This technology utilizes our own knowledge-based semantic search model and enables us to provide service to help our clients more quickly and consistently than ever before. The AI agent continually improves over time and is having an impact on helping our clients achieve even more value out of their relationship with Paycom. By utilizing our own AI agent, we were able to connect our clients to the right solution faster, improving our immediate response rates by 25% without any additional human interaction…

…[Question] Interesting to hear about using AI in the customer service organization. I’m curious if that’s technology that Paycom has built or if you’re using a third party.

[Answer] So that’s internal. We built it ourselves, and we’ve been using it. And so it gets better and better as we mentioned on the call. It’s sped up our process by 25% as far as being able to connect clients to the solution quicker, whether that be a configuration question, a tax question or what have you. And so that’s really been helpful to us, and it continues to do more and more from that perspective…

…[Question] A follow-up on the AI agent or the AI technology that you’re developing. Do you see an opportunity in the future to productize what you’re developing internally, maybe like in your — in future versions of your recruiting product or other products in your platform?

[Answer] I would say this isn’t the only area in which we’re using AI. We have it in several products that we both have released and will be releasing. And so there’s definitely opportunities to monetize AI. As far as this particular solution, it’s really helping us on the back end and helping our client as well. So I think we’re going to see results and benefits from that in other areas of efficiency across the board within our own organization.

Shopify (NASDAQ: SHOP)

Shopify recently enhanced Shopify Flow, a low-code workflow automation app, with a new admin API connector that provides an additional 304 new automation actions

Let’s start with Shopify Flow, a low-code workflow automation app that empowers merchants to build custom automations and helps them run their businesses more efficiently. This includes a new automation trigger based on the merchant’s custom data and a newly completed admin API connector that provides an additional 304 new actions to use in their automations. And as a result, Flow has become a much more powerful tool, enabling merchants to update products, process customer form submissions, edit orders and so much more.

The Shopify Inbox feature now uses AI to suggest personalised replies for merchants to respond to customer inquiries; half of merchants’ responses are now using the AI-suggested replies; fast customer response helps lift conversion rates for merchants; the replies feature may not seem like a big deal, but it actually helps free up a lot of time for merchants to focus on building products

Within Shopify Inbox, this product now uses AI to suggest replies based on each merchant’s unique store information, making it super easy for merchants to respond quickly and accurately to customer inquiries. In fact, on average, merchants are using the Suggested Replies feature for about half of their responses, edited or not, showing just how effective this feature has become. Replying quickly can boost conversion rates, which means more sales for our merchants and in turn, for Shopify…

…I mentioned suggested replies in Shopify Inbox, which may not seem like a big deal, but it’s a huge deal because it means merchants can spend more of their time focused on the things that they need to be focused on, like building their products.
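
Shopify has not disclosed how Suggested Replies is built, so the sketch below is a guess at the general pattern implied by the description: assemble the merchant’s own store data into a prompt, draft a reply with a language model, and let the merchant approve or edit it. The `generate` function is a stub standing in for whatever completion model Shopify actually uses.

```python
def generate(prompt: str) -> str:
    """Stub for an LLM completion call; a real system would query a model here."""
    return "We ship worldwide within 2 business days; tracking is emailed at dispatch."

def suggest_reply(store_info: dict, customer_message: str) -> str:
    # Ground the draft in the merchant's own store data to keep it accurate.
    prompt = (
        "You are a support assistant for an online store.\n"
        f"Store policies: {store_info['policies']}\n"
        f"Products: {', '.join(store_info['products'])}\n"
        f"Customer asked: {customer_message}\n"
        "Draft a short, accurate reply using only the store data above."
    )
    return generate(prompt)

store = {"policies": "ships worldwide in 2 business days",
         "products": ["ceramic mug", "pour-over kettle"]}
draft = suggest_reply(store, "How fast do you ship?")
print(draft)  # the merchant edits or sends it; per the call, about half of responses use these drafts
```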

The Shop App has a new merchant-focused home feed that is powered by machine learning models to increase shopper engagement; the new home feed has led to an 18% increase in sessions where a buyer engaged with a recommendation; management thinks the combination of search with AI will make the search function on the Shop App a lot more relevant and personalised

This quarter, the Shop App launched a new merchant-focused home feed, showcasing the diversity and the richness of brands on Shop. The experience uses new machine learning models to help buyers keep up with the brands they love and discover new brands based on their preferences. These changes have already led to early success with an 18% increase in sessions where a buyer engaged with a recommendation…

…We also think Search and AI together makes the Shop search way more relevant, way more personalized. That is also very compelling.

Essentially every Shopify internal department is using AI to be more productive

Support engineering, sales, finance, just about every department internally is using AI in some way to get more efficient, more productive.

Shopify’s management thinks the integration of AI in search will change how consumers find merchants and products, but Shopify has helped merchants navigate many similar changes before, and Shopify will continue to help merchants navigate the AI-related changes

In terms of where consumers find merchants or find products, yes, AI and search is going to change. But to be clear, this entire flow and discovery process has been changing for many years. It’s the reason that you saw us integrate with places like YouTube or more recently, Roblox or TikTok or Instagram…

…You can rest assured that when consumers shift their buying preferences, their discovery preferences, their search preferences, and they’re looking for great products from great brands, Shopify will ensure that our merchants are able to do so. And that’s the reason even some of the more nuanced or some of the more — as you know, Shopify has an integration to Spotify. Why? Because some merchants that also have very large followings as a musician have massive followings on their artist profile, the fact that so you can now show Shopify products on your artist profile means for that particular segment of merchants, they can easily — they now have a new surface area in which to conduct business. And that’s the same thing when it comes to AI and search. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management expects TSMC’s business in 2024 Q4 to be supported by strong AI-related demand; management sees very strong AI demand in 2024 H2, leading to a higher capacity utilisation rate for TSMC’s leading-edge 3nm and 5nm process technologies; management now expects server AI processors to account for a mid-teens percentage of TSMC’s total revenue in 2024 (the previous expectation was for a low-teens percentage)

Moving into fourth quarter. We expect our business to continue to be supported by strong demand for our leading-edge process technologies. We continue to observe extremely robust AI-related demand from our customers throughout the second half of 2024, leading to increasing overall capacity utilization rate for our leading-edge 3-nanometer and 5-nanometer process technologies…

…We now forecast the revenue contribution from server AI processors to more than triple this year and account for mid-teens percentage of our total revenue in 2024.

TSMC’s management defines server AI processors as GPUs, AI accelerators, and CPUs for training and inference

At TSMC, we define server AI processors as GPUs, AI accelerators and CPUs performing training and inference functions, and do not include networking, edge or on-device AI.

TSMC’s management thinks AI demand is real, based on TSMC’s own experience of using AI and machine learning in its operations; a 1% productivity gain for TSMC is equal to a tangible NT$1 billion return on investment (ROI); management thinks TSMC is not the only company that has benefitted from AI applications

Whether this AI demand is real or not, okay, and my judgment is real, we have talked to our customers all the time, including hyperscaler customers who are building their own chips, and almost every AI innovators is working with TSMC. And so we probably get the deepest and widest look of anyone in this industry. And why I say it’s real? Because we have our real experience. We have using the AI and machine learning in our fab and R&D operations. By using AI, we are able to create more value by driving greater productivity, efficiency, speed, qualities. And think about it, let me use, 1% productivity gain, that was almost equal to about TWD 1 billion to TSMC. And this is a tangible ROI benefit. And I believe we cannot be the only one company that have benefited from this AI application. So I believe a lot of companies right now are using AI and — for their own improving productivity, efficiency and everything.
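
Taking the quoted figure at face value, the arithmetic pins down the base against which that 1% is measured. TSMC did not disclose what the base is, so B below is an inference, not a reported number:

```latex
% Implied base B for "a 1% productivity gain is worth about NT$1 billion"
\[
0.01 \times B \approx \text{NT\$1 billion}
\quad\Longrightarrow\quad
B \approx \text{NT\$100 billion}
\]
```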

TSMC’s management thinks AI demand is just at the beginning

[Question] Talk a little bit about what you think about the duration of this current semiconductor up-cycle? Do you think it will continue into the next couple of years? Or are we getting closer to the peak of the cycle?

[Answer] The demand is real and I believe it’s just the beginning of this demand, all right? So one of my key customers said, the demand right now is insane, that it’s just the beginning. It’s [ a form of scientific ] to be engineering, okay? And it will continue for many years.

When TSMC builds fabs to meet AI demand, management has a picture in mind of what the long-term demand picture looks like

[Question] Keen to understand how TSMC gets comfortable with customer demand for AI beyond 2025. And I ask this because it takes a couple of years before you can build a fab, so you need to be taking early — an early view on what does AI look like in 2026, 2027. So how are you specifically cooperating on long-term plans for capacity with these AI customers? And what commitments are these customers giving you?

[Answer] Let me say again that we did talk to a lot of our customers. Almost every AI innovator is working with us, and that’s including the hyperscalers. So if you look at the long-term structure and market demand profile, I think we have some picture in our mind and we make some judgment, of course, and we work with them on a rolling basis. So how we prepare our capacity, actually, just like Wendell said, we have a disciplined and [ a rollout ] system to plan the appropriate level of capacity. And that — to support our customers’ need, also to maximize our shareholders’ value. That’s what we’re always keeping in mind.

There’s more AI content that goes into the chips in PCs (personal computers) and smartphones; management expects the PC and smartphone business of TSMC to be healthy in the next few years because of AI-related applications 

The unit growth of PC and smartphone is still in the low single digits. But more importantly is the content: we now put more AI into the chips, and so the silicon area increases faster than the unit growth. So again, I would like to say that this PC and smartphone business is gradually increasing, and we expect it to be healthy in the next few years because of AI-related applications.

Advanced packaging is currently a high single-digit percentage of TSMC’s revenue and management expects it to grow faster than TSMC’s overall business over the next 5 years; the margins of advanced packaging are improving, but it’s not at the corporate average level yet

Advanced packaging in the next several years, let’s say, 5 years, will be growing faster than the corporate average. This year, it accounts for about a high single-digit percentage of our revenue. In terms of margins, yes, it is also improving. However, it’s still approaching the corporate average, but not there yet.

Demand for TSMC’s CoWoS (advanced packaging) continues to far exceed supply, even though TSMC has doubled CoWoS capacity compared to a year ago and will double it again

Let me share with you today’s situation: our customers’ demand far exceeds our ability to supply. So even though we work very hard and increased the capacity by more than 2x this year compared with last year, and will probably double it again, it is still not enough. But anyway, we are working very hard to meet the customers’ requirements.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasingly seeing tangible benefits from deploying AI across the company’s business; management wants to continue investing in AI; the most significant benefits are in content recommendation and targeting, which directly benefits Tencent’s business and advertising revenue; management also sees AI as a productivity tool, as Tencent’s Copilot is being used by Tencent’s software engineers frequently and is helping them generate efficiency gains; management is trying to incorporate AI into a lot of Tencent’s products, but they think it will take a few more quarters before real use cases show up

We are increasingly seeing a tangible benefit of deploying AI across our products and operations, including marketing services and cloud. And we’ll continue investing in AI technology, tools and solutions that assist users and partners…

…I think that the most significant one right now is actually around content recommendation and ad targeting, because the AI engine in those two use cases is generating a significant amount of additional user time and, at the same time, a higher incremental targeting response rate for our ads. Both of them are direct benefits to the business and to ad revenue, and both video accounts and our performance ad revenue are actually at scale…

It’s actually a productivity tool that everybody is using on a frequent basis; for example, our Copilot is being used by our engineers across the board on a very frequent basis, and it’s actually generating efficiency gains for our business. And in different businesses, a lot of our products are actually testing our Hunyuan and trying to incorporate AI into either the production process, right, so that they would gain efficiency, or into the user experience use case so that it can actually make their user experience better. So I would say, right now, we are seeing more and more adoption among all our different products and services. It would take probably a few more quarters for us to see some real use cases at scale.

Tencent’s management used the company’s foundation AI model, Tencent Hunyuan, to facilitate tagging and categorisation of content and advertising materials; Tencent also upgraded its machine learning platforms to deliver better advertising targeting; marketing services revenue from video accounts was up 60% year-on-year; Mini Programs marketing services revenue had robust growth; Tencent used large language models (LLMs) to improve the relevance of Weixin Search results, leading to higher commercial queries and click-through rates, and consequently, an increase in search revenue of more than 100%

Our Marketing Services revenue grew 17% year-on-year. Strength in games and e-commerce categories outweighed weakness in real estate and food and beverage. The Paris Olympics somewhat cushioned industry-wide weakness in brand ad revenue during the third quarter but this positive factor will be absent in the fourth quarter. We leveraged our foundation model, Tencent Hunyuan to facilitate tagging and categorization of content and ad materials. And we upgraded our machine learning platforms to deliver more accurate ad targeting.

By property, video accounts marketing services revenue increased over 60% year-on-year. As we systematically strengthen transaction capabilities in Weixin, advertisers increasingly utilize our marketing tools to boost their exposure and drive sales conversion. Mini Programs marketing services revenue grew robustly year-on-year as our Mini Games and Mini Dramas provided high-value rewarded video ad inventory and generated incremental closed-loop demand. And for Weixin Search, we utilized large language model capabilities to facilitate understanding of complex queries and content, enhancing the relevance of search results. Commercial queries increased and click-through rate improved, and our search revenue more than doubled year-on-year.

Tencent enjoyed swift year-on-year growth in GPU-focused cloud revenue and this revenue stream is now a teens percentage of Tencent’s infrastructure-as-a-service revenue; Tencent has released Tencent Hunyuan Turbo, the new generation of its foundation AI model, which uses a heterogeneous mixture-of-experts architecture; compared to the previous generation, Hunyuan Turbo’s training and inference efficiency has doubled while its inference cost has halved; Hunyuan Turbo is ranked first for general capabilities among foundation AI models in China; Tencent has open-sourced Hunyuan models; management sees Tencent’s AI revenue being smaller than that of US cloud companies because China lacks the large enterprise, SaaS, and AI-startup markets that exist in the USA

Our cloud revenue from GPUs primarily used for AI grew swiftly year-on-year and now represents a teens percentage of our infrastructure-as-a-service revenue. We released Tencent Hunyuan Turbo, which utilizes a heterogeneous mixture of experts architecture, doubling our training and inference efficiency and halving inference cost versus its predecessor Hunyuan Pro. SuperCLUE ranked Hunyuan Turbo first for general capabilities among domestic peers. Last week, we made the Hunyuan large model and the Hunyuan 3D generation models available on an open-source basis. Our international cloud revenue increased significantly year-on-year. We leveraged domain expertise in areas such as games and live streaming and competitive pricing to win international customers…

…The IaaS revenue generated by AI is now in the teens. But having said that, we think the amount of AI revenue is actually less than U.S. cloud companies. And the main reason is because, number one, China doesn’t really have a very big enterprise market. And if you look at the U.S., a lot of enterprises are actually experimenting with AI and testing out what AI can do for their business, so they’re actually buying a lot of compute, which is not happening in China yet. There’s a very big SaaS ecosystem in the U.S., in which everybody is actually trying to add AI to their functionality and thus charge the customers more. And that SaaS ecosystem is not really that vibrant in China. And thirdly, there are also fewer AI start-ups in China, which are actually buying a lot of compute. So as a result, the AI revenue in China on the cloud side is somewhat sort of at scale for us, but I think it will not be exploding like in the U.S.
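
For readers unfamiliar with the term, a mixture-of-experts (MoE) layer routes each token to only a few of many expert networks, so just a fraction of the model’s parameters run per token; sparse activation of this kind is the standard way an MoE design cuts inference cost, consistent with the efficiency claims above. The sketch below is a generic top-k MoE layer in PyTorch, not Hunyuan’s implementation (Tencent describes Hunyuan Turbo’s MoE as heterogeneous, whereas these toy experts are identical in shape).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal sparse mixture-of-experts layer: a router picks the top-k
    experts per token, so only k of n expert MLPs run for each token."""
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(TopKMoE()(x).shape)  # torch.Size([10, 64])
```

With n_experts = 8 and k = 2, each token activates only a quarter of the expert parameters, which is where the cost saving comes from.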

Tencent’s management does not want to embed commercial search results into the company’s AI chatbot, YongBao, right now; the current focus for YongBao is on growing usage, not monetisation

[Question] Will you ramp up the Gen AI chatbot, and would it eventually embed commercial sponsored answers as well?

[Answer] In terms of whether YongBao will embed commercial search results, the answer is no. For the current time, we’re focused on making YongBao as appealing and attractive to users as it can be, and we’re not focused on premature monetization.

Tencent’s management plans to invest in capex for AI, but the amount of investment will be small compared to the companies in the USA

If you look at CapEx, right, we believe we have a progressive CapEx plan, especially given the development of our cloud business and the advent of AI, but at the same time, it’s measured compared to a lot of the U.S. companies.

Tencent’s management sees the company’s advertising business being driven by 3 factors, namely consumer spending, Tencent’s ability to utilise AI to continue boosting click-through rates from currently low levels, and deployment of more inventory

In terms of the drivers for 2025, the overall macro environment would obviously be an important accelerator, decelerator, or neutral force for the aggregate advertising market. And that in turn will be a function primarily of consumer confidence and consumer spending behavior. Now within that overall environment, our relative performance will be a function of, first of all, our advertising technology and our ability to utilize GPUs and neural networks to continue boosting click-through rates from the current very low levels to higher levels, which mechanically translates into more revenue. And then secondly, our deployment of specific inventories, in particular video accounts and Weixin Search.
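
The “mechanically translates into more revenue” point in the quote above is simple arithmetic: for a fixed volume of impressions priced per click, ad revenue scales roughly linearly with click-through rate. A toy illustration in Python, with all numbers invented:

```python
# Toy illustration of why higher click-through rates mechanically lift ad
# revenue under performance (per-click) pricing. All numbers are invented.

impressions = 1_000_000_000   # ad impressions served
cost_per_click = 0.50         # what advertisers pay per click (USD)

for ctr in (0.005, 0.010, 0.015):   # 0.5% -> 1.0% -> 1.5%
    revenue = impressions * ctr * cost_per_click
    print(f"CTR {ctr:.1%}: revenue = ${revenue:,.0f}")

# CTR 0.5%: revenue = $2,500,000
# CTR 1.0%: revenue = $5,000,000
# CTR 1.5%: revenue = $7,500,000
```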

Tesla (NASDAQ: TSLA)

Tesla’s management released FSD v12.5 in 2024 Q3, which was trained with more data and training compute and has a 5x increase in parameter count; Tesla also released Actually Smart Summon (your vehicle will autonomously drive to you in parking lots) and FSD for Cybertruck, which includes end-to-end neural nets for highway driving for the first time; version 13 of FSD is coming soon and is expected to have a 5-6 fold improvement in miles between interventions compared to version 12.5; over the course of 2024, FSD’s improvement in miles between interventions has been at least 3 orders of magnitude; management expects FSD to become safer than a human driver in 2025 Q2; Tesla vehicles on Autopilot have 1 crash per 7 million miles, compared to 1 crash per 700,000 miles for the US average; Tesla earned $326 million in revenue in 2024 Q3 from the release of FSD for Cybertruck and Actually Smart Summon

In Q3, we released the 12.5 series of FSD (Supervised)1 with improved safety and comfort thanks to increased data and training compute, a 5x increase in parameter count, and other architectural choices that we plan to continue scaling in Q4. We released Actually Smart Summon, which enables your vehicle to autonomously drive to you in parking lots, and FSD (Supervised) to Cybertruck customers, including end-to-end neural nets for highway driving for the first time…

…Version 13 of FSD is going out soon… We expect to see roughly a 5- or 6-fold improvement in miles between interventions compared to 12.5. And actually, looking at the year as a whole, the improvement in miles between interventions, we think, will be at least 3 orders of magnitude. So that’s a very dramatic improvement in the course of the year, and we expect that trend to continue next year. The current internal expectation for Tesla FSD having longer miles between interventions [indecipherable] is the second quarter of next year, which means it may end up being in the third quarter, but it seems extremely likely to be next year…

…miles between critical interventions, as mentioned by Elon, already made a 100x improvement with 12.5 from the start of this year, and with the v13 release, we expect to be at 1,000x from January of this year on production software. And this came about because of technology improvements: going to end-to-end, having a higher frame rate, partly also helped by hardware with more capabilities, and so on. And we hope that we continue to scale the neural network, the data, the training compute, et cetera. By Q2 next year, we should cross over the average, even in miles per critical intervention [indiscernible] in that case…

…Our internal estimate is Q2 of next year to be safer than human and then to continue with rapid improvements thereafter…

… So we published the Q3 vehicle safety report, which shows 1 crash for every 7 million miles on Autopilot; that compares with the U.S. average of roughly 1 crash every 700,000 miles. So it’s currently showing a 10x safety improvement relative to the U.S. average…

…We released FSD for Cybertruck and other features, like Actually Smart Summon that Elon talked about, in North America, which contributed $326 million of revenues in the quarter.
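
Taking the figures in the quotes above at face value, the improvement claims compound multiplicatively, and the safety comparison is a simple ratio. A quick arithmetic check in Python (the inputs are Tesla’s own claims, not independently verified):

```python
# Quick check of the arithmetic in Tesla's stated FSD figures (taken at
# face value from the call; improvements are in miles between critical
# interventions, relative to the January 2024 baseline).

v125_gain = 100      # "100x improvement with 12.5" since January
v13_vs_v125 = 5.5    # "roughly a 5- or 6-fold improvement" (midpoint)

implied_v13_gain = v125_gain * v13_vs_v125
print(f"Implied v13 gain since January: ~{implied_v13_gain:.0f}x")
# ~550x: this is how a 5x-6x jump on top of ~100x gets rounded up to the
# "1,000x" / "at least 3 orders of magnitude" framing in the call.

# Autopilot safety ratio from the Q3 vehicle safety report:
autopilot_miles_per_crash = 7_000_000
us_avg_miles_per_crash = 700_000
print(f"Safety ratio: {autopilot_miles_per_crash / us_avg_miles_per_crash:.0f}x")  # 10x
```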

Tesla has deployed a 29,000-GPU H100 cluster and expects to have 50,000 H100s by the end of October 2024, to support FSD and Optimus; Tesla is not training compute-constrained; Tesla’s AI has gotten so good that it now takes a long time to decide which version of the software is better, because mistakes happen so infrequently, and that is the big bottleneck in Tesla’s AI development; management is being very careful with AI spending

We deployed and are training ahead of schedule on a 29k H100 cluster at Gigafactory Texas – where we expect to have 50k H100 capacity by the end of October…

…We continue to expand our AI training capacity to accommodate the needs of both FSD and Optimus. We are currently not training compute-constrained. [indiscernible] Probably the big limiting factor is that FSD is actually getting so good that it takes us a while to actually find mistakes. And when you start getting to where it can take 10,000 miles to find a mistake, it takes a while to figure out: is software A better than software B? It actually takes a while to figure out because neither one of them makes mistakes often; it would take a long time for either to make a mistake. So the single biggest limiting factor is how long it takes us to figure out which version is better. Sort of a high-class problem…

… One thing which I’d like to elaborate on is that we’re being really judicious on our AI compute spend, and seeing how best we can utilize the existing infrastructure before making further investments…

…We still have to test which models are performing better – the validation network for picking the models – because, as mentioned, the miles between interventions are pretty large, and we have to drive a lot of miles to get close to that. We do have simulation and other ways to get those metrics. Those two help, but in the end, that’s a big bottleneck. That’s why we’re not training-compute constrained alone.

At Tesla’s “We, Robot” event on 10 October 2024, the company showcased 50 autonomous vehicles, including 20 Cybercabs; the Cybercabs had no steering wheel, brake, or accelerator pedals, so they were truly autonomous

On October 10, we laid out a vision for an autonomous future that I think is very compelling. The Tesla team did a phenomenal job there, actually giving people an option to experience the future: you have humanoid robots working among the crowd, not with a canned video and a presentation or anything, but walking among the crowd, serving drinks and whatnot. And we had 50 autonomous vehicles. There were 20 Cybercabs, but there were an additional 30 Model Ys, operating fully autonomously the entire night, carrying thousands of people with no incidents the entire night…

…Worth emphasizing that the Cybercab had no steering wheel or brake or accelerator pedals, meaning there was no way for anyone to intervene manually even if they wanted to, and the whole night went very smoothly.

Tesla is already offering autonomous ridehailing for Tesla employees in the Bay Area; the ridehailing service currently has a safety driver; Tesla has been testing autonomous ridehailing for some time; Elon Musk expects ridehailing to be rolled out to the public in California and Texas in 2025, and maybe other US states; California has a lot of regulations around ridehailing, but there’s still a regulatory pathway; Tesla’s vehicles already meet the relevant Federal vehicle regulations, but it’s at the state level, where autonomous deployment is controlled, that hurdles remain

For Tesla employees in the Bay Area, we are already offering ridehailing capabilities. So, with the development app, you can actually request a ride and it will take you anywhere in the Bay Area. We do have a safety driver for now, but it’s not required to do that…

… We’ve been testing it for the good part of the year. And the building blocks that we needed in order to build this functionality and deliver it to production, we’ve been thinking about working on for years…

…So it’s not like we’re just starting to think about this stuff right now while we’re building out the early stages of our ridehailing network. We’ve been thinking about this for quite a long time, and we’re excited to get the functionality out there…

…We do expect to roll out ridehailing in California and Texas next year to the public. Now, California has quite a long regulatory approval process. I think we should get approval next year, but it’s contingent upon regulatory approval. Texas is a lot faster, so we’ll definitely have it available in Texas and probably have it available in California, subject to regulatory approval. And then maybe some other states actually next year as well, but at least California and Texas…

…[Question] Elon mentioned unsupervised FSD in California and Texas next year. Does that mean regulators have agreed to it in the entire state for existing hardware 3 and 4 vehicles?

[Answer] As I said earlier, California loves regulation… but there’s a pathway. Obviously, Waymo operates in California, so there are just a lot of forms and a lot of approvals that are required. I mean, I’d be shocked if we don’t get approved next year, but it’s just not something we totally control. But I think we will get approval next year in California and Texas, and then branch out beyond California and Texas…

…I think it’s important to reiterate this: certifying a vehicle at the federal level in the U.S. is done by meeting FMVSS regulations. Our vehicles produced today are capable of meeting all those regulations, including the Cybercab. And so there is no limitation on deploying the vehicle to the road; the limitation is, as you said, at the state level, where they control autonomous vehicle deployment. Some states are relatively easy, as you mentioned, like Texas. Others, like California, may take a little longer. And other ones haven’t set up anything yet.

Tesla’s management acknowledges that there’s a chance that Tesla vehicles with Hardware Version 3 may not support unsupervised full self-driving, and if so, Tesla will upgrade those vehicles to Hardware Version 4 for free

By some measure, Hardware 4 has really several times the capability of Hardware 3. It’s easier to get things to work with it, whereas it takes a lot of effort to squeeze that functionality into Hardware 3. And there is some chance that Hardware 3 does not achieve the safety level that allows for unsupervised FSD. There is some chance of that. And if that turns out to be the case, we will upgrade those who bought Hardware 3 FSD for free. And we have designed the system to be upgradeable, so it’s really just a matter of switching out the computer; the cameras are capable. But we don’t actually know the answer to that. If it does turn out that way, we’ll make sure we take care of those owners.

Tesla’s management thinks real-world AI in self-driving cars is different from LLMs (large language models) in that (1) real-world AI requires massive amounts of context to be processed with a small amount of compute power, and the way around this limitation is to do massive amounts of training so that only a tiny amount of inference compute is needed, and (2) it’s difficult to sort out which data coming in from the video feed is important for training

The nature of real-world AI is different from LLMs in that you have a massive amount of context. You’ve got the Tesla cameras [indiscernible] if you include the cabin camera, so you’ve got a lot of context. And that is then distilled down into a small number of control outputs, whereas it’s very rare to have – in fact, I’m not sure any LLM out there can do gigabytes of context. And then you’ve got to process that in the car with a very small amount of compute power. It’s all doable and it’s happening, but it is a different problem than what, say, a Gemini or OpenAI is doing.

And part of the way you can make up for the fact that the inference computer is quite small is by spending a lot of effort on training. Just like a human, the more you train on something, the less mental work it takes when you do it. The first time you drive, driving absorbs your whole mind. But as you train more and more on driving, driving becomes a background task; it only absorbs a small amount of your mental capacity because you have a lot of training. So we can make up for the fact that the inference computer is tiny compared to a 10-kilowatt bank of GPUs – you’ve got a few hundred watts of inference compute – with heavy training.

And then the actual petabytes of data coming in are tremendous. Sorting out, of the vast amounts of video data coming in the feed, what is actually most important for training – that’s also quite difficult.
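
To make the power-budget gap in the quotes above concrete, here is the back-of-envelope ratio in Python; the wattages are the rough figures cited in the quotes, with 200 W standing in for “a couple of hundred watts”:

```python
# Back-of-envelope version of the power-budget gap described above,
# using the rough figures from the quotes.

datacenter_rack_watts = 10_000   # "a 10-kilowatt bank of GPUs"
car_inference_watts = 200        # "a couple of hundred watts" in the car

ratio = datacenter_rack_watts / car_inference_watts
print(f"In-car inference must fit in ~1/{ratio:.0f} of the rack's power budget")
# -> ~1/50: hence the emphasis on heavy offline training, so that the
# on-board network can be small and efficient at inference time.
```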

Tesla’s management thinks Elon Musk’s AI startup, xAI, has been helpful to Tesla, but the two companies are focused on very different kinds of AI problems

Well, I should say that xAI has been helpful to Tesla AI quite a few times, in terms of things like scaling and training. Even recently, in the last week or so, there were improvements in training where, if you’re doing a big training run and it fails, being able to continue training and to recover from the training run has been pretty helpful. But they are different problems. xAI is working on artificial general intelligence or artificial superintelligence. Tesla is autonomous cars and autonomous robots. They are different problems…

…Yes, Tesla is focused on real-world AI. As I was saying earlier, it is quite a bit different from LLMs. You have massive context in the form of video and some amount of audio, which is going to be distilled by extremely efficient inference compute. I do think Tesla is the most efficient in the world in terms of inference compute because, out of necessity, we have to be very good at efficient inference. We can’t put 10 kilowatts of GPUs in a car; we’ve got a couple of hundred watts. And it’s a pretty well designed Tesla AI chip, but it’s still a couple hundred watts. But they are different problems. I mean, the stuff at xAI – it is running inference, answering questions, on a 10-kilowatt rack. You can’t put that in a car. It’s a different problem.

Elon Musk created xAI because he thought there wasn’t a truth-seeking AI company being built

xAI exists because I felt there wasn’t a truth-seeking digital superintelligence company out there – that’s what it came down to. There needed to be a truth-seeking AI company that is very [indiscernible] about being truthful. I’m not saying xAI is perfect, but that is at least the explicit aspiration: even if something is politically incorrect, it would still be truthful. I think this is very important for AI safety. So I think xAI has been helpful to Tesla and will continue to be helpful to Tesla, but they are very different problems.

No other car company has a world-class AI team and chip-design team like Tesla does

And what other car company has a world-class chip design team? Zero. What other car company has a world-class AI team like Tesla does? Zero. Those were all startups created from scratch.

The Trade Desk (NASDAQ: TTD)

The incorporation of AI into Kokai, Trade Desk’s ad-buying platform, is encouraging adoption of Trade Desk by CFOs and CMOs

While there has been a lot of macro focus on the reduction in inflation rates, historic highs for stock market indices and growing indications of a soft landing, that’s not necessarily translating to consumer confidence, which is why CMOs are becoming much more closely aligned with their CFOs. CFOs want more evidence than ever that marketing is working. And for CFOs that doesn’t just mean traditional marketing KPIs. It means growing the top line business. All of our AI and data science injection into Kokai, our latest product release, is encouraging CMOs and CFOs to lean more and more on TTD to deliver real, measured growth…

…When CMOs face pressure to achieve more with less, they turn to platforms like ours for flexibility, precision and measurable results.

Companies need an AI strategy, and Trade Desk’s AI product, Koa, is a great copilot for advertising traders; Trade Desk has plenty of opportunities in an AI-world because of the data assets it has, and management wants to improve all aspects of the company through AI

Every company needs an AI strategy. Our AI product, Koa, is a great copilot for traders. But this is only the beginning. There are endless possibilities for us, as we have one of the best data assets on the Internet: the learnings that come from buying the global open Internet outside of walled gardens. To win in this new frontier, we’re looking across our entire suite of products, algorithms and features and asking how they can all be advanced by AI.

Visa (NYSE: V)

For Risk and Identity Solutions within value-added services, Visa wants to acquire Featurespace, an AI payments protection tech company that will enable Visa to offer enhanced fraud prevention tools to clients and protect consumers in real time; Worldline, a Visa partner, will be using Decision Manager to provide businesses with AI-based e-commerce fraud detection abilities; Featurespace is a world leader in providing AI solutions to fight fraud

In Risk and Identity Solutions, we recently announced our intent to acquire Featurespace, a developer of real-time artificial intelligence payments protection technology. It will enable Visa to provide enhanced fraud prevention tools to our clients and protect consumers in real-time across various payment methods.  And Worldline, already a Visa partner and leading European acquirer, will soon be launching an optimized fraud management solution, utilizing Decision Manager to provide businesses with AI-based e-commerce fraud detection capabilities…

…Featurespace is a world leader in providing AI-driven solutions to combat that fraud, to reduce that fraud, to enable our clients and partners to continue to serve their customers in a safe way.

Visa’s management sees AI as being a driver of productivity across multiple functions in the company, and as a differentiator in its products and services

[Question] I just wanted to ask how you see AI playing into the business model. Do you see it more as driving VAS or incremental business model, uplift revenue or cost improvement? Or is it more of a competitive differentiator that will just keep you ahead of your competition?

[Answer] As it relates more broadly to especially generative AI at Visa, I see it really in 2 different buckets. The first is we are adopting it aggressively across our company to drive productivity. And we’ve seen some great results from everywhere to our engineering teams, to our accounting teams, to our sales teams, our client service teams. And we’re still in the early stages of, I think, the very significant impact this will have on the productivity of our business. I also see it as a real differentiator to the products and services that we’re putting in market. You’ve heard me talk about some of the new risk capabilities, risk management capabilities, for example, that we’ve deployed in the account-to-account space, which are all enabled with generative AI. You mentioned Featurespace. We’ve had some really good success in other parts of both our value-added services business and the broader consumer payments business as well. And we’ve got a product pipeline that is very heavily tilted towards some, we think, very exciting generative AI capabilities that hopefully you’ll hear more from us on soon.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Fiverr, Mastercard, Meta Platforms, Microsoft, Netflix, Shopify, TSMC, Tesla, and Visa. Holdings are subject to change at any time.

The Best Investment Theme For The New Trump Presidency

There is no shortage of investing ideas being thrown around that could potentially do well under the new Trump administration – but what would actually work?

Last week, Donald Trump won the latest US Presidential Elections, which will see him sworn in as the USA’s new President on 20 January 2025. Often, there’s a huge rush of investment themes that accompany the inauguration of a new political leader in a country. It’s no exception this time.

For my own investment activities, the only theme I’m in favour of with the new Trump presidency – in fact, with any new presidency – is to look at a stock as a piece of a business, and assess the value of that business. Why? Because there’s a long history of investment themes accompanying shifts in political leadership that have soured. In a November 2014 article for The Motley Fool, Morgan Housel shared some examples:

“During the 1992 election, a popular argument was that Bill Clinton’s proposed remake of the U.S. healthcare system would be disastrous for pharmaceutical stocks… by the end of Clinton’s presidency pharmaceutical companies were some of the most valuable companies in the world. Pfizer increased 791% during Clinton’s presidency. Amgen surged 611%. Johnson & Johnson popped 385%. Merck jumped 299%. Those crushed the market, with the S&P 500 rising 251% from January 1993 to January 2001…

…During the 2000 election, Newsweek wrote that if George W. Bush wins, the ensuing tax changes could “help banks, brokers and other investment firms.” By the end of Bush’s second term, the KBW Bank Index had dropped almost 80%. The article also recommended pharmaceutical stocks thanks to Bush’s light touch on regulation. The NYSE Pharmaceutical Index lost nearly half its value during Bush’s presidency…

…During the 2008 election, many predicted that an Obama victory would be a win for green energy like solar and wind and a loss for big oil… The opposite happened: The iShares Clean Energy ETF is down 51% since then, while Chevron (CVX) is up 110%.

During the 2012 election, Fox Business wrote that if Obama wins, “home builders such as Pulte and Toll Brothers could see increased demand for new homes due to a continuation of the Obama Administration’s efforts to limit foreclosures, keeping homeowners in their existing properties.” Their shares have underperformed the S&P 500 by 26 percentage points and 40 percentage points since then, respectively.”

It was more of the same in the presidential elections that came after Housel’s article.

When Trump won the 2016 US elections for his first term as President, CNBC proclaimed the banking sector as a strong beneficiary because of his promises to ease banking regulations. But from the day Trump was sworn into office (Presidents-elect are typically sworn in on 20 January of the year following the elections) till the time he stepped down four years later, the KBW Nasdaq Bank Index was up by less than 20%, whereas the S&P 500 was up by nearly 70%. The KBW Nasdaq Bank Index tracks the stock market performance of 24 of America’s largest banks.

CNBC surveyed more than 100 investment professionals shortly after Joe Biden won the 2020 elections. They thought that “consumer discretionary, industrials and financials will perform the best under a Biden administration.” From Biden’s first day as President till today, the S&P 500 is up by slightly under 60%. Meanwhile, the S&P 500 Consumer Discretionary Index, which comprises consumer discretionary companies within the S&P 500 index, has gained just around 30%. The Dow Jones Industrials Index (a collection of American industrial companies) and the KBW Nasdaq Bank Index are both also trailing the S&P 500 with their respective gains of around 40% and 20%.

I have no idea if the hot themes for Trump’s second term as President would end up performing well. But given the weight of the historical evidence, I have no interest in participating in them. Politics and investing seldom mix well.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Market View: Levels to watch for US equities in Nov; Market reaction to US Sep PCE, BOJ rate decision; Earnings out of Amazon, Apple, Meta, Microsoft; Singtel

Last week, on 01 November 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s The Evening Runway show. We discussed a number of topics, some of which are:

  • What the Bank of Japan’s interest rate decision and the US September Personal Consumption Expenditure numbers mean for stocks (Hints: It’s important to differentiate between the economy and the stock market; even the US Federal Reserve has very little control over the movement of US stocks, according to recent research from New York University finance professor Aswath Damodaran)
  • What the Australian Competition and Consumer Commission’s lawsuit against Optus Mobile means for Singtel (Hint: Optus represents only a minority of Singtel’s overall earnings, so even if Optus’s entire business is zero-ed, it would not be catastrophic for Singtel; but it’s very unlikely that Optus’s business would be materially diminished because of the lawsuit)
  • The latest earnings results of the mega-cap US technology companies (Hint: Apple is increasingly becoming a services business; Microsoft’s latest comments on its AI revenues are positive for the sustainability of the business; Meta is already seeing clear improvements in its core advertising business from its AI investments; Intel’s future depends on the success of its foundry business, which is struggling at the moment because Intel’s most advanced chip designs are actually outsourced to Taiwan Semiconductor Manufacturing Company)
  • What the upcoming US presidential election means for the US stock market (Hint: The returns an investor can earn from 1950 to 2024 by staying invested across all US presidents absolutely dwarfs what the investor can earn from only investing under Republican presidents or Democrat presidents)

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Apple, Meta Platforms, Microsoft, and Taiwan Semiconductor Manufacturing Company. Holdings are subject to change at any time.

The US Stock Market And US Presidents

History’s verdict on how US stocks have performed under different US presidents

The US presidential election is just a few weeks away. And as usual, large swathes of participants in the US stock market are trying to predict the victor because they think it will have significant consequences on how US stocks perform. I don’t have a crystal ball. But I do have history’s verdict, thanks to excellent research from the US-based wealth management firm, Ritholtz Wealth Management, that I came across recently.

Here’s a table showing the annualised returns of the S&P 500 for each US President, going back to Theodore Roosevelt’s first term in 1901:

Table 1; Source: Ritholtz Wealth Management 

I think the key takeaway from the table is that how the US stock market performs does not depend on what political party the US President belongs to. Republican presidents have presided over bad episodes for US stocks (Herbert Hoover, Richard Nixon, and George W. Bush, for example) as well as fantastic times (Calvin Coolidge, Dwight Eisenhower, and Ronald Reagan, for example). The same goes for Democrat presidents, who have led the country through both poor stock market returns (Woodrow Wilson and Franklin Roosevelt, for example) as well as great gains (Franklin Roosevelt, Lyndon Johnson, and Barack Obama, for example). Presidents do not have that much power over the financial markets. Don’t let politics influence your investing decision-making.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Problems With China’s Economy And How To Fix Them

An analysis of China’s balance sheet recession, and what can be done about it.

Economist Richard Koo (Gu Chao Ming) is the author of the book The Other Half of Macroeconomics and the Fate of Globalization. Investor Li Lu published a Mandarin review of the book in November 2019, which I translated into English in March 2020. When I translated Li’s review, I found myself nodding in agreement to Koo’s unique concept of a balance sheet recession as well as his analyses of Japan’s economic collapse in the late 1980s and early 1990s, and the Japanese government’s responses to the crash. 

When I realised that Koo was interviewed last week in an episode of the Bloomberg Odd Lots podcast to discuss the Chinese government’s recent flurry of stimulus measures, I knew I had to tune in – and I was not disappointed. In this article, I want to share my favourite takeaways (the paragraphs in italics are transcripts from the podcast).

Takeaway #1: China is currently facing a balance sheet recession, and in a balance sheet recession, the economy can shrink very rapidly and be stuck for a long time

I think China is facing a balance sheet recession, and a balance sheet recession happens when a debt-financed bubble bursts, asset prices collapse, liabilities remain, people realise that their balance sheets are under water or nearly so, and they all try to repair their balance sheets at the same time…

…Suppose I have $1,000 of income and I spend $900 myself. The $900 is already someone else’s income, so that’s not a problem. But the $100 that I saved will go through people like us, our financial institutions, and will be lent to someone who can use it. That person borrows and spends it, and then total expenditure in the economy will be the $900 that I spent plus the $100 that this guy spent, to get $1,000 against the original income of $1,000. That’s how the economy moves forward, right? If there are too many borrowers and the economy is doing well, central banks will raise rates. Too few, and the central bank will lower rates to make sure that this cycle is maintained. That’s the usual economy.

But what happens in a balance sheet recession is that when I have $1,000 in income and I spend $900 myself, that $900 is not a problem. But the $100 I decide to save ends up stuck in the financial system because no one’s borrowing money. And in China, so many people are refusing to borrow money these days because of that issue. Then the economy shrinks from $1,000 to $900, a 10% decline. In the next round, the $900 is someone else’s income; when that person decides to save 10%, spends $810 and saves $90, that $90 gets stuck in the financial system again, because repairing balance sheets could take a very long time. I mean, the Japanese took nearly 20 years to repair their balance sheets.

But in the meantime, the economy can go from $1,000 to $900 to $810 to $730 very, very quickly. That actually happened in the United States during the Great Depression. From 1929 to 1933, the United States lost 46% of its nominal GDP. Something quite similar actually happened in Spain after 2008, when unemployment rates skyrocketed to 26% in just three and a half years or so. That’s the kind of danger we face in a balance sheet recession.
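
Koo’s $1,000 example is a geometric decay, and a short simulation makes the divergence between his two regimes explicit: in a normal economy the 10% that is saved is re-borrowed and spent, while in a balance sheet recession it leaks out of circulation. A minimal sketch in Python:

```python
# Simulating Richard Koo's example. Each period, households spend 90% of
# income and save 10%. In a normal economy the savings are borrowed and
# spent by someone else, so total expenditure (= next period's income) is
# unchanged. In a balance sheet recession nobody borrows, so the saved 10%
# leaks out and the economy shrinks geometrically.

income = 1000.0
savings_rate = 0.10

normal, recession = income, income
print(f"{'period':>6} {'normal':>10} {'recession':>10}")
for period in range(1, 6):
    normal = normal * (1 - savings_rate) + normal * savings_rate  # savings re-lent
    recession = recession * (1 - savings_rate)                    # savings stuck
    print(f"{period:>6} {normal:>10.0f} {recession:>10.0f}")

# period     normal  recession
#      1       1000        900
#      2       1000        810
#      3       1000        729
#      4       1000        656
#      5       1000        590
```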

Takeaway #2: Monetary policy (changing the level of interest rates) is not useful in dealing with a balance sheet recession – what’s needed is fiscal policy (government spending), but it has yet to arrive for China

I’m no great fan of using monetary policy, meaning policies from the central bank to fight what I call a balance sheet recession…

…Repairing balance sheets of course is the right thing to do. But when everybody does it all at the same time, we run into the problem of the fallacy of composition: even though everybody’s doing the right things, collectively we get the wrong results. And we get that problem in this case because in the national economy, if someone is repairing balance sheets, meaning paying down debt or increasing savings, someone has to borrow those funds to keep the economy going. But in usual economies, when you bring interest rates down, there’ll be people out there willing to borrow the money and spend it. That’s how you keep the economy going.

But in a balance sheet recession, you bring interest rates down to very low levels – and Chinese interest rates are already pretty low. But even if you bring them down to zero, people will still be repairing balance sheets, because if you are in negative equity territory, you have to come out of that as quickly as possible. So when you’re in that situation, you cannot expect the private sector to respond to the lowering of interest rates, or quantitative easing, forward guidance, and all of those monetary policies, and borrow money again, because they are all doing the right thing, paying down debt. So when you’re in that situation, the economy could weaken very, very quickly, because all the saved funds that are returned to the banking system cannot come out again. That’s how you end up with the economy shrinking very, very rapidly.

The only way to stop this is for the government, which is outside of the fallacy of composition, to borrow money. And that’s the fiscal policy of course, but that hasn’t come out yet. And so yes, they did the quick and easy part with big numbers on the monetary side. But if you are in balance sheet recession, monetary policy, I’m afraid is not going to be very effective. You really need a fiscal policy to get the economy moving and that hasn’t arrived yet.

Takeaway #3: China’s fiscal policy for dealing with the balance sheet recession needs to be targeted, and a good place to start would be to complete all unfinished housing projects in the country, followed by developing public works projects with a social rate of return that’s higher than Chinese government bond yields

If people are all concerned about repairing their balance sheets, you give them money to spend and too often they just use it to pay down debt. So even within fiscal stimulus, you have to be very careful here, because tax cuts, I’m afraid, are not very effective during balance sheet recessions, as people use that money to repair their balance sheets. Repairing balance sheets is of course the right thing to do, but it will not add to GDP when they’re using those tax cuts to pay down debt or rebuild their savings. So that will not add to consumption as much as you would expect under ordinary circumstances. So I would really like to see the government just borrow and spend the money, because that will be the most effective way to stop the deflationary spiral…

… I would use the money first to complete all the apartments that were started but are not yet complete. In that case you might have to take some heavy-handed actions, but basically the government should take over these companies and the projects, and start putting in money so that they’ll complete the projects. That way, you don’t have to decide what to make, because the things are already in the process of being built – the construction drawings are there, the workers are there, and it’s known where to get the materials. And in many cases, the potential buyers are already known. So in that case, you don’t waste time thinking about what to build, who’s to design it, and who the orders should go to.

Remember President Obama: when he took over in 2009, the US was in a balance sheet recession after the collapse of the housing bubble. But he was so careful not to make the Japanese mistake of building bridges to nowhere and roads to nowhere. He took a long time to decide which projects should be funded. But in that year-and-a-half or so, I think the US lost quite a bit of time, because during that time the economy continued to weaken. There were no shovel-ready projects.

But in the Chinese case, I would argue that these uncompleted apartments are the shovel-ready projects. You already know who wants them, who paid their down payments and all of that. So I would spend the money first on those projects, complete those projects, and use the time, while the money is being used to complete these apartments, to think about what comes next.

I would use the magic wand to get the brightest people in China to come into one room and ask them to come up with public works projects with a social rate of return higher than 2.0%. The reason is that the Chinese government bond yield is about 2.0-something percent. If these people can come up with public works projects with a social rate of return higher than, let’s say, 2.1%, then those projects will be basically self-financing. It won’t be a burden on future taxpayers. Then, once the apartments are complete, if the economy is still struggling from the balance sheet recession, I would like to spend the money on those projects that these bright people might come up with.
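
Koo’s “self-financing” test reduces to comparing a project’s social rate of return with the government’s borrowing cost; if the social benefit is treated as a perpetuity, it becomes a one-line present-value comparison. A minimal sketch in Python, using the roughly 2.0% bond yield from the quote; all other figures are illustrative:

```python
# Koo's self-financing test: a debt-funded public works project pays for
# itself if its social rate of return exceeds the government's borrowing
# cost. With the social benefit treated as a perpetuity, the test reduces
# to a present-value comparison. All figures are illustrative.

def project_npv(cost: float, social_return: float, bond_yield: float) -> float:
    """NPV of a project whose annual social benefit is social_return * cost,
    received in perpetuity and discounted at the government bond yield."""
    annual_benefit = social_return * cost
    return annual_benefit / bond_yield - cost

cost = 100.0
bond_yield = 0.020   # ~2.0% Chinese government bond yield, per the quote

for social_return in (0.019, 0.021, 0.030):
    npv = project_npv(cost, social_return, bond_yield)
    verdict = "self-financing" if npv > 0 else "a burden on taxpayers"
    print(f"social return {social_return:.1%}: NPV {npv:+.1f} -> {verdict}")

# social return 1.9%: NPV -5.0 -> a burden on taxpayers
# social return 2.1%: NPV +5.0 -> self-financing
# social return 3.0%: NPV +50.0 -> self-financing
```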

Takeaway #4: The central government in China actually has a budget deficit that is a big part of the country’s GDP, unlike what official statistics say

But in China, even though the same rules should have applied, local governments were able to sell lots of land, make a lot of money in the process, and then do quite a bit of fiscal stimulus, which also of course added to their GDP. That model will have to be completely revised now, because no one wants to buy land anymore. So the big source of revenue for local governments is gone, and as a result, many of them are very close to bankrupt. Under the circumstances, I’m afraid the central government will have to take over a lot of these problems from the local governments. So this myth that the Chinese central government’s budget deficit is not a very big part of GDP will have to be thrown out. The central government will have to take on, not all of it perhaps, but some of the liabilities of the local governments so that local governments can move forward.

Takeaway #5: There’s plenty of available-capital for the Chinese central government to borrow from, and the low yields of Chinese government bonds are a sign of this

So even though the budget deficit of China might be very large, the money is there for the government to borrow. If the money were not there for the government to borrow, Chinese government bond yields should have gone higher and higher. But as you know, Chinese 10-year government bond yields are almost down to 2%. They went that low because there are not enough borrowers out there. Financial institutions have to place this money somewhere: all these deleveraged funds coming back into the financial institutions, newly generated savings, all the money that the central bank put in – it all comes to basically people like us in the financial institutions, the fund managers. But if the private sector is not borrowing money, the only borrower left is the government.

So even if the required budget deficit might be very large to stabilize the economy, the funds are available in the financial market. The government just has to borrow and spend them. So financing should not be a big issue for governments in a balance sheet recession. Japan was running huge budget deficits, and a lot of conventional-minded economists who never understood the dynamics of balance sheet recessions were warning about Japan’s budget deficit growing sky high, and then interest rates going sky high. Well, interest rates kept on coming down because of the mechanism that I just described to you: all those funds coming into the financial sector cannot go to the private sector, and end up going to the government bond market. And I see the same pattern developing in China today.

Takeaway #6: Depending on exports is a great way for a country to escape from a balance sheet recession, but this route is not available for China because its economy is already running the largest trade surplus in the world

Exports are definitely one of the best ways, if you can use them, to come out of a balance sheet recession. But China, just like Japan 30 years ago, is the largest trade surplus country in the world. And if the largest trade surplus country in the world tries to export its way out, very many trading partners will complain: you are already such a large destabilizing factor in world trade, and now you’re going to destabilize it even more.

I remember 30 years ago that the United States, Europe, and others were very much against Japan trying to export its way out. Because of their displeasure, particularly the US displeasure, the Japanese yen, which started at 160 yen to the dollar when the bubble burst in 1990, ended up at 80 yen to the dollar five years later, in 1995. What that indicated to me was that if you’re running a trade deficit, you can probably export your way out and no one can really complain, because you are a deficit country to begin with. But if you are the surplus country, and if you’re the largest trade surplus country in the world, there will be huge pushback against that kind of move by the Chinese. We are already seeing that, with very many countries complaining that China should not export its problems.

Takeaway #7: Regulatory uncertainties for businesses that are caused by the Chinese central government may have played a role in the corporate sector’s unwillingness to borrow

Aside from a balance sheet recession, which is a very, very serious disease to begin with, we have those other factors that started hurting the Chinese economy, I would say, starting as early as 2016.

When you look at the flow of funds data for the Chinese economy, you notice that the Chinese corporate sector started reducing its borrowings starting around 2016. So until 2016, Chinese companies were borrowing all the savings the household sector generated, which is of course the ideal world: the household sector saving money, the corporate sector borrowing money. But starting around 2016, you see the corporate sector borrowing less and less. And at around the Covid time, the corporate sector was actually a net saver, not a net borrower. So that trend, I think, has to do with what you just described: regulatory uncertainties got bigger and bigger under the current leadership, and I think people began to realize that even after you make these big investments in new projects, you may not be able to expect the same revenue stream that you expected earlier, because of this regulatory uncertainty.

Takeaway #8: China’s economy was already running a significant budget deficit prior to the bubble bursting, and this may have made the central government reluctant to step in as borrower of last resort now to fix the balance sheet recession

If the household sector is saving money but the corporate sector is not borrowing money, you need someone else to fill that gap. And actually that gap was filled by the Chinese government, mostly decentralized local governments. But if that temporary jolt of fiscal stimulus had then turned the economy around, those local government interventions would’ve been justified. But because this was a much more deeply rooted problem – here, I would use the term structural problems: these regulatory uncertainties and the middle-income trap and so forth – local governments just had to keep on borrowing and spending money to keep the economy going. That was happening long before the bubble burst. So if you look at what I call general government spending – not just the central government, but the general government – they were running a financial deficit to the tune of almost 7% of GDP by 2022. This is before the bubble burst.

So if you are already running a budget deficit of 7% of GDP before the onset of a balance sheet recession, then whatever you have to do to stop the balance sheet recession has to be on top of the 7%. Suppose you need the equivalent of 5% of GDP to keep the economy going; then you’re talking about a 12% of GDP budget deficit. I think that’s one of the reasons why Chinese policymakers, even though many of them are fully aware that in a balance sheet recession you need the government to come in, haven’t been able to come to a full consensus yet: even before the bubble burst, the Chinese government was running a large budget deficit.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q3 2024

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the third quarter of 2024.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the third quarter of 2024 – was held last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the world remains treacherous, but the US economy – and the consumer – remains on solid footing.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The geopolitical situation looks treacherous to JPMorgan’s management, and could have major impacts on the economy in the short term

We have been closely monitoring the geopolitical situation for some time, and recent events show that conditions are treacherous and getting worse. There is significant human suffering, and the outcome of these situations could have far-reaching effects on both short-term economic outcomes and more importantly on the course of history.

2. The US economy remains resilient, but there are risks; JPMorgan’s management wants to be prepared for any environment, as they think the future can become quite turbulent

While inflation is slowing and the U.S. economy remains resilient, several critical issues remain, including large fiscal deficits, infrastructure needs, restructuring of trade and remilitarization of the world. While we hope for the best, these events and the prevailing uncertainty demonstrate why we must be prepared for any environment…

…I’ve been quite clear that I think things — or the future could be quite turbulent. 

3. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) rose from US$1.5 billion a year ago to US$2.1 billion; Consumer & Community Banking’s net charge-offs rose by around US$0.5 billion from a year ago to US$1.9 billion

Credit costs were $3.1 billion, reflecting net charge-offs of $2.1 billion and a net reserve build of $1 billion, which included $882 million in Consumer, primarily in Card and $144 million in Wholesale. Net charge-offs were up $590 million year-on-year, predominantly driven by Card…

…In terms of credit performance this quarter, credit costs were $2.8 billion driven by Card and reflected net charge-offs of $1.9 billion, up $520 million year-on-year and a net reserve build of $876 million predominantly from higher revolving balances.
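
The figures in these two quotes tie together with simple arithmetic: credit costs are the sum of net charge-offs and the net reserve build, and the prior-year charge-offs implied in point 3’s summary fall out by subtracting the stated year-on-year increases. A quick check in Python:

```python
# Tying together the figures in the two quotes above (USD billions).
# Credit costs = net charge-offs + net reserve build.

firmwide_nco, firmwide_build = 2.1, 1.0
print(f"Firmwide credit costs: {firmwide_nco + firmwide_build:.1f}")  # 3.1

# Prior-year net charge-offs, implied by the stated year-on-year increases:
print(f"Firmwide NCO a year ago: {firmwide_nco - 0.59:.2f}")  # ~1.51 -> "US$1.5 billion"
print(f"Consumer NCO a year ago: {1.9 - 0.52:.2f}")           # ~1.38
```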

4. JPMorgan’s credit card outstanding loans was up double-digits

Card outstandings were up 11% due to strong account acquisition and the continued normalization of revolve. 

5. Auto originations are down

In Auto, originations were $10 billion, down 2%, while maintaining strong margins and high-quality credit. 

6. JPMorgan’s investment banking fees had strong growth in 2024 Q3, signalling higher appetite for capital-markets activity from companies; management is cautiously optimistic about companies’ enthusiasm towards capital markets activities, but headwinds persist 

IB fees were up 31% year-on-year, and we ranked #1 with year-to-date wallet share of 9.1%. And advisory fees were up 10%, benefiting from the closing of a few large deals. Underwriting fees were up meaningfully with debt up 56% and equity up 26% primarily driven by favorable market conditions. In light of the positive momentum throughout the year, we’re optimistic about our pipeline, but the M&A, regulatory environment and geopolitical situation are continued sources of uncertainty.

7. Management is seeing muted demand for new loans from companies partly because they can easily access capital markets; demand for loans in the multifamily homes market is muted; management is not seeing any major increase in appetite for borrowing after the recent interest rate cut

In the middle market and large corporate client segments, we continue to see softness in both new loan demand and revolver utilization, in part due to clients’ access to receptive capital markets. In multifamily, while we are seeing encouraging signs in loan originations as long-term rates fall, we expect overall growth to remain muted in the near term as originations are offset by payoff activity…

…[Question] Lower rates was supposed to drive a pickup in loan growth and conversion of some of these Investment Banking pipelines. I mean, obviously, we just had one cut and it’s early. But any beginning signs of this in terms of the interest in borrowing more, and again, conversion of the banking pipelines?

[Answer] Generally no, frankly, with a couple of minor exceptions…

… I do think that some of that DCM [debt capital markets] outperformance is in the types of deals that are opportunistic deals that aren’t in our pipeline. And those are often driven by treasurers and CFOs sort of seeing improvement in market levels and jumping on those. So it’s possible that, that’s a little of a consequence of the cuts…

…I mentioned we did see, for example, a pickup in mortgage applications and a tiny bit of pickup in refi. In our multi-family lending business, there might be some hints of more activity there. But these cuts were very heavily priced, right? The curve has been inverted for a long time. So to a large degree, this is expected. So I’m not — it’s not obvious to me that you should expect immediate dramatic reactions, and that’s not really what we’re seeing.

8. Management expects the yield curve to remain inverted

The yield curve, the way we view it, remains inverted.

9. Management thinks asset prices are elevated, but they are unclear to what extent

We have at a minimum $30 billion of excess capital. And for me, it’s not burning a hole in my pocket…

…Asset prices, in my view – and you’ve got to take a view sometimes – are inflated. I don’t know if they’re extremely inflated or a little bit, but I’d prefer to wait. We will be able to deploy it. Our shareholders will be very well served by this waiting…

…I’m not that exuberant about thinking even tech valuations or any valuations will stay at these very inflated values. And so I’m just — we’re just quite patient in that. 

10. Consumer spending behaviour is normalising, so a rotation out of discretionary spending into non-discretionary spending is not a sign of consumers preparing for a downturn; retail spending is not weakening; management sees the consumer as being on solid footing; management’s base case is that there is no recession

I think what there is to say about consumer spend is a little bit boring in a sense because what’s happened is that it’s become normal. So meaning — I mean I think we’re getting to the point of where it no longer makes sense to talk about the pandemic. But maybe one last time.

One of the things that you had was that heavy rotation into T&E as people did a lot of traveling, and they booked cruises that they hadn’t done before, and everyone was going out to dinner a lot, whatever. So you had the big spike in T&E, the big rotation into discretionary spending, and that’s now normalized.

And you would normally think that rotation out of discretionary into nondiscretionary would be a sign of consumers battening down the hatches and getting ready for a much worse environment. But given the levels that it started from, what we see it as is actually like normalization. And inside that data, we’re not seeing weakening, for example, in retail spending.

So overall, we see the spending patterns as being sort of solid and consistent with the narrative that the consumer is on solid footing and consistent with the strong labor market and the current central case of a kind of no-landing scenario economically. But obviously, as we always point out, that’s one scenario, and there are many other scenarios.

11. Management thinks that the Federal Reserve’s quantitative tightening (QT) should be wound down because there are signs of stress in certain corners of the financial markets caused by QT

[Question] You, I think, mentioned QT stopping at some point. We saw the repo market spike at the end of September. Just give us your perspective on the risk of a market liquidity shock as we move into year-end. And do you have a view on how quickly the Fed should recalibrate QT, or actually stop QT, to prevent some [indiscernible]?

[Answer] The argument out there is that the repo spike that we saw at the end of this quarter was an indication that maybe the market is approaching that lowest comfortable level of reserves that’s been heavily speculated about, and recognizing that, that number is probably higher and driven by the evolution of firms’ liquidity requirements as opposed to some of the more traditional measures…

…It would seem to add some weight to the notion that maybe QT should be wound down. And that seems to be increasingly the consensus: that it’s going to get announced at some point in the fourth quarter.

12. Management sees inflationary factors in the environment

I’m not actually sure they can actually do that because you have inflationary factors out there, partially driven by QE. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

How Recessions and Interest Rate Changes Affect Stocks

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future.

After years of investing in stocks, I’ve noticed that stock market participants place a lot of emphasis on how recessions and changes in interest rates affect stocks. This topic is even more important right now for investors in US stocks, given fears that a recession could happen soon in the country, and the interest rate cut last month by the Federal Reserve, the country’s central bank. I have no crystal ball, so I have no idea how the US stock market would react if a recession were to arrive in the near future and/or the Federal Reserve were to continue lowering interest rates.

What I have is historical context. History is of course not a perfect indicator of the future, but it can give us context for possible future outcomes. I’ve written a few articles over the years in this blog discussing the historical relationships between stocks, recessions, and movements in interest rates, some of which are given below (from oldest to the most recent):

I thought it would be useful to collect the information from these separate pieces into a single place, so here goes!

The history of recessions and stocks

These are the important historical relationships between recessions and stocks:

  • It’s not a given that stocks will definitely fall during a recession. According to a June 2022 article by Ben Carlson, Director of Institutional Asset Management at Ritholtz Wealth Management, there have been 12 recessions in the USA since World War II (WWII). The average return for the S&P 500 (a broad US stock market benchmark) when all these recessions took place was 1.4%. There were some horrible returns within the average. For example, the recession that stretched from December 2007 to June 2009 saw the S&P 500 fall by 35.5%. But there were also decent returns. For the recession between July 1981 and November 1982, the S&P 500 gained 14.7%.
  • Holding onto stocks in the lead-up to, through, and in the years after a recession has mostly produced good returns. Carlson also showed in his aforementioned article that if you had invested in the S&P 500 six months prior to each of the 12 recessions since WWII and held on for 10 years after each of them, you would have earned a positive return on every occasion. Furthermore, the returns were largely rewarding (a minimal sketch of this calculation appears after the chart below). The worst return was a total gain of 9.4% for the recession that lasted from March 2001 to November 2001. The best came from the first post-WWII recession, which happened from November 1948 to October 1949: a staggering return of 555.7%. After taking away the best and worst returns, the average was 257.2%. 
  • Avoiding recessions flawlessly would have caused your return to drop significantly. Data from Michael Batnick, Carlson’s colleague at Ritholtz Wealth Management, showed that a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and done nothing. But if you had invested the same dollar in US stocks at the start of 1980 and expertly side-stepped the ensuing recessions to perfection, you would have had less than $32 at the same endpoint.
  • Stocks tend to bottom before the economy does. The three most recent recessions in the USA prior to COVID-19 were the recessions that lasted from July 1990 to March 1991, from March 2001 to November 2001, and from December 2007 to June 2009. During the first recession in this sample, data on the S&P 500 from Yale economist Robert Shiller, who won a Nobel Prize in 2013, showed that the S&P 500 bottomed in October 1990, months before the recession ended. In the second episode, the S&P 500 found its low 15 months after the end of the recession, in February 2003; this delay was caused by the aftermath of the dotcom bubble’s bursting. For the third recession, the S&P 500 reached a trough in March 2009, three months before the recession ended. Moreover, after the December 2007 – June 2009 recession ended, the US economy continued to worsen in at least one important way over the next few months. In March 2009, the unemployment rate was 8.7%. By June, it had risen to 9.5%, and it crested at 10% in October. But by the time the unemployment rate peaked at 10%, the S&P 500 was 52% higher than its low of March 2009. So even if the economy turns out to be in worse shape in the months ahead, stocks may already have bottomed or be near a bottom – only time can tell.
  • The occurrence of multiple recessions has not stopped the upward march of stocks. The logarithmic chart below shows the performance of the S&P 500 (including dividends) from January 1871 to February 2020. It turns out that US stocks have done exceedingly well over these 149 years (up 46,459,412% in total including dividends, or 9.2% per year) despite the US economy having encountered numerous recessions. If you’re investing for the long run, recessions are nothing to fear.
Figure 1; Source: Robert Shiller data; National Bureau of Economic Research
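
To make the “invest six months before, hold for a decade” arithmetic concrete, here’s a minimal sketch in Python. It uses a synthetic monthly total-return series and a hypothetical recession start date rather than Carlson’s actual data; the hold_through_recession helper is my own illustrative name, and I’ve read “held on for 10 years” as a ten-year holding period from purchase. A real test would plug in Shiller’s S&P 500 series and NBER recession dates.

```python
import numpy as np
import pandas as pd

# Synthetic monthly total-return index standing in for the S&P 500
# (a real test would use Shiller's data and NBER recession dates).
rng = np.random.default_rng(0)
dates = pd.date_range("1945-01-01", periods=800, freq="MS")
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0.007, 0.04, 800)), index=dates)

def hold_through_recession(recession_start, months_before=6, hold_years=10):
    """Total return from 6 months before a recession's start, held for 10 years."""
    buy = recession_start - pd.DateOffset(months=months_before)
    sell = buy + pd.DateOffset(years=hold_years)
    window = prices.loc[buy:sell]
    return window.iloc[-1] / window.iloc[0] - 1

# Hypothetical recession start date, for illustration only
print(f"{hold_through_recession(pd.Timestamp('1990-07-01')):.1%}")
```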

The history of interest rates and stocks

These are the important historical relationships between interest rates and stocks:

  • Rising interest rates have been met with rising valuations. According to Robert Shiller’s data, the US 10-year Treasury yield was 2.3% at the start of 1950. By September 1981, it had risen to 15.3%, the highest rate recorded in Shiller’s dataset. In that same period, the S&P 500’s price-to-earnings (P/E) ratio moved from 7 to 8. In other words, the P/E ratio for the S&P 500 increased slightly despite the huge jump in interest rates. It’s worth noting too that the S&P 500’s P/E ratio of 7 at the start of 1950 was not a result of earnings that were temporarily inflated. Yes, there’s cherry picking with the dates. For example, if I had chosen January 1946 as the starting point, when the US 10-year Treasury yield was 2.2% and the P/E ratio for the S&P 500 was 19, then it would be a case of valuations falling alongside rising interest rates. But this goes to show that while interest rates have a role to play in the movement of stocks, it is far from the only thing that matters.
  • Stocks have climbed in rising interest rate environments. In a September 2022 piece, Carlson showed that the S&P 500 climbed by 21% annually from 1954 to 1964 even when the yield on 3-month Treasury bills (a good proxy for the Fed Funds rate, which is the key interest rate set by the Federal Reserve) surged from around 1.2% to 4.4% in the same period. In the 1960s, the yield on the 3-month Treasury bill doubled from just over 4% to 8%, but US stocks still rose by 7.7% per year. And then in the 1970s, rates climbed from 8% to 12% and the S&P 500 still produced an annual return of nearly 6%.
  • Stocks have done poorly in both high and low interest rate environments, and have also done well in both high and low interest rate environments. Carlson published an article in February 2023 that looked at how the US stock market performed in different interest rate regimes. It turns out there’s no clear link between the two. In the 1950s, the 3-month Treasury bill (which is effectively a risk-free investment, since it’s a US government bond with one of the shortest maturities around) had a low average yield of 2.0%; US stocks returned 19.5% annually back then, a phenomenal gain. In the 2000s, US stocks fell by 1.0% per year when the average yield on the 3-month Treasury bill was 2.7%. Meanwhile, a blockbuster 17.3% annualised return in US stocks in the 1980s was accompanied by a high average yield of 8.8% for the 3-month Treasury bill. In the 1970s, the 3-month Treasury bill yielded a high average of 6.3% while US stocks returned just 5.9% per year. 
  • A cut in interest rates by the Federal Reserve is not guaranteed to be a good or bad event for stocks. Josh Brown, CEO of Ritholtz Wealth Management, shared fantastic data in an August 2024 article on how US stocks have performed in the past when the Federal Reserve lowered interest rates. His data, in the form of a chart, goes back to 1957, and I reproduced it in tabular format in Table 1; it shows how US stocks did in the 12 months following a rate cut, as well as whether a recession occurred in the same window. I also split the data in Table 1 according to whether a recession had occurred shortly after a rate cut, since eight of the 21 rate-cut cycles from the Federal Reserve since 1957 took place without an impending recession. Table 2 shows the same data as Table 1 but only for rate cuts that came with a recession; Table 3 is for rate cuts without a recession. What the data show is that US stocks have historically done well, on average, in the 12 months following a rate cut (see the sketch after the tables). The overall record, seen in Table 1, is an average 12-month forward return of 9%. When a recession happened shortly after a rate cut, the average 12-month forward return was 8%; when a recession did not happen shortly after a rate cut, the average 12-month forward return was 12%. A recession is not necessarily bad for stocks: as Table 2 shows, US stocks have historically delivered an average return of 8% over the 12 months after rate cuts that came with impending recessions. Nor is a good return guaranteed in the 12 months after a rate cut even when a recession does not occur, as can be seen from the August 1976 episode in Table 3.
Table 1; Source: Josh Brown
Table 2; Source: Josh Brown
Table 3; Source: Josh Brown
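
To show how Tables 2 and 3 fall out of Table 1, here’s a minimal sketch in Python. The five rows are made-up stand-ins chosen to echo the 8%/12% averages described above; they are not Josh Brown’s actual episodes.

```python
import pandas as pd

# Illustrative stand-ins for rate-cut episodes (not Josh Brown's actual data):
# each row is a first rate cut, whether a recession followed shortly after,
# and the 12-month forward return of US stocks.
cuts = pd.DataFrame({
    "recession_followed": [True, True, True, False, False],
    "fwd_12m_return": [0.04, 0.12, 0.08, 0.10, 0.14],
})

print(cuts["fwd_12m_return"].mean())                                # Table 1 style: ~9.6% overall
print(cuts.groupby("recession_followed")["fwd_12m_return"].mean())  # Table 2 (~8%) and Table 3 (~12%)
```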

Conclusion

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future. But it’s also worth bearing in mind that anything can happen in the financial markets. Things that have never happened before do happen, so there are limits to learning from history. Nonetheless, there’s a really important lesson from all the data seen above that I think is broadly applicable even far into the future, and it is that one-factor analysis in finance – “if A happens, then B will occur” – should be largely avoided because clear-cut relationships are rarely seen.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time. 

The Federal Reserve Has Much Less Power Over Financial Markets Than You Think 

It makes sense to mostly ignore the Federal Reserve’s actions when assessing opportunities in the stock market.

Last week, the Federal Reserve, the USA’s central bank, opted to lower the federal funds rate (the key interest rate controlled by it) by 50 basis points, or 0.5%. The move, both before and after it was announced, was heavily scrutinised by market participants. There’s a widely held belief that the Federal Reserve wields tremendous influence over nearly all aspects of financial market activity in the USA.

But Aswath Damodaran, the famed finance professor from New York University, made an interesting observation in a recent blog post: the Federal Reserve actually has nowhere close to the level of influence over America’s financial markets that many market participants think it does.

In his post, Damodaran looked at the 249 calendar quarters from 1962 to 2024, classified them according to how the federal funds rate changed, and compared the changes to how various metrics in the US financial markets moved. There were 96 quarters in the period where the federal funds rate was raised, 132 quarters where it was cut, and 21 quarters where it was unchanged. Some examples of what he found:

  • A median change of -0.01% in the 10-year Treasury rate was seen in the following quarter after the 96 quarters where the federal funds rate increased, whereas a median change of 0.07% was seen in the following quarter after the 132 quarters where the federal funds rate was lowered. Put another way, the 10-year Treasury rate has historically tended to (1) decrease when the federal funds rate increased, and (2) increase when the federal funds rate decreased. This means that the Federal Reserve has very little control over longer-term interest rates. 
  • A median change of -0.13% in the 15-year mortgage rate was seen in the following quarter after the quarters where the federal funds rate increased, whereas a median change of -0.06% was seen in the following quarter after the quarters where the federal funds rate was lowered. It turns out that the Federal Reserve also exerts little control over the types of interest rates that consumers directly interact with on a frequent basis.
  • A median change of 2.85% in US stocks was seen in the following quarter after the quarters where the federal funds rate increased, a median change of 3.07% was seen in the following quarter after the quarters where the federal funds rate was lowered, and a median change of 5.52% was seen in the following quarter after the quarters where the federal funds rate was unchanged. When discussing the stock market data, Damodaran provided a provocative question and answer (a minimal sketch of the quarter-classification exercise appears after the quote below): 

“At the risk of disagreeing with much of conventional wisdom, is it possible that the less activity there is on the part of the Fed, the better stocks do? I think so, and stock markets will be better served with fewer interviews and speeches from members of the FOMC and less political grandstanding (from senators, congresspeople and presidential candidates) on what the Federal Reserve should or should not do.”
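
For readers who want to replicate the spirit of Damodaran’s exercise, below is a minimal sketch, assuming a small illustrative dataset: classify each quarter by the direction of the federal funds rate change, then take the median move of a market metric in the following quarter. The nine rows are stand-ins chosen to echo the medians quoted above; they are not his actual 249-quarter dataset.

```python
import pandas as pd

# Illustrative stand-ins (three quarters per regime); Damodaran's actual
# analysis covers 249 quarters from 1962 to 2024.
df = pd.DataFrame({
    "fed_funds_change": [0.50, 0.25, 0.75, -0.25, -0.50, -1.00, 0.0, 0.0, 0.0],
    "stocks_next_qtr":  [2.9, 1.5, 4.1, 3.1, 2.8, 3.4, 5.5, 6.0, 5.1],  # % change
})

def regime(change):
    """Classify a quarter by the direction of the fed funds rate change."""
    if change > 0:
        return "raised"
    if change < 0:
        return "cut"
    return "unchanged"

df["regime"] = df["fed_funds_change"].apply(regime)
print(df.groupby("regime")["stocks_next_qtr"].median())
# raised ~2.9, cut ~3.1, unchanged ~5.5, mirroring the medians in the text
```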

I have always paid scant attention to what the Federal Reserve is doing when making my investing decisions. My view, born from observations of financial market history* and a desire to build a lasting investment strategy, is that business fundamentals trump macro-economics. Damodaran’s data lends further support to my stance of mostly ignoring the Federal Reserve’s actions when I assess opportunities in the stock market. 

*A great example can be found in Berkshire Hathaway, Warren Buffett’s investment conglomerate. Berkshire produced an 18.7% annual growth rate in its book value per share from 1965 to 2018, which drove a 20.5% annual increase in its stock price. Throughout those 53 years, Berkshire endured numerous macro worries, such as the Vietnam War, the Black Monday stock market crash, the “breaking” of the Bank of England, the Asian Financial Crisis, the bursting of the Dotcom Bubble, the Great Financial Crisis, Brexit, and the US-China trade war. Damodaran’s aforementioned blog post also showed that the federal funds rate moved from around 5% in the mid-1960s to more than 20% in the early-1980s and then to around 2.5% in 2018. And yet, an 18.7% input (Berkshire’s book value per share growth) still resulted in a 20.5% output (Berkshire’s stock price growth).


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2024 Q2)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q2 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q2). In it, I shared commentary in earnings conference calls for the second quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2024’s second quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management believes that Adobe’s approach to AI is highly differentiated; the greatest differentiation is at the interface layer, as Adobe is able to rapidly integrate AI across its product portfolio and allow users to realise value

Adobe’s customer-centric approach to AI is highly differentiated across data, models and interfaces…

…Our greatest differentiation comes at the interface layer with our ability to rapidly integrate AI across our industry-leading product portfolio, making it easy for customers of all sizes to adopt and realize value from AI. 

Adobe’s Firefly models are trained on data that allow outputs to be commercially safe and management thinks this feature of being commercially safe is really important to enterprises; Adobe now has Firefly models for imaging, vector, design, and video; Firefly is in a wide range of Adobe products; Firefly has powered more than 12 billion generations since its launch in March 2023 (was 9 billion in 2024 Q1); management’s strategy is to build Firefly models into more streamlined and precise workflows within Adobe’s products; Adobe has Firefly Service APIs for organisations to generate content at scale, and the API calls tripled quarter-on-quarter; Firefly Service APIs are gaining real traction

We train our Firefly models on data that allows us to offer customers a solution designed to be commercially safe. We have now released Firefly models for imaging, vector and design and just previewed a new Firefly video model…

… Firefly-powered features in Adobe Photoshop, Illustrator, Lightroom, and Premiere Pro help creators expand upon their natural creativity and accelerate productivity. Adobe Express is a quick and easy create-anything application, unlocking creative expression for millions of users. Acrobat AI Assistant helps extract greater value from PDF documents. Adobe Experience Platform AI Assistant empowers brands to automate workflows and generate new audiences and journeys. Adobe GenStudio brings together content and data, integrating high-velocity creative expression with the enterprise activation needed to deliver personalization at scale…

…We have now surpassed 12 billion Firefly-powered generations across Adobe tools…

… Our strategy is to build technology that will create more streamlined and precise workflows within our tools through features like text-to-template in Express, Generative Fill in Photoshop, Generative Recolor in Illustrator, Generative Remove in Lightroom and the upcoming Generative Extend for Video and Premiere Pro. We’re exposing the power of our creative tools and the magic of generative AI through Firefly Service APIs so organizations can generate and assemble content at scale…

…The introduction of the new Firefly video model earlier this week at IBC is another important milestone in our journey. Our video model, like the other models in the Firefly family, is built to be commercially safe with fine-grain control and application integration at its core. This will empower editors to realize their creative vision more productively in our video products, including Premiere Pro…

…Strong demand for Firefly Services, which provide APIs, tools and services for content generation, editing and assembly, empowering organizations to automate content production while maintaining quality and control. Total API calls tripled quarter over quarter…

…Firefly Services, which you can think of as a consumption model, is off to a really good start. Our ability to give enterprises the ability to automate content and create custom models within enterprises is seeing real traction because it’s a differentiated solution that is designed to be commercially safe…

…One other thing I’d just emphasize there is that commercial safety is so important to businesses of all sizes, frankly, and that is something where we feel very, very differentiated.

Adobe released significant advancements in AI Assistant across Adobe Acrobat and Reader in 2024 Q2 (FY2024 Q3) and saw 70% sequential growth in AI interactions in AI Assistant; the advancements in AI Assistant include content creation capabilities; Tata Consultancy Services used AI Assistant in Adobe Acrobat to create event summaries of hours of conference videos in minutes; management intends to actively promote subscription plans for Adobe Acrobat and Reader that include generative AI capabilities

For decades, PDF has been the de facto standard for storing unstructured data, resulting in the creation and sharing of trillions of PDFs. The introduction of AI Assistant across Adobe Acrobat and Reader has transformed the way people interact with and extract value from these documents. In Q3, we released significant advancements, including the ability to have conversations across multiple documents and support for different document formats, saving users valuable time and providing important insights. We are thrilled to see this value translate into AI Assistant usage with over 70% quarter-over-quarter growth in AI interactions. 

In addition to consumption, we’re focused on leveraging generative AI to expand content creation in Adobe Acrobat. We’ve integrated Adobe Firefly Image Generation into our edit PDF workflows. We’ve optimized AI Assistant in Acrobat to generate content fit for presentations, e-mails and other forms of communication, and we’re laying the groundwork for richer content creation, including the generation of Adobe Express projects.

The application of this technology across verticals and industries is virtually limitless. Tata Consultancy Services recently used Adobe Premiere Pro to transcribe hours of conference videos and then used AI Assistant in Acrobat to create digestible event summaries in minutes. This allowed them to distribute newsletters on session content to attendees in real-time.

We’re excited to leverage generative AI to add value to content creation and consumption in Acrobat and Reader in the months ahead. Given the early adoption of AI Assistant, we intend to actively promote subscription plans that include generative AI capabilities over legacy perpetual plans that do not.

Adobe GenStudio is integrated across Experience Cloud and Creative Cloud and helps marketers quickly plan, create, store, deliver, and measure marketing content; Vanguard used Adobe GenStudio to increase quality engagement with investors by 176% through one-to-one personalisation, and to enjoy millions in savings

Customers are embracing the opportunity to address their content supply chain challenges with Adobe GenStudio. With native integrations across Experience Cloud and Creative Cloud, GenStudio empowers marketers to quickly plan, create, store, deliver, and measure marketing content and drive greater efficiency in their organizations. Financial services leader Vanguard is creating an integrated content supply chain to serve the strategic goal of deepening their relationships with a broad range of investors. Leveraging the GenStudio solution, Vanguard was able to increase quality engagement by 176% by focusing on one-to-one personalization and to realize millions in savings by improving content velocity and resource allocation with an end-to-end content creation workflow.

Adobe’s management has been very consistent over the past 1-1.5 years in how they have approached AI, and that is, Adobe would be developing a broad set of models for the creative community, and the models would be highly differentiated based on quality, commercial safety, and integrability into Adobe’s product portfolio

I think we’ve been incredibly consistent with what we’ve said, dating back 1 year, 1.5 years ago, where we talked about the fact that we were going to develop the broadest set of models for the creative community. And we were going to differentiate the models based on quality, commercial safety, integratability into our tools and controllability. And as you’ve seen very methodically over the last 18 months, we continue to bring more and more of that innovation to life. And that fundamentally is working as we’ve now started to integrate it much more actively into our base. If you look at it with photography, we now have in our tool, Generative Remove, we have AI-assisted edits in design, we have Generative Pattern, Generative Fill Shape. We have, in Photoshop, we have Gen Remove. We also have Gen Fill, and I can continue on with all the generations, but we’ve also now started to integrate it in Firefly Services for what we’re enabling enterprises to be able to access and use in terms of batch work and through APIs.

Adobe’s management is seeing the accelerated use and consumption of generative AI credits in Adobe’s products play out the way they expected it to; total consumption credits are going up with the introduction of each new generative AI capability 

If you look at sort of how that’s played out, as we talked about, we’re seeing accelerated use and generative credits being consumed because of that deeper integration into all of our tools, and that is playing out as we expected…

…And we do see with every subsequent capability we integrate into the tool, total credits consumed going up. 

Adobe’s management is seeing Adobe enjoying indirect monetisation from the AI features of its products, such as (1) the products having more value and pricing, (2) users being retained better when they use generative AI features, and (3) higher conversion of users when they try out Adobe products

When you look at then how that converts to monetization, first and foremost, we’ve integrated it a lot of that value into our core products with more value and more pricing. We’re also seeing that when people use these generative features, they retain better. We’re also seeing that when people come to Adobe to try our Creative Cloud applications or Express application, they’re able to convert better. And so there are all these ancillary implied benefits that we’re getting. 

For direct monetisation of the AI features in Adobe’s products, management is thinking of (1) instituting caps on generative AI credit consumption, (2) having AI plans with different AI capabilities; but direct monetisation is currently still not the key focus that management has, because they want to focus on proliferation and usage of generative AI across the user base

In terms of direct monetization, what we’ve said in the past is that the current model is around generative credits, which is I think where you’re going with this. And we do see with every subsequent capability we integrate into the tool, total credits consumed going up. Now what we are trying to do as we go forward, we haven’t started instituting the caps yet. And part of this is, as we’ve said all along, we want to really focus our attention on proliferation and usage across our base. We see a lot of users excited about it. It’s some of the most actively used features that we’ve ever released. And we want to avoid the generation anxiety that people feel. But we’re watching very closely as the economy of generative credits evolves, and we’re going to look at instituting those caps at some point when we feel the time is right and/or we’re also looking at other alternative models. What we did with Acrobat AI Assistant has proven to be very effective. And so we’re also considering other opportunities like having standard CC plans that have a core set of generative capabilities but also having premium API — sorry, premium AI plans that will include things more like video and other things.

Adobe’s management thinks Adobe’s generative AI video models are already pretty capable, but they are going to get better over time; management thinks that the real value of generative AI video models is not in their ability to create a video through a description the user gives, but in their ability to extend the video

I don’t know if you had a chance to see some of the videos we put out there integrated directly into Premiere, also text to video, images to video, more controllability. We have also the ability now to generate not just themes with humans and dogs and organic animals, but all these like overlays and things that creative professionals actually want to work with. And so we’re very excited about the set of things that they can get out of the box that get going. And human faces and things will just continue to get better…

…I spent a couple of hours with our video team. They have just absolutely hit it out of the park. I mean, the work that they have done, which is leveraging the image models with video, and again, I think to David’s point, the integration with Premiere, that’s where we’ve always said, it’s the integration of the model and the application that differentiates it. I think when other models first came out, people were like, “Wow, you can describe it.” That’s just such a small part of where the value is. And the real value is, you have a video, you want to extend it. It’s a game changer in terms of what we can do. So really excited about the stuff that we’re doing in video. 

MongoDB (NASDAQ: MDB)

MongoDB’s management sees AI as a longer-term opportunity for MongoDB; management is seeing companies largely still experimenting with AI applications currently; management thinks inference workloads will come, but monetisation of AI apps will take time

AI continues to be an additional long-term opportunity for our business. At the start of the fiscal year, we told you that we didn’t expect AI to be a meaningful tailwind for our business in fiscal year 2025, which has proven accurate. Based on recent peer commentary, it seems that the industry now mostly agrees with this view. Companies are currently focusing their spending on the infrastructure layer of AI and are still largely experimenting with AI applications. Inference workloads will come and should benefit MongoDB greatly in the long run, but we are still very early, and the monetization of AI apps will take time. AI demand is a question of when, not if.

MongoDB’s management has been talking to customers and they think MongoDB is the ideal database for AI apps for five reasons: (1) AI workloads involve a wide variety of data types and MongoDB’s document-model database is meant to handle this variety well, thus providing a well-rounded one-stop solution, (2) MongoDB’s database is high-performance and scalable, and allows AI workloads to utilise real-time operational data, (3) MongoDB’s database is integrated with leading app development frameworks and AI platforms, (4) MongoDB’s database has enterprise-grade security and compliance features, and (5) MongoDB’s database can be run anywhere the customer chooses; management feels very good about MongoDB’s positioning for AI

Our discussions with customers and partners give us increasing conviction that we are the ideal data layer for AI apps for a number of key reasons.

First, more than any other type of modern workload, AI-driven workloads require the underlying database to be capable of processing queries against rich and complex data structures quickly and efficiently. Our flexible document model is uniquely positioned to help customers build sophisticated AI applications because it is designed to handle different data types, your source data, vector data, metadata and generated data right alongside your live operational data, obviating the need for multiple database systems and complex back-end architectures.

Second, MongoDB offers a high performance and scalable architecture. As the latency of LLMs improve, the value of using real-time operational data for AI apps will become even more important.

Third, we are seamlessly integrated with leading app development frameworks and AI platforms, enabling developers to incorporate MongoDB into their existing workflows while having the flexibility to choose the LLM and other specific tools that best suit their needs.

Fourth, we meet or exceed the security and compliance requirements expected from an enterprise database, including enterprise-grade encryption, authorization and auditability.

Lastly, customers can run MongoDB anywhere, on-premise or as a fully managed service in 1 of the 118 global cloud regions across 3 hyperscalers giving them the flexibility to run workloads to meet — to best meet their application use cases and business needs…

… As the performance of these LLMs and latency of these LLMs increase, accessing real-time data becomes really important like, say, you’re calling and talking to a customer support chatbot, that you want that chatbot to have up-to-date information about that customer so that they can provide the most relevant and accurate information possible…

…I think it’s a quickly evolving space, but we feel very good about our positioning for AI, even though it’s still very early days.

MongoDB’s management sees 3 ways AI can accelerate MongoDB’s business over time: (1) AI will drive the cost of building applications, as all past platform shifts have done, thus leading to more apps and higher demand for databases, (2) MongoDB can be the database of choice for developers building AI applications (see Point 9 on MongoDB’s new AI Applications Program), and (3) MongoDB can help customers modernise their application estate (see Point 10 for more on this opportunity)

We see 3 main opportunities where we believe AI will accelerate our business over time. The first is that the cost of building applications in the world of AI will come down as we’ve seen with every previous platform shift, creating more applications and more data requiring more databases. The second opportunity is for us to be the database of choice for customers building greenfield AI applications…

…The third opportunity is to help customers modernize their legacy application estate. 

MongoDB’s management made the MongoDB AI Applications Program (MAAP) generally available in July 2024; MAAP brings the cloud computing hyperscalers and prominent AI model-building startups into one ecosystem to reduce the complexity and difficulty for MongoDB’s customers when they build new AI applications 

While we see that there’s tremendous amount of interest in and planning for new AI-powered applications, the complexity and fast-moving nature of the AI ecosystem slows customers down. That’s why we launched the MongoDB AI Applications Program, or MAAP, which became generally available to customers last month. MAAP brings together a unique ecosystem, including the 3 major cloud providers, AWS, Azure and GCP as well as Accenture and AI pioneers like Anthropic and Cohere. MAAP offers customers reference architectures and end-to-end technology stack that includes prebuilt integrations, professional services and a unified support system to help customers quickly build and deploy AI applications.

Modernising legacy application estates is a big opportunity, as most of the $80 billion database market is still in legacy relational databases; MongoDB has the Relational Migrator product to help customers migrate from legacy relational databases to the company’s document-model database; management thinks AI can significantly improve the process of modernising legacy applications by helping to understand legacy code and rewrite it in modern form; MongoDB launched a few pilots with customers earlier in 2024 to modernise their legacy applications with the help of AI and the results are exciting; the CIO (Chief Information Officer) of an insurance company in the pilots said the modernisation process was the first tangible return he had seen on his AI investments; management thinks it will take time for the modernisation program to contribute meaningful revenue to MongoDB, but they are excited 

Most of the existing $80-billion-plus database industry is built on dated relational architecture. Modernizing legacy applications has always been part of our business, and we have taken steps over the years to simplify and demystify this complex process through partnerships, education and most recently, our Relational Migrator product. AI offers a potential step function improvement, lowering the cost and reducing their time and risk to modernize legacy applications…

…Earlier this year, we launched several pilots with our customers where we work with them to modernize mission-critical applications, leveraging both AI tooling and services. The early results from these pilots are very exciting as our customers are experiencing significant reductions in time and cost of modernization. In particular, we have seen dramatic improvements in time and cost to rewrite application code and generate test suites. We see increasing interest from customers that want to modernize their legacy application estate, including large enterprise customers. As a CIO of one of the world’s largest insurance companies said about our pilot, this is the first tangible return he’s seen on his AI investments. While it’s still early days and generating meaningful revenue from this program will take time, we are excited about the results of our pilots and the growing pipeline of customers eager to modernize their legacy estate…

…Since day one, since our IPO, we’ve been getting customers to migrate off relational to MongoDB. But one of the biggest friction points has been that while it’s easy to move the data, you can map the schema from a relational schema to a document schema and you can automate that, the biggest stumbling block is that the customer has to or some third party has to rewrite the application, which, by definition, creates more costs, more time and in some cases, more risk especially for older apps, where the development teams who built those apps no longer exist. So what’s been compelling about AI is that AI has finally created a shortcut to overcome that big hurdle. And so essentially, you can start basically diagnosing the code, understand the code, recreate a modern version of that code and generate test suites to make sure the new code performs like the old code. So that definitely gets people’s interest because now, all of a sudden, what may take years or multiyears, you can do in a lot less time. And the pilots that we have done, the time and cost savings have been very, very compelling.

That being said, we’re in the very early days. There’s a lot of interest. We have a growing pipeline of customers across, frankly, all parts of the world from North America to EMEA and even the Pac Rim. And so we’re quite excited about the opportunity. But again, I would say it’s very early days.

Delivery Hero, a leading local delivery platform, is using MongoDB Atlas Vector Search to provide AI-powered hyperpersonalised results to users; Delivery Hero found that MongoDB Atlas Vector Search helped it build solutions for less cost than alternative technologies

Delivery Hero, a long-time MongoDB Atlas customer is the world’s leading local delivery platform, operating in 70-plus countries across 4 continents. Their quick commerce service enables customers to select fresh produce for delivery from local grocery stores. Approximately 10% of the inventory is fast-moving perishable produce that can go quickly out of stock. The company risks losing revenue and increasing customer churn if the customer didn’t have viable alternatives to their first choice. To address these risks, they are now using state-of-the-art AI models and MongoDB Atlas Vector Search to give hyperpersonalized alternatives to customers in real time if items they want to order are out of stock. With the introduction of MongoDB Atlas Vector Search, the data science team recognized that they could build a highly performant, real-time solution more quickly and for less cost than alternative technologies. 

MongoDB’s management believes that general-purpose LLMs (large language models) will win and will use RAG (retrieval augmented generation) as the primary way to combine generally available data with proprietary data; management is seeing advanced RAG use-cases in answering complex questions

There are some questions about LLMs, whether a general-purpose LLM or a fine-tune LLM, what the trade-offs are. Our belief is that given the performance of LLMs, you’re going to see the general purpose LLMs probably win and will use RAG as the predominant approach to marry generally available data with proprietary data. And then you are starting to see things like advanced RAG use cases where you get much more sophisticated ways to ask complex questions, provide more accurate and detailed answers and better adapt to different types of information and queries.
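
To make the RAG pattern concrete, here’s a minimal sketch of the retrieval step using the $vectorSearch aggregation stage in MongoDB Atlas Vector Search. The database, collection, index, and field names are hypothetical, and the embedding of the user’s question is assumed to come from whatever embedding model the developer has chosen; this illustrates the pattern and is not MongoDB’s official example.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
collection = client["rag_db"]["articles"]  # hypothetical database and collection

def retrieve_context(question_vector, k=5):
    """Return the k stored passages whose embeddings are closest to the query vector."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "embedding_index",      # hypothetical Atlas Vector Search index
                "path": "embedding",             # field holding each passage's vector
                "queryVector": question_vector,  # embedding of the user's question
                "numCandidates": 100,            # candidates considered before final ranking
                "limit": k,
            }
        },
        {"$project": {"text": 1, "_id": 0}},     # keep only the passage text for the prompt
    ]
    return list(collection.aggregate(pipeline))

# The retrieved passages would then be stitched into the LLM prompt alongside the
# user's question: the "marrying" of proprietary and generally available data above.
```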

MongoDB’s management is seeing most AI workloads happen in the cloud, but they also see a lot of customers using open-source LLMs and running those workloads locally

We predominantly see most of the AI workloads in the cloud, but there are definitely lots of customers who are looking at using open source LLMs, in particular, things like Llama, and running those workloads locally.

MongoDB’s management believes MongoDB wins against Postgres for AI workloads because MongoDB can handle complex data types whereas Postgres, which is a relational – or SQL – database, struggles

MongoDB is designed to handle these different data structures. And I talked about we can help unify metadata, operational data, vector data and generate it all in one platform. Relational databases, and Postgres is one of them, have limitations in terms of what they can — how they can handle different types of data. In fact, when the data gets too large, these relational databases have to do what’s called off-row storage. And it becomes — it creates a performance overhead on these relational platforms. Postgres has this thing called TOAST, which is — stands for The Oversized-Attribute Storage Technique. And it’s basically a way to handle these different data types, but it creates a massive performance overhead. So we believe that we are architecturally far better for these more complex AI workloads than relational databases.
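
As a small, hypothetical illustration of the point being made: in the document model, live operational fields, nested conversation history, metadata, a vector embedding, and AI-generated output can sit side by side in one document, so an AI feature can read and write all of them in a single round trip instead of joining across tables or spilling large values into off-row storage. The schema below is my own sketch, not from MongoDB’s documentation.

```python
# Hypothetical schema, for illustration only; not from MongoDB's docs.
support_ticket = {
    "customer_id": "C-1042",
    "status": "open",                      # live operational data
    "messages": [                          # nested, variable-length structure
        {"from": "customer", "text": "My order never arrived."},
        {"from": "agent", "text": "Sorry to hear that, checking now."},
    ],
    "metadata": {"channel": "chat", "language": "en"},
    "embedding": [0.12, -0.48, 0.07],      # vector for semantic search (truncated)
    "ai_summary": "Customer reports a missing order; refund pending.",  # generated data
}
```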

MongoDB’s management is seeing growing adoption of Vector Search, which is helping attract new customers to MongoDB; an existing Atlas customer, a financial news organisation, migrated from Elasticsearch to Atlas Search in order to use MongoDB’s Vector Search capabilities; a European energy company is using Vector Search for a geospatial search application

On Vector, we’re continuing to see growth in adoption, and we see Vector is effective in attracting new customers to the MongoDB platform. A world-renowned financial news organization, which is already running in Atlas, migrated from Elasticsearch to Atlas Search using Search Nodes to take advantage of our Vector Search capabilities to build a site search that combines lexical search with semantic search to find the most relevant articles for a user query. And a European energy company built a geospatial search application using Atlas Search and Vector Search; the app was built on-prem and extended to the cloud to vectorize geospatial data and facilitate research and discovery.

MongoDB’s management is seeing MongoDB’s customers improve their software development productivity with the help of AI, but the rate of improvement is all over the place

[Question] We’ve talked before in the past that AI is just driving a lot of new code, making developers significantly more productive. Have you seen that behavior in any of your existing customers on Atlas where maybe their utilization rate goes up or the number of applications built per customer goes up?

[Answer] A common question I ask our customers when I meet with them in terms of what code generation tools that they’re using and what benefits they’re gaining. The answers tend to be a little bit all over the map. Some people see 10%, 15% productivity improvement. Some people say 20%, 25% productivity improvement. Some people say it helps my senior developers be more productive. Some people say it helps my junior developers become more like senior developers. So the answers tend to be all over the map.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue had incredibly strong growth in 2024 Q2, driven by demand for the Hopper GPU computing platform; compute revenue was up by 2.5x while networking revenue was up by 2x

Data Center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year-on-year, driven by strong demand for NVIDIA Hopper, GPU computing and our networking platforms. Compute revenue grew more than 2.5x. Networking revenue grew more than 2x from the last year.

Even as Nvidia is getting ready to launch its Blackwell-architecture GPUs, customers are still buying the Hopper-architecture GPUs; the H200 platform, based on the Hopper architecture, started ramping in 2024 Q2 and offers 40% more memory bandwidth than the H100; management thinks that the reasons why the Hopper-architecture chips still enjoy strong demand despite the imminent arrival of the Blackwell-architecture chips are (1) AI companies need chips today to process data right now, and (2) AI companies are racing to be the first to reach the next plateau of model capability

Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell…

…NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100…

…The demand for Hopper is really strong. And it’s true, the demand for Blackwell is incredible. There’s a couple of reasons for that. The first reason is, if you just look at the world’s cloud service providers and the amount of GPU capacity they have available, it’s basically none…

…A generative AI company spends the vast majority of their invested capital into infrastructure so that they could use an AI to help them create products. And so these companies need it now. They just simply can’t afford — you just raise money, they want you to put it to use now. You have processing that you have to do. You can’t do it next year. You got to do it today. And so that’s one reason. The second reason for Hopper demand right now is because of the race to the next plateau. The first person to the next plateau gets to introduce some revolutionary level of AI. The second person who gets there is incrementally better or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership…

…We believe our Hopper will continue to grow into the second half. We have many new products for Hopper or existing products for Hopper that we believe will start continuing to ramp in the next quarters, including our Q3 and those new products moving to Q4. So let’s say Hopper in H2, versus H1, is a growth opportunity. 

Nvidia’s management thinks that the next generation of AI models will need 10-20 times more compute to train

Next-generation models will require 10 to 20x more compute to train with significantly more data. The trend is expected to continue.

Nvidia’s management estimates that inference accounted for more than 40% of Data Center revenue over the last 4 quarters (the estimate was also about 40% as of 2024 Q1)

Over the trailing 4 quarters, we estimate that inference drove more than 40% of our Data Center revenue.

Nvidia’s management is seeing demand coming from builders of frontier AI models, consumer Internet companies, and companies building generative AI applications for a wide range of use cases

Demand for NVIDIA is coming from frontier model makers, consumer Internet services, and tens of thousands of companies and start-ups building generative AI applications for consumers, advertising, education, enterprise and health care, and robotics. 

Nvidia’s Data Center revenue in China grew sequentially in 2024 Q2, but still remains below the level seen prior to export controls; management expects tough competition in China

Our Data Center revenue in China grew sequentially in Q2 and was a significant contributor to our Data Center revenue. As a percentage of total Data Center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward.

Nvidia has leadership in inference

The latest round of MLPerf inference benchmarks highlighted NVIDIA’s inference leadership, with both NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tests.

Nvidia’s Blackwell family of chips combines GPUs, CPUs, DPUs (data processing units), NVLink, and networking; the GB200 NVL72 system in the Blackwell family links up 72 GPUs to act as 1 GPU and is up to 30 times faster for LLM (large language model) inference workloads; Nvidia has made a change to the Blackwell architecture to improve production yields; Blackwell’s production is expected to ramp in the fourth quarter of 2024; management sees demand for Blackwell exceeding supply by a wide margin into 2025; there are more than 100 different Blackwell architecture systems; Nvidia’s Blackwell systems come in both air-cooled and liquid-cooled flavours; management expects Nvidia’s Data Center business to grow significantly in 2025 and 2026, powered by the Blackwell system; management sees Blackwell as a step-function improvement over Hopper that delivers 3-5 times more AI throughput than Hopper; Blackwell required 7 one-of-a-kind chips to build; Nvidia designed and optimised the Blackwell system end-to-end

The NVIDIA GB200 NVL72 system with the fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30x faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time…

…We executed a change to the Blackwell GPU mask to improve production yields. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year ’26. In Q4, we expect to get several billion dollars in Blackwell revenue…

Demand for Blackwell platforms is well above supply, and we expect this to continue into next year…

…There are something like 100 different types of Blackwell-based systems that are built that were shown at Computex, and we’re enabling our ecosystem to start sampling those…

…We offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta. I think it was Volta. And so we’ve been shipping the HGX form factor for some time. It is air cooled. The Grace Blackwell is liquid cooled…

…We expect to grow our Data Center business quite significantly next year. Blackwell is going to be a complete game changer for the industry. And Blackwell is going to carry into the following year…

…Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just the GPU. It also happens to be the name of our GPU, but it’s an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell’s lead becomes clear. The Blackwell vision took nearly 5 years and 7 one-of-a-kind chips to realize: the Grace CPU, the Blackwell dual GPU in a CoWoS package, ConnectX DPU for East-West traffic, BlueField DPU for North-South and storage traffic, NVLink switch for all-to-all GPU communications, and Quantum and Spectrum-X for both InfiniBand and Ethernet, which can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full stack, end-to-end, from chips, systems, networking, even structured cables, power and cooling, and mountains of software to make it fast for customers to build AI factories. These are very capital-intensive infrastructures. Customers want to deploy it as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides 3 to 5x more AI throughput in a power-limited data center than Hopper…

…The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into 1 NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in 1 rack. Just to put that in perspective, that’s about 10x higher than Hopper.  
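
A quick back-of-envelope check of those figures (my own arithmetic, not NVIDIA’s): 259 TB/s of aggregate NVLink bandwidth shared across 144 GPUs works out to roughly 1.8 TB/s per GPU.

```python
aggregate_nvlink_tb_s = 259  # quoted aggregate NVLink bandwidth in one rack
gpus_per_domain = 144        # quoted GPU count in one NVLink domain
print(f"{aggregate_nvlink_tb_s / gpus_per_domain:.2f} TB/s per GPU")  # ~1.80
```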

Nvidia’s Ethernet for AI revenue doubled sequentially; management sees Nvidia’s ethernet product, Spectrum-X, enjoying wide support from the AI ecosystem; Spectrum-X performs 1.6 times better than traditional Ethernet; management plans to launch new Spectrum-X products every year and thinks that Spectrum-X will soon become a multi-billion dollar product

Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multibillion-dollar product line within a year.

Japan’s government is working with Nvidia to build an AI supercomputer; Nvidia’s management thinks sovereign AI revenue will reach low double-digit billions this year; management is seeing countries want to build their own generative AI that incorporates their own language, culture, and data

Japan’s National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year…

…It certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country.

Most of the Fortune 100 companies are working with Nvidia on AI projects

We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. 

Nvidia’s management is seeing a range of applications driving the company’s growth; these applications include (1) Amdocs’ smart agent which is reducing customer service costs by 30%, and (2) Wistron’s usage of Nvidia AI Ominiverse to reduce cycle times in its factories by 50%

A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots and agents to build new, monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company’s history. SAP is using NVIDIA to build Joule copilot. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, who serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%.

Every automobile company that is developing autonomous vehicle technology is working with Nvidia; management thinks that automotive will account for multi-billions in revenue for Nvidia; Nvidia won the Autonomous Grand Challenge at the recent Computer Vision and Pattern Recognition Conference

Every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multibillion dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute…

…At the Computer Vision and Pattern Recognition Conference, NVIDIA won the Autonomous Grand Challenge in the end-to-end driving at scale category, outperforming more than 400 entries worldwide. 

Nvidia’s management announced Nvidia AI Foundry – a platform for building custom AI models – in 2024 Q2; users of Nvidia AI Foundry are able to customise Meta’s Llama 3.1 foundation AI model; Nvidia AI Foundry is the first platform where users are able to customise an open-source, frontier-level foundation AI model; Accenture is already using Nvidia AI Foundry 

During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world’s enterprises with Meta’s Llama 3.1 collection of models… 

…Companies for the first time can leverage the capabilities of an open source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models for both its own use and to assist clients seeking to deploy generative AI applications.

Companies from many industries are using NIMs (Nvidia inference microservices) for deployment of generative AI; AT&T saw 70% cost savings and 8 times latency reduction with NIM; more than 150 partners are embedding NIMs across the AI ecosystem; Nvidia recently announced NIM Agent Blueprints, a catalog of reference AI applications; Nvidia is using NIMs to open the Nvidia Omniverse to new industries

NVIDIA NIMs accelerate and simplify model deployment. Companies across health care, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe’s, and Uber. AT&T realized 70% cost savings and 8x latency reduction after moving to NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. 

We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises…

…We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workloads, accelerating our ability to build highly accurate virtual worlds.

Nvidia’s AI Enterprise software platform is powering Nvidia’s software-related business to approach a $2 billion annual revenue run-rate by the end of this year; management thinks Nvidia AI Enterprise represents great value for customers by providing GPUs at a price of $4,500 per GPU per year; management thinks the TAM (total addressable market) for Nvidia’s AI software business can be significant

NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth…

…At $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. And for NVIDIA’s software TAM, it can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. 
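
For a sense of what “significant” could mean here, a back-of-envelope sizing helps. The $4,500-per-GPU-per-year price is from management’s comments; the installed-base scenarios below are purely illustrative assumptions, not Nvidia’s guidance:

```python
# Back-of-envelope sizing of the NVIDIA AI Enterprise software opportunity.
# License price per management; installed-base scenarios are assumptions.
PRICE_PER_GPU_PER_YEAR = 4_500  # USD, per management's comments

for installed_base in (1e6, 1e7, 3e7):  # "millions to tens of millions" of GPUs
    tam = installed_base * PRICE_PER_GPU_PER_YEAR
    print(f"{installed_base:,.0f} GPUs -> ${tam / 1e9:,.1f}B annual software TAM")

# 1,000,000 GPUs  -> $4.5B annual software TAM
# 10,000,000 GPUs -> $45.0B annual software TAM
# 30,000,000 GPUs -> $135.0B annual software TAM
```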

Computers that contain Nvidia’s RTX chip can deliver up to 1,300 AI TOPS (tera operations per second); there are more than 200 RTX AI laptop models from leading PC manufacturers; there is an installed base of 100 million RTX AI computers; a game called Mecha BREAK is the first game to use Nvidia ACE, a generative AI service for creating digital humans

Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptops designed by leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Nemotron-4 4B, optimized for on-device inference. 

Foxconn, the largest electronics manufacturer in the world, and Mercedes-Benz, the well-known auto manufacturer, are using Nvidia Omniverse to produce digital twins of their manufacturing plants

The world’s largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multiyear contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories.

Many robotics companies are using Nvidia’s AI robot software

Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids and mobile robots.

Nvidia’s management is seeing some customers save up to 90% in computing costs by transitioning from general-purpose computing (CPUs) to accelerated computing (GPUs)

We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example, scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed. And in fact, this week, there’s a blog that came out that talked about a whole bunch of new libraries that we offer. And that’s really the core of the first platform transition, going from general-purpose computing to accelerated computing. And it’s not unusual to see someone save 90% of their computing cost. And the reason for that is, of course, you just sped up an application 50x. You would expect the computing cost to decline quite significantly.
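
The arithmetic behind the 90% figure is worth making explicit. In the minimal sketch below, the 50x speedup comes from the quote, while the assumption that accelerated hardware costs 5x more per hour is mine, for illustration only:

```python
# Why a 50x speedup can translate into ~90% cost savings, even if the
# accelerated hardware costs several times more per hour to run.
# The speedup is from the quote; the 5x price premium is an assumption.
speedup = 50          # accelerated version runs 50x faster
price_premium = 5     # assume a GPU node costs 5x a CPU node per hour

cpu_cost = 1.0                              # normalized cost of the CPU run
gpu_cost = price_premium * (1.0 / speedup)  # 5x the rate, 1/50th the hours
savings = 1 - gpu_cost / cpu_cost
print(f"Cost savings: {savings:.0%}")       # Cost savings: 90%
```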

Nvidia’s management believes that generative AI is a new way to write software and is changing how every layer of computing is done

Generative AI, taking a step back about why it is that we went so deeply into it, is because it’s not just a feature, it’s not just a capability, it’s a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what are the expected answers, what are our previous observations, and then for it to figure out what the algorithm is, what’s the function. It learns a universal — AI is a bit of a universal function approximator and it learns the function. And so you could learn the function of almost anything, and anything that you have that’s predictable, anything that has structure, anything that you have previous examples of. And so now here we are with generative AI. It’s a fundamental new form of computer science. It’s affecting how every layer of computing is done from CPU to GPU, from human-engineered algorithms to machine-learned algorithms, and the type of applications you could now develop and produce is fundamentally remarkable.

Nvidia’s management thinks AI models are still seeing the benefits of scaling

There are several things that are happening in generative AI. So the first thing that’s happening is the frontier models are growing in quite substantial scale. And we’re still all seeing the benefits of scaling.

The amount of compute needed to train an AI model goes up much faster than the size of the model; management thinks the next generation of AI models could require 10-40 times more compute 

Whenever you double the size of a model, you also have to more than double the size of the data set to go train it. And so the amount of flops necessary in order to create that model goes up quadratically. And so it’s not unexpected to see that the next-generation models could take 10x, 20x, 40x more compute than last generation.
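
To see why the flops grow quadratically, a common rule of thumb from the scaling-law literature (not a formula Nvidia cites) is that training compute is roughly 6 x parameters x tokens. The sketch below uses an illustrative Llama-2-70B-sized baseline and shows how scaling parameters and tokens together by roughly 3x to 6x lands in the 10x-40x compute range management mentions:

```python
# Rule-of-thumb training-compute estimate: C ~= 6 * N * D FLOPs, where
# N = parameter count and D = training tokens. A widely used heuristic
# from the scaling-law literature, not a figure from Nvidia's call.
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

base = train_flops(n_params=70e9, n_tokens=2e12)  # a Llama-2-70B-ish run

# Scale parameters and tokens together by k: compute grows roughly as k^2.
for k in (2, 3.2, 6.3):
    nxt = train_flops(70e9 * k, 2e12 * k)
    print(f"model and data scaled {k}x -> {nxt / base:.0f}x the compute")
# 2x   -> 4x the compute
# 3.2x -> 10x the compute
# 6.3x -> 40x the compute
```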

Nvidia’s management is seeing more frontier model makers in 2024 than in 2023

Surprisingly, there are more frontier model makers than last year.

Nvidia’s management is seeing advertising-related computing needs shifting from being powered by CPUs to being powered by GPUs and generative AI

The largest systems, the largest computing systems in the world today, and you’ve heard me talk about this in the past, are recommender systems, and they’re now moving from CPUs to generative AI. So recommender systems, ad generation, custom ad generation targeting ads at very large scale and quite hyper-targeting, search and user-generated content, these are all very large-scale applications that have now evolved to generative AI.

Nvidia’s management is seeing generative AI startups generating tens of billions of revenue-opportunities for cloud computing providers

The number of generative AI start-ups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners.

Nvidia’s management is seeing that cloud computing providers have zero GPU capacity available because they are using it for internal workloads (such as accelerating data processing) and renting it out to model makers and other AI startups

If you just look at the world’s cloud service providers and the amount of GPU capacity they have available, it’s basically none. And the reason for that is because they’re either being deployed internally for accelerating their own workloads, data processing, for example…

…The second is, of course, the rentals. They’re renting capacity to model makers. They’re renting it to start-up companies. 

Nvidia’s management thinks Nvidia’s GPUs are the only AI GPUs that process and accelerate data; before the advent of generative AI, the number one use case of Nvidia’s GPUs was to accelerate data processing

NVIDIA’s GPUs are the only accelerators on the planet that process and accelerate data. SQL data, Pandas data, data science toolkits like Pandas, and the new one, Polars; these are the ones that are the most popular data processing platforms in the world, and aside from CPUs which, as I’ve mentioned before, are really running out of steam, NVIDIA’s accelerated computing is really the only way to get boosting performance out of that. And so the #1 use case long before generative AI came along is the migration of applications one after another to accelerated computing.

Nvidia’s management thinks that those who purchase Nvidia AI chips are getting immediate ROI (return on investment) for a few reasons: (1) GPUs are a better way to build data centers compared to CPUs because GPUs save money on data processing compared to CPUs, (2) cloud computing providers who rent out GPUs are able to rent out their GPUs the moment they are built up in the data center because there are many generative AI companies clamouring for the chips, and (3) generative AI improves a company’s own services, which delivers a fast ROI

The people who are investing in NVIDIA infrastructure are getting returns on it right away. It’s the best ROI infrastructure, computing infrastructure investment you can make today. And so one way to think through it, probably the easiest way to think through it is just to go back to first principles. You have $1 trillion worth of general-purpose computing infrastructure. And the question is, do you want to build more of that or not?

And for every $1 billion worth of CPU-based infrastructure that you stand up, you probably rent it for less than $1 billion. And so because it’s commoditized, there’s already $1 trillion on the ground. What’s the point of getting more? And so the people who are clamoring to get this infrastructure, one, when they build out Hopper-based infrastructure and soon, Blackwell-based infrastructure, they start saving money. That’s tremendous return on investment. And the reason why they start saving money is because data processing saves money, and data processing is probably just a giant part of it already. And so recommender systems save money, so on and so forth, okay? And so you start saving money.

The second thing is everything you stand up are going to get rented because so many companies are being founded to create generative AI. And so your capacity gets rented right away and the return on investment of that is really good.

And then the third reason is your own business. Do you want to either create the next frontier yourself or your own Internet services, benefit from a next-generation ad system or a next-generation recommender system or a next-generation search system? So for your own services, for your own stores, for your own user-generated content, social media platforms, for your own services, generative AI is also a fast ROI.

Nvidia’s management is seeing a significant number of data centers wanting liquid-cooled GPU systems because the use of liquid cooling enables 3-5 times more AI throughput compared to the past, resulting in cheaper TCO (total cost of ownership)

The number of data centers that want to go to liquid cooled is quite significant. And the reason for that is because we can, in a liquid-cooled data center, in any power-limited data center, whatever size of data center you choose, you could install and deploy anywhere from 3 to 5x the AI throughput compared to the past. And so liquid cooling is cheaper. Our TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand it to 72 Grace Blackwell packages, which has essentially 144 GPUs.

Nvidia does not do the full integration of its GPU systems into a data center because it is not the company’s area of expertise

Our customers hate that we do integration. The supply chain hates us doing integration. They want to do the integration. That’s their value-add. There’s a final design-in, if you will. It’s not quite as simple as shimmying into a data center, but the design fit-in is really complicated. And so the design fit-in, the installation, the bring-up, the repair-and-replace, that entire cycle is done all over the world. And we have a sprawling network of ODM and OEM partners that does this incredibly well.

Nvidia has released many new libraries for CUDA, across a wide variety of use cases, for AI software developers to work with

Accelerated computing starts with CUDA-X libraries. New libraries open new markets for NVIDIA. We released many new libraries, including CUDA-X Accelerated Polars, Pandas and Spark, the leading data science and data processing libraries; cuVS for vector databases, this is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction is now CUDA accelerated.
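
As a concrete example of what accelerated Pandas looks like in practice, NVIDIA’s RAPIDS cuDF library ships a drop-in accelerator mode for pandas. A minimal sketch, assuming a machine with an NVIDIA GPU and the cudf package installed; operations without a GPU implementation fall back to CPU pandas:

```python
# cuDF's pandas accelerator mode: activate it before importing pandas, and
# existing pandas code runs on the GPU where possible, falling back to
# CPU pandas otherwise. Requires RAPIDS cuDF and an NVIDIA GPU.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # now transparently GPU-accelerated

df = pd.DataFrame({"group": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
print(df.groupby("group")["value"].mean())  # runs on the GPU if available
```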

Nvidia now has 3 networking platforms for GPUs

We now have 3 networking platforms, NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA’s networking footprint is much bigger than before. 

Salesforce (NYSE: CRM)

Agentforce is a new architecture and product that management believes will be fundamental to Salesforce’s AI leadership in the next decade; Salesforce will be introducing Agentforce at its upcoming Dreamforce customer event; Agentforce is an autonomous AI and management will be getting every attendee at Dreamforce to turn on their own AI agents; Salesforce is already building agents for Workday, which will be Salesforce’s first Agentforce partner; Agentforce allows companies to build custom agents for sales, service, marketing, and commerce; management believes that within a year, most companies will be deploying autonomous AI agents at scale, and these agents will have a big positive impact on companies’ operations; Agentforce is currently management’s singular focus; many companies are already using Agentforce, including one of the world’s largest healthcare companies, which is resolving more than 90% of patient inquiries with Agentforce and thinks Agentforce is much better than any other competing AI platform; a very large media company is using Agentforce to resolve 90% of employee and consumer issues; management thinks Salesforce is the first company to deploy high-quality enterprise AI agents at scale; Agentforce is not a co-pilot, it is an autonomous agent that is accurate and can be deployed right out of the box; users of Agentforce can do advanced planning and reasoning with minimal input; management sees Agentforce as being a trusted colleague that will complement human users; management sees thousands of companies using Agentforce by January 2025; early trials of Agentforce have shown remarkable success

We’re going to talk about a whole different kind of sales force today, a different kind of architecture and a product that we didn’t even talk about on the last earnings call that is going to be fundamental to our future and a manifestation of our decade of AI leadership, which is Agentforce. Now in just a few weeks, we’re going to kick off Dreamforce, and I hope all of you are planning to be there, the largest AI event in the world with more than 45,000 trailblazers in San Francisco. And this year, Dreamforce is really becoming Agentforce…

…We’re going to show our new Agentforce agents and how we’ve reimagined enterprise software for this new world of autonomous AI. And every customer, I’m going to try to get every customer who comes to Dreamforce to turn agents on while they’re there…

This idea that you’re not just going to have sales agents and service agents, you probably read or heard about it, maybe you saw it on CNBC, we’re building the agents for Workday and we’re going to be building custom agents for so many of you as well with Agentforce, because it is a development platform as well as this incredible capability to radically extend your sales and service organizations.

So when you arrive at the Dreamforce campus, you’re going to see a big sign outside that says, humans with agents drive customer success together. And that’s because we now so strongly believe the future isn’t about having a sales force or a service force or a marketing force or a commerce force or an analytics force. The future is about also having an Agentforce. And while many customers today don’t yet have agent forces, but they do have sales forces or service forces, I assure you that within a year, we’re all going to have agent forces, and we’re going to have them at scale. And it’s going to radically extend our companies and it’s going to augment our employees, make us more productive. It’s going to turn us into these incredible margin and revenue machines. It’s going to be pretty awesome…

…with this Agentforce platform, we’re making it easy to build these powerful autonomous agents for sales, for service, for marketing, for commerce, automating the entire workflow on their own, embedding agents in the flow of work and getting our customers to the agent future first. And this is our primary goal of our company right now. This is my singular focus…

…We’re going to talk about the customers who have it, customers like OpenTable, Wiley, ADP, RBC and so many others who are deploying these agents and running them on top of our Data Cloud and our apps…

…At Dreamforce, you’re going to hear one of the very largest health care companies in the world. It’s got 20 million consumers here in the United States who is resolving more than 90% of all patient inquiries with Agentforce and they’re benchmarking us significantly higher than any other competing AI platform, and that’s based on some incredible AI breakthroughs that we have had at Salesforce…

…One of these very large media companies that we work with, one that a lot of you probably know, who have everything, every possible media asset, well, they’re just resolving 90% of all of their employee and consumer issues with Agentforce, pretty awesome. So there’s nothing more transformational than agents on the technology horizon that I can see and Salesforce is going to be the first company at scale to deploy enterprise agents and not just any enterprise agents, the highest quality, most accurate agents in the world…

…We’re seeing that breakthrough occur because with our new Agentforce platform, we’re going to make a quantum leap forward in AI, and that’s why I want you all at Dreamforce, because I want you to have your hands on this technology to really understand this. This is not co-pilots…

…These agents are autonomous. They’re able to act with accuracy. They’re able to come right out of the box. They’re able to go right out of the platform…

…These agents don’t require a conversational prompt to take action. You can do advanced planning, reasoning with minimal human input. And the example of this incredible health care company, you’re going to be able to say to the agent, “Hey, I want to look at my labs, I want to understand this. It looks like I need repeat labs. Can you reschedule those for me? It looks like I need to see my doctor, can you schedule that for me? I also want to get an MRI, I want to get this.” And the level of automation that we’re going to be able to provide and unleash the productivity back into these organizations is awesome…

…This is going to be like having these trusted colleagues who can handle these time-consuming tasks, engaging with these — whether it’s an inbound lead or resolving this customer or patient inquiry or whatever it is, this is humans with agents driving customer success together, and Agentforce agents can be set up in minutes, easily scalable, work around the clock, in any language. And by the beginning of next fiscal year, we will have thousands of customers using this platform. And we will have hand-held them to make it successful for them, to deploy it. The early trials have been remarkable to see these customers have the success, it has been just awesome…

…We’re just at the beginning of building an Agentforce ecosystem with companies able to build agents on our platform for their workforce and use cases, and we’re excited to have Workday as our first Agentforce partner.

Salesforce has been able to significantly reduce hallucinations with its AI products, and thus deliver highly accurate results, through the use of new retrieval augmented generation (RAG) techniques

The accuracy of our results, the reduction of hallucinations and the level of capability of AI is unlike anything I think that any of us have ever seen, and we’ve got some incredible new techniques, especially incredible new augmented RAG techniques that are delivering us the capability to deliver this accuracy for our customers.
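
Salesforce has not disclosed the specifics of its techniques, but the basic shape of retrieval augmented generation is simple: before the model answers, retrieve relevant passages from a trusted corpus and ground the prompt in them, so the model quotes data instead of inventing it. Below is a minimal structural sketch; `embed` and `generate` are hypothetical stand-ins for a real embedding model and LLM:

```python
# Minimal retrieval augmented generation (RAG) loop. `embed` and `generate`
# are placeholders for a real embedding model and LLM; retrieval here is
# plain cosine similarity over a small in-memory corpus.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def answer(query: str, corpus: list[str]) -> str:
    """Ground the LLM's reply in retrieved passages to curb hallucination."""
    context = "\n".join(retrieve(query, corpus))
    prompt = (f"Answer using ONLY the context below.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return generate(prompt)  # stand-in for an LLM call

corpus = ["Case #123 was resolved by resetting the router.",
          "Our refund policy allows returns within 30 days."]
# answer("How do I get a refund?", corpus) would ground the reply in the
# retrieved policy text rather than in whatever the model's weights recall.
```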

Salesforce’s management still sees the company as the No.1 AI CRM in the world

Of course, Salesforce is the #1 AI CRM.

 In 2024 Q2, Einstein delivered 25 trillion transactions and 1 trillion workflows; Wyndham is using Einstein to reduce average call times to free up time for service agents for higher-value work

We’re just operating at this incredible scale, delivering 25 trillion Einstein transactions across all of the clouds during the quarter, that’s 25 trillion and more than 1 trillion workflows…

…MuleSoft allows Wyndham to unlock business-critical data from various platforms and onboard future franchisees faster. And with Einstein generated recommended service replies, average call times have been reduced and service agents can focus on higher priority work

Salesforce’s management thinks many of the company’s customers have a misconception about AI, in that they believe they need to build and train their own AI models; Salesforce’s AI models can resolve issues much better than its customers’ own models; management thinks Salesforce’s AI models have the highest efficacy

I think that there’s a lot of misconceptions about AI with my customers. I have been out there very disappointed with the huge amount of money that so many of these customers have wasted on AI. They are trying to DIY their AI…

…This idea that our customers are going to have to build their own models, train their own models, retrain their own models, retrain them again, and I’m meeting with these customers, and they’re so excited when they say, “Oh, I built this model, and we’re resolving 10%, 20%, 30%, 40% of this or that and whatever.” And I’m like, really, we’ll take a look at our models and our capability where you don’t have to train or retrain anything and you’re going to get more than 90%. And then they say, wait a minute, how do you do that? And this is a moment where every customer needs to realize you don’t need to DIY your AI. You can use a platform like Salesforce to get the highest efficacy of artificial intelligence, the best capability to fully automate your company, achieve all of your goals and you can do it with professional enterprise software…

…I met with one of the largest CIOs in the world, who was telling me how excited he was for the B2C part of this business, that he built this model, and the accuracy rates, and I was like, really, let me show you what we’re doing here. And then he said to me, why am I doing this? Why am I not just using your platform? And I said good question. So these customers are spending millions of dollars, but are they really getting the results that they want? It feels like the early days of cloud. It’s just like the early days of social and mobile. Customers feel like they have to DIY it, that they have to make it all happen themselves; they don’t need to. And you can just see that to deliver this very high-quality capability they can use a deeply integrated platform like Salesforce.

Salesforce’s management is seeing the company’s customers get immediate ROI (return on investment) from deploying AI automation solutions because the solutions can be easily and quickly customised and configured

We’ve created out-of-the-box platform to deliver all of this for them. So this could be service reply recommendations, account summaries, report generation, you’ve seen in Slack, this kind of auto summarization, recaps, all of these amazing things, the level of automation, the amount of code that our team has written, the transformation of our platform in the last 18 months, it’s remarkable. And customers love it because they can take the platform and then all of this generative AI use case customize it for their own needs or configure it using our capability because they’re doing that without writing a line of code. It’s clicks, not code, deploy them in days, not weeks. They’re doing this in months, not years, and you’re getting immediate ROI. 

Salesforce’s management thinks many of the company’s customers are really disappointed with Microsoft Co-pilot because of a lack of accuracy

So many customers are so disappointed in what they bought from Microsoft CoPilots because they’re not getting the accuracy and the response that they want. Microsoft is disappointed so many customers with AI. 

Wiley is achieving a double-digit percentage increase in customer satisfaction and deflection rate, and a 50% increase in case resolution, with the first generation of Agentforce; Royal Bank of Canada and ADP are seeing 90% case resolution rates with the second generation of Agentforce; OpenTable is using Agentforce to support its interactions with 60,000 restaurants and 160 million diners

Wiley is a long-standing Salesforce customer. It’s one of our first deployments in the first Agentforce trial. It’s pretty awesome. And you all know they make textbooks and it’s back-to-school. But maybe you don’t know that Wiley has to surge their sales and service organization at back-to-school time when everyone’s buying these textbooks. Well, now they can use agents to do that surge. They don’t have to go buy a bunch of gig workers and bring them in, and that agent capacity is so exciting for them. What we saw with Wiley was, this is a quote from them, “we’re seeing double-digit percentage increase in customer satisfaction and deflection rate compared to older technologies and in these early weeks of our busiest season.” So that was very reassuring to us that we have the right thing that’s happening. And Wiley has already seen a 50% increase in case resolution. That’s with our first generation of Agentforce.

As I mentioned, the second generation of Agentforce, which we have with customers already, including some of these amazing organizations like Royal Bank of Canada, ADP and others is 90% case resolution. It is an awesome moment in this tech business.

OpenTable is another super great story. You all know they are managing 60,000 restaurants, with 160 million diners to support. They’re on Agentforce now. They require that incredible scale to deliver top-notch customer service. That’s why they’re using the product. It’s been awesome to get their results, and it can be all kinds of questions resolving basic issues, account activations, reservation management, loyalty point expiration. Agentforce for Service can easily answer all of these questions, like “when do my points expire” from a diner, as well as a follow-up question like, what about in Mexico? What about — can I make this change? That’s where we’re delivering those incredible moments for OpenTable, giving them this kind of productivity enhancement. 

Agentforce is driving growth in cloud products’ sales for Salesforce

Agentforce for sales, you can imagine extending your sales force with SDRs, BDRs who are agents that are going out and building pipeline for you and generating all kind of demand and even really closing deals. So, this is going to drive sales cloud growth. It already is, service cloud growth. It already is because customers are going to extend their sales and service organizations and become a lot more productive with these agents.

Salesforce will be releasing industry-specific AI agents in the coming months

In the coming months, we’re going to release Agentforce agents for other roles, including industry-specific agents, health agents, as I mentioned. 

Data Cloud provides the foundation for Agentforce because it holds a huge amount of data and metadata; management continues to believe that data is the foundation of AI; Data Cloud federates and connects to all other data clouds of a user to deliver super accurate AI; Data Cloud is Salesforce’s fastest-growing organic product and will be the fastest to hit $1 billion, $5 billion, and $10 billion in revenue; Data Cloud customers were up 130% year-on-year in 2024 Q2; the number of Data Cloud customers spending more than $1 million annually has doubled; Data Cloud processed 2.3 quadrillion records in 2024 Q2 (was 2 quadrillion in 2024 Q1); Data Cloud consumption was up 110% year-on-year in 2024 Q2; American Family Insurance is using Data Cloud to create a 360-degree view of customers; Adecco Group is using Data Cloud to create seamless access to information for 27,000 of its employees; Wyndham is using Data Cloud to unify profiles of 165 million guest records, many of which are duplicates across multiple sources

This type of performance from our Agentforce platform wouldn’t be possible without Data Cloud. One of the reasons that our agents are so accurate is because of the huge amount of data and metadata that we have. And data is the foundation for every AI transformation. And with Data Cloud, we’re providing a high-performance data lake that brings together all our customer and business data, federating data from external repositories through this incredible zero-copy alliance. So customers can use our Data Cloud and then federate and connect to all their other data clouds and then we can bring it all together to deliver the super accurate AI. 

That’s why Data Cloud is absolutely our fastest-growing organic product in history. It will be the fastest product to $1 billion — it’s probably going to be the fastest product to $5 billion, $10 billion. In Q2, the number of paid Data Cloud customers grew 130% year-over-year and the number of customers spending more than $1 million annually has already doubled. In the second quarter alone, and this is amazing, Data Cloud processed 2.3 quadrillion records with 110% platform consumption growth year-over-year…

…American Family Insurance with millions of policyholders nationwide is using Data Cloud to consolidate data from multiple sources through our zero-copy partner network, creating a 360-view of the customers, enabling quick segmentation and activating lead data, including their real-time web interactions. The Adecco Group expanded their data cloud in the quarter, a great example of a company leveraging its gold mine of data to gain a unified view of its customers. Connecting all this data means that 27,000 Adecco employees using Salesforce will have seamless access to key information, including financial metrics and job fulfillment status, to help Adecco improve their job fill rate ratio and reduce their cost to serve…

…Wyndham utilizes Data Cloud to unify profiles of 165 million guest records, many of which were duplicates across many sources like Amazon Redshift and the Sabre Reservation System as well as Sales Cloud, Marketing Cloud and Service Cloud. 

Salesforce has rewritten all of its software to be under one unified platform; management thinks building AI agents without a unified platform is risky; the decision to unite all of Salesforce’s software was made 18 months ago with the shift to AI

We’ve automated every customer touch point and now we’re bringing these apps, data and agents together. It’s these 3 levels, and this isn’t 3 separate pieces of code or 3 different platforms or 3 different systems. This is 1 platform. We’ve rewritten all of our acquisitions, all of our core modules, our Data Cloud and our agents as 1 unified platform, which is how we are delivering not only this incredible functionality but this high level of accuracy and capability. And from this first-hand experience in meeting with these customers around the globe, I can unequivocally tell you that building these agents without a complete integrated platform is like trying to assemble a plane mid-flight; it’s risky, chaotic and it’s not likely to succeed…

…With the shift to AI, it just became clear 18 months ago, we need to hit the accelerator pedal and rewrite all these things onto the core platform because customers are going to get this incredible value by having 1 integrated system, and it scales from small companies to extremely large companies. 

Bookings for Salesforce’s AI products more than doubled quarter-on-quarter in 2024 Q2; Salesforce signed 1,500 AI deals in 2024 Q2; aircraft maker Bombardier is using Salesforce’s AI products to arm sales reps with better information on, and recommendations for, prospects

We’re already accelerating this move from AI hype to AI reality for thousands of customers with amazing capabilities across our entire AI product portfolio. New bookings for these products more than doubled quarter-over-quarter. We signed 1,500 AI deals in Q2 alone. Some of the world’s largest brands are using AI solutions, including Alliant, Bombardier and CMA CGM. Bombardier, the maker of some of the world’s top performing aircraft, is enabling sales reps to sell smarter by consolidating need to know information on prospects in advance of meetings and providing recommendations on how to best engage with them through the Einstein copilot and prompt builder. 

Salesforce has a new team called Salesforce CTOs that will work alongside customers in deploying AI agents

To help our customers navigate this new world, we just launched a new team called Salesforce CTOs. These are deeply technical individuals who work alongside our customers to help them create and execute a plan for every stage of their AI journey to become agent first companies. 

Salesforce sees itself as customer zero for all its AI products, including Agentforce, and it is deploying its own AI products internally with success; 35,000 Salesforce employees are using Einstein as an AI assistant; Salesforce has already used Slack AI to create 500,000 channel summaries since February 2024, saving 3 million hours of work

We’re continuing our own AI journey internally as Customer Zero of all of our products with great results. We now have 35,000 employees using Einstein as a trusted AI assistant, helping them work smarter and close deals faster. And since we launched Slack AI in February, our employees have created more than 500,000 channel summaries, saving nearly 3 million hours of work. We’ll, of course, deploy Agentforce agents soon in a variety of different roles and tasks to augment, automate and deliver productivity and unmatched experiences for all employees and customers at scale.

Salesforce will be introducing Industry Toolkit at Dreamforce; Industry Toolkit contains more than 100 ready-to-use AI-powered actions; Industry Toolkit can be used with Agentforce 

At Dreamforce, we’re excited to share our new Industry Toolkit, which features more than 100 ready-to-use customizable AI-powered actions. All of these actions can be applied to build industry-specific agents with Agentforce.

Salesforce’s management wants to see 1 billion AI agents by FY2026; there are already 200 million agents identified in trials

I’ll just give you my own personal goals. So I’m not giving any guidance here. My goal is that by the end of fiscal year ’26 we will have 1 billion agents. Already, just looking at the number of consumers identified in the trials that we have going on, we have like 100 million identified or more. Okay, call it 200 million. But the funny thing is, of course, it’s only 1 agent. But let’s just think of it as a manifestation of all these agents talking to all these consumers.

Salesforce already has a long history of selling non-human consumption-based products; with AI agents, management sees pricing on a consumption basis or on a per conversation basis (at $2 per conversation); management thinks AI agents is a very high-margin opportunity

On pricing. When you think about — when you think about apps and you think about humans, because humans use apps, not in all cases. So for example, the Data Cloud is a consumption product. The Commerce Cloud is a consumption product. Of course, the e-mail product, Marketing Cloud is a consumption product. Heroku is a consumption product. So of course, we’ve had non-human consumption-based products for quite a long time at Salesforce…

…When we look at pricing, it will be on a consumption basis. And when we think about that, we think about saying to our customers, and we have, it’s about $2 per conversation. So, that is kind of how we think about it, that we’re going to have a lot of agents out there, even though it’s only 1 agent. It’s a very high margin opportunity, as you can imagine, and we’re going to be reaching — look, you have to think about these agents are like, this is the new website. This is your new phone number. This is how your customers are going to be connecting with you in this new way, and we’re going to be helping our customers to manage these conversations. And it’s probably a per conversational charge as a good way to look at it or we’re selling additional consumption credits like we do with our data cloud. 

Veeva Systems (NYSE: VEEV)

Veeva’s management is seeing the company’s customers appreciate the patience they have displayed in adopting AI; customers started using Veeva’s Vault Direct Data API for AI use cases in 2024 Q2; Vault Direct Data API provides data access 100 times faster than traditional APIs; management thinks that the advantage of providing API access for AI use cases is the possibility of partners developing use cases that management could not even foresee; customers have to pay a fee to turn on Vault Direct Data API and the fee is for covering Veeva’s compute costs; there’s no heavy implementation tasks needed for Vault Direct Data API

When groundbreaking technology like GenAI is first released, it takes time for things to settle and become clearer. That’s starting to happen now. Customers have appreciated our taking the long view on AI and our orientation to tangible value rather than hype. In Q2, our first early customers started using the Vault Direct Data API to power AI and other use cases. The ability to retrieve data 100 times faster than traditional APIs is a major software platform innovation and will be a big enabler of AI that uses data from Vault applications…

… When you make an API like the Direct Data API, you don’t know the innovation you’re unleashing. And that’s the whole point because the data can be consumed so fast and transactionally accurately, use cases that weren’t practical before can become practical. I mean if I step back way back when to designing the first salesforce.com API, I knew it was going to unleash a lot of innovation, and you just don’t know. It’s not predictable, and that’s the good thing…

…[Question] Looking at Vault Direct Data API, how seamless is it for customers to turn it on and start using it? Is it something that needs an implementation? 

[Answer] That is something that’s purchased by the customer, so that is something that is not free for the customers to use. They purchase it. The fee is not that large. It covers our compute cost, that type of thing… 

…After that, no, there’s no implementation. You turn it on, and it’s on. And that’s that.

Veeva’s AI Partner Program is progressing well and has seen 30 AI use cases being developed by 10 partners across Veeva Development Cloud and Veeva Commercial Cloud; the AI use cases in Veeva Commercial Cloud are mostly related to data science while the use cases in Veeva Development Cloud are mostly related to generation of documents and reports; management does not want to compete with the partners that are in the AI Partner Program 

Our AI Partner Program is also progressing well. We now have more than 10 AI partners supporting roughly 30 use cases across R&D and Commercial. We also continue to explore additional AI application opportunities beyond our current AI solutions…

… [Question] You talked about some of the early traction you’re seeing with the AI Partner Program. Can you maybe talk about what are some of the use cases you’ve seen so far?

[Answer] The types of use cases in commercial often have to do with data science. So things like next best action, dynamic targeting, pre-call planning, things like that. And then in R&D, they can be more things like document generation, generate a clinical study report or doing specific medical coding, things like that. So those are the type of use cases…

…In terms of us monitoring that and informing our own road map, I guess there may be some of that. But mostly, that type of innovation really comes from internally our own thinking with our customers. We don’t want to really disrupt our partners, especially when the partners are having customer success. If there’s a major use case that we’re very clear that customers need and for some reason, the ecosystem is not delivering customer success, yes, maybe we might step in there. But I would guess that what we would do would be more holistic, I guess, in some sense and not specifically something a partner would tackle because we’re generally going to have more resources and more ability to sway our own road map than a partner would, and we want to be respectful to the ecosystem.

Zoom Video Communications (NASDAQ: ZM)

Zoom’s management is seeing customers seeking out the AI capabilities of Zoom’s Contact Center packages; Zoom’s management saw the ASP (average selling price) for its Contact Center product double sequentially because of the product’s AI-tier, which comes with higher pricing

We are seeing increased adoption of our advanced Contact Center packages, as customers seek to utilize our AI capabilities to enhance agent performance…

…If you remember, we started with one pricing tier. We eventually added two more, and the AI agent, like Eric was speaking about earlier, is in the highest tier. We actually saw our ASPs for Contact Center almost double quarter-over-quarter because it’s such a premium feature. And when I look at the Q2 deals, the majority of them were purchasing in one of the top 2 tiers, so all of that is contributing to what I would say is not only expansion in terms of seat count but expansion in terms of value being derived from the product.

Zoom’s AI Companion uses generative AI to produce meeting summaries, live translations, image generation and more; Zoom AI Companion is now enabled on over 1.2 million accounts; management will be upgrading AI Companion as Zoom transitions into the 2.0 phase of AI-enabled work; customers really like Zoom AI Companion; Zoom AI Companion is provided at no additional cost; in Zoom meetings, the No.1 use case of Zoom AI Companion is to create meeting summaries; management is constantly improving the quality of Zoom AI Companion’s meeting summaries; customers are giving positive feedback on Zoom AI Companion

Today, AI Companion enhances an employee’s capabilities using generative AI to boost productivity through features like meeting summary, chat compose, image generation, live translation and enhanced features in Contact Center. As these features have grown in popularity, we are happy to share that Zoom AI Companion is now enabled on over 1.2 million accounts…

…Our progress broadening Zoom Workplace, building out enhanced AI tools for Contact Center and amassing a large base of AI users sets us up well to transition into the 2.0 phase of AI-enabled work. In this phase, Zoom AI Companion will move beyond enhancing skills to simplifying your workday, providing contextual insights, and performing tasks on your behalf. It will do this by operating across our collaboration platform to ensure your day is interconnected and productive…

…Our customers really like Zoom AI Companion. First of all, it works so well. Secondly, at no additional cost, not like some of other vendors who got to charge the customer a lot. And in our case, this is a part of our package…

… You take a Meeting, for example, right? For sure, the #1 use case is the meeting summary, right? And we keep improving that quality, like in the [indiscernible], and our meeting summaries are getting better and better. Like in July, we had another upgrade quarter-wise, even better than previous deliveries, right?..

… [Question] One question I had is when you’re looking at Zoom AI Companion, we’ve heard a lot of great things in the field if customers kind of comparing that to other products that are offered out there. Can you kind of remind us about how you guys think about tracking success with the product internally, given that you don’t kind of charge for it directly beyond having millions of people using it?

[Answer] The metrics that we’ve been talking about on here is account activation. So looking at how many — it’s not individual users, it’s actual customer accounts that have activated it… And also they share the stories like how Zoom AI Companion like is very accurate summary, action items are helping their employees’ productivity as well. And yes, a lot of very positive feedback about adopting Zoom AI Companion.

Zoom’s management has intention to monetise AI services for the Contact Center product, but not for Zoom Workplace

[Question] Now that you’re seeing more adoption, Kelly, of Zoom Companion, how do you think about the cost of providing these generative AI features and capabilities? And do you think Zoom could eventually charge on a usage basis for power users of the generally just trying to weigh cost versus revenue opportunities here?

[Answer] I mean, when we launched AI Companion, right? So we do not want to charge the customer. However, that’s for the Workplace. For the business services like a Contact Center, all those new offerings, I think for sure, we are going to monetize. As I mentioned in the previous earnings calls, new solutions or the business services’ AI, I think we are going to charge for their AI Companion, right? But the Workplace and our core UC offering and collaboration offering, we do not want to charge. I want to see — I really appreciate our AI team’s great effort, right? And focus on the quality, focus on the cost reduction and so on and so forth.

AI services are a drag on Zoom’s margins at the moment (as the company is providing a lot of AI services for free now) but management sees them as important investments for growth

[Question] Just on gross margins, like the impact of generative AI and maybe what you can do to alleviate some of that off there.

[Answer] I mean, we’re guiding to 79% for this year, which reflects the prioritization of AI, but also the very strong discipline that we continue to apply. And we are holding to our long-term target for gross margins of 80%. But of course, we think at this point in time, it’s very important to prioritize these investments as they really set us up for future growth.

Zoom’s dev ops team is saving costs for the company to make room for more AI investments

I also want to give credit to our dev ops team. On the one hand, for sure, we are going to buy more and more GPUs, right? And also leverage that. Our team tries to save money from other areas, fully automated, and so on and so forth, right? So that’s another way for us to save cost, right, to make some room for AI.

The regulatory environment for AI in the US and Europe has so far had very little impact on Zoom’s business because Zoom’s management has been adamant and clear that it is not going to use customer’s data to train its AI models

[Question] Are you seeing anything in the broad sweep of AI regulation in the U.S. or Europe that you think can dampen innovation?

[Answer] That’s the reason why, when we launched AI Companion, we already mentioned, we are not going to use any of our customer data to train our AI models, right? And we take customers’ data very, very seriously, right? And as a customer, they know that they trust our brand and trust what we’re doing. And so far, I do not see any impact in terms of regulation. And again, AI is moving rapidly, right? So almost the EMEA here and we all look at the potential regulation. But so far, the impact to us, to our business, I think it’s extremely limited. So like meeting summary, it’s a very important feature, customers like that. And we do not use our customer data to train our AI model. So why not keep using the feature? I think there’s no impact so far.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, MongoDB, Salesforce, Veeva Systems, and Zoom Video Communications. Holdings are subject to change at any time.