More Of The Latest Thoughts From American Technology Companies On AI (2024 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

Earlier this month, I published the two-part article, The Latest Thoughts From American Technology Companies On AI (2024 Q4) (see here and here). In it, I shared commentary from the leaders of technology companies that I follow or have a vested interest in, given during their earnings conference calls for the fourth quarter of 2024, on the topic of AI and how the technology could impact their industries and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2024’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share, so I have collected them here. This is an ongoing series; the older commentary can be found in the earlier articles linked above.

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management will be offering new Firefly web app subscriptions that will support both Adobe’s Firefly AI models and 3rd-party models; management envisions the Firefly app as the umbrella destination for ideation; management recently introduced Adobe’s new Firefly video model into the Firefly app offering; management will be introducing Creative Cloud offerings with Firefly tiering; the Firefly video model has been very well-received by brands and creative professionals; users of the Firefly video model can generate video clips from a text prompt or image; the Firefly web app allows users to generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages; the Firefly web app subscription plans include Firefly Standard, Firefly Pro, and Firefly Premium; more than 90% of paid users of the Firefly web app have been generating videos; Firefly has powered 20 billion generations (16 billion in 2024 Q3) since its launch in March 2023, and is now doing more than 1 billion generations a month; management thinks the commercially-safe aspect of Firefly models is very important to users; management thinks the high level of creative control users get with Firefly models is very important to them; the adoption rates of the Firefly paid plans signal to management that Firefly is adding value to creative professionals

In addition to Creative Cloud, we will offer new Firefly web app subscriptions that integrate and are an on-ramp for our web and mobile products. While Adobe’s commercially safe Firefly models will be integral to this offering, we will support additional third-party models to be part of this creative process. The Firefly app will be the umbrella destination for new creative categories like ideation. We recently introduced and incorporated our new Firefly video model into this offering, adding to the already supported image, vector and design models. In addition to monetizing stand-alone subscriptions for Firefly, we will introduce multiple Creative Cloud offerings that include Firefly tiering…

…The release of the Adobe Firefly Video model in February, a commercially-safe generative AI video model, has been very positively received by brands and creative professionals who have already started using it to create production-ready content. Users can generate video clips from a text prompt or image, use camera angles to control shots, create distinct scenes with 3D sketches, craft atmospheric elements and develop custom motion design elements. We’re thrilled to see creative professionals and enterprises and agencies, including Dentsu, PepsiCo and Stagwell, finding success with the video model…

…In addition to generating images, videos and designs from text, the app lets you generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages. We also launched 2 new plans as part of this release, Firefly Standard and Firefly Pro and began the rollout of our third plan, Firefly Premium, yesterday. User engagement has been strong with over 90% of paid users generating videos…

…Users have generated over 20 billion assets with Firefly…

…We’re doing more than 1 billion generations now a month, and 90% of people using the Firefly app are generating video as well as part of that…

…For Firefly, we have imaging, vector, design, video, and voice, with video and voice coming out just a couple of weeks ago, off to a good start. I know there have been some questions about how important the commercial safety of the models is. It’s very important. A lot of enterprises are turning to them for the quality, the breadth, but also the commercial safety and the creative control that we give them around being able to really match structure, style, set key frames for precise video generation, 3D to image, image to video…

…If we look at the early adoption rates of the Firefly paid plans, it really tells us both of these stories. We have a high degree of conviction that it’s adding value and being used by Creative Professionals…

Adobe’s management thinks that marketing professionals will need to create and deliver an unprecedented volume of personalised content, and that they will need custom, commercially safe AI models and AI agents to achieve this, which is where Adobe GenStudio and Firefly Services can play important roles; management is seeing customers turn to Firefly Services and Custom Models for scaling on-brand marketing content production; over 1,400 custom models have been created since the launch of Firefly Services and Custom Models; Adobe GenStudio for Performance Marketing has recently won leading brands as customers; Adobe GenStudio for Performance Marketing has partnerships with leading digital advertising companies

Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile applications, e-mail, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom, commercially safe models and agents tailored to address the inefficiencies of the content supply chain. With Adobe GenStudio and Firefly Services, Adobe is transforming how brands and their agency partners collaborate on marketing campaigns, unlocking new levels of creativity, personalization and efficiency. The combination of the Adobe Experience Platform and apps and Adobe GenStudio is the most comprehensive marketing platform to deliver on this vision…

…We had another great quarter in the enterprise with more customers turning to Firefly Services and Custom Models to scale on-brand content production for marketing use cases, including leading brands such as Deloitte Digital, IBM, IPG Health, Mattel and Tapestry. Tapestry, for example, has implemented a new and highly productive digital twin workflow using Custom Models and Firefly…

…Strong demand for Firefly Services and Custom Models as part of the GenStudio solution with over 1,400 custom models since launch.

GenStudio for Performance Marketing wins at leading brands including AT&T, Lennar, Lenovo, Lumen, Nebraska Furniture Mart, Red Hat, Thai Airways, and University of Phoenix.

Strong partnership momentum with GenStudio for Performance Marketing supporting ad creation and activation for Google, Meta, Microsoft Ads, Snap, and TikTok and several partners including Accenture, EY, IPG, Merkle and PWC offering vertical extension apps.

Adobe’s generative AI solutions are infused across the company’s products and management sees the generative AI solutions as a factor driving billions in annualised recurring revenue (ARR) for the company from customer acquisition to customer retention and upselling; Adobe has AI-first stand-alone and add-on products such as Acrobat AI Assistant, the Firefly App and Services, and GenStudio for Performance Marketing; the AI-first stand-alone and add-on products already accounted for $125 million in book of business for Adobe in 2024 Q4 (FY2025 Q1), and management expects this book of business to double by the end of FY2025; management thinks that the monetisation of Adobe’s AI services goes beyond the $125 million in book of business and also incorporates customers who subscribe to Adobe’s services and use the AI features

Our generative AI innovation is infused across the breadth of our products, and its impact is influencing billions of ARR across acquisition, retention and value expansion as customers benefit from these new capabilities. This strength is also reflected in our AI-first stand-alone and add-on products such as Acrobat AI Assistant, Firefly App and Services and GenStudio for Performance Marketing, which have already contributed greater than $125 million book of business exiting Q1 fiscal ’25. And we expect this AI book of business to double by the end of fiscal ’25…

…A significant amount of the AI monetization is also happening in terms of attracting people to our subscription, making sure they are retained and having them drive higher-value price SKUs. So when somebody buys Creative Cloud or when somebody buys Document Cloud, in effect, they are actually monetizing AI. But in addition to that, Brent, what we wanted to do was give you a flavor for the new stand-alone products that we have when we’ve talked about introducing Acrobat AI Assistant and rolling that out in different languages, Firefly, and making sure that we have a new subscription model associated with that on the web, Firefly Services for the enterprise and GenStudio. So the $125 million book of business that we talked about exiting Q1 only relates to that new book of business.

Adobe’s management is seeing every CMO (Chief Marketing Officer) being very interested in using generative AI in their content supply chain

Every CMO that we talk to, every agency that we work with, they’re all very interested in how generative AI can be used to transform how the content supply chain works.

Adobe’s management sees AI as bringing an even larger opportunity for Adobe

I am more excited about the larger opportunity without a doubt as a result of AI. And we’ve talked about this, Kash. If you don’t take advantage of AI, it’s a disruption. In our particular case, the intent is clearly to show how it’s a tailwind.

Adobe’s management is happy to support 3rd-party models within the Firefly web app or within other Adobe products so long as the models deliver value to users

We’ll support all of the creative third-party models that people want to support, whether it’s a custom model we create for them or whether it’s any other third-party model within Firefly as an app and within Photoshop, you’re going to see support for that as well. And so think of it as we are the way in which those models actually deliver value to a user. And so it’s actually just like we did with Photoshop plug-ins in the past, you’re going to see those models supported within our flagship applications.

Adobe’s management is seeing very strong attach rate and adoption of generative AI features in Adobe’s products with creative professionals

This cohort of Creative Professionals, we see very strong attach and adoption of the generative AI features we put in the product partially because they’re well integrated and very discoverable and because they just work and people get a lot of value out of that. So what you will see is you’ll start to see us integrating these new capabilities, these premium capabilities that are in the Firefly Standard, Pro and Premium plans more deeply into the creative workflow so more people have the opportunity to discover them.

Meituan (OTC: MPNGY)

Meituan’s autonomous vehicles and drones had fulfilled a cumulative 4.9 million and 1.45 million commercial orders, respectively, by end-2024; Meituan’s drones recently started operating in Dubai

By year end of 2024, the accumulated number of commercial orders fulfilled by our autonomous vehicles and drones has reached 4.9 million and 1.45 million, respectively. Our drone business also started commercial operation in Dubai recently.

Meituan’s management wants to expand Meituan’s investments in AI, and is fully committed to integrating AI into Meituan’s platform; management’s AI strategy for Meituan has 3 layers, which are (1) integrating AI into employees’ work, (2) infusing AI into Meituan’s products, and (3) building Meituan’s own large language model

We will actively embrace and expand investment in cutting-edge technologies, such as AI or unmanned aerial delivery or autonomous delivery service vehicles, and accelerate the application of these technologies. And we are committed to fully integrating AI into consumers’ daily lives and help people eat better, live better…

…Our AI strategy builds upon 3 layers. The first one is AI at work. We are integrating AI in our employees’ day-to-day work and our daily business operations and to significantly enhance the productivity and work experience for our over 400,000 employees. And then second layer is AI in products. So we will use AI to upgrade our existing products and services, both 2B and 2C. And we will also launch brand-new AI-native products to better serve our consumers, merchants, couriers and business partners…

…The third layer is building our own in-house large language model, and we plan to continue to invest and enhance our in-house large language model with increased CapEx.

Meituan’s management has developed Meituan’s in-house large language model named Longcat; management has rolled out Longcat alongside 3rd-party models to improve employees’ productivity; Longcat has been useful for AI coding, conducting smart meetings, short-form video generation, AI sales assistance, and more; Longcat has been used to develop an in-house AI customer service agent, which has driven a 20% improvement in efficiency and a 7.5-percentage-point improvement in customer satisfaction; the AI sales assistant reduced the workload of Meituan’s business development (BD) team by 44% during the Spring Festival holidays; 27% of new code in Meituan is currently generated by its AI coding tools

On the first layer, AI at work, on the employee productivity front, we have developed our in-house large language model. It’s called Longcat. By putting Longcat side by side with external models, we have rolled out highly efficient tools for our employees, including AI coding, smart meeting and document assistants, and it’s also quite useful in graphic design, short-form video generation and AI sales assistance. These tools have substantially boosted employee productivity and working experience…

…We have developed an intelligent AI customer service agent using our in-house large language model. So after the pilot operation, the results show more than 20% enhanced efficiency. And moreover, the customer satisfaction rate has improved over 7.5 percentage points…

…During this year’s Spring Festival holidays, we gathered updated business information on the 1.2 million merchants on our platform with the AI sales assistant. It very effectively reduced the workload of our BD team by 44% and further enhanced the accuracy of the listed merchant information on our platform…

…Right now, in our company, about 27% of new code is generated by AI coding tools.

Meituan’s management is using AI to help merchants with online store design, information enhancement, and display and operation management; management is testing an AI assistant to improve the consumer experience in search and transactions; management will launch a brand-new advanced AI assistant later this year that will give everyone a free personal assistant; the upcoming advanced AI assistant will be able to satisfy a lot of consumer needs in the physical world because bringing AI to the physical world requires physical infrastructure, and Meituan has that

We use AI across multiple categories by providing various tools such as smart online store design and smart merchant information enhancement and display and operation management…

…On the consumer side, we have already started testing AI assistant in some categories to enhance customer — consumer experience for their search and transaction on our platform. And for example, we have rolled out a restaurant assistant and travel assistant — reservation assistant. They can chat with the users, either by text or voice, making things more convenient and easier to use for users. And right now, we are already working on a brand-new AI native product. We expect to launch this more advanced AI assistant later this year and to cover all Meituan services so that everyone can have a free personal assistant. So based on our rich off-line service offerings and efficient on-demand delivery network, I think we will be able to handle many personalized needs in local services. And whether it’s ordering food delivery or making a restaurant reservation or purchasing group deals or ordering groceries or planning trips or booking hotels, I think we have got it covered with a one-stop, and we are going to deliver it to you on time…

…Our AI assistant will not only offer consumer services in the digital world, not just a chatbot, but it’s going to be able to satisfy a lot of their needs in the physical world because in order to bring AI to the physical world, you need more than just very smart algorithms or models. You need infrastructure in the physical world, and that’s our advantage…

…We have built a big infrastructure in the physical world with digital connections. We believe that, that kind of infrastructure is going to be very valuable when we are moving to the era of physical AI.

Meituan’s management expects to incur a lot of capex to improve Meituan’s in-house large language model, Longcat; to develop Longcat, management made the procurement of GPUs a top priority in 2024, and expects to further scale GPU-related capital expenditure in 2025; Longcat has quite good evaluation results in China; Longcat’s share of API call volume has increased from 10% at the beginning of 2024 to 68% currently

On the algorithm model and compute side, it’s going to need a lot of CapEx and a very good foundation model. So in the past year, to ensure adequate supply of GPU resources has been a top priority for us. And even as we allocate meaningful resources in shareholder return and new initiatives, we keep investing billions in GPU resources. So our capital — CapEx this year has been substantial. And this year, we plan to further scale our investment in this very critical area. And thanks to our infrastructure and large language model team, we have made significant optimization, both in efficiency and effectiveness. And as a result, our in-house large language model, longcat, has achieved quite good evaluation results comparable to the top-tier models in China…

…The API call volume for Longcat has increased from 10% at the beginning of last year to 68% currently, which further validates the effectiveness of our in-house foundation model.

Meituan’s management believes that AI is going to give a massive push to the robotics industry; Meituan has been researching autonomous vehicles since 2016 and drones since 2017; management has made several investments in leading robotics and autonomous driving start-ups; management expects Meituan’s efforts in robotics and AI to be even more tightly integrated in the future

I think AI is going to give a massive push to the development of robotics. So we have been a very early mover when it comes to autonomous delivery vehicles and drones. So actually, we started our R&D in autonomous vehicles in late ’26 (sic) [ late ’16 ]. And we started our R&D in drones in 2017. So we have been working on this for many years, and we are making very good progress. So right now, we are looking to ways to apply AI in the on-demand delivery field. So apart from our in-house research — in-house R&D, we have also made quite several investments in leading start-ups in the robotics and autonomous driving sector to support their growth…

…In future, our robotics and AI will be even more tightly integrated, and we will keep improving in the areas such as autonomous delivery and logistics and automations because right now, apart — besides the last-mile delivery of on-demand delivery, we also operate a lot of rather big warehouses, and that will be very good use cases for automation technologies.

MongoDB (NASDAQ: MDB)

MongoDB’s management expects customers to start building AI prototypes and AI apps in production in 2025 (FY2026), but management expects the progress to be gradual, and so MongoDB’s business will only benefit modestly from AI in 2025 (FY2026); there are high-profile AI companies building on top of MongoDB Atlas, but in general, customers’ journeys with building AI applications will be gradual; management thinks that customers are slow in building AI applications because they lack AI skills and because there are still questions on the trustworthiness of AI applications; management sees the AI applications of today as being fairly simplistic, but thinks that AI applications will become more sophisticated as people become more comfortable with the technology

In fiscal ’26, we expect our customers will continue on their AI journey from experimenting with new technology stacks, to building prototypes, to deploying apps in production. We expect the progress to remain gradual as most enterprise customers are still developing in-house skills to leverage AI effectively. Consequently, we expect the benefits of AI to be only modestly incremental to revenue growth in fiscal ’26…

…We have some high-profile AI companies who are building on top of Atlas. I’m not at liberty to name who they are, but in general, I would say that the journey for customers is going to be gradual. One reason is a lack of AI skills in their organizations. They really don’t have a lot of experience, and it’s compounded by the rapid evolution of AI technology, so they feel it’s very hard to think about what stack to use, and so on and so forth. The second, as I mentioned earlier on the Voyage question, is that there’s also a real worry about the trustworthiness of a lot of these applications. So I would say the use cases you’re seeing are fairly simplistic — customer chat bots, maybe document summarization, maybe some very simple [indiscernible] workflows. But I do think that we are in the early innings, and I expect the sophistication to increase as people get more and more comfortable…

In 2024 (FY2025), MongoDB demonstrated that the cycle time for modernising the technology stack of applications (via MongoDB’s Relational Migrator service) can be reduced with the help of AI tools; management will expand customer engagements for modernisation so that it can contribute meaningfully to MongoDB’s business in 2026 (FY2027) and beyond; management will start with Java apps that run on Oracle; management sees a significant revenue opportunity in the modernisation of apps; MongoDB has successfully modernised a financial application for one of Europe’s largest ISVs (independent software vendors); management is even more confident in Relational Migrator now than in the past; Relational Migrator is tackling a very tough problem because it involves massive amounts of legacy code, and the use of AI in deciphering the code is very helpful; management is seeing a lot of interest from customers in Relational Migrator because the customers are in pain from their technical debt, and their legacy technology stacks cannot handle AI applications

In fiscal ’25, our pilots demonstrated that AI tooling combined with services can reduce the cycle time of modernization. This year, we’ll expand our customer engagements so that app modernization can meaningfully contribute to our new business growth in fiscal ’27 and beyond. To start with, and based on customer demand, we are specifically targeting Java apps running on Oracle, which often have thousands of complex stored procedures that need to be understood, converted and tested to successfully modernize the application. We address this through a combination of AI tools and agents, along with inspection and verification by delivery teams. Though the complexity of this work is high, the revenue opportunity for modernizing these applications is significant. For example, we successfully modernized a financial application for one of the largest ISVs in Europe, and we’re now in talks to modernize the majority of their legacy estate…

…[Question] What sort of momentum have you seen with relational migrator. And maybe how should we be thinking about that as a growth driver going forward?

[Answer] Our confidence and bullishness on the space is even higher today than it was before…

…When you’re looking at a legacy app that’s got hundreds, if not thousands, of stored procedures, being able to reason about that code, being able to decipher that code and then ultimately to convert that code is a lot of effort. But the good news is that we are seeing a lot of progress in that area. We see a lot of interest from our customers in this area because they are in so much pain with all the technical debt they’ve assumed. Second, when they think about the future and how they enable AI in these applications, there’s no way they can do this on their legacy platforms. And so they’re motivated to try and modernize as quickly as possible.

MongoDB’s management sees AI transforming software from a static tool into a decision-making partner, but the rate of change is governed by the quality of the software’s data infrastructure; legacy databases cannot keep up with the requirements of AI and this is where MongoDB’s document-model database is advantageous; MongoDB’s database simplifies AI development by providing an all-in-one solution incorporating all the necessary pieces, including an operational data store, a vector database, and embedding and reranking models; MongoDB’s database provides developers with a structured approach when they are building AI applications; management sees AI applications being much better than traditional software in scenarios that require nuanced understanding, sophisticated reasoning and interaction and natural language

AI is transforming software from a static tool into a dynamic decision-making partner. No longer limited to predefined tasks, AI-powered applications will continuously learn from real-time data, but this software can only adapt as fast as the data infrastructure it’s built on, and legacy systems simply cannot keep up. Legacy technology stacks were not designed for continuous adaptation. Complex architectures, batch processing and rigid data models create friction at every step, slowing development, limiting an organization’s ability to act quickly and making even small updates time consuming and risky. AI will only magnify these challenges. MongoDB was built for change. MongoDB was designed from the outset to remove the constraints of legacy databases, enabling businesses to scale, adapt and innovate at AI speed. Our flexible document model handles all types of data while seamless scalability ensures high performance for unpredictable workloads…

…We also simplify AI development by natively including vector and text search directly in the database, providing a seamless developer experience that reduces cognitive load, system complexity, risk and operational overhead, all with the transactional, operational and security benefits intrinsic to MongoDB. But technology alone isn’t enough. MongoDB provides a structured, solution-oriented approach that addresses the challenges customers have with the rapid evolution of AI technology, high complexity and a lack of in-house skills. We are focused on helping customers move from AI experimentation to production faster with best practices that reduce risk and maximize impact…

…AI-powered applications excel where traditional software often falls short, particularly in scenarios that require nuanced understanding, sophisticated reasoning and interaction and natural language…

…MongoDB democratizes the process of building trustworthy AI applications right out of the box. Instead of cobbling together all the necessary piece parts (an operational data store, a vector database, and embedding and reranking models), MongoDB delivers all of it with a compelling developer experience…

…We think, architecturally, we have a huge advantage over the competition. One, the document model really supports different types of data: structured, semi-structured and unstructured. We embed search and Vector Search into the platform; no one else does that. And now, with Voyage AI, we have the most accurate embedding and reranking models to really address the quality and trust issue. And all of this is put together in a very elegant developer experience that reduces friction and enables developers to move fast.
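To make the all-in-one point concrete, here is a minimal sketch of querying Atlas Vector Search from Python with pymongo. The $vectorSearch aggregation stage is Atlas Vector Search’s actual syntax, but the cluster URI, database, collection and index names, and the embed() stub are hypothetical placeholders, not anything from the call.

```python
from pymongo import MongoClient


def embed(text: str) -> list[float]:
    # Placeholder: a real application would call an embedding model here
    # (for example, one of Voyage AI's models). Zeros are not meaningful
    # embeddings; they only keep the sketch self-contained.
    return [0.0] * 1024


client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
collection = client["support"]["articles"]  # hypothetical database/collection

# The operational data and the vector index live in the same database,
# so no separate vector store needs to be bolted on.
results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "articles_vector_index",  # hypothetical index name
            "path": "embedding",               # field holding stored vectors
            "queryVector": embed("How do I reset my password?"),
            "numCandidates": 100,              # breadth of the ANN search
            "limit": 5,                        # top-k documents returned
        }
    },
    {"$project": {"title": 1, "_id": 0}},
])

for doc in results:
    print(doc["title"])
```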

MongoDB acquired Voyage AI for $220 million, $200 million of which was paid in MongoDB shares; Voyage AI helps MongoDB’s database solve the hallucination issue – a big problem with AI applications – and make AI applications more trustworthy; management thinks the best way to ensure accurate results with AI applications is through high-quality data retrieval, and high-quality data retrieval is enabled by vector embedding and reranking models; Voyage AI’s vector embedding and reranking models have excellent ratings in the Hugging Face community and are used by important AI companies; Voyage AI has an excellent AI team; through Voyage AI, MongoDB can offer best-in-class embedding and reranking models; ISVs (independent software vendors) have gotten better performance when they switched from other embedding models to Voyage AI’s models; Voyage AI’s models increase the trustworthiness of the most demanding and mission-critical AI applications; Voyage AI’s models will only be available on Atlas

With the Voyage AI acquisition, MongoDB makes AI applications more trustworthy by pairing real-time data with sophisticated embedding and reranking models that ensure accurate and relevant results…

…Our decision to acquire Voyage AI addresses one of the biggest problems customers have when building and deploying AI applications, the risk of hallucinations…

…The best way to ensure accurate results is through high-quality data retrieval, which ensures that only the most relevant information is extracted from an organization’s data with precision. High-quality retrieval is enabled by vector embedding and reranking models. Voyage AI’s embedding and reranking models are among the highest rated in the Hugging Face community for retrieval, classification, clustering and reranking, and are used by AI leaders like Anthropic, LangChain, Harvey and Replit. Voyage AI is led by Stanford professor Tengyu Ma, who has assembled a world-class AI research team from AI labs at Stanford, MIT, Berkeley and Princeton. With this acquisition, MongoDB will offer best-in-class embedding and reranking models to power native AI retrieval…

…Let me address how the acquisition of Voyage AI will impact our financials. We disclosed last week that the total consideration was $220 million. Most Voyage shareholders received a consideration in MongoDB stock with only $20 million being paid out in cash…

…We know a lot of ISVs have already reached out to us since the acquisition saying they switched to Voyage from other model providers and they got far better performance. So the value of Voyage is being able to increase the quality and hence the trustworthiness of these AI applications that people are building in order to serve the most demanding and mission-critical use cases…

…Some of these new capabilities, like Voyage, will be available only on Atlas.

Swisscom was able to deploy a generative AI application in just 12 weeks using MongoDB Atlas

Swisscom, Switzerland’s leading provider of mobile, Internet and TV services, deployed a new GenAI app in just 12 weeks using Atlas. Swisscom implemented Atlas to power a RAG application for the East Foresight library, transforming unstructured data such as reports, recordings and graphics into vector embeddings that large language models can interpret. This enables Vector Search to find any relevant context, resulting in more accurate and tailored responses for users.

If an LLM (large language model) is a brain, a database is memory, and embedding models are a way to find the right information for the right question; embedding models provide significant performance gains when used with LLMs

So think about the LLM as the brain. Think about the database as your memory, the state of how things are. And then think about embedding as the ability to find the right information for the right question. So imagine you have a very smart person, say, Albert Einstein, on your staff and you’re asking him (in this case, the LLM) a particular question. Einstein still needs to go do some homework, based on what the question is about, to find some information before he can formulate an answer. Rather than reading every book in a library, what the embedding models do is essentially act like a librarian, pointing Einstein to the right section, the right aisle, the right shelf, the right book and the right chapter on the right page, to get the exact information to formulate an accurate and high-quality response. So the performance gains you get from leveraging embedding models are significant.
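As a toy illustration of the librarian analogy (and only that), here is a minimal sketch of embedding-based retrieval. The bag-of-words embed() below is deliberately simplistic; a production system would use a trained embedding model, such as one of Voyage AI’s, for the same ranking step.

```python
import re

import numpy as np

# Tiny fixed vocabulary for a toy bag-of-words embedding. A trained
# embedding model would replace this in any real system.
VOCAB = ["password", "reset", "billing", "invoice", "shipping", "delivery"]


def embed(text: str) -> np.ndarray:
    words = re.findall(r"[a-z]+", text.lower())
    return np.array([words.count(w) for w in VOCAB], dtype=float)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


documents = [
    "To reset your password, open account settings and choose reset.",
    "Invoices for billing are emailed at the start of each month.",
    "Shipping takes three days and delivery updates arrive by SMS.",
]

question = "How do I reset my password?"

# The librarian step: rank passages by similarity to the question so the
# LLM is handed the right page instead of the whole library.
ranked = sorted(documents, key=lambda d: cosine(embed(question), embed(d)), reverse=True)
print(ranked[0])  # prints the password-reset passage
```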

Okta (NASDAQ: OKTA)

The emergence of AI agents has contributed to the growing importance of securing identity; management will provide early access to Auth for GenAI on the Auth0 platform in March 2025; 200-plus startups and large enterprises are on the waitlist for Auth for GenAI; Auth for GenAI allows AI agents to securely call APIs; management is seeing that companies are trying to build agentic systems, only to run into problems with giving these agents access to systems securely; within AI, management sees agentic AI as the most applicable to Okta’s business in the medium term

With the steady rise of cloud adoption, machine identities and now AI agents, there has never been a more critical time to secure identity…

…On the Auth0 platform, we announced Auth For GenAI. We’ll begin early access this month. We already have a wait list of eager customers ranging from early startups to Fortune 100 organizations. Auth for GenAI is developed to help customers securely build and scale their Gen AI applications. This suite of features allows AI agents to securely call APIs on behalf of users while enforcing the right level of access to sensitive information…

…People are trying to stitch together agentic platforms and write their own agentic systems and what they run smack into is, wait a minute. How am I going to get these agents access all these systems if I don’t even know what’s in these systems and I don’t even know the access permissions that are there and how to securely authenticate them, so that’s driving the business…

…I’ll focus on the agentic part of AI. That’s probably the most, in the medium term, that’s probably the most applicable to our business…

…On the agent side, the equivalent of a lot of these deployments have like passwords hardcoded in the agent. So if that agent gets compromised, it’s the equivalent of your monitor having a bunch of sticky notes on it with your passwords before single sign-on. So Auth for GenAI gives you a protocol in a way to do that securely. So you can store these tokens and have these tokens that are secured. And then if that agent needs to pop out and get some approval from the user, Auth for GenAI supports that. So you can get a step-up biometric authentication from the user and say, “Hey, I want to check Jonathan’s fingerprint to make sure before I book this trip or I spend this money, it’s really Jonathan.” So those 3 parts are what Auth for GenAI is, and we’re super, super excited about it. We have a waitlist. Over 200-plus Fortune 100s and startups are on that thing.
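A rough sketch of the pattern described above: scoped tokens instead of hard-coded passwords, plus a human step-up approval before sensitive actions. Every function here (get_agent_token, require_step_up_approval, call_booking_api) is a hypothetical placeholder for illustration, not the actual Auth0 Auth for GenAI API.

```python
# Hypothetical sketch of the agent-auth pattern described in the call;
# these helpers are illustrative stand-ins, not the real Auth0 SDK.


def get_agent_token(agent_id: str, scope: str) -> str:
    # A token vault would issue a short-lived, narrowly scoped token here,
    # replacing the "sticky note" of a password hard-coded in the agent.
    return f"demo-token:{agent_id}:{scope}"


def require_step_up_approval(user_id: str, action: str) -> bool:
    # A real flow would push a biometric step-up challenge to the user
    # ("check Jonathan's fingerprint before booking the trip").
    print(f"Step-up approval requested from {user_id} for: {action}")
    return True  # demo stand-in for the user's approval


def call_booking_api(itinerary: dict, bearer_token: str) -> None:
    # Hypothetical downstream API the agent calls on the user's behalf.
    print(f"Booking {itinerary} with token {bearer_token!r}")


def book_trip(user_id: str, itinerary: dict) -> None:
    # 1. The agent authenticates with a scoped token, never a stored password.
    token = get_agent_token(agent_id="travel-agent", scope="bookings:write")
    # 2. Sensitive spend requires the human's explicit step-up approval.
    if not require_step_up_approval(user_id, action="book trip"):
        raise PermissionError("user declined the step-up check")
    # 3. Only then does the agent act on the user's behalf.
    call_booking_api(itinerary, bearer_token=token)


book_trip("jonathan", {"destination": "Singapore", "nights": 3})
```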

Okta’s management thinks agentic AI is a real phenomenon that will turbocharge machine identity for Okta by 2 orders of magnitude; already today, a good part of Okta’s business is providing machine identity; management is most excited about the customer identity part of Okta’s business when it comes to agentic AI because companies will start having AI agents as customers too; management thinks Okta will be able to monetise agentic AI from both the people building agents and the people using agents

The agentic revolution is real: the power of AI, the power of these language models, the interaction modalities that you can have with these systems, these machines doing things on your behalf, what they can do and how they can infer next actions, et cetera, et cetera. You all know it’s really real. But the way to think about it from an Okta perspective is that it is like machine identity on steroids, turbocharged to like 2 orders of magnitude higher. So that’s really exciting for us, because what do we do? A good part of our business is actually logging in machines right now. Auth0 has the machine-to-machine tokens where people, if they build some kind of web app that services other machines, can use Auth0 for the login for that. Okta has similar capabilities. And now you have not only that basic authentication challenge, but, as you get 2 orders of magnitude more things logging in, you also have to really worry about the fine-grain authorization into your services…

…[Question] Which side of the business are you more excited about from an agentic AI perspective?

[Answer] I think the customer identity side is more exciting. My answer is a little bit of having it both ways, because a lot of the developers building agentic AI are doing it inside of enterprises. So, like the pattern I was talking about earlier, there are teams at these companies that have been tasked with: we hear about this [ agent ], make it work. I’ve had many conversations with customers where they’ve been in these discussions: we did a POC and now we want to do it broadly, and the task was basically to hook these agents up to all of our existing systems. And before we could do that inside of an enterprise, we had to get a good identity foundation in front of all these things. It’s similar if you’re a developer building something: you’re exposing APIs, you’re doing fine-grain authorization, you’re using another platform or building your own agentic AI platform, and you’re having to talk to those systems and those APIs to do things on users’ behalf. So you’re a developer, but it’s kind of like a workforce use case, and I think people building these systems and getting the benefit from that is really exciting…

…We can monetize it on “both sides”, meaning people building the agents and people using the agents. The agents have to log in, and they have to log into something. So I think there’s potential to monetize it on both sides.

Okta’s management thinks the software industry does not yet know how to account for AI agents in software deals; management thinks that companies will eventually be buying software licenses for both people and AI agents

One of the things that we don’t have today is a way for the industry to identify an agent. I don’t mean in the sense of authenticating or validating an agent. I mean an actual universal vernacular for how to record an agent, how to track it and how to account for it. And so I think that’s something you’ll see coming. You’ll see there will actually be a type of account in Okta that’s an agent account. You’ll see companies starting to say, when they buy software, hey, I buy these many people licenses and these many agentic licenses. And that’s not quite there yet. Of course, platforms that are coming out with agent versions have this to some degree, but there isn’t a common cross-company, cross-enterprise definition of an agent, which is an interesting opportunity for us actually.

Sea Ltd (NYSE: SE)

Sea’s management is using AI in Shopee to understand shoppers’ queries better and to help sellers enhance product listings, and these AI initiatives have improved purchase conversion rates and sellers’ willingness to spend on advertising; management has upgraded Shopee’s chatbots with AI and this led to meaningful improvement in customer service satisfaction score and customer service cost-per-contact; management is using AI to improve the shopper return-refund process and has seen a 40% year-on-year decrease in resolution times in Asia markets; management thinks Shopee is still early in the AI adoption curve

We continue to adopt AI to improve service quality in a practical and effective manner. By using large language models to understand queries, we have made search and discovery more accurate, helping users find relevant products faster. We provide our sellers with AI tools to enhance product listings by improving descriptions, images, and videos. These initiatives have improved purchase conversion rates while also making sellers more willing to spend on ads, boosting our ad revenue…

… After upgrading our chatbots with AI, we saw a meaningful increase in our customer service satisfaction score over the past year, and a reduction in our customer service cost-per-contact by nearly 30% year-on-year. We also used large language model capabilities to enhance our buyer return-refund process, addressing a key e-commerce pain-point. In the fourth quarter, we improved resolution times in our Asia markets by more than 40% year-on-year, with nearly six in ten cases resolved within one day. We believe we are still early in the AI adoption curve and remain committed to exploring AI-driven innovations to improve efficiency and deliver better experiences for our users.
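A minimal sketch of the query-understanding idea in the quotes above: an LLM turns a messy shopper query into structured fields before the regular search index is consulted. The llm_complete() call and the index lookup are hypothetical placeholders, not Shopee’s actual systems.

```python
import json


def llm_complete(prompt: str) -> str:
    # Placeholder for a hosted LLM call; the canned reply keeps the
    # sketch self-contained and runnable.
    return '{"product": "running shoes", "attributes": ["red", "size 42"], "intent": "budget"}'


def understand_query(raw_query: str) -> dict:
    prompt = (
        "Rewrite this e-commerce search query as JSON with fields "
        f'"product", "attributes" and "intent": {raw_query}'
    )
    return json.loads(llm_complete(prompt))


def product_index_lookup(structured: dict) -> list[str]:
    # Hypothetical lookup against the regular product search index, now
    # fed with structured fields instead of raw keywords.
    return [f"results for {structured['product']} ({structured['intent']})"]


print(product_index_lookup(understand_query("cheap red running shoes size 42")))
```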

Sea’s management thinks the use of AI is helping Sea both monetise its services better, and save costs

[Question] I just wanted to get some color with regard to the benefit from AI. Are we actually seeing cost efficiency, i.e., the use of AI actually save a lot of the manual labor cost? So that helps to achieve a lot of cost savings? Or are we actually seeing the monetization is getting better coming from AI?

[Answer] We are seeing both. For example, in our search and recommendations, we actually use the large language model to better understand user queries, making search and discovery a lot more accurate and helping users find relevant products faster… We are also using AI to understand the products a lot better. Historically, it was text matching, but now we can use existing pictures and the descriptions and the reviews to generate a much richer understanding of the product. And all of those help us essentially match our products to users’ intentions a lot better.

We also have a lot of AIGC, AI-generated content, on our platform. We provide that as a tool for our sellers to be able to produce images, descriptions of the product, or videos a lot better compared to what they had before.

And both of these have increased our conversions meaningfully on our platform.

On the other side, on the cost savings side, as Forrest mentioned in his opening remarks, we talked about the chatbot: if you look at our queries, about 80% of the queries are answered by the chatbot already, which is a meaningful cost saving for our operations. I think that’s also why you can see that our cost management for e-commerce is doing quite well. Even for the 20% answered by human agents, we have an AI tool for the agents to be able to understand the context a lot better, which can help them respond a lot faster to the customers…

Tencent (OTC: TCEHY)

Tencent’s AI initiatives can be traced to 2016; management has been investing in Tencent’s proprietary foundation model, HunYuan, since early 2023; management sees HunYuan as the foundation for Tencent’s consumer and enterprise businesses

Our AI initiatives really trace back to 2016 when we first established our AI lab. Since 2023, early part of that, we have been investing heavily in our proprietary HunYuan foundation model, which forms an important technology foundation for our consumer and enterprise-facing businesses and will serve as a growth driver for us in the long run. Our investments in HunYuan enable us to develop end-to-end foundation model capabilities in terms of infrastructure, algorithm, training, alignment and data management and also to tailor solutions for the different needs of internal and external use cases.

Tencent’s management has released multimodal HunYuan foundation models across image, video, and 3D generation; the multimodal HunYuan foundation models have received excellent scores in AI benchmarking

In addition to LLMs, we have released multimodal HunYuan foundation models with capabilities that span across image, video and 3D generation. HunYuan’s image generation models achieved the highest score from FlagEval in December of last year. In video generation, our model excels in video output quality and ranked first on Hugging Face in December of last year. 

Tencent’s management has been actively releasing Tencent’s AI models to the open source community

Our 3D generation model was the industry’s first open source model supporting text and image to 3D generation. In addition to that, we also contribute to the open source community actively and have open sourced a series of advanced models in the HunYuan family for 3D generation, video generation, large language and image generation. Several of these models have gained great popularity among developers worldwide.

For Tencent’s consumer-facing AI products, management has been utilising different AI models because they believe that a combination of models can handle complex tasks better than a single model; Tencent’s native AI application, Yuanbao, provides access to multiple models; Yuanbao’s DAU (daily active users) increased 20-fold from February 2025 to March 2025; management has been testing AI features in Weixin to improve the user experience and will be adding more AI features over time; management will be introducing a lot more consumer-facing AI applications in the future; management thinks consumer AI is in a very early stage, but they can see Yuanbao becoming a strong AI native assistant helping with deep research, and the Ema Copilot being a personal and collaborative library; management is looking to infuse AI into each of Tencent’s existing consumer products

Going to our consumer-facing AI products. We adopt a multimodal strategy to provide the best AI experience to our users, so we can leverage all available models to serve different user needs. We need this because different AI models are optimized for different capabilities, performance metrics and use cases and a combination of various models can handle complex tasks better than a single model…

…On the product front, our AI native application, Yuanbao, provides access to multiple models, including Chain of Thought reasoning models such as HunYuan T1 and DeepSeek R1 and fast-thinking model HunYuan Turbo S with the option of integrating web search results. Yuanbao search results can directly access high-quality proprietary content from Tencent ecosystem, such as official accounts and video accounts. By leveraging HunYuan’s multimodal capabilities, Yuanbao can process prompts in images, voice and documents in addition to text. Our cloud infrastructure supports stable and uncapped access to leading models. From February to March, Yuanbao’s DAU increased 20-fold to become the third highest AI native mobile application in China by DAU…

…We have also started testing AI features in Weixin to enhance user experience, such as for search, language input and content generation and we will be adding more AI features in Weixin going forward…

…We actually have a whole host of different consumer-facing applications and you should expect more to come. I think AI is actually in a very early stage. So it’s really hard to talk about what the eventual state would look like. But I would say, one, each product will continue to evolve into very useful and even more powerful products for users. So Yuanbao can be sort of a very strong AI native assistant and the Ema copilot could be your personal library and also a collaborative library for team collaborations. And Weixin can have many, many different features to come, right? And in addition to these products, I think our other products would have AI experiences, including QQ, including browser and other products. So I think we would see more and more AI — consumer AI-facing products. And at the same time, each one of the products will continue to evolve…

…Each one of our products would actually try to look for unique use cases in which they can leverage AI to provide a great user experience to their users…

…Yuanbao, well, right now, it is a chatbot and search. But over time, I think it will proliferate into an all-capable AI assistant with many different functionalities serving different types of people. It would range from students who want to learn, to all kinds of knowledge workers who want to complete their work, and it would cover deep research, which allows people to do very deep research into different topics.
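The multi-model strategy in the quotes above, routing each request to whichever model suits it, can be sketched roughly as follows. The model names come from the call; the Model class and the routing heuristic are hypothetical illustrations, not Tencent’s implementation.

```python
from dataclasses import dataclass


@dataclass
class Model:
    name: str

    def generate(self, prompt: str) -> str:
        # Placeholder; a real client would call the model's API endpoint.
        return f"[{self.name}] answer to: {prompt[:40]}"


FAST = Model("HunYuan Turbo S")  # fast-thinking model named in the call
REASONER = Model("HunYuan T1")   # chain-of-thought model; DeepSeek R1 is another option


def needs_reasoning(prompt: str) -> bool:
    # Hypothetical heuristic: long or explicitly multi-step prompts go to
    # the slower chain-of-thought model; everything else stays fast.
    return len(prompt) > 200 or "step by step" in prompt.lower()


def answer(prompt: str) -> str:
    model = REASONER if needs_reasoning(prompt) else FAST
    return model.generate(prompt)


print(answer("What time does the museum open?"))           # routed to Turbo S
print(answer("Plan my Tokyo trip step by step, please."))  # routed to T1
```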

Tencent’s management thinks that there are advantages to both developing Tencent’s own foundation models and using 3rd-party models

By investing in our own foundation models, we are able to fully leverage our proprietary data to tailor solutions to meet customized internal and customer needs, while at the same time, making use of external models allowed us to benefit from innovations across the industry.

Tencent’s management has been accelerating AI integration into Tencent’s cloud businesses, including its infrastructure as a service business, its platform as a service business, and its software as a service business; the AI-powered transcription and meeting summarisation functions in Tencent Meeting saw a year-on-year doubling in monthly active users to 15 million

We have been accelerating AI integration into our cloud business across our infrastructure, platform and Software as a Service solutions.

Through our Infrastructure as a Service solutions, enterprise customers can achieve high-performance AI training and inference capabilities at scale and developers can access and deploy mainstream foundation models.

For Platform as a Service, PaaS, our TI platform supports model fine-tuning and inference demands with flexibility, and provides powerful solutions supporting enterprise customers in customizing AI assistants using their own proprietary data, and developers in generating mini programs and mobile applications through natural language prompts.

Our SaaS products increasingly benefit from AI-powered tools. Real-time transcription and meeting summarization functions in Tencent Meeting gained significant popularity, resulting in monthly active users for these AI functions doubling year-on-year to 15 million. Tencent Docs also enhanced user productivity in content generation and processing.

Tencent’s AI cloud revenue approximately doubled in 2024, despite management having limited the availability of GPUs for cloud services in preference for internal use cases for ad tech, foundation model training, and inference for Yuanbao and Weixin; management stepped up the purchase of GPUs in 2024 Q4 and expects the revenue growth of cloud services to accelerate as the new GPUs are deployed for external use cases; Tencent’s annual capital expenditure more than tripled to US$10.7 billion in 2024, equivalent to roughly 12% of revenue, because of higher purchases of GPUs, with a notable uplift in 2024 Q4; management believes the step-up in capex in 2024 Q4 is to a new, higher steady state

In 2024, our AI cloud revenue approximately doubled year-on-year. Increased allocation of GPUs for internal use cases initially for ad tech and foundation model training and more recently on AI inference for Yuanbao and Weixin has limited our provision of GPUs to external clients and thus constrained our cloud services revenue growth. For external workloads, we have prioritized available GPUs towards high-value use cases and clients. Since the fourth quarter of 2024, we have stepped up our purchase of GPUs. And as we deploy these GPUs, we expect to accelerate the revenue growth of our overall cloud services…

…As the capabilities and benefits of AI become clearer, we have stepped up our AI investments to meet our internal business needs, train foundation models and support surging demand for inference we’re experiencing from our users. To consolidate our resources around this all-important AI effort, we have reorganized our AI teams to sharpen focus on both fast product innovation and deep model research. Matching our stepped-up execution momentum and decision-making velocity, we increased annual CapEx more than threefold to USD 10.7 billion in 2024, equivalent to approximately 12% of our revenue, with a notable uplift in the fourth quarter of the year as we bought more GPUs for both inference needs as well as for our cloud services…

…We did step up CapEx to a new sort of higher steady state in the fourth quarter of last year…

…Part of the reason why you see such a big step up in terms of the CapEx in the fourth quarter is because we have a bunch of rush orders for GPUs for both inference as well as for our cloud service. And we would only be able to capture the large increase in terms of IaaS service demand when we actually install these GPUs into the data center, which would take some time. So I would say we probably have not really captured a lot of that during the first quarter. But over time, we will capture quite a bit of it with the arrival and installation of the GPUs.

Tencent’s management already sees positive returns for Tencent from their investment in AI; the positive returns come in 3 areas, namely, in advertising, in games, and in video and music services; in advertising, Tencent has been using AI to approve ad content more efficiently, improve ad targeting, streamline the ad creative process for advertisers, and deliver higher return on investment for advertisers; Tencent’s marketing services experienced revenue growth of 20% in 2024 because of AI integration, despite a challenging macro environment; in games, Tencent is using AI to improve content production efficiency and build in-game chat bots, among other uses; in video and music services, Tencent is using AI to improve productivity in content creation and effectively boost content discovery

We believe our investment in AI has already been generating positive returns for us…

…For advertising, we enhanced our advertising system with neural network AI capabilities since 2015. We rebuilt ad tech platform using large model capabilities since 2020, enabling long sequence user behavior analysis across multiple properties which resulted in increased user engagement and higher click-through rates. Since 2023, we have been adding large language model capabilities to facilitate more efficient approvals of ad content, to better understand merchandise categories and users commercial intent for more precise ad targeting and to provide generative AI tools for advertisers to streamline the ad creative process, leveraging AI-powered ad targeting capabilities and generative AI ad creative solutions. Our marketing services business is already a clear beneficiary of AI integration with revenue growth of 20% in 2024 amid challenging macro environment.

In games, we adopted machine learning technology in our PvP games since 2017. We leveraged AI in games to optimize matching experience, improve game balance and facilitate AI coaching for new players, empowering our evergreen games strategy. Our games business is now integrating large language model capabilities, enhanced 3D content production efficiency and to empower in-game chatbots.

For our video and music services, we’re leveraging AI to improve productivity in animation, live action video and music content creation. Our content recommendation algorithms are powered by AI and are proven effective in boosting content discovery. These initiatives enable us to better unlock the potential of our great content platforms…

…Across pretty much every industry we monitor, the AI enhancements we’re deploying are delivering superior return on investment for advertisers versus what they previously enjoyed and versus what’s available elsewhere.

Tencent’s management expects to further increase capital expenditure in 2025 and for capital expenditure to be a low-teens percentage of revenue for the year; while capital expenditure in 2025 is expected to increase, the rate of growth has slowed down significantly

We intend to further increase our capital expenditures in 2025 and expect our CapEx to account for low teens percentage of our revenue…

…[Question] You guided to a CapEx-to-revenue ratio in the low teens for 2025, which is a similar ratio as for ’24. So basically, this guidance implies a significant slowdown of CapEx growth.
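To see why a roughly flat CapEx-to-revenue ratio implies sharply slower CapEx growth, here is a back-of-envelope sketch. The 2025 revenue-growth rate and the exact “low teens” ratio below are my own illustrative assumptions, not management guidance:

```python
# Back-of-envelope check: a flat CapEx-to-revenue ratio caps CapEx growth
# at roughly the revenue growth rate, versus the >3x jump seen in 2024.

capex_2024 = 10.7e9        # USD, from the call
capex_ratio_2024 = 0.12    # ~12% of revenue, from the call
revenue_2024 = capex_2024 / capex_ratio_2024   # implied ~USD 89B

# Illustrative assumptions only (not guided by management):
assumed_revenue_growth_2025 = 0.08   # hypothetical 8% revenue growth
assumed_capex_ratio_2025 = 0.13      # one reading of "low teens"

capex_2025 = revenue_2024 * (1 + assumed_revenue_growth_2025) * assumed_capex_ratio_2025
print(f"Implied 2024 revenue: ~${revenue_2024 / 1e9:.0f}B")
print(f"Illustrative 2025 CapEx: ~${capex_2025 / 1e9:.1f}B, "
      f"or {capex_2025 / capex_2024 - 1:.0%} growth")
```

Under these assumptions, CapEx would grow only around 17% in 2025, which is the slowdown the analyst is pointing at.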

Tencent’s management sees several nuances on the impact to Tencent’s profit margins from the higher AI capital expenditures expected, but they are optimistic that Tencent will be able to protect its margins; the AI capital expenditures go into 4 main buckets, namely, (1) ad tech and games, (2) large language model training, (3) renting out GPUs in the cloud business, and (4) consumer-facing inference; management sees good margins in the 1st bucket, decent margins in the 3rd bucket, and potentially some margin-pressure in the 4th bucket; but in the 4th bucket, management sees (1) the potential to monetise consumer-facing inference through a combination of advertising revenue and value-added services, and (2) avenues to reduce unit costs through software and better algorithms

[Question] As we step up the CapEx on AI, our margin will be inevitably dragged by additional depreciation and R&D expenses. So over the past few years, we have seen meaningful increase in margin as we focus on high-quality growth. So going forward, how should we balance between growth and profitability improvement?

[Answer] It’s worth digging into exactly where that CapEx is going to understand whether the depreciation becomes a margin pressure or not. So the most immediate use of the CapEx is GPUs to support our ad tech and, to a lesser extent, our games businesses. And you can see from our results, and you can hear from what Martin talked about, that that CapEx actually generates good margins and high returns.

A second use of CapEx was GPUs for large language model training…

…Third, there’s CapEx related to our cloud business, where we buy these GPU servers, rent them out to customers, and generate a return. It may not be the highest return business in our portfolio, but nonetheless, it’s a positive return. It covers the cost of the GPUs and, therefore, the attendant depreciation.

And then finally, where I think there is potentially short-term pressure is the CapEx for 2C [to-consumer] inference. That is an additional cost pressure, but we believe it’s a manageable one because that CapEx is a subset of the total CapEx. And we’re also optimistic that over time, the 2C inference activity that we’re generating, just like previous activity within different Tencent platforms, will be monetized through a combination of advertising revenue and value-added services. So overall, while we understand that you have questions around the step-up in CapEx and how that translates into profitability over time, we’re actually quite optimistic that we can continue to grow the business while protecting margins…

…For inference in consumer-facing products, there are actually a lot of avenues through which we can reduce the unit cost by technical means, by software and by better algorithms. So I think that’s also a factor to keep in mind.

Tencent’s management believes that the AI industry is now getting much higher productivity on large language model training from existing GPUs without needing to add additional GPUs at the pace previously expected, as a result of DeepSeek’s breakthroughs; previously, the belief was that each new generation of large language models would require an order of magnitude more GPUs; Tencent’s AI-related capital expenditure is the largest amongst Chinese technology companies; management thinks that Chinese technology companies are spending less on capital expenditure as a percentage of revenue than Western peers because Chinese companies have been prioritizing efficient utilization of GPUs without impairing the ultimate effectiveness of the AI technology developed

There was a period of time last year when there was a belief that every new generation of large language model required an order of magnitude more GPUs. That period ended with the breakthroughs that DeepSeek demonstrated. And now the industry, and we within it, are getting much higher productivity on large language model training from existing GPUs, without needing to add additional GPUs at the pace previously expected…

…There was a period last year when people asked us if our CapEx was big enough relative to our China peers and our global peers. And now, out of the listed companies, I think we had the largest CapEx of any China tech company in the fourth quarter. So we’re at the forefront among our China peers. In general, the China tech companies are spending less on CapEx as a percentage of revenue than some of their Western peers. But we have believed for some time that that’s because the Chinese companies are generally prioritizing efficient utilization of their GPU servers, and that doesn’t necessarily impair the ultimate effectiveness of the technology being developed. I think DeepSeek’s success really symbolized, solidified and demonstrated that reality.

Tencent’s management thinks AI can benefit Tencent’s games business in 3 ways, namely, (1) a direct, more short-term benefit in helping game developers be more productive, (2) an indirect, more long-term benefit in terms of games becoming an important element of human expression in an AI-dominated world, and (3) allow evergreen games to be more evergreen

We do believe that games benefit in a direct and potentially a less direct way from AI technology enhancements. The direct way is game developers using AI to assist them in creating more content more quickly and serving more users more effectively. And then the indirect way, which may be more of a multi-decade story rather than a second-half-of-this-year story, is that as humanity uses AI more broadly, we think there’ll be more time and also more desire for high-agency activities among people who are now empowered by AI. And so one of the best ways for them to express themselves in a high-agency way, rather than a passive way, is through interactive entertainment, which is games…

…We actually felt AI would allow evergreen games to be more evergreen, and we are already seeing how AI can help us to execute and magnify our evergreen strategy. Part of it is within production: you can now produce great content within a shorter period of time, so that you can keep updating the games with a higher frequency of high-quality content. Within the PvE experience, when you have smarter bots, you actually make the game more exciting and more like PvP. And within PvP, a lot of the matching, balancing and coaching of new users can be done in a much better way when you apply AI.

Tencent’s management sees strong competitive advantages that Tencent has when it comes to AI agents because of the large user base of Tencent’s products and the huge variety of activities that happen within Tencent’s products

We would be able to build stand-alone AI agents by leveraging models that are of great quality, and at the same time by leveraging the fact that we have a lot of consumers on our different software platforms, like our browser and, over time, Yuanbao. But at the same time, even within Weixin and within QQ, we can have AI agents. And the AI agents can leverage the ecosystem within the apps and provide really great service to our users by completing complex tasks, right? If you look at Weixin, for example, Weixin has got a lot of users, very long user time per day, as well as a high frequency of users opening up the app; that’s one advantage. The second advantage is that the activities within Weixin are actually very, very diversified, right? It’s not just entertainment, it’s not just transactions; it’s actually social communication and content, and a lot of people conduct their work within Weixin, a lot of people conduct their learning within Weixin, and there are a lot of transactions that go through Weixin. And there’s a multitude of Mini Programs, which allow all sorts of different activities to be carried out, right? So if you look at the Mini Program ecosystem, we can easily build an agent, based on a model, that can connect to a lot of the different Mini Programs and have activities and complex tasks completed for our users. So I think those are all very distinctive advantages that we have.

Tencent’s management believes that AI search will eventually replace traditional search

At a high level, if we look at the history of web search subsuming web directories, and if we look at our own behavior with AI prompts vis-a-vis traditional search, I think it’s possible that AI search will subsume traditional search, because ultimately, web directories, traditional search and AI prompts all represent mechanisms for accessing the Internet’s knowledge graph.

Tencent’s management believes that in China, AI chatbots will be monetised first through performance advertising followed by value-added services, as opposed to in the West, where AI chatbots have been monetised first through subscription models followed by performance advertising

In terms of how the AI prompt will be monetized, time will tell but I think that we can already see in the Western world, the first monetization is through subscription models and then over time, performance advertising will follow. I think in China, it will start with performance advertising and then value-added services will follow.

Veeva Systems (NYSE: VEEV)

Veeva’s management’s AI strategy for Veeva is to have its Commercial Cloud, Development Cloud, and Quality Cloud be the life sciences industry’s standard core systems of record; management is making Veeva’s data readily available for the development of AI applications by Veeva and 3rd parties through the Direct Data API released in 2024; management is seeing good uptake on the Direct Data API; the Direct Data API will be free to all of Veeva’s customers because management wants people to be building on the API; management found a way to offer the Direct Data API with fewer compute resources than originally planned for; Veeva is already using the Direct Data API internally, and more than 10 customers are already using it; it takes time for developers to get used to the Direct Data API, because it’s a fundamentally new type of API, but it’s a great API; management believes that the Direct Data API will enable the life sciences industry to leverage its core data through AI faster than any other industry

We also executed well on our AI strategy. Commercial Cloud, Development Cloud, and Quality Cloud are becoming the industry’s standard core systems of record. With significant technology innovation including the Direct Data API released this year, we are making the data from our applications readily available for the development of relevant, timely AI solutions built by Veeva, our customers, and partners…

…We are seeing good uptake of the Direct Data API. And yes, as you mentioned, we recently announced that it’s going to be free to all of our customers. The reason there is we want everybody building on that API. It’s just a much better, faster API for many use cases, and we found a way to do it where it was not going to consume as many compute resources as we thought it was…

…We are using it internally, for example, for connecting different parts of our clinical suite and different parts of our safety suite together, and our partners are starting to do it. We have more than 10 customers that are already doing it; some of them are large customers. It takes some time because it’s a different paradigm for integration. People have been using a hammer for a long time, and now you’re giving them a jackhammer and they have to learn how to use it. But we are super enthused. It’s a fundamentally new type of API where you can get all of the data out of your Vault super quickly…

…I’m really pleased about what we’re doing for the life sciences industry, because many of their core systems are Veeva, and now those core systems are going to be enabled with this fundamentally new API that’s going to allow them to leverage their core data faster than any other industry.
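To make the hammer-versus-jackhammer point concrete, here is a rough sketch contrasting the two integration paradigms: record-at-a-time REST queries versus bulk file extracts. Every endpoint, path and field below is a hypothetical placeholder I’ve made up for illustration; it is not Veeva’s actual API, so consult the Direct Data API documentation for the real interface:

```python
# Hypothetical contrast of integration paradigms. All URLs, paths and
# response fields are placeholders, NOT Veeva's real API.
import requests

BASE = "https://myvault.example.com/api"  # hypothetical Vault host

def records_one_page_at_a_time(session: requests.Session):
    """The 'hammer': page through records, one query at a time."""
    page = f"{BASE}/v1/query?q=SELECT id FROM documents"  # hypothetical
    while page:
        resp = session.get(page).json()
        yield from resp["data"]
        page = resp.get("next_page")  # hypothetical pagination field

def bulk_extract(session: requests.Session):
    """The 'jackhammer': download complete extract files in one pass."""
    files = session.get(f"{BASE}/v1/directdata/files").json()  # hypothetical
    for f in files["data"]:
        with open(f["name"], "wb") as out:
            out.write(session.get(f["url"]).content)
```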

Reminder from management that Veeva recently announced 3 new AI solutions, namely, Vault CRM Bot, CRM Voice Control, and MLR Bot; management has more AI solutions in the pipeline, but the timing for release is still unclear; management wants to invest more in AI solutions and they think the company has strong leadership in that area

We announced new Veeva AI Solutions including Vault CRM Bot, CRM Voice Control, and MLR Bot…

…Now with CRM Voice Control that we’ll be bringing out this year, and also CRM Bot and MLR Bot (medical, legal, regulatory review), and we have quite a few others in the plan, too. We don’t know exactly which ones we’ll bring out when, but we’re putting more investment into AI solutions. We centralized the group around that so we can develop more core competency around AI, and I have a strong leader there.

Veeva’s management was initially a little skeptical of AI because of the amount of money flowing in, and the amount of hype surrounding it

[Question] I want to start with the AI offerings that you’ve built out, Peter. Maybe a year back, there was a little bit of a perception from the investment community that you were coming off as maybe a little bit skeptical on AI, but now you’ve come out with a lot of these products. Can you walk us through what’s driven the desire or the momentum to push out these products so quickly?

[Answer] AI is certainly captivating technology, right? So much money going into it, so much progress, and so much hype.

Veeva’s management thinks AI is shaking out the way they expected it to, with a handful of large language models specializing in different areas; management also thinks the development of AI has become more stable

If we just stay at that level, I’m really pleased that things are starting to shake out roughly how we thought they were going to shake out. There’s not going to be one large language model, there are going to be multiple. There’s not going to be 50, but there’s going to be a good handful, and they’re going to specialize in different areas. And it’s not so unstable anymore, where you wake up and everything changes, right? DeepSeek came out. Yes, well, guess what? The world keeps turning. NVIDIA is going to have their own model? That’s okay, and the world keeps turning. So I think it’s starting to settle out.

Veeva’s management sees the infrastructure layer of AI as being really valuable, but they also see a lot of value in building specific use cases on top of the infrastructure layer, and that is where they want Veeva to play

So it’s settling out that these core large language models are going to be at the platform level, and that’s super valuable, right? That core infrastructure level is not where companies like Veeva play, but it’s very valuable. There’s a lot of great value in specific use cases on top that can be used in the workflow. So that’s what we’re doing now, focusing on our AI solutions.

Veeva’s management is using AI internally but it’s still early days and it has yet to contribute improvements to Veeva’s margin; Veeva’s expected margin improvements for 2025 (FY2026) are not related to AI usage

[Question] Going back to the topic of AI… how you’re leaning into kind of internal utilization too, if we think about kind of some of the margin strength you’re delivering throughout the business?

[Answer] Around the internal use of AI and the extent to which that was contributing to margins, I think the short answer there is it’s an area that we’re really excited about internally as well. We’re building strategies around it, but it’s not a major contributor to the margin expansion that we saw in Q4 or in the coming year. So it’s something we’re looking into and building strategies around, but it’s not something we’re counting on to deliver on this year’s guidance.

In 2023 and 2024, Veeva’s management was seeing customers get distracted from core technology spending because the customers were chasing AI; management is no longer seeing the AI distraction at play

I believe we called it AI disruption before; maybe that was 18 months or a year ago. I think that’s largely behind us. Our customers have settled into what AI is and what it does. They’re still doing some innovation projects, but it’s not consuming them or distracting from the core work. So I think we’re largely through that phase of AI distraction now.

Veeva’s management thinks that Veeva is the fastest path to AI for a life sciences industry CRM because any AI features will have to be embedded in the workflow of a life sciences company

It turns out Veeva is the fastest path to AI that you can use in CRM, because it has to be done in the workflow of what you’re doing. This is not some generic AI. This is AI for pre-call planning and for compliance, for the things that a pharmaceutical rep does in a compliant way, based on the data sources that are needed in CRM. So Veeva is the fastest path to AI.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Meituan, MongoDB, Okta, Tencent, and Veeva Systems. Holdings are subject to change at any time.

Is There Something Wrong With Pinduoduo’s Asset-Light Business Model?

The company’s gross property, plant, and equipment is barely sufficient to buy a laptop for each employee

Chinese e-commerce company Pinduoduo has experienced explosive growth in the past few years, with its revenue growing at a breathtaking pace of 142% per year from RMB 505 million in 2016 to RMB 247 billion in 2023. Profitability has also increased markedly over the same period, rising from a loss of RMB 322 million to a profit of RMB 60 billion. What’s even more impressive is that Pinduoduo has achieved this while being asset-light. The company ended 2023 with total assets of merely RMB 348 billion, which equates to a remarkable return on assets (profit as a percentage of total assets) of 17%.

But I noticed two odd things about Pinduoduo’s asset-light nature as I dug deeper into the numbers. Firstly, Pinduoduo’s gross property, plant, and equipment per employee in 2023 was far below that of other large Chinese technology companies such as Alibaba, Meituan, and Tencent. This is shown in Table 1.

Table 1; Source: Company annual reports

Secondly, and the more important oddity here, Pinduoduo’s gross property, plant, and equipment per employee in 2017 was merely RMB 10,591, or RMB 0.01 million (gross property, plant, and equipment of RMB 12.3 million, and total employees of 1,159). According to ChatGPT, a professional laptop cost at least RMB 8,000 in China back in 2017, meaning that Pinduoduo’s gross property, plant, and equipment in that year was barely sufficient to purchase just one professional laptop for each employee.
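For readers who want to trace the arithmetic, here is a minimal sketch reproducing the figures quoted above; the per-employee number comes out slightly different from RMB 10,591 because the RMB 12.3 million input is rounded:

```python
# Reproducing the article's arithmetic from the figures quoted above.

revenue_2016 = 505e6      # RMB
revenue_2023 = 247e9      # RMB
years = 2023 - 2016
cagr = (revenue_2023 / revenue_2016) ** (1 / years) - 1
print(f"Revenue CAGR, 2016-2023: {cagr:.0%}")        # ~142% per year

profit_2023 = 60e9        # RMB
total_assets_2023 = 348e9 # RMB
print(f"Return on assets: {profit_2023 / total_assets_2023:.0%}")  # ~17%

gross_ppe_2017 = 12.3e6   # RMB (rounded in the text)
employees_2017 = 1159
laptop_cost = 8000        # RMB, the article's 2017 laptop estimate
print(f"Gross PP&E per employee in 2017: RMB {gross_ppe_2017 / employees_2017:,.0f} "
      f"vs a laptop at RMB {laptop_cost:,}")
```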

I’m not saying that something nefarious is definitely happening at Pinduoduo. But with the numbers above, I wonder if there’s something wrong with Pinduoduo’s purportedly asset-light business model. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Meituan and Tencent. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q4) – Part 2

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the fourth quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

I’ve split the latest commentary into two parts for the sake of brevity. This is Part 2, and you can find Part 1 here. With that, I’ll let the management teams take the stand…

Microsoft (NASDAQ: MSFT)

Microsoft’s management is seeing enterprises move to enterprise-wide AI deployments 

Enterprises are beginning to move from proof of concepts to enterprise-wide deployments to unlock the full ROI of AI. 

Microsoft’s AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-on-year; Microsoft’s AI business did better than expected because of Azure and Microsoft Copilot (within Copilot, price per seat was a strength and still signals good value)

Our AI business has now surpassed an annual revenue run rate of $13 billion, up 175% year-over-year…

…[Question] Can you give more color on what drove the far larger-than-expected Microsoft AI revenue? We talked a bit about the Azure AI component of it. But can you give more color on that? And our estimates are that the Copilot was much bigger than we had expected and growing much faster. Any more details on the breakdown of what that Microsoft AI beat would be great.

[Answer] A couple of pieces to that, which you correctly identified. Number one is the Azure component we just talked about. And the second piece, you’re right, Microsoft Copilot was better. And what was important about that, we saw strength in seats, both new seats and expansion seats, as Satya talked about. And usage doesn’t directly impact revenue, but of course, it indirectly does as people get more and more value added. And also, price per seat was actually quite good. We still have a good signal for value.

Microsoft’s management is seeing AI scaling laws continue to show up in both pre-training and inference-time compute, and both phenomena have been observed internally at Microsoft for years; management has seen gains of 2x in price performance for each new hardware generation, and 10x for each new model generation

AI scaling laws continue to compound across both pretraining and inference time compute. We ourselves have been seeing significant efficiency gains in both training and inference for years now. On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimizations. 
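To make the compounding concrete, here is a small illustrative calculation. The assumption that one hardware generation and one model generation land in the same period is mine, purely for illustration:

```python
# Compounding the efficiency gains management cites: >2x price-performance
# per hardware generation and >10x per model generation. If one of each
# lands per period, the cost of serving a fixed workload falls ~20x each time.

hardware_gain = 2    # per hardware generation, from the call
model_gain = 10      # per model generation, from the call

cost = 1.0           # normalized cost of a fixed inference workload
for generation in range(1, 4):
    cost /= hardware_gain * model_gain
    print(f"After combined generation {generation}: {cost:.6f}x of original cost")
# After 3 combined generations, the same workload costs ~1/8,000th as much.
```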

Microsoft’s management is balancing across training and inference in the buildout of Microsoft’s AI capacity; the buildout going forward will be governed by revenue growth and capability growth; Microsoft’s Azure data center capacity is expanding in line with both near-term and long-term demand signals; Azure has more than doubled its capacity in the last 3 years, and added a record amount of capacity in 2024; Microsoft’s data centres uses both in-house as well as 3rd-party chips

Much as we have done with the commercial cloud, we are focused on continuously scaling our fleet globally and maintaining the right balance across training and inference as well as geo distribution. From now on, it’s a more continuous cycle governed by both revenue growth and capability growth thanks to the compounding effects of software-driven AI scaling laws and Moore’s Law…

…Azure is the infrastructure layer for AI. We continue to expand our data center capacity in line with both near-term and long-term demand signals. We have more than doubled our overall data center capacity in the last 3 years, and we have added more capacity last year than any other year in our history. Our data centers, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next-generation AI workloads. We continue to take advantage of Moore’s Law and refresh our fleet as evidenced by our support of the latest from AMD, Intel, NVIDIA, as well as our first-party silicon innovation from Maia, Cobalt, Boost and HSM.

Microsoft’s management is seeing growth in raw storage, database services, and app platform services as AI apps scale, with an example being Azure OpenAI apps that run on Azure databases and Azure App Services

We are seeing new AI-driven data patterns emerge. If you look underneath ChatGPT or Copilot or enterprise AI apps, you see the growth of raw storage, database services and app platform services as these workloads scale. The number of Azure OpenAI apps running on Azure databases and Azure App Services more than doubled year-over-year, driving significant growth in adoption across SQL, Hyperscale and Cosmos DB.

OpenAI has made a new large Azure commitment; OpenAI’s APIs run exclusively on Azure; management is still very happy with the OpenAI partnership; Microsoft has ROFR (right of first refusal) on OpenAI’s Stargate project

As we shared last week, we are thrilled OpenAI has made a new large Azure commitment…

… And with OpenAI’s APIs exclusively running on Azure, customers can count on us to get access to the world’s leading models…

…[Question] I wanted to ask you about the Stargate news and the announced changes in the OpenAI relationship last week. It seems that most of your investors have interpreted this as Microsoft, for sure, remaining very committed to OpenAI’s success, but electing to take more of a backseat in terms of funding OpenAI’s future training CapEx needs. I was hoping you might frame your strategic decision here around Stargate.

[Answer] We remain very happy with the partnership with OpenAI. And as you saw, they have committed in a big way to Azure. And even in the bookings, what we recognized is just the first tranche of it. And so you’ll see, given the ROFR we have, more benefits of that even into the future. 

Microsoft’s management thinks Azure AI Foundry has best-in-class tooling and runtimes for users to build AI agents and access thousands of AI models; Azure AI Foundry already has 200,000 monthly active users after just 2 months; the models available on Azure AI Foundry include DeepSeek’s R1 model, and more than 30 industry-specific models from partners; Microsoft’s Phi family of SLMs (small language models) has over 20 million downloads

Azure AI Foundry features best-in-class tooling and runtimes to build agents, multi-agent apps, AIOps, and API access to thousands of models. Two months in, we already have more than 200,000 monthly active users, and we are well positioned with our support of both OpenAI’s leading models and the best selection of open source models and SLMs. DeepSeek’s R1 launched today via the model catalog on Foundry and GitHub with automated red teaming, content safety integration and security scanning. Our Phi family of SLMs has now been downloaded over 20 million times. And we also have more than 30 models from partners like Bayer, Paige.ai, Rockwell Automation and Siemens to address industry-specific use cases.

Microsoft’s management thinks Microsoft 365 Copilot is the UI (user interface) for AI; management is seeing accelerated adoption of Microsoft 365 Copilot across all deal sizes; majority of Microsoft 365 Copilot customers purchase more seats over time; daily users of Copilot more than doubled sequentially in 2024 Q4, while usage intensity grew 60% sequentially; more than 160,000 organisations have used Copilot Studio, creating more than 400,000 custom agents in 2024 Q4, up 2x sequentially; Microsoft’s data cloud drives Copilot as the UI for AI; management is seeing Copilot plus AI agents disrupting business applications; the initial seats for Copilot were for departments that could see immediate productivity benefits, but the use of Copilot then spreads across the enterprise

Microsoft 365 Copilot is the UI for AI. It helps supercharge employee productivity and provides access to a swarm of intelligent agents to streamline employee workflow. We are seeing accelerated customer adoption across all deal sizes as we win new Microsoft 365 Copilot customers and see the majority of existing enterprise customers come back to purchase more seats. When you look at customers who purchased Copilot during the first quarter of availability, they have expanded their seats collectively by more than 10x over the past 18 months. To share just one example, Novartis has added thousands of seats each quarter over the past year and now has 40,000 seats. Barclays, Carrier Group, Pearson and University of Miami all purchased 10,000 or more seats this quarter. And overall, the number of people who use Copilot daily, again, more than doubled quarter-over-quarter. Employees are also engaging with Copilot more than ever. Usage intensity increased more than 60% quarter-over-quarter, and we are expanding our TAM with Copilot Chat, which was announced earlier this month. Copilot Chat, along with Copilot Studio, is now available to every employee to start using agents right in the flow of work…

…More than 160,000 organizations have already used Copilot Studio, and they collectively created more than 400,000 custom agents in the last 3 months alone, up over 2x quarter-over-quarter…

…What is driving Copilot as the UI for AI as well as our momentum with agents is our rich data cloud, which is the world’s largest source of organizational knowledge. Billions of e-mails, documents and chats, hundreds of millions of Teams meetings and millions of SharePoint sites are added each day. This is the enterprise knowledge cloud. It is growing fast, up over 25% year-over-year…

…What we are seeing is Copilot plus agents disrupting business applications, and we are leaning into this. With Dynamics 365, we took share as organizations like Ecolab, Lenovo, RTX, TotalEnergies and Wyzant switched to our AI-powered apps from legacy providers…

…[Question] Great to hear about the strength you’re seeing in Copilot… Would love to get some color on just the common use cases that you’re seeing that give you that confidence that, that will ramp into monetization later.

[Answer] I think the initial sort of set of seats were for places where there’s more belief in immediate productivity, a sales team, in finance or in supply chain where there is a lot of, like, for example, SharePoint grounded data that you want to be able to use in conjunction with web data and have it produce results that are beneficial. But then what’s happening very much like what we have seen in these previous generation productivity things is that people collaborate across functions, across roles, right? For example, even in my own daily habit, it’s I go to chat, I use Work tab and get results, and then I immediately share using Pages with colleagues. I sort of call it think with AI and work with people. And that pattern then requires you to make it more of a standard issue across the enterprise. And so that’s what we’re seeing.

Azure grew revenue by 31% in 2024 Q4 (was 33% in 2024 Q3), with 13 points of growth from AI services (was 12 points in 2024 Q3); Azure AI services was up 157% year-on-year, with demand continuing to be higher than capacity;  Azure’s non-AI business had weaker-than-expected growth because of go-to-market execution challenges

Azure and other cloud services revenue grew 31%. Azure growth included 13 points from AI services, which grew 157% year-over-year, and was ahead of expectations even as demand continued to be higher than our available capacity. Growth in our non-AI services was slightly lower than expected due to go-to-market execution challenges, particularly with our customers that we primarily reach through our scale motions as we balance driving near-term non-AI consumption with AI growth.
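The two disclosed figures, 13 points of growth from AI services and 157% growth in those services, let us back out some quantities Microsoft did not report directly. The sketch below is implied arithmetic, not company disclosure:

```python
# Decomposing Azure's 31% growth from the two disclosed figures.

total_growth = 0.31   # Azure growth, disclosed
ai_points = 0.13      # growth contribution from AI services, disclosed
ai_growth = 1.57      # AI services growth, disclosed

prior_azure = 100.0                             # normalize prior-year revenue
ai_prior = ai_points * prior_azure / ai_growth  # ~8.3
ai_now = ai_prior * (1 + ai_growth)             # ~21.3
total_now = prior_azure * (1 + total_growth)    # 131.0
non_ai_growth = (total_now - ai_now) / (prior_azure - ai_prior) - 1

print(f"Implied AI share of Azure revenue: {ai_now / total_now:.0%}")  # ~16%
print(f"Implied non-AI Azure growth: {non_ai_growth:.0%}")             # ~20%
```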

For Azure’s expected growth of 31%-32% in 2025 Q1 (FY2025 Q3), management expects  contribution from AI services to grow from increased AI capacity coming online; management expects Azure’s non-AI services to still post healthy growth, but there are still impacts from execution challenges; management expects Azure to no longer be capacity-constrained by the end of FY2025 (2025 Q2); Azure’s capacity constraint has been in power and space

In Azure, we expect Q3 revenue growth to be between 31% and 32% in constant currency driven by strong demand for our portfolio of services. As we shared in October, the contribution from our AI services will grow from increased AI capacity coming online. In non-AI services, healthy growth continues, although we expect ongoing impact through H2 as we work to address the execution challenges noted earlier. And while we expect to be AI capacity constrained in Q3, by the end of FY ’25, we should be roughly in line with near-term demand given our significant capital investments…

…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year.

More than half of Microsoft’s cloud and AI-related capex in 2024 Q4 (FY2025 Q2) are for long-lived assets that will support monetisation over the next 15 years and more, while the other half are for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q1 (FY2025 Q3) and 2025 Q2 (FY2025 Q4) to be at similar levels as 2024 Q4 (FY2025 Q2); FY2026’s capex will grow at a lower rate than in FY2025; the mix of spend in FY2026 will shift to short-lived assets in FY2026; Microsoft’s long-lived infrastructure investments are fungible; the long-lived assets are land; the presence of Moore’s Law means that management does not want to invest too much capex in any one year because the hardware and software will become much better in just 1 year; management thinks Microsoft’s AI infrastructure should be continuously upgraded to take advantage of Moore’s Law; Microsoft’s AI capex growth going forward will be tagged to customer contract delivery; the fungibility of Microsoft’s AI infrastructure investments relates to not just inference (which is the primary use case), but also training, post training, and running the commercial cloud business

More than half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog…

…Next, capital expenditures. We expect quarterly spend in Q3 and Q4 to remain at similar levels as our Q2 spend. In FY ’26, we expect to continue investing against strong demand signals, including customer contracted backlog we need to deliver against across the entirety of our Microsoft Cloud. However, the growth rate will be lower than FY ’25 and the mix of spend will begin to shift back to short-lived assets, which are more correlated to revenue growth. As a reminder, our long-lived infrastructure investments are fungible, enabling us to remain agile as we meet customer demand globally across our Microsoft Cloud, including AI workloads…

…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year…

…You don’t want to buy too much of anything at one time because, in Moore’s Law, every year is going to give you 2x, your optimization is going to give you 10x. You want to continuously upgrade the fleet, modernize the fleet, age the fleet and, at the end of the day, have the right ratio of monetization and demand-driven monetization to what you think of as the training expense…

…I do think the way I want everyone to internalize it is that the CapEx growth is going through that cycle pivot, which is far more correlated to customer contract delivery, no matter who the end customer is…

…The other thing that’s sometimes missing is that when we say fungible, we mean not just the primary use, which we’ve always talked about, which is inference. But there is some training, post-training, which is a key component. And then there’s just running the commercial cloud, which at every layer, and for every modern AI app that’s going to be built, will be required. It will be required to be distributed, and it will be required to be global. And all of those things are really important because it then means you’re the most efficient. And so the investment you see us make in CapEx, you’re right, the front end has been this sort of infrastructure build that lets us really catch up not just on the AI infrastructure we needed, but think about that as the building itself, data centers, but also some of the catch-up we need to do on the commercial cloud side. And then you’ll see the pivot to more CPU and GPU.

Microsoft’s management thinks DeepSeek had real innovations, but those are going to be commoditized and become broadly used; management thinks that innovations in AI that reduce the cost of inference will drive more consumption and more apps being developed, and make AI more ubiquitous, which are all positive forces for Microsoft

I think DeepSeek has had some real innovations. And that is some of the things that even OpenAI found in o1. And so now, obviously, that all gets commoditized and it’s going to get broadly used. And the big beneficiaries of any software cycle like that are the customers, right? Because at the end of the day, if you think about it, what was the big lesson learned from client server to cloud? More people bought servers, except it was called cloud. And so when token prices fall, inference computing prices fall, that means people can consume more, and there will be more apps written. And it’s interesting to see that when I reference these models that are pretty powerful, it’s unimaginable to think that here we are at the beginning of ’25, where on the PC, you can run a model that required pretty massive cloud infrastructure. So that type of optimization means AI will be much more ubiquitous. And so therefore, for a hyperscaler like us, a PC platform provider like us, this is all good news as far as I’m concerned.

Microsoft has been reducing prices of GPT models over the years through inference optimizations

We are working super hard on all the software optimizations, right? I mean, just not the software optimizations that come because of what DeepSeek has done, but all the work we have done to, for example, reduce the prices of GPT models over the years in partnership with OpenAI. In fact, we did a lot of the work on the inference optimizations on it, and that’s been key to driving, right?

Microsoft’s management is aware that launching a frontier AI model that is too expensive to serve is useless

One of the key things to note in AI is that you don’t just launch the frontier model; if it’s too expensive to serve, it’s no good, right? It won’t generate any demand.

Microsoft’s management is seeing many different AI models being used for any one application; management thinks that there will always be a combination of different models used in any one application

What you’re seeing is effectively lots of models that get used in any application, right? When you look underneath even a Copilot or a GitHub Copilot or what have you, you already see lots of many different models. You build models. You fine-tune models. You distill models. Some of them are models that you distill into an open source model. So there’s going to be a combination…

…There’s a temporality to it, right? What you start with as a given COGS profile doesn’t need to be the end, because you continuously optimize for latency and COGS, and put in different models.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s Data Center revenue again had incredibly strong growth in 2024 Q4, driven by demand for the Hopper GPU computing platform and the ramping of the Blackwell GPU platform 

In the fourth quarter, Data Center revenue of $35.6 billion was a record, up 16% sequentially and 93% year-on-year, as the Blackwell ramp commenced and the H200 continued sequential growth.

Blackwell’s sales exceeded management’s expectations and its ramp is the fastest product ramp in NVIDIA’s history; it is common for Blackwell clusters to start with 100,000 GPUs or more and NVIDIA has started shipping for multiple such clusters; management architected Blackwell for inference; Blackwell has 25x higher token throughput and 20x lower cost for AI reasoning models compared to the H100; Blackwell has an NVLink domain that handles the growing complexity of inference at scale; management is seeing great demand for Blackwell for inference, with many of the early GB200 (GB200 is based on the Blackwell family of GPUs) deployments earmarked for inference; management expects NVIDIA’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; management expects a significant ramp of Blackwell in 2025 Q1; the Blackwell Ultra, the next generation of GPUs within the Blackwell family, is slated for introduction in 2025 H2; the system architecture between Blackwell and Blackwell Ultra is exactly the same

In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company’s history, unprecedented in its speed and scale…

…With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size…

…Blackwell was architected for reasoning AI inference. Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the H100. Its revolutionary transformer engine is built for LLM and mixture-of-experts inference. And its NVLink domain delivers 14x the throughput of PCIe Gen 5, ensuring the response time, throughput and cost efficiency needed to tackle the growing complexity of inference at scale…

…Blackwell has great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture…

…As Blackwell ramps, we expect gross margins to be in the low 70s. Initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as they race to build out Blackwell infrastructure. When fully ramped, we have many opportunities to improve the cost and gross margin will improve and return to the mid-70s, late this fiscal year…

…Continuing with its strong demand, we expect a significant ramp of Blackwell in Q1…

…Blackwell Ultra is second half…

…The next train is on an annual rhythm: Blackwell Ultra, with new networking, new memory and, of course, new processors, and all of that is coming online…

…This time, between Blackwell and Blackwell Ultra, the system architecture is exactly the same. It was a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to an NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. That was quite a challenging transition. But the next transition will slot right in. Blackwell Ultra will slot right in.

NVIDIA’s management sees post-training and model customisation as demanding orders of magnitude more compute than pre-training

The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pretraining.

NVIDIA’s management is seeing accelerating demand for NVIDIA GPUs for inference, driven by test-time scaling and new reasoning models; management thinks reasoning models require 100x more compute per task than one-shot inference models; management is hopeful that future generation of reasoning models will require millions of times more compute; management is seeing that the vast majority of NVIDIA’s compute today is inference

Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI o3, DeepSeek-R1 and Grok 3. Long thinking reasoning AI can require 100x more compute per task compared to one-shot inferences…

…The amount of tokens generated, the amount of inference compute needed, is already 100x more than the one-shot examples and one-shot capabilities of large language models in the beginning. And that’s just the beginning. The idea that the next generation could require thousands of times more compute, and that hopefully extremely thoughtful, simulation-based and search-based models could require hundreds of thousands or even millions of times more compute than today, is in our future…

……The vast majority of our compute today is actually inference and Blackwell takes all of that to a new level.

Companies such as ServiceNow, Perplexity, Microsoft, and Meta are using NVIDIA’s software and GPUs to achieve lower costs and/or better performance with their inference workloads

ServiceNow tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature. Perplexity sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM. Microsoft Bing achieved a 5x speed up at major TCO savings for Visual Search across billions of images with NVIDIA TensorRT and acceleration libraries…

…Meta’s cutting-edge Andromeda advertising engine runs on NVIDIA’s Grace Hopper Superchip serving vast quantities of ads across Instagram, Facebook applications. Andromeda harnesses Grace Hopper’s fast interconnect and large memory to boost inference throughput by 3x, enhanced ad personalization and deliver meaningful jumps in monetization and ROI.

NVIDIA has driven a 200x reduction in inference costs in the last 2 years

We’ve driven a 200x reduction in inference costs in just the last 2 years.
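Annualized, and assuming a constant rate of improvement (my assumption, not NVIDIA’s), that works out to roughly 14x per year:

```python
# Annualizing the cited 200x inference-cost reduction over 2 years,
# under a constant-rate assumption used purely for illustration.
total_reduction = 200
years = 2
annual = total_reduction ** (1 / years)
print(f"Implied cost reduction: ~{annual:.0f}x per year")  # ~14x
```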

Large cloud service providers (CSPs) were half of NVIDIA’s Data Centre revenue in 2024 Q4, and up nearly 2x year-on-year; large CSPs were the first to stand up Blackwell systems

In Q4, large CSPs represented about half of our data center revenue, and these sales increased nearly 2x year-on-year. Large CSPs were some of the first to stand up Blackwell with Azure, GCP, AWS and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI. 

Regional clouds increased as a percentage of NVIDIA’s Data Center revenue in 2024 Q4, driven by AI data center build outs globally; management is seeing countries across the world building AI ecosystems

Regional cloud hosting of NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory build-outs globally and rapidly rising demand for AI reasoning models and agents, where we’ve launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand…

…Countries across the globe are building their AI ecosystems, and demand for compute infrastructure is surging. France’s EUR 200 billion AI investment and the EU’s EUR 200 billion InvestAI initiative offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years.

NVIDIA’s revenue from consumer internet companies tripled year-on-year in 2024 Q4

Consumer Internet revenue grew 3x year-on-year, driven by an expanding set of generative AI and deep learning use cases. These include recommender systems, vision-language understanding, synthetic data generation, search and agentic AI.

NVIDIA’s revenue from enterprises nearly doubled year-on-year in 2024 Q4, partly with the help of agentic AI demand

Enterprise revenue increased nearly 2x year-on-year on accelerating demand for model fine-tuning, RAG and agentic AI workflows, and GPU-accelerated data processing.

NVIDIA’s management has introduced NIMs (NVIDIA Inference Microservices) focused on AI agents and leading AI agent platform providers are using these tools

We introduced the NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection, and product supply chain and inventory management. Leading AI agent platform providers, including SAP and ServiceNow, are among the first to use the new models.

Healthcare companies are using NVIDIA’s AI products to power healthcare innovation

Health care leaders, IQVIA, Illumina and Mayo Clinic as well as ARC Institute are using NVIDIA AI to speed drug discovery, enhance genomic research and pioneer advanced health care services with generative and agentic AI.

Hyundai will be using NVIDIA’s technologies for the development of AVs (autonomous vehicles); NVIDIA’s automotive revenue had strong growth year-on-year and sequentially in 2024 Q4, driven by ramp in AVs; automotive companies such as Toyota, Aurora, and Continental are working with NVIDIA to deploy AV technologies; NVIDIA’s AV platform has passed 2 of the automotive industry’s foremost authorities for safety and cybersecurity

 At CES, Hyundai Motor Group announced it is adopting NVIDIA technologies to accelerate AV and robotics development and smart factory initiatives…

…Now moving to Automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year-on-year…

…Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis. At CES, we announced that Toyota, the world’s largest automaker, will build its next-generation vehicles on NVIDIA Orin running the safety-certified NVIDIA DriveOS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA DRIVE Thor. Finally, our end-to-end autonomous vehicle platform NVIDIA DRIVE Hyperion has passed industry safety assessments by TÜV SÜD and TÜV Rheinland, 2 of the industry’s foremost authorities for automotive-grade safety and cybersecurity. NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.

NVIDIA’s management has introduced the NVIDIA Cosmos World Foundation Model platform for the continued development of autonomous robots; Uber is one of the first major technology companies to adopt the NVIDIA Cosmos World Foundation Model platform

At CES, we announced the NVIDIA Cosmos World Foundation Model Platform. Just as language foundation models have revolutionized language AI, Cosmos is a physical AI platform to revolutionize robotics. Leading robotics and automotive companies, including ridesharing giant Uber, are among the first to adopt the platform.

As a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China is well below levels seen prior to the US government’s export controls; management expects the Chinese market to be very competitive

Now as a percentage of total data center revenue, data center sales in China remained well below levels seen at the onset of export controls. Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive.

NVIDIA’s networking revenue declined sequentially in 2024 Q4, but the networking attach rate to GPUs remains robust at over 75%; NVIDIA is transitioning to NVLink 72 with Spectrum-X (Spectrum-X is NVIDIA’s Ethernet networking solution); management expects networking revenue to resume growing in 2025 Q1; management sees AI requiring a new class of networking, which the company’s NVLink, Quantum InfiniBand, and Spectrum-X networking solutions provide; large AI data centers, including OpenAI’s Stargate project, will be using Spectrum-X

Networking revenue declined 3% sequentially. Our networking attached to GPU compute systems is robust at over 75%. We are transitioning from small NVLink 8 with InfiniBand to large NVLink 72 with Spectrum-X. Spectrum-X and NVLink Switch revenue increased and represents a major new growth vector. We expect networking to return to growth in Q1. AI requires a new class of networking. NVIDIA offers NVLink Switch systems for scale-up compute. For scale out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments. Spectrum-X enhances the Ethernet for AI computing and has been a huge success. Microsoft Azure, OCI, CoreWeave and others are building large AI factories with Spectrum-X. The first Stargate data centers will use Spectrum-X. Yesterday, Cisco announced integrating Spectrum-X into their networking portfolio to help enterprises build AI infrastructure. With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.

NVIDIA’s management is seeing 3 scaling laws at play in the development of AI models, namely pre-training scaling, post-training scaling, and test-time compute scaling

There are now multiple scaling laws. There’s the pre-training scaling law, and that’s going to continue to scale, because we have multimodality, and we have data that came from reasoning that is now used to do pretraining. Then the second is the post-training scaling law, using reinforcement learning from human feedback, reinforcement learning from AI feedback, and reinforcement learning with verifiable rewards. The amount of computation you use for post-training is actually higher than pretraining. And it’s kind of sensible, in the sense that while you’re using reinforcement learning, you can generate an enormous amount of synthetic data or synthetically generated tokens. AI models are basically generating tokens to train AI models. And that’s post-training. And the third part, the part that you mentioned, is test-time compute or reasoning, long thinking, inference scaling. They’re all basically the same ideas. And there you have chain of thought, you have search.

NVIDIA’s management thinks the popularity of NVIDIA’s GPUs stems from its fungibility across all kinds of AI model architectures and use cases; NVIDIA’s management thinks that NVIDIA GPUs have an advantage over the ASIC (application-specific integrated circuit) AI chips developed by others because of (1) the general-purpose nature of NVIDIA GPUs, (2) NVIDIA’s rapid product development roadmap, (3) the software stack developed for NVIDIA GPUs that is incredibly hard to replicate

The question is how do you design such an architecture? Some of it — some of the models are auto regressive. Some of the models are diffusion-based. Some of it — some of the times you want your data center to have disaggregated inference. Sometimes it is compacted. And so it’s hard to figure out what is the best configuration of a data center, which is the reason why NVIDIA’s architecture is so popular. We run every model. We are great at training…

…When you have a data center that allows you to configure and use your data center based on are you doing more pretraining now, post training now or scaling out your inference, our architecture is fungible and easy to use in all of those different ways. And so we’re seeing, in fact, much, much more concentration of a unified architecture than ever before…

…[Question] We heard a lot about custom ASICs. Can you kind of speak to the balance between custom ASIC and merchant GPU?

[Answer] We build very different things than ASICs, in some ways completely different, and in some areas we intersect. We’re different in several ways. One, NVIDIA’s architecture is general: whether you’ve optimized for autoregressive models or diffusion-based models or vision-based models or multimodal models or text models, we’re great at all of it, because our architecture is flexible and our software stack ecosystem is so rich that we’re the initial target of most exciting innovations and algorithms. And so by definition, we’re much, much more general than narrow…

…The third thing I would say is that our performance and our rhythm is so incredibly fast. Remember that these data centers are always fixed in size. They’re fixed in size or they’re fixed in power. And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues. And so if you have a 100-megawatt data center, and the performance or the throughput in that 100-megawatt data center is 4x or 8x higher, your revenues for that data center are 4x or 8x higher. And the reason that is so different from data centers of the past is because AI factories are directly monetizable through the tokens they generate. And so the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation reasons and capturing the fast ROI…

…The last thing that I would say is that the software stack is incredibly hard. Building an ASIC is no different than what we do. We build a new architecture. And the ecosystem that sits on top of our architecture is 10x more complex today than it was 2 years ago. And that’s fairly obvious because the amount of software that the world is building on top of our architecture is growing exponentially and AI is advancing very quickly. So bringing that whole ecosystem on top of multiple chips is hard.
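The throughput-to-revenue arithmetic in the excerpt above is easy to restate as a back-of-envelope calculation. All the constants below (throughput per megawatt, token pricing) are my own illustrative assumptions, not NVIDIA figures.

```python
# A power-constrained AI factory: revenue scales linearly with perf per watt.
SITE_POWER_MW = 100          # fixed power budget of the data center
TOKENS_PER_SEC_PER_MW = 1e6  # assumed baseline throughput per megawatt
PRICE_PER_M_TOKENS = 2.0     # assumed dollars earned per million tokens served

def annual_revenue(perf_per_watt_multiple: float) -> float:
    tokens_per_sec = SITE_POWER_MW * TOKENS_PER_SEC_PER_MW * perf_per_watt_multiple
    seconds_per_year = 365 * 24 * 3600
    return tokens_per_sec * seconds_per_year / 1e6 * PRICE_PER_M_TOKENS

for multiple in (1, 2, 4, 8):
    print(f"{multiple}x perf/watt -> ${annual_revenue(multiple):,.0f} per year")
```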

NVIDIA’s management thinks that only consumer AI and search currently have well-developed AI use cases, and the next wave will be agentic AI, robotics, and sovereign AI

We’ve really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software. The next wave is coming, agentic AI for enterprise, physical AI for robotics and sovereign AI as different regions build out their AI for their own ecosystems. And so each one of these are barely off the ground, and we can see them.

NVIDIA’s management sees the upcoming Rubin family of GPUs as being a big step-up from the Blackwell family

The next transition will slot right in. Blackwell Ultra will slot right in. We’ve also already revealed and been working very closely with all of our partners on the click after that. And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition. And again, we’re going to provide a big, huge step-up.

NVIDIA’s management sees AI as having the opportunity to address a larger part of the world’s GDP than any other technology has ever had

No technology has ever had the opportunity to address a larger part of the world’s GDP than AI. No software tool ever has. And so this is now a software tool that can address a much larger part of the world’s GDP more than any time in history.

NVIDIA’s management sees customers still actively using older families of NVIDIA GPUs because of the high level of programmability that CUDA has

People are still using Voltas and Pascals and Amperes. And the reason for that is because CUDA is so programmable. One of the major use cases right now is data processing and data curation. You find a circumstance that an AI model is not very good at. You present that circumstance to a vision language model; let’s say it’s a car. The vision language model actually looks at the circumstance and says, “This is what happened and I wasn’t very good at it.” You then take that response and you prompt an AI model to go find, in your whole lake of data, other circumstances like that, whatever that circumstance was. And then you use an AI to do domain randomization and generate a whole bunch of other examples. And then from that, you can go train the model. And so you could use the Amperes to go and do data processing and data curation and machine learning-based search. And then you create the training data set, which you then present to your Hopper systems for training. Each one of these architectures is CUDA-compatible, so everything runs on everything. But if you have infrastructure in place, then you can put the less intensive workloads onto the installed base of the past. All of our GPUs are very well employed.
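That curation loop can be sketched in a few lines. This is a hypothetical illustration (every class and method name below is a stand-in of mine, not a real NVIDIA API), but it shows why the mining and curation stages can run on the older installed base while training runs on the newest chips.

```python
# Hypothetical sketch of the data-curation loop described above. The point: the
# mining and curation stages are CUDA workloads an older installed base (e.g.
# Ampere) can serve, freeing the newest GPUs (e.g. Hopper) for training.

class VisionLanguageModel:
    def describe(self, clip: str) -> str:
        # A real VLM would summarize the failure, e.g. "cut-in during heavy rain".
        return "rain" if "rain" in clip else "clear"

class DataLake:
    def __init__(self, clips: list[str]):
        self.clips = clips
    def search(self, query: str) -> list[str]:
        # Stand-in for machine-learning-based search over the fleet's video lake.
        return [c for c in self.clips if query in c]

def domain_randomize(clip: str, n: int = 3) -> list[str]:
    # Synthesize variants of the corner case (lighting, weather, actors, ...).
    return [f"{clip}-variant{i}" for i in range(n)]

def curate(failure_clip: str, lake: DataLake, vlm: VisionLanguageModel) -> list[str]:
    description = vlm.describe(failure_clip)                       # what went wrong
    similar = lake.search(description)                             # mine similar cases
    synthetic = [v for c in similar for v in domain_randomize(c)]  # expand the set
    return similar + synthetic                                     # new training data

lake = DataLake(["clip-rain-01", "clip-sun-02", "clip-rain-03"])
print(curate("clip-rain-01", lake, VisionLanguageModel()))
```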

Paycom Software (NYSE: PAYC)

Paycom’s management rolled out an AI agent to its service team six months ago, and has since seen improved immediate response rates to clients and a reduction in service tickets of more than 25% from a year ago; Paycom’s AI agent is driving internal efficiencies, higher client satisfaction, and higher Net Promoter Scores

Paycom’s AI agent, which was rolled out to our service team 6 months ago, utilizes our own knowledge-based semantic search model to provide faster responses and help our clients more quickly and consistently than ever before. As responses continuously improve over time, our client interactions become more valuable, and we connect them faster to the right solution. As a result, we are seeing improved immediate response rates and have eliminated service tickets by over 25% compared to a year ago…

…With automations like AI agent, we are realizing internal efficiencies, driving increasing client satisfaction and seeing higher Net Promoter Scores.
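As a generic illustration of what a knowledge-base semantic search does (this is not Paycom’s actual model, and word overlap below stands in for real embedding similarity), a client’s query gets matched to the closest knowledge-base article:

```python
# Generic illustration of knowledge-base search: score each article against the
# query and return the closest match. Jaccard word overlap is a crude stand-in
# for the cosine similarity a real embedding model would provide.

def tokens(text: str) -> set[str]:
    return set(text.lower().replace(",", "").split())

def similarity(query: str, article: str) -> float:
    q, a = tokens(query), tokens(article)
    return len(q & a) / len(q | a)

def search(query: str, knowledge_base: list[str]) -> str:
    return max(knowledge_base, key=lambda article: similarity(query, article))

kb = [
    "How to update direct deposit details",
    "Submitting expense reports",
    "Resetting a forgotten password",
]
print(search("employee forgot password, cannot log in", kb))  # -> password article
```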

PayPal (NASDAQ: PYPL)

One of the focus areas for PayPal’s management in 2025 will be raising efficiency with the help of AI

Fourth is efficiency and effectiveness. In 2024, we reduced headcount by 10%. We made deliberate investments in AI and automation, which are critical to our future. This year, we are prioritizing the use of AI to improve the customer experience and drive efficiency and effectiveness within PayPal.

PayPal’s management sees AI being a huge opportunity for PayPal given the company’s volume of data; PayPal is using AI on its customer-facing side to more efficiently process customer support cases and interactions with customers (PayPal Assistant has been rolled out and it has cut down phone calls and active events for PayPal); PayPal is using AI to personalise the commerce journey for consumers; PayPal is also using AI for back-office productivity and risk decisions

[Question] The ability to use AI for more operating efficiency. And are those initiatives that are requiring some incremental investment near term? Or are you already seeing sort of a positive ROI from that?

[Answer] AI is opening up a huge opportunity for us. First, at our scale, we saw 26 billion transactions on our platform last year. We have a massive data set that we are actively working and investing in to be able to drive our effectiveness and efficiency…

First, on the customer-facing side, we’re leveraging AI to really become more efficient in our support cases and how we interact with our customers. We see tens of millions of support cases every year, and we’ve rolled out our PayPal Assistant, which is now really cutting down phone calls and active events that we have. 

We also are leveraging AI to personalize the commerce journey, and so working with our merchants to be able to understand and create this really magical experience for consumers. When they show up at a checkout, it’s not just a static button anymore. This really can become a dynamic, personalized button that starts to understand the profile of the consumer, the journey that they’ve been on, perhaps across merchants, and be able to enable a reward or a cash-back offer in the moment or even a Buy Now, Pay Later offer in a dynamic experience…

In addition, we also are looking at our back office, not just on the engineering and employee productivity side, but also in things like our risk decisions. We see billions and billions of risk decisions that often, to be honest, were very manual in the past. We’re now leveraging AI to be able to understand globally what the nature of these risk decisions is and how we automate these across both risk models as well as even just ensuring that customers get the right response at the right time in an automated fashion.

Salesforce (NYSE: CRM)

Salesforce ended 2024 (FY2025) with $900 million in Data Cloud and AI ARR (annual recurring revenue), up 120% from a year ago; management has never seen products grow at this rate before, especially Agentforce

We ended this year with $900 million in Data Cloud and AI ARR. It grew 120% year-over-year. We’ve never seen products grow at these levels, especially Agentforce.

Salesforce’s management thinks building digital labour (AI agents) is a much bigger market than just building software

I’m sure you saw those ARK slides that got released over the weekend where she said that she thought this digital labor revolution, which is really like kind of what we’re in here now, this digital labor revolution, this looks like it’s anywhere from a few trillion to $12 trillion. I mean, I kind of agree with her. I think this is much, much bigger than software. I mean, for the last 25 years, we’ve been doing software to help our customers manage their data. That’s very exciting. I think building software that kind of prints and deploys digital workers is more exciting.

Salesforce’s unified platform, under one piece of code, combining customer data and an agentic platform, is what gives Agentforce its accuracy; Agentforce already has 3,000 paying customers just 90 days after going live; management thinks Agentforce is unique in the agentic capabilities it is delivering; Salesforce is Customer Zero for Agentforce; Agentforce has already resolved 380,000 service requests for Salesforce, with an 84% resolution rate, and just 2% of requests require human escalation; Agentforce has accelerated Salesforce’s sales-quoting cycles by more than 75% and increased AE (account executive) capacity while driving productivity up 7% year-over-year; Agentforce is helping Salesforce engage more than 50 leads per day, freeing up the sales team for higher-value conversations; management wants every Salesforce customer to be using Agentforce; Data Cloud is at the heart of Agentforce; management is seeing customers across every industry deploying Agentforce; management thinks Salesforce’s agentic technology works better than that of many other providers, and that other providers are just whitewashing their technology with the “agent” label; Agentforce is driving growth across Salesforce’s portfolio; Salesforce has prebuilt over 170 specialised Agentforce industry skills; Agentforce’s 3,000 customers come from a diverse set of industries

Our formula now for our customers is this idea that we have these incredible Customer 360 apps, this incredible Data Cloud, and this incredible agentic platform. These are the 3 layers. But it is a deeply unified platform, just one piece of code, and that’s what makes it so unique in this market…

…It’s this idea that it’s a deeply unified platform with one piece of code all wrapped in a beautiful layer of trust. And that’s what gives Agentforce this incredible accuracy that we’re seeing…

…Just 90 days after it went live, we already have 3,000 paying Agentforce customers who are experiencing unprecedented levels of productivity, efficiency and cost savings. No one else is delivering at this level of capability…

…We’re seeing some amazing results on Salesforce as Customer Zero for Agentforce. Our digital labor force is resolving tens of thousands of customer service inquiries, freeing our human employees to focus on the most nuanced issues and customer relationships. We’re seeing tremendous momentum and success stories emerge as we execute our vision to make every company, every single company, every customer of ours, an Agentforce company, that is, we want every customer to be an Agentforce customer…

…We also continued phenomenal growth with Data Cloud this year, which is the heart of Agentforce. Data Cloud is the fuel that powers Agentforce and our customers are investing in it…

…We’re seeing customers deploy Agentforce across every industry…

…You got to be aware of the false agent, because the false agent is out there, where people use the word agent to try to whitewash everything. But the reality is there are the real agents and there are the false agents, and we’re very fortunate to have the real stuff going on here. So we’ve got a lot more groundbreaking AI innovation coming…

…Today, we’re live on Agentforce across service and sales, our business technology organization, customer support and more. And the results are phenomenal. Since launching on our Salesforce help portal in October, Agentforce has autonomously handled 380,000 service requests, achieving an incredible 84% resolution rate and only 2% of the requests require human escalation. And we’re using Agentforce for quoting, accelerating our quoting cycles by more than 75%. In Q4, we increased our AE [account executive] capacity while still driving productivity up 7% year-over-year. Agentforce is transforming how we do outbound prospecting, already engaging more than 50 leads per day with personalized outreach and timely follow-ups, freeing up our teams to focus on high-value conversations. Our reps are participating in thousands of sales coaching training sessions each month…

…Agentforce is revolutionizing how our customers work by bringing AI-powered insights and actions directly into the workflows across the Customer 360 applications. This is driving strong growth across our portfolio. Sales Cloud and Service Cloud both achieved double-digit growth again in Q4. We’re seeing fantastic momentum with Slack, with customers like ZoomInfo, Remarkable and MIMIT Health using Agentforce and Slack to boost productivity…

…We’ve prebuilt over 170 specialized Agentforce industry skills and a team of 400 specialists, supporting transformations across sectors and geographies…

…We closed more than 3,000 paid Agentforce deals in the quarter. As customers continue to harness the value of AI deeply embedded across our unified platform, it is no surprise that these customers average nearly 4 clouds. And these customers came from a diverse set of industries with more than half in technology, manufacturing, financial services and HLS.

Lennar, the USA’s largest homebuilder, has been a Salesforce customer for 8 years and is deploying Agentforce to fulfill its management’s vision of selling all kinds of new products; jewelry company Pandora, an existing Salesforce customer, is deploying Agentforce with the aim of handling 30%-60% of its service cases with it; pharmaceutical giant Pfizer is using Agentforce to augment its sales teams; Singapore Airlines is now a customer of Agentforce and wants to deliver service through it; Goodyear is using Agentforce to automate and increase the effectiveness of its sales efforts; Accenture is using Agentforce to coach its sales team and expects to achieve higher win rates; Deloitte is using Agentforce and expects to achieve significant productivity gains

We’ve been working with Lennar, the nation’s largest homebuilder. And most of you know Lennar is really an incredible company, and they’ve been a customer of ours for about 8 years…

…You probably know Stuart Miller, Jon Jaffe, amazing CEOs. And those co-CEOs called me and said, “Listen, these guys have done a hackathon around Agentforce. We’ve got 5 use cases. We see incredible opportunities on our margin, incredible opportunities in our revenue. And do you have our back if we’re going to deploy this?” And we said, “Absolutely. We’ve deployed it ourselves,” which is the best evidence that this is real. And they are just incredible, their vision as a homebuilder providing 24/7 support, sales leads through all their digital channels. They’re able to sell all kinds of new products. I think they’re going to sell mortgages and insurance and all kinds of things to their customers. And the cool thing is they’re using our sales product, our service product, marketing, MuleSoft, Slack, Tableau, they use everything. But they are able to leverage it all together by realizing that just by turning it on, they get this incredible Agentforce capability…

…I don’t know how many of you know about Pandora. If you’ve been to a shopping center, you will see the Pandora store. You walk in, they have this gorgeous jewelry. They have these cool charm bracelets. They have amazing products. And if you know their CEO, Alex, he’s absolutely phenomenal…

…They’re in 100 countries. They employ 37,000 people worldwide. And Alex has this great vision to augment their employees with digital labor, this idea that whether you’re on their website or in their store, or whatever it is, they’re going to be able to do so much more with Agentforce. First of all, they already use Commerce Cloud. So if you’ve been to pandora.com and bought their products (and it’s completely worthwhile, by the way; it’s great), you can experience our Commerce Cloud, but it’s deeply integrated with our Service Cloud and with Data Cloud. It’s the one unified platform approach. And now they’re just flipping the switch, turning agents on, and they’re planning to deliver 30% to 60% of their service cases with Agentforce. That is awesome. And I really love Alex’s vision of what’s possible…

…The last customer I really want to hit on, which I’m so excited about, is Pfizer. And Albert is an incredible CEO. They are doing unbelievable things. They’ve been a tremendous customer. But now they’re really going all in on our Life Sciences Cloud…

…And with Agentforce sales agents, for example with Pfizer: they’ve got 20,000 customer-facing employees. That is just a radical extension for them with agents…

…I’m sure a lot of you — like, I have flown in Singapore Air. You know what? It’s a great airline. The CEO, Goh, is amazing. And he has a huge vision that also came out of Dreamforce, where — they’ve already delivered probably the best service of any airline in the world — they want to deliver it through agents. So whether you’re doing it with service or sales or marketing or commerce or all the different things that Singapore Air is doing with us, you’re going to be able to do this right on Singapore Air…

…Goodyear is partnering with us on their transformation, using Agentforce to automate and increase the effectiveness of their sales efforts. With Agentforce for Field Service, Goodyear will be able to reduce repair time by assisting technicians with answers to vehicle-related questions and autonomously scheduling field tech appointments…

…Accenture is using Agentforce Sales Coach, which provides personalized coaching and recommendations for sales teams, which is expected to lead to higher win rates. And Deloitte is projecting significant productivity gains and saved workforce hours as they roll out Agentforce over the next few years.

Salesforce’s management expects modest revenue contribution from Agentforce in 2025 (FY2026); contribution from Agentforce is expected to be more meaningful in 2026 (FY2027)

Starting with full fiscal year ’26. We expect revenue of $40.5 billion to $40.9 billion, growth of approximately 7% to 8% year-over-year in nominal and constant currency. And for subscription and support revenue, we expect growth of approximately 9% year-over-year in constant currency…

…On Agentforce, we are incredibly excited about the customer momentum we are seeing. However, the adoption cycle is still early as we focus on deployment with our customers. As a result, we are assuming a modest contribution to revenue in fiscal ’26. We expect the momentum to build throughout the year, driving a more meaningful contribution in fiscal ’27.

Salesforce has long had a mix of per-seat and consumption pricing models; for now, Agentforce is a consumption product, but management sees Agentforce evolving to a mix of per-seat and consumption pricing models; there was a customer that bought Agentforce in 2024 Q4 (FY2025 Q4) along with other Salesforce products, signing a $7 million Agentforce contract and a $13 million contract for the other products; based on the early days of engagement with Agentforce customers, management sees significant future upside to Salesforce’s pricing structure; Agentforce’s pricing will also take into account whether Agentforce brings other human-based clouds along with it; Agentforce is currently creating some halo around Salesforce’s other products

We’ve kind of started the company out with the per user pricing model, and that’s about humans. We price per human, so you’re kind of pricing per human. And then we have products, though, that are also in the consumption world as well. And of course, those started in the early days, things like our sandboxes, even things like our Commerce Cloud, even our e-mail marketing product, our Marketing Cloud. These are consumption-based products we’ve had for years…

…Now we have these kind of products that are for agents also, and agents are also a consumption model. So when we look at our Data Cloud, for example, that’s a consumption product. Agentforce is a consumption product. But it’s going to be a mix. It’s going to be a mix between what’s going on with our customers with how many humans do they have and then how many agents are they deploying…

…In the quarter, we did a large transaction with a large telecommunications company… we’re rebuilding this telecommunications company. So it’s Sales Cloud, it’s Service Cloud, it’s Marketing Cloud. It’s all of our core clouds, but then also it’s Agentforce. And the Agentforce component, I think, was maybe $7 million in the transaction. So she was buying $7 million of Agentforce. She bought $13 million in our products for humans, and I think that was about $20 million in total…

…We will probably move in the near future from conversations, as we priced most of our initial deals, to universal credits. It will allow our customers far more flexibility in the way they transact with us. But we see this as a significant upside to our pricing structures going forward. And that’s what we’ve seen in the early days of our engagement with customers…

…Here’s a transaction that you’re doing, let’s say, a customer comes in, they’re very interested in building an agentic layer on their company, is that bringing other human-based clouds along with it?…

…[Question] Is Agentforce having a bit of a halo effect around some of your other products, meaning, as we are on the journey to get more monetization from Agentforce, are you seeing pickups or at least higher activity levels in some of your other products?

[Answer] That’s exactly right. And we’re seeing it in the way that our customers are using our technology, new ideas, new workflows, new engagements. We talked about Lennar as an example, their ability to handle leads after hours that they weren’t able to get back to or respond to in a quick time frame are now able to touch and engage with those leads. And that, of course, flows into their Salesforce automation system. And so we are seeing this halo effect with our core technology. It is making every single one of our core apps better as they deliver intelligence, underpinning these applications.

Salesforce’s management sees the combination of apps, data, and agents as the winning combination in an AI-world; management disputes Microsoft’s narrative that software apps will become a dumb database layer in an AI-dominated world, because it is the combination of apps, data, and agents that is important

I don’t know any company that’s 100% agents. I don’t know of any company that doesn’t need automation for its humans. I don’t know of any company that doesn’t need a data cloud where it needs a consistent common data repository for all of its agents to gain their intelligence. And I don’t know of any company that’s not going to need an agentic layer. And that idea of having apps, data and agents, I think, is going to be the winning combination…

…[Question] As part of that shift to agentic technology, there’s been a lot of debate about the SaaS technology and the business model. The SaaS tech stack that you built and pioneered, how does that fit into the agentic world? Is there a risk that SaaS just becomes a CRUD database?

[Answer] I’ve heard that Microsoft narrative, too. So I watched the podcast you watched, and that’s a very interesting idea. Here’s how I look at it, which is, I believe there is kind of a holy trinity here of AI CRM, which is the apps, the data and the agents. And these three things have to kind of work together. And I kind of put my money where our mouth is where we kind of built it and we delivered it. And you can see the 380,000 conversations that we had as point of evidence here in the last 90 days on our service and with a very high resolution rate of 84%. You can go to help.salesforce.com, and you can see that today.

Now Microsoft has had Copilot available for, I think, about 2 years or more than 2 years. And I know that they’re the reseller of OpenAI and they’ve invested, they kind of repackaged this ChatGPT, whatever. But where on their side are they delivering agents? Where in their company have they done this? Are they a best practice? Because I think that while they can say such a thing, do they have humans and agents working together to create customer success? Are they rebalancing their workforce with humans and agents? I think that it’s a very interesting point that, yes, the agentic layer is very important, but it doesn’t operate by itself. It operates with data, with a Data Cloud that has to be federated through your company, to all your data sources. And humans, we’re still here.

Salesforce’s management is seeing Agentforce deliver such tremendous efficiency in Salesforce’s customer support function that they may rebalance some customer-support roles into other roles; management is currently seeing AI coding tools improve the productivity of Salesforce’s engineering team by 30% and thinks even more productivity can be found; management will not be expanding Salesforce’s engineering team this year, but will grow the sales team

We really are seeing tremendous efficiency with help.salesforce.com. So we may see the opportunity to rebalance some of those folks into sales and marketing and other functions…

…We definitely have seen a lot of efficiency with engineering and with some of the new tools that I’ve seen, especially some of these high-performance coding tools. One of the key members of my staff who’s here in the room with us has just showed me one of his new examples of what we’re able to do with these coding tools, pretty awesome. And we’re not going to hire any new engineers this year. We’re seeing 30% productivity increase on engineering. And we’re going to really continue to ride that up…

…We’re going to grow sales pretty dramatically this year. Brian has got a big vision for how to grow the sales organization, probably another 10% to 20%, I hope, this year, because we’re seeing incredible levels of demand.

Salesforce’s management thinks that AI agents is one of the catalysts to drive GDP growth

So if you want productivity to go up and you want GDP to grow up and you want growth, I think that digital labor is going to be one of the catalysts to make that happen.

Shopify (NASDAQ: SHOP)

Shopify launched its first AI-powered search integration with Perplexity in 2024

Last year, we… launched our first AI-powered search integration with Perplexity, enabling new ways for buyers to find merchants.

One of Shopify’s management’s focus areas in 2025 is to continue embracing AI by investing more in Sidekick and other AI capabilities that help merchants launch and grow faster; management wants to shift Shopify towards producing goal-oriented software; management believes Shopify is well-positioned as a leader for commerce in an AI-driven world

We will continue to embrace the transformative potential of AI. This technology is not just a part of the future, it is redefining it. We’ve anticipated this. So we’re already transforming Shopify into a platform where users and machines work seamlessly together. We plan to deepen our investment in Sidekick and other AI capabilities to help not just brand-new merchants to launch, but also to help larger merchants scale faster and drive greater productivity. Our efforts to shift towards more goal-oriented software will further help to streamline operations and improve decision-making. This focus on embracing new ways of thinking and working positions us not only as the platform of choice today, but also as a leader for commerce in the AI-driven era with a relentless focus on cutting-edge technology.

Shopify’s management believes Shopify will be one of the major net beneficiaries in the AI era as the company is leveraging AI really well, such as its partnerships with Perplexity and OpenAI

I actually think Shopify will very much be one of the major net beneficiaries in this new AI era. I think we are widely recognized as one of the best companies that foster long-term partnership. And so when it comes to partnership in AI, whether it’s Perplexity, where we’re now powering their search results with incredible product across the Shopify product catalog or OpenAI where we’re using — we have a direct set of their APIs to help us internally, we are really leveraging it as best as we can.

In terms of utilising AI, Shopify’s management sees 2 angles; the 1st angle is Shopify using AI to help merchants with mundane tasks and allow merchants to focus only on the things they excel at; the 2nd angle is Shopify using AI internally to make developers and customer-support teams more effective (with customer-support teams, Shopify is using AI to handle low-quality conversations with customers)

[Question] A question in regards to AI and the use of AI internally. Over the last year or so, you’ve made significant investments. Where are you seeing it operationally having the most impact? And then what has been the magnitude of productivity gains that you’ve seen?

[Answer] We think about it in sort of 2 ways. The first is from a merchant perspective: how can we make our merchants way more successful and get them to do things faster and more effectively. So things like Sidekick, media editor, Shopify Inbox and Semantic Search: these are things that every merchant should want, not just when they’re getting started, but also when scaling their business. And those are things that are only available from Shopify. So we’re trying to make some of the more mundane tasks far easier to do and get merchants to focus on things that only the merchants can do. And I think that’s an important aspect of what Shopify will bring…

…Internally, however, this is where it gets really interesting, because not only can we use it to make our developers more effective, but also, if you think about our support organization, we can now ensure that our support team is actually having very high-quality conversations with merchants, whereas a lot of low-quality conversations, things like configuring a domain or a CNAME or a username and password issue, can be handled really elegantly by AI.
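A minimal sketch of the routing idea Shopify describes, with all topic labels invented for illustration: low-complexity, well-bounded issues go to the AI assistant and everything else to a human:

```python
# Hypothetical sketch of support routing (not Shopify's implementation): route
# low-complexity, well-bounded topics to an AI assistant and the rest to humans.
ROUTINE_TOPICS = {"domain setup", "cname record", "password reset"}

def route(ticket_topic: str) -> str:
    # Topic labels here are invented for illustration.
    return "ai_assistant" if ticket_topic.lower() in ROUTINE_TOPICS else "human_support"

for topic in ("CNAME record", "Chargeback dispute"):
    print(f"{topic} -> {route(topic)}")
```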

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s AI accelerator revenue more than tripled in 2024 and accounted for close to mid-teens percent of overall revenue; even off that higher base, management expects AI accelerator revenue to double in 2025; management sees really strong AI-related demand in 2025

Revenue from AI accelerators, which we now define as AI GPU, AI ASICs and HBM controller for AI training and inference in the data center, accounted for close to mid-teens percent of our total revenue in 2024. Even after more than tripling in 2024, we forecast our revenue from AI accelerator to double in 2025 as the strong surge in AI-related demand continues…

…[Question] Try to get a bit more clarity on the cloud growth for 2025. I think, longer term, without a doubt, the technology definitely has lots of potential for demand opportunities, but I think — if we look at 2025 and 2026, I think there could be increasing uncertainties coming from maybe [indiscernible] spending, macro or even some of the supply chain challenges. And so I understand the management just provided a pretty good guidance for this year for sales to double. And so if you look at that number, do you think there is still more upside than downside as we go through 2025?

[Answer] I certainly hope there is upside, but I hope I get — my team can supply enough capacity to support it. Does that give you enough hint? 

TSMC’s management saw a mixed year of recovery for the global semiconductor industry in 2024 with strong AI-related demand but mild recovery in other areas

2024 was a mixed year of recovery for the global semiconductor industry. AI-related demand was strong, while other applications saw only a very mild recovery as macroeconomic conditions weighed on consumer sentiment and end-market demand.

TSMC’s management expects a mid-40% revenue CAGR from AI accelerators in the 5 years starting from 2024 (the previous forecast was for a 50% CAGR, but off a lower base); management expects AI accelerators to be the strongest growth driver for TSMC’s overall HPC platform and overall revenue over the next few years

Underpinned by our technology leadership and broader customer base, we now forecast the revenue growth from AI accelerators to approach a mid-40% CAGR for the 5-year period starting off the already higher base of 2024. We expect AI accelerators to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.
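Some quick arithmetic of my own on what these growth rates compound to over the 5 years from 2024, using the mid-40% AI accelerator CAGR above and the roughly 20% overall CAGR management discusses next:

```python
# Quick arithmetic (mine, not TSMC's) on what these CAGRs compound to by 2029.
def multiple(cagr: float, years: int = 5) -> float:
    return (1 + cagr) ** years

print(f"mid-40% CAGR -> ~{multiple(0.45):.1f}x AI accelerator revenue by 2029")
print(f"20% CAGR     -> ~{multiple(0.20):.1f}x total revenue by 2029")
```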

TSMC’s management expects a 20% revenue CAGR in USD terms in the 5 years starting from 2024, driven by growth across all its platforms; management thinks that in the next few years, TSMC’s smartphone and PC end-markets will have higher silicon content and faster replacement cycles, driven by AI-related demand, which will in turn drive robust demand for TSMC’s chip manufacturing services; the AI-related demand in the smartphone and PC end-markets is related to edge AI

Looking ahead, as the world’s most reliable and effective capacity provider, TSMC is playing a critical and integral role in the global semiconductor industry. With our technology leadership, manufacturing excellence and customer trust, we are well positioned to address the growth from the industry megatrend of 5G, AI and HPC with our differentiated technologies. For the 5-year period starting from 2024, we expect our long-term revenue growth to approach a 20% CAGR in U.S. dollar term, fueled by all 4 of our growth platform, which are smartphone, HPC, IoT and automotive…

…[Question] I believe that 20%, starting from an already very high base in 2024, is a really good long-term objective, but just wondering, aside from the strong AI demand, what’s your view on the growth of traditional applications like PC and smartphone, particularly for this year?

[Answer] This year is still mild growth for PC and smartphone, but everything is AI-related, all right? So you can start to see why we have confidence to give you a close to 20% CAGR in the next 5 years. AI: you look at a smartphone, they will put AI functionality inside, and not only that, the silicon content will be increased. In addition to that, the replacement cycle will actually be shortened. And also they need to go into very advanced technology because, if you want to put a lot of functionality inside a small chip, you need a much more advanced technology to put those [indiscernible]. Put it all together: even if smartphone unit growth is almost low single digit, the silicon content, the replacement cycle and the technology migration give us more growth than just unit growth; similar reason for PC…

…On the edge AI, in our observation, we found out that our customers have started to put more neural processing inside. And so we estimate 5% to 10% more silicon being used. [ Can it be ] every year 5% to 10%? Definitely no, right? So they will move to the next node, the technology migration. That’s also to TSMC’s advantage. Not only that, I also say that the replacement cycle, I think, will be shortened because, when you have a new toy with AI functionality inside it, everybody wants to replace their smartphone, replace their PCs. And [ I count that one ] much more than a mere 5% increase.

TSMC’s upcoming A16 process technology is best suited for specific HPC (high-performance computing) products, which in practice means AI-related workloads

We will also introduce A16 featuring Super Power Rail, or SPR, as a separate offering. TSMC’s SPR is an innovative, best-in-class backside power delivery solution that is first in the industry to incorporate a novel backside metal scheme that preserves gate density and device width flexibility to maximize product benefit. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power, or a 15% to 20% power improvement at the same speed, and an additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery networks. Volume production is scheduled for the second half of 2026.

TSMC’s management thinks that the US government’s latest AI restrictions will only have a minimal impact on the company’s business

[Question] Overnight, the U.S. seems to have put up a new framework restricting China’s AI business, right? So I’m wondering whether that will create some business impact on your China business.

[Answer] We don’t have all the analysis yet, but the first look is that it is not significant. It’s manageable. That means that, for my customers who are being restricted [ or something ], we are applying for the special permit for them. And we have confidence that they will get some permission, so long as they are not in the AI area, okay, especially the automotive industry. Or even, you talk about crypto mining, yes.

TSMC’s management does not want to reveal the level of demand for AI-related ASICs (application-specific integrated circuits) from the cloud hyperscalers, but they are confident that the demand is real, and that the cloud hyperscalers will be working with TSMC as they all need leading-edge technology for their AI-related ASICs

[Question] Broadcom’s CEO recently laid out a large SAM for AI hyperscalers building out custom silicon. I think he was talking about million clusters from each of the customers he has in the next 2 or 3 years. What’s TSMC’s perspective on all this? 

[Answer] I’m not going to answer the question of the specific number, but let me assure you that, whether it’s ASIC or it’s graphics, they all need a very leading-edge technology. And they’re all working with TSMC, okay. And the second one is: is the demand real, as per the numbers my customers have given? I will say that the demand is very strong.

AI makes up all of the current demand for CoWoS (chip on wafer on substrate) capacity that TSMC’s management is seeing, but they think non-AI-related demand for CoWoS will come in the near future from CPUs and servers; there are rumours of a cut in orders for CoWoS, but management is not seeing any cuts; when asked whether HBM (high bandwidth memory), rather than CoWoS, is the key constraint on AI demand, management would not comment on other suppliers, but noted that TSMC’s own capacity remains very tight; advanced packaging was over 8% of TSMC’s revenue in 2024 and will be over 10% in 2025; advanced packaging’s gross margin is better than before, but still lower than the corporate average

[Question] When can we see non-AI application such as server, smartphone or anything else can be — can start to adopt CoWoS capacity in case there is any fluctuation in the AI demand?

[Answer] Today is all AI focused. And we have a very tight capacity and cannot even meet customers’ need, but whether other products will adopt this kind of CoWoS approach, they will. It’s coming and we know that it’s coming.

[Question] When?

[Answer] It’s coming… On the CPU and on the server chip. Let me give you a hint…

…[Question] About your CoWoS and SoIC capacity ramp. Can you give us more color this year? Because recently there seemed to be a lot of market noises. Some add orders. Some cut orders, so I would like to see your view on the CoWoS ramp.

[Answer] That’s a rumor, I assure you. We are working very hard to meet the requirements of my customers’ demand, so “cut the order”, that won’t happen. We actually continue to increase, so again I will say: we are working very hard to increase the capacity…

…[Question] A question on AI demand. Is there a scenario where HBM is more of a constraint on the demand, rather than CoWoS which seems to be the biggest constraint at the moment? 

[Answer] I don’t comment on other suppliers, but I know that we have a very tight capacity to support the AI demand. I don’t want to say I’m the bottleneck. TSMC is always working very hard with customers to meet their requirements…

…[Question] So we have observed an increasing margin of advanced packaging. Could you remind us the CoWoS contribution of last year? And do you expect the margin to kind of approach the corporate average or even exceed it after the so-called — the value reflection this year?

[Answer] Overall, advanced packaging accounted for over 8% of revenue last year, and it will account for over 10% this year. In terms of gross margins, it is better than before but still below the corporate average.

AI makes up all of the current demand for SoIC (system on integrated chips) that TSMC’s management is seeing, but they think non-AI-related demand for SoIC will come in the future

Today, SoIC’s demand is still focused on AI applications, okay? For PC or for other areas, it’s coming, but not right now.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks Tesla’s FSD (Full Self Driving) technology has grown up a lot in the past few years; management thinks that car-use can grow from 10 hours per week to 55 hours per week with autonomous vehicles; autonomous vehicles can be used for both cargo and people delivery; FSD currently works very well in the USA, and will soon work well everywhere else; the constraint Tesla is currently experiencing with autonomous vehicles is in battery packs; FSD makes traffic commuting safer; FSD is currently on Version 13, and management believes Version 14 will have a significant step-improvement; Tesla has launched the Cortex training cluster at Gigafactory Austin, and it has played a big role in advancing FSD; Tesla will launch unsupervised FSD in June 2025 in Austin; Tesla already has thousands of its cars driving autonomously daily in its factories in Fremont and Texas, and Tesla will soon do that in Austin and elsewhere in the world; Tesla’s solution for autonomous vehicles is a generalised AI solution which does not need high-precision maps; Tesla’s unsupervised FSD already works outside of Austin even though it is launching only in Austin in June 2025, as management just wants to be cautious; management thinks Tesla will release unsupervised FSD in many parts of the USA by end-2025; management’s safety-standard for FSD is for it to be far, far, far superior to humans; management thinks Tesla will have unsupervised FSD in almost every market this year

For a lot of people, their experience of Tesla autonomy is like, if it’s even a year old, if it’s even 2 years old, it’s like meeting someone when they’re a toddler and thinking that they’re going to be a toddler forever. But obviously they’re not going to be a toddler forever. They grow up. But if their last experience was like, “Oh, FSD was a toddler,” it’s like, well, it’s grown up now. Have you seen it? It walks and talks…

…My #1 recommendation for anyone who doubts is simply: try it. Have you tried it? When was the last time you tried it? The only people who are skeptical are those who have not tried it.

So a passenger car typically has only about 10 hours of utility per week out of 168, a very small percentage. Once that car is autonomous, my rough estimate is that it is in use for at least 1/3 of the hours per week, so call it 50, maybe 55 hours of the week. And it can be used for both cargo delivery and people delivery…

That same asset, these things that already exist, with no incremental cost change, just a software update, now has 5x or more the utility than it currently has. I think this will be the largest asset value increase in human history…

…So look, the reality of autonomy is upon us. And I repeat my advice, try driving the car or let it drive you. So now it works very well in the U.S., but of course, it will, over time, work just as well everywhere else…

…Our current constraint is battery packs this year but we’re working on addressing that constraint. And I think we will make progress in addressing that constraint…

…So a bit more on full self-driving. Our Q4 vehicle safety report shows continued year-over-year improvement in safety for vehicles. Whether somebody has supervised full self-driving turned on or not, the safety differences in the numbers are gigantic…

…People have seen the immense improvement with version 13 and its incremental releases, and version 14 is going to be yet another step beyond that, one that is very significant. We launched the Cortex training cluster at Gigafactory Austin, which was a significant contributor to FSD advancement…

…We’re going to be launching unsupervised full self-driving as a paid service in Austin in June. So I talked to the team. We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June…

…We already have Teslas operating autonomously with unsupervised full self-driving at our factory in Fremont, and we’ll soon be doing that at our factory in Texas. So thousands of cars every day are driving with no one in them at our Fremont factory in California, and we’ll soon be doing that in Austin and then elsewhere in the world with the rest of our factories, which is pretty cool. And the cars aren’t just driving to exactly the same spot. The cars are actually programmed with what lane they need to park into to be picked up for delivery. So they drive from the factory end of line to their destination parking spot to be picked up for delivery to customers, and they do this reliably every day, thousands of times a day. It’s pretty cool…

…Our solution is a generalized AI solution. It does not require high-precision maps of a locality. So we just want to be cautious. It’s not that it doesn’t work beyond Austin. In fact, it does. We just want to put our toe in the water, make sure everything is okay, then put a few more toes in the water, then put a foot in the water, with the safety of the general public and those in the car as our top priority…

…I think we will most likely release unsupervised FSD in many regions of the country of the U.S. by the end of this year…

…We’re looking for a safety level that is significantly above the average human driver. It’s not like a little bit safer than human; way safer than human. So the standard has to be very high because the moment there’s any kind of accident with an autonomous car, that immediately gets worldwide headlines, even though about 40,000 people die every year in car accidents in the U.S., and most of them don’t even get mentioned anywhere. But if somebody [ scrapes a shed ] with an autonomous car, it’s headline news…

…But I think we’ll have unsupervised FSD in almost every market this year, limited simply by regulatory issues, not technical capability. 
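Musk’s utilization arithmetic from earlier in these excerpts checks out on the back of an envelope; the figures below just restate his rough numbers:

```python
# Restating Musk's utilization arithmetic (his rough figures, my restatement).
HOURS_PER_WEEK = 168
human_use_hours = 10                       # typical passenger-car use per week
autonomous_use_hours = HOURS_PER_WEEK / 3  # his estimate: in use 1/3 of the time

print(f"human-driven utilization: {human_use_hours / HOURS_PER_WEEK:.0%}")
print(f"autonomous utilization:   {autonomous_use_hours / HOURS_PER_WEEK:.0%}"
      f" (~{autonomous_use_hours:.0f} hours a week)")
print(f"utility multiple: ~{autonomous_use_hours / human_use_hours:.1f}x")
```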

Tesla’s management thinks the compute needed for Optimus will be 10x that of autonomous vehicles, even though a humanoid robot has 1,000x more uses than an autonomous vehicle; management has seen the cost of training Optimus (or AI, in general) dropping dramatically over time; management thinks Optimus can produce $10 trillion in revenue, and that will still make the training needs of $500 billion in compute a good investment; management realises their revenue projections for Optimus sound insane, but they believe in them (sounds like a startup founder trying to get funding from VCs); it’s impossible for management to predict the exact timing for Optimus because everything about the robot has to be designed and built from the ground up by Tesla (nothing could be bought off-the-shelf), but management thinks Tesla will build a few thousand Optimus robots by end-2025, and that these robots will be doing useful work in Tesla’s factories in the same timeframe; management’s goal is to ramp up Optimus production at a far faster rate than anything has ever been ramped; Optimus will even be able to do delicate things such as playing the piano and threading a needle; Optimus is still not design-locked for production; Tesla might be able to deliver Optimus to external clients by 2026 H2; management is confident that at scale, Optimus will be cheaper to produce than a car

The training needs for Optimus, our Optimus humanoid robot, are probably ultimately at least 10x what is needed for the car, at least to get to the full range of useful roles. You can ask, how many different roles are there for a humanoid robot versus a car? A humanoid robot probably has 1,000x more uses and more complex things than a car. That doesn’t mean the training scales by 1,000x, but it’s probably 10x…

…It doesn’t mean Tesla’s going to spend like $500 billion in training compute, because we will obviously train Optimus to do enough tasks to match the output of the Optimus robots. And obviously, the cost of training is dropping dramatically with time. But it’s one of those things where, I think, long-term, Optimus has the potential to be north of $10 trillion in revenue; it’s really bananas. So you can obviously afford a lot of training compute in that situation. In fact, even $500 billion in training compute in that situation would be quite a good deal…

…With regard to Optimus, obviously, I’m making these revenue predictions that sound absolutely insane, I realize that. But they are — I think they will prove to be accurate…

…There’s a lot of uncertainty on the exact timing because it’s not like a train arriving at the station for Optimus. We are designing the train and the station in real time while also building the tracks. It’s sort of like, why didn’t the train arrive exactly at 12:05? We’re literally designing the train and the tracks and the station in real time, so how can we predict this thing with absolute precision? It’s impossible. The normal internal plan calls for roughly 10,000 Optimus robots to be built this year. Will we succeed in building 10,000 exactly by the end of December this year? Probably not. But will we succeed in making several thousand? Yes, I think we will. Will those several thousand Optimus robots be doing useful things by the end of the year? Yes, I’m confident they will do useful things…

…Our goal is to ramp Optimus production faster than maybe anything has ever been ramped, meaning aspirationally an order of magnitude ramp per year. Now if we aspire to an order of magnitude ramp per year, perhaps we only end up with half an order of magnitude per year. But that’s the kind of growth that we’re talking about. It doesn’t take very many years before we’re making 100 million of these things a year, if you go up by, let’s say, a factor of 5x per year…

This is an entirely new supply chain, entirely new technology. There’s nothing off the shelf to use. We tried desperately with Optimus to use any existing motors, any actuators, sensors. Nothing worked for a humanoid robot, at any price. We had to design everything from physics-first principles to work for a humanoid robot, with the most sophisticated hand that has ever been made before, by far. Optimus will also be able to play the piano and to thread a needle. I mean, this is a level of precision no one has been able to achieve…

…Optimus is not design-locked. So when I say we’re designing the train as it’s going down the tracks, we’re redesigning the train as it’s going down the tracks while redesigning the tracks and the train stations…

…I think probably with version 2 (this is a very rough guess because there’s so much uncertainty here) we start delivering Optimus robots to companies that are outside of Tesla in maybe the second half of next year, something like that…

I’m confident that at 1 million units a year, the production cost of Optimus will be less than $20,000. If you compare the complexity of Optimus to the complexity of a car, the total mass and complexity of Optimus is much less than that of a car.
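As a sanity check on the ramp arithmetic above (my numbers, assuming a 5,000-unit base in 2025 and a sustained 5x annual ramp), it takes about seven years of such ramps to cross 100 million units a year:

```python
# Sanity check (my arithmetic, not Tesla's) of the ramp math above: starting
# from several thousand units a year and ramping 5x per year, when does output
# cross 100 million units a year?
units, year = 5_000, 2025  # assumed starting point: "several thousand" in 2025
while units < 100_000_000:
    units *= 5
    year += 1
print(f"~{year}: {units:,} units/year")  # seven 5x ramps gets there, around 2032
```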

The buildout of Cortex accelerated the rollout of FSD Version 13; Tesla has invested $5 billion so far in total AI-related capex

The build-out of Cortex actually accelerated the rollout of FSD Version 13. Our cumulative AI-related CapEx, including infrastructure, has so far been approximately $5 billion.

Tesla’s management is seeing significant interest from some car manufacturers in licensing Tesla’s FSD technology; management thinks that car manufacturers without FSD technology will go bust; management will only entertain situations where the volume would be very high

What we’re seeing is at this point, significant interest from a number of major car companies about licensing Tesla full self-driving technology…

…We’re only going to entertain situations where the volume would be very high. Otherwise, it’s just not worth the complexity. And we will not burden our engineering team with laborious discussions with other engineering teams until we obviously have unsupervised full self-driving working throughout the United States. I think the interest level from other manufacturers to license FSD will be extremely high once it is obvious that unless you have FSD, you’re dead.

Compared to Version 13, Version 14 of FSD will have a larger model size, longer context length, more memory, more driving context, and more data on tricky corner cases

[Question] What technical breakthroughs will define V14 of FSD, given that V13 already covered photon to control? 

[Answer] So, continuing to scale the model size a lot. We scaled a bunch in V13, but there’s still room to grow, so we’re going to continue to scale the model size. We’re going to increase the context length even more. The memory is sort of limited right now; we’re going to increase the amount of memory [indiscernible] minutes of context for driving. We’re going to add audio and handle emergency vehicles better. And add data on the tricky corner cases that we get from the entire fleet: any interventions or any kind of user intervention, we just add to the data set. So scaling in basically every axis: training compute, [ asset ] size, model size, model context, and also all the reinforcement learning objectives.

Tesla has difficulties training AI models for autonomous vehicles in China because the country does not allow Tesla to transfer training videos outside of China, while the US government does not allow Tesla to do training in China; a workaround Tesla has is to train on publicly available videos of streets in China; Tesla also built a simulator for its AI models to train on bus lanes in China because they are complicated

In China, which is a gigantic market, we do have some challenges because they currently don’t allow us to transfer training video outside of China, and the U.S. government won’t let us do training in China. So we’re in a bit of a bind there; it’s a bit of a quandary. So what we’re really doing to solve it is literally looking at videos of streets in China that are available on the Internet, and feeding that into our video training, so that publicly available video of street signs and traffic rules in China can be used for training, and then also putting it in a very accurate simulator. And so it will train using sim for bus lanes in China. Bus lanes in China, by the way, are one of our biggest challenges in making FSD work there, because the bus lanes are very complicated. There are literally hours of the day that you’re allowed to be there and not be there, and if you accidentally go into a bus lane at the wrong time, you get an automatic ticket instantly. So it was kind of a big deal, bus lanes in China. So we put that into our simulator and train on that; the car has to know what time of the day it is and read the sign. We’ll get this solved.
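To make the bus-lane logic concrete, here is a minimal Python sketch of the kind of time-dependent rule the car, or the simulator’s ground truth, has to evaluate. The restricted hours and the sign encoding here are my own illustrative assumptions, not Tesla’s actual implementation:

```python
from datetime import time

# Hypothetical encoding of a Chinese bus-lane sign: the lane is reserved
# for buses during the listed windows and open to private cars otherwise.
# The actual hours vary by city and street; these are illustrative.
BUS_ONLY_WINDOWS = [(time(7, 0), time(9, 0)), (time(17, 0), time(19, 0))]

def car_may_enter(now: time, windows=BUS_ONLY_WINDOWS) -> bool:
    """Return True if a private car may legally use the bus lane at `now`."""
    return not any(start <= now < end for start, end in windows)

# Entering at 08:30, inside the morning bus-only window, means an instant
# automatic ticket; at 10:00 the lane is open to cars.
assert not car_may_enter(time(8, 30))
assert car_may_enter(time(10, 0))
```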

Elon Musk knows LiDAR technology really well because he built a LiDAR system for SpaceX that is in use at the moment, but he thinks LiDAR is simply the wrong technology for autonomous vehicles because it has issues, and because humans drive vehicles simply with their eyes and their biological neural nets

[Question] You’ve said in the past about LiDAR, for EVs, that LiDAR is a crutch, a fool’s errand. I think you even told me once, even if it was free, you’d say you wouldn’t use it. Do you still feel that way?

[Answer] Obviously humans drive without shooting lasers out of their eyes, I mean, unless you’re Superman. But humans drive just with passive visual: humans drive with eyes and a neural net, a brain neural net, sort of biological. So the digital equivalent of eyes and a brain are cameras and digital neural nets, or AI. The entire road system was designed for passive optical and neural nets. That’s how the whole road system was designed and what everyone is expecting; that’s how we expect other cars to behave. So therefore, that is very obviously the solution for full self-driving, the generalized solution for full self-driving, as opposed to the very specific neighborhood-by-neighborhood solution, which is very difficult to maintain, and which is what our competitors are doing…

…LiDAR has a lot of issues. SpaceX’s Dragon docks with the space station using LiDAR; that’s a program that I personally spearheaded. I don’t have some fundamental bizarre dislike of LiDAR. It’s simply the wrong solution for driving cars on roads…

…I literally designed and built our own LiDAR. I oversaw the project, the engineering of the thing. It was my decision to use LiDAR on Dragon, and I oversaw that engineering project directly. So we literally designed and made a LiDAR to dock with the space station. If I thought it was the right solution for cars, I would do that, but it isn’t.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management continues to invest in AI and thinks that AI is game-changing for forecasting and insights on identity and measurement; Trade Desk’s AI efforts started in 2017 with Koa, but management sees much bigger opportunities today; management is asking every development team in Trade Desk to look for opportunities to introduce AI into Trade Desk’s platform; there are already hundreds of AI enhancements to Trade Desk’s platform that have been shipped or are going to be shipped in 2025

AI is providing next-level performance in targeting and optimization, but it is also particularly game-changing in forecasting and identity and measurement. We continue to look at our technology stack and ask, where can we inject AI and enhance our product and client outcomes? Over and over again, we are finding new opportunities to make AI investments…

…We started our ML and AI efforts in 2017 with the launch of Koa, but today, the opportunities are much bigger. We’re asking every scrum inside of our company to look for opportunities to inject AI into our platform. Hundreds of enhancements, recently shipped and coming in 2025, would not be possible without AI. We must keep the pedal to the metal, not to chest-thump on stages, which everyone else seems to be doing, but instead to produce results and win share.

Wix (NASDAQ: WIX)

Wix’s AI Website Builder was launched in 2024 and has driven stronger conversion and purchase behaviour from users; more than 1 million sites have been created and published with AI Website Builder; most new Wix users today are creating their websites through Wix’s AI tools and AI Website Builder and these users have higher rates of converting to paid subscribers

2024 was also the year of AI innovation. In addition to the significant number of AI tools introduced, we notably launched our AI Website Builder, the new generation of our previous AI site builder introduced in 2016. The new AI Website Builder continues to drive demonstrably stronger conversion and purchase behavior…

…Over 1 million sites have been created and published with the Website Builder…

…Most new users today are creating their websites through our AI-powered onboarding process and Website Builder which is leading to a meaningful increase in conversion of free users to paid subscriptions, particularly among Self Creators.

Wix’s management launched Wix’s first directly monetised AI product – AI Site Chat – in December 2024; AI Site Chat will help businesses converse with customers round the clock; users of AI Site Chat have free limited access, with an option to pay for additional usage; AI Site Chat’s preliminary results look very promising

In December, we also rolled out our first directly monetized AI product – the AI Site Chat…

…The AI Site-Chat was launched mid-December to Wix users in English, providing businesses with the ability to connect with visitors 24/7, answer their questions, and provide relevant information in real time, even when business owners are unavailable. By enhancing availability and engagement on their websites, the feature empowers businesses to meet the needs of their customers around the clock, ultimately improving the customer experience and driving potential sales. Users have free limited access with the option to upgrade to premium plans for additional usage…

…So if you’re a Wix customer, you can now install an AI-powered chat on your website, and this will handle customer requests, product inquiries and support requests. And again, it’s very early days and these are preliminary results, but it looks very promising.

AI agents and assistants are an important part of management’s product roadmap for Wix in 2025; Wix is testing (1) an AI assistant for its Wix Business Manager dashboard, and (2) Marketing Agent, a directly monetizable AI agent that helps users accomplish marketing tasks; Marketing Agent is the first of a number of specialised AI agents management will roll out in 2025; management intends to test monetisation opportunities with the new AI agents

AI remains a major part of our 2025 product roadmap with particular focus on AI-powered agents and assistants…

… Currently, we are testing our AI Assistant within the Wix Business Manager as well as our AI Marketing Agent.

The AI Assistant in the Wix Business Manager is a seamlessly integrated chat interface within the dashboard. Acting as a trusted aide, this assistant guides users through their management journey by providing answers to questions and valuable insights about their site. With its comprehensive knowledge, the AI Assistant empowers users to better understand and leverage available resources, assisting with site operations and business tasks. For instance, it can suggest content options, address support inquiries, and analyze analytics—all from a single entry point.

The AI Marketing Agent helps businesses to market themselves online by proactively generating tailored marketing plans that align with users’ goals and target audiences. By analyzing data from their website, the AI delivers personalized strategies to enhance SEO, create engaging content, manage social media, run email campaigns and optimize paid advertising—all with minimal effort from the user. This solution not only simplifies marketing but also drives Wix’s monetization strategy, seamlessly guiding users toward high-impact paid advertising and premium marketing solutions. As businesses invest in growing their online presence, Wix benefits through a share of ad spend and premium feature adoption—fueling both user success and revenue growth.

We will continue to release and optimize specialized AI agents that assist our users in building the online presence they envision. We are exploring various monetization strategies as we fully roll out these agents and adoption increases.

Wix’s management is seeing Wix’s gross margin improve because of AI integration in customer care

Creative Subscriptions non-GAAP gross margin improved to 85% in Q4’24 and to 84% for the full year 2024, up from 82% in 2023. Business Solutions non-GAAP gross margin increased to 32% in Q4’24 and to slightly above 30% for the full year 2024. Continued gross margin expansion is the product of multiple years of cost structure optimization and efficiencies from AI integration across our Customer Care operations.

Wix’s management believes the opportunity for Wix in the AI era is bigger than what came before

There are a lot of discussions and a lot of theories about it. But I really believe that the opportunity there is bigger than anything else, because what we have today is going to continue to dramatically evolve into something that is probably more powerful and more enabling for small businesses to be successful. Overall, the Internet has a tendency to do this every 10 years or so, right? In the ’90s, the Internet started and became HTML, then it became images, and then later on videos, and then it became mobile, right? And then it became interactive; everything became an application, kind of an application. And I think how websites will look in the AI universe is the next step, and I think there are a lot of exciting things we can offer our users there.

Visa (NYSE: V)

Visa is an early adopter of AI and management continues to drive adoption; Visa has seen material gains in engineering productivity; Visa has deployed AI in many functions, such as analytics, sales, finance, and marketing

We were very early adopters of artificial intelligence, and we continue to drive hard at the adoption of generative AI as we have for the last couple of years. So we’ve been working to embed AI and AI tooling into our company’s operations, I guess, broadly. We’ve seen material gains in productivity, particularly in our engineering teams. We’ve deployed AI tooling in client services, sales, finance, marketing, really everywhere across the company. And we were a very early adopter of applied AI in the analytics and modeling space, very early by like decades, we’ve been using AI in that space. So our data science and risk management teams have, at this point, decades of applied experience with AI, and they’re aggressively adopting the current generations of AI technology to enhance both our internal and our market-facing predictive and detective modeling capabilities. Our product teams are also aggressively adopting gen AI to build and ship new products.

Zoom Communications (NASDAQ: ZM)

Zoom AI Companion’s monthly active users (MAUs) grew 68% quarter-on-quarter; management has added new agentic AI capabilities to Zoom AI Companion; management will launch the Custom AI Companion add-on in April 2025; management will launch AI Companion for clinicians in March 2025; Zoom AI Companion is added into a low-end Zoom subscription plan at no added cost, and customers do not want to leave their subscriptions because of the added benefit of Zoom AI Companion; Zoom will be monetising Zoom AI Companion from April 2025 onwards through the Custom AI Companion add-on; the Custom AI Companion add-on would be $12 a seat when it’s launched in April 2025 and management thinks this price would provide a really compelling TCO (total cost of ownership) for customers; management thinks Custom AI Companion would have a bigger impact on Zoom’s revenue in 2026 (FY2027) than in 2025 (FY2026); see Point 28 for use cases for Custom AI Companion

Growth in monthly active users of Zoom AI Companion has accelerated to 68% quarter-over-quarter, demonstrating the real value AI is providing customers…

As part of AI Companion 2.0, we added advanced agentic capabilities, including memory, reasoning, orchestration and a seamless integration with Microsoft and Google services. In April, we’re launching Custom AI Companion add-on to automate workplace tasks through custom agents. This will personalize AI to fit customer needs, connect with their existing data, and work seamlessly with their third-party tools. We’re also enhancing Zoom Workplace for Clinicians with an upgraded AI Companion that will enable clinical note-taking capabilities and specialized medical features for healthcare providers starting in March…

…If you look at our low-end SMB customers and online buyers, AI Companion is part of that at no additional cost, which has made our service very sticky. And to give a very basic example, like meeting summary, right? It works so well that more and more customers see the value…

For the high end, for sure, we understand that today’s AI Companion, at no additional cost, we cannot monetize. However, in April, we are going to announce the Custom AI Companion for interested customers, which we can monetize…

…[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?

[Answer] In regards to your question about what are sort of the assumptions, or what’s the targeting in our [ head ], with the $12 Custom AI Companion SKU: I would say, starting with enterprise customers, obviously, the easiest place to sort of pounce is our own customer base, but it’s certainly not just limited to that. We’ll probably be sharing a lot more, I would say, at Enterprise Connect. But I would say we’ve assumed some degree of monetization in FY ’26, and I think you’ll see more of it in FY ’27. And we think that the $12 price point is going to be a really compelling TCO story for our customers; it’s differentiated from what others in the market are pricing now.

The Zoom Virtual Agent feature will soon be able to handle complex tasks

Zoom Virtual Agent will soon expand reasoning abilities to handle complex tasks while maintaining conversational context for more natural and helpful outcomes.

Zoom’s management believes Zoom is uniquely positioned to win in agentic AI for a few reasons, including Zoom having exceptional context from users’ ongoing conversations, and Zoom’s federated AI approach, where the company can use the best models for each task

We’re uniquely positioned to succeed in agentic AI for several reasons:

● Zoom is a system of engagement for our users with recent information in ongoing conversations. This exceptional context along with user engagement allows us to drive greater value for customers.

● Our federated AI approach lets us combine the best models for each task. We can use specialized small language models where appropriate, while leveraging larger models for more complex reasoning – driving both quality and cost efficiency

Zoom’s management is seeing large businesses want to use Zoom because of the AI features of its products

Take Contact Center, for example: why are we winning? Because of a lot of AI features, like AI Expert Assist, and a lot of features built into our quality management, and so on and so forth.

Zoom’s management sees Zoom’s AI business services as a great way to monetise AI

Take Contact Center, for example: why are we winning? Because of a lot of AI features, like AI Expert Assist, and a lot of features built into our quality management, and so on and so forth. But all those business services, that’s another great way for us to monetize AI.

Zoom’s management thinks Zoom’s cost of ownership with AI is lower than what competitors are offering

And I look at our AI Companion: all those AI Companion core features today are at no additional cost, right? And customers really like it because of the quality; it’s getting better and better every quarter and is very useful, right? Not like some other competitors, right? They talk about their AI strategy, and then customers realize that, wow, it’s very expensive. And the total cost of ownership is not getting better, because the cost of the value is not [ great ], but also it’s not [ free ] and they always try to increase price.

A good example of a use case for Custom AI Companion

[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?

[Answer] So regarding the Custom AI Companion use cases: at a high level, we give customers the ability to customize for their needs. I’ll give a few examples. One feature, like we have a Zoom service call video clip, and we are going to support the standard template, right? But how do you support every customer? They can have a customized template for each of their users, and this is a part of AI Companion Studio, right? And also all kinds of third-party integrations, right? If they prefer some of those third-party application integrations, with their data, with their knowledge, with the [ big scenery ], a lot of things, right? Each company is different and wants to customize, so we can leverage our AI Companion Studio to work together with the customer to support their needs and, at the same time, monetize.

Zoom’s management expects the cost from AI usage to increase and so that will impact Zoom’s margins in the future, but management is also building efficiencies to offset the higher cost of AI

[Question] As we think about a shift more towards AI contribution, aren’t we shifting more towards a consumption model rather than a seat model over time, why wouldn’t we see margin compression longer term?

[Answer] Around how to think about margins and business models, and why we don’t see compression: what I would say is that what we expect to see is similar to what you saw in FY ’25, which is that we’re seeing an obvious increase in cost from AI, and we have an ongoing, methodical efficiency list to offset it, and we certainly expect that broadly to continue into FY ’26. So I think we feel good about our ability to moderate that. There are other things we do more holistically where we can offset costs that are maybe not AI-related in our margins, things like [ colos ], et cetera, that we’ve talked about previously.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Microsoft, Paycom Software, PayPal, Salesforce, Shopify, TSMC, Tesla, The Trade Desk, Wix, Visa, and Zoom. Holdings are subject to change at any time.

Where Are US Stocks Headed In 2025?

Beware of bad forecasts (and the most honest forecast you can find)

We’re at the start of a new year in 2025, and there has been a deluge of forecasts in recent weeks for where the US stock market will end the year. 

For me, the most important thing to know about the forecasts is just how often they have been right. Unfortunately, the collective forecasting track record of even the USA’s biggest banks and investment firms has been poor.

Morgan Housel once studied their annual forecasts for the S&P 500 – a widely followed index for the US stock market – from 2000 to 2014. He found that a simple assumption of the S&P 500 going up by 9% a year (the 9% figure was chosen because it represented the index’s long-term annualised return) was more accurate than the forecasts provided by the banks and investment firms; the former was off by an average of 14.1 percentage points per year while the latter was off by 14.7 percentage points per year.

When thinking about the future return of stocks, Housel once wrote that it can be boiled down simply to the “dividend yield + earnings growth +/- change in the earnings multiple (valuations).” I agree, it really is that simple. The dividend yield and earnings growth can be estimated with a reasonable level of accuracy. What’s tricky here is the change in the earnings multiple. Housel explained:

“Earnings multiples reflect people’s feelings about the future. And there’s just no way to know what people are going to think about the future in the future. How could you?”

To compound the problem, over short periods of time, such as a year, it’s the change in the earnings multiple that has an overwhelming impact on how stock prices move. In Housel’s dataset when he was looking at market forecasts, 2002 was a year with one of the largest declines for the S&P 500 – it fell by more than 20%. According to data from economist and Nobel Laureate Robert Shiller, the S&P 500’s earnings actually grew by 12% in 2002. It was the decline in the index’s earnings multiple by 30% from 46 to 33 that led to the sharp drop in its price during the year. The forecasters were predicting that the S&P 500 would increase by a mid-teens percentage in price in 2002, which was close to the index’s earnings growth for the year – I believe what the forecasters failed to anticipate was the sharp drop in the earnings multiple. 

If you really need a forecast for where the US stock market will end up in 2025, check out the table below. It shows where the S&P 500 will be given various assumptions for its earnings growth and its earnings multiple. For reference, the index ended the year at a price level of 5,882 with a price-to-earnings (P/E) ratio of 28. If the S&P 500’s earnings fell by 20% in 2025 and the P/E ratio shrank to 5, we’d be looking at a price level of 840 and a disastrous 86% price decline; if earnings growth was 20%, and the P/E ratio expanded to 40, we’d be looking at a price level of 10,083, and a handsome gain of 71%. 
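For readers who want to recreate the table’s arithmetic themselves, here is a short Python sketch. It derives the index’s implied earnings from the two reference figures given above (a price level of 5,882 at a P/E of 28) and then prints a price for each combination of earnings growth and earnings multiple; the specific grid of growth rates and P/E ratios below is my own choice for illustration:

```python
# The table's arithmetic: price = implied EPS x (1 + earnings growth) x P/E.
# The starting EPS is implied by the S&P 500 ending 2024 at 5,882 with a
# P/E of 28; the grid of growth rates and multiples below is my own choice.
START_PRICE, START_PE = 5882, 28
eps = START_PRICE / START_PE  # implied earnings of roughly 210

growth_rates = [-0.20, -0.10, 0.00, 0.10, 0.20]  # assumed rows
pe_ratios = [5, 10, 20, 28, 40]                  # assumed columns

print("growth \\ P/E" + "".join(f"{pe:>8}" for pe in pe_ratios))
for g in growth_rates:
    row = "".join(f"{eps * (1 + g) * pe:>8.0f}" for pe in pe_ratios)
    print(f"{g:>+12.0%}{row}")

# Corner checks against the text: -20% growth at a P/E of 5 gives ~840
# (an ~86% decline); +20% growth at a P/E of 40 gives ~10,083 (+71%).
```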

The table contains a wide range of outcomes. But it’s possible for the S&P 500’s actual performance in 2025 to exceed the boundaries of the table. It’s hard to say where the limits are when it comes to the feelings of market participants. Nonetheless, of all the forecasts you’ve seen and are going to see about the US stock market for 2025, I’m confident the table in this article will be the most honest forecast you can find.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time. 

More Of The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2024’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management introduced multiple generative AI models in the Firefly family in 2024 and now has a generative video model; Adobe’s generative AI models are designed to be safe for commercial usage; the Firefly models are integrated across Adobe’s software products, which brings value to creative professionals across the world; Firefly has powered 16 billion generations (12 billion in 2024 Q2) since its launch in March 2023 and each month in 2024 Q3 has set a new record in generations; the new Firefly video model is in limited beta, but has already gathered massive customer interest (the model has driven a 70% increase in Premiere Pro beta users since its introduction) and will be generally available in early-2025; recent improvements to the Firefly models include 4x faster image generation; enterprises such as Tapestry and Pepsi are using Firefly Services to scale content production; Firefly is the foundation of Adobe’s AI-related innovation; management is using Firefly to drive top-of-funnel user acquisition for Adobe

2024 was also a transformative year of product innovation, where we delivered foundational technology platforms. We introduced multiple generative AI models in the Adobe Firefly family, including imaging, vector design and, most recently, video. Adobe now has a comprehensive set of generative AI models designed to be commercially safe for creative content, offering unprecedented levels of output quality and user control in our applications…

…The deep integration of Firefly across our flagship applications in Creative Cloud, Document Cloud, and Experience Cloud is driving record customer adoption and usage. Firefly-powered generations across our tools surpassed 16 billion, with every month this past quarter setting a new record…

…We have made major strides with our generative AI models with the introduction of Firefly Image Model 3 enhancements to our vector models, richer design models, and the all-new Firefly Video Model. These models are incredibly powerful on their own and their deep integration into our tools like Lightroom, Photoshop, Premiere, InDesign and Express have brought incredible value to millions of creative professionals around the world…

…The launch of the Firefly Video Model and its unique integration in Premiere Pro in limited public beta garnered massive customer interest, and we look forward to making it more broadly available in early 2025. This feature drove a 70% increase in the number of Premiere Pro beta users since it was introduced at MAX. Enhancements to Firefly image, vector, and design models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Premiere Pro and Adobe Express…

…Firefly Services adoption continued to ramp as enterprises such as Pepsi and Tapestry use it to scale content production, given the robust APIs and ease of creating custom models that are designed to be commercially safe…

…This year, we introduced Firefly Services. That’s been — that’s off to a great start. We have a lot of customers that are using that. A couple we talked about on the call include Tapestry. They’re using it for scaled content production. Pepsi, for their Gatorade brand, is enabling their customers to personalize any merchandise that they’re buying in particular, starting with Gatorade bottles. And these have been very, very productive for them, and we are seeing this leveraged by a host of other companies for everything from localization at scale to personalization at scale to user engagement or just raw content production at scale as well…

…You’re exactly right in terms of Firefly being a platform and a foundation that we’re leveraging across many different products. As we talked about, everything from Express and Lightroom, and even Acrobat on mobile for a broad base, but then also in our core Creative products: Photoshop, Illustrator, Premiere. And as we’ve alluded to a number of times on this call, with the introduction of video, even a stand-alone offer for Firefly that we think will be more valuable from a tiering perspective there. And then into Firefly Services, through APIs, in connection to GenStudio. So we are looking at leveraging the power of this AI foundation in all the activities…

…We see that when we invest in mobile and web, we are getting some very positive signals in terms of user adoption and user conversion rate. So we’re using Firefly very actively to do that.

Adobe’s management has combined content and data in Adobe GenStudio to integrate content creation with marketing, leading to an end-to-end content supply chain solution; the Adobe GenStudio portfolio has a new addition in Adobe GenStudio for Performance Marketing, which has seen strong customer demand since becoming generally available recently; management is expanding the go-to-market teams to sell GenStudio solutions that cut across the Digital Media and Digital Experience segments and early success has been found, with management expecting acceleration in this pipeline throughout FY2025 and beyond

We set the stage to drive an AI content revolution by bringing content and data together in Adobe GenStudio integrating high-velocity creative expression with enterprise activation. The release of Adobe GenStudio for performance marketing integrates Creative Cloud, Express, and Experience Cloud and extends our end-to-end content supply chain solution, empowering freelancers, agencies, and enterprises to accelerate the delivery of content, advertising and marketing campaigns…

…We have brought our Creative and Experience Clouds together through the introduction of Firefly Services and GenStudio, addressing the growing need for scaled content production in enterprises…

… GenStudio enables agencies and enterprises to unlock new levels of creativity and efficiency across content creation and production, workflow and planning, asset management, delivery and activation and reporting and insights. 

Adobe GenStudio for Performance Marketing is a great addition to the GenStudio portfolio, offering an integrated application to create paid social ads, display ads, banners, and marketing e-mails by leveraging preapproved on-brand content. It brings together creative teams that define the foundational requirements of a brand, including guidelines around brand voice, channels, and images with marketing teams that need to deliver numerous content variations with speed and agility. We are seeing strong customer demand for Adobe GenStudio for Performance Marketing since its general availability at MAX…

… We’re expanding our enterprise go-to-market teams to sell these integrated solutions that cut across Digital Media and Digital Experience globally under the new GenStudio umbrella. We have seen early success for this strategy that included Express and Firefly Services in Q4. As we enable our worldwide field organization in Q1, we anticipate acceleration of this pipeline throughout the rest of the year and beyond.

Adobe’s management introduced AI Assistant in Acrobat and Reader in FY2024; users of AI Assistant completed their document-tasks 4x faster on average; AI Assistant is now available across desktop, web, and mobile; management introduced specialised AI for specific document-types and tasks in 2024 Q3 (FY2024 Q4); management saw AI Assistant conversations double sequentially in 2024 Q3; AI Assistant is off to an incredibly strong start and management sees it continuing to accelerate; AI Assistant allows users to have conversations with multiple documents, some of which are not even PDFs, and it turns Acrobat into a general-purpose productivity platform; the rollout of AI Assistant in more languages and documents gives Acrobat’s growth more durability

We took a major step forward in FY ’24 with the introduction of AI Assistant in Acrobat and Reader. AI Assistant and other AI features like Liquid Mode and Firefly are accelerating productivity through faster insights, smarter document editing and integrated image generation. A recent productivity study found that users leveraging AI Assistant completed their document-related tasks 4x faster on average. AI Assistant is now available in Acrobat across desktop, web, and mobile and integrated into our Edge, Chrome, and Microsoft Teams extensions. In Q4, we continued to extend its value with specialized AI for contracts and scanned documents, support for additional languages, and the ability to analyze larger documents…

… We saw AI Assistant conversations double quarter-over-quarter, driving deeper customer value…

… AI Assistant for Acrobat is off to an incredibly strong start and we see it continuing to accelerate…

…One of the big things that I think has been unlocked this year is moving, not just by looking at a PDF that you happen to be viewing, but being able to look at and have a conversation with multiple documents, some of which don’t even have to be PDF. So that transition and that gives us the ability to really take Acrobat and make it more of a general purpose productivity platform…

…The thing I’ll add to that is the durability of it, to your point: as we roll that out in more languages, as we roll it out across multiple documents, and as we roll it out in enterprises and B2B specifically. So again, significant headroom in terms of the innovation agenda of how Acrobat can be made even more meaningful as a knowledge tool within the enterprise.

Adobe’s management will soon introduce a new higher-priced Firefly offering that includes the video models; management thinks the higher-priced Firefly offering will help to increase ARPU (average revenue per user); management sees video generation as a high-value activity, which gives Adobe the ability to introduce higher subscription tiers that come with video generation; management sees consumption of AI services adding to Adobe’s ARR (annual recurring revenue) in 2 ways in FY2025, namely, (1) pure consumption-based pricing, and (2) consumption leading to a higher pricing-tier; management has learnt from pricing experiments for AI services and found that the right model for Adobe is a combination of access to features and usage-limits

We will soon introduce a new higher-priced Firefly offering that includes our video models as a comprehensive AI solution for creative professionals. This will allow us to monetize new users, provide additional value to existing customers, and increase ARPU…

…Video generation is a much higher-value activity than image generation. And as a result, it gives us the ability to start to tier Creative Cloud more actively there…

…You’re going to see “consumption” add to ARR in 2 or maybe 3 ways more so in ’25 than in ’24. The first, and David alluded to this, is if you have a video offering and that video offering, that will be a pure consumption pricing associated with it. I think the second is in GenStudio and for enterprises and what they are seeing. With respect to Firefly Services, which, again, I think David touched on how much momentum we are seeing in that business. So that is, in effect, a consumption business as it relates to the enterprise so I think that will also continue to increase. And then I think you’ll see us with perhaps more premium price offering. So the intention is that consumption is what’s driving the increased ARR, but it may be as a result of a tier in the pricing rather than a consumption model where people actually have to monitor it. So it’s just another way, much like AI Assistant is of monetizing it, but it’s not like we’re going to be tracking every single generation for the user, it will just be at a different tier…

… What we’ve done over the last year, there’s been a bit of experimentation, obviously, in the core Creative applications. We’ve done the generative credits model. What we saw with Acrobat was this idea of a separate package and a separate SKU that created a tier that people were able to access the feature through. And as we learn from all of these, we think, as Shantanu had mentioned earlier, that the right tiering model for us is going to be a combination of feature, access to certain features and usage limits on it. So the higher the tier, the more features you get and the more usage you get of it.

The Adobe Experience Platform (AEP) AI Assistant helps marketers automate tasks and generate new audiences and journeys

Adobe Experience Platform AI Assistant empowers marketers to automate tasks and generate new audiences and journeys. Adobe Experience Manager generates variations, provides dynamic and personalized content creation natively through AEM, enabling customers to deliver more compelling and engaging experiences on their websites.

Adobe’s management thinks there are 3 foundational differences in the company’s AI models and what the rest are doing, namely, (1) commercially safe models, (2) incredible control of the models, and (3) the integration of the models into products

The foundational difference between what we do and what everyone else does in the market really comes down to 3 things: one is commercial safety, the way we train the models; two is the incredible control we bake into the model; and three is the integration that we make with these models into our products, increasingly, of course, in our CC flagship applications, but also in Express and Lightroom and these kinds of applications, and also in Anil’s DX products as well. So that set of things is a critical part of the foundation and a durable differentiator for us as we go forward.

Adobe’s management is seeing that users are onboarded to products faster when using generative AI capabilities; management is seeing that users who use generative AI features have higher retention rates

We are seeing in the core Creative business that when people try something like Photoshop, the onboarding experience is faster to success because of the use of generative AI and generative capabilities. So you’ll start to see us continuing to drive more proliferation of those capabilities earlier in the user journeys, and that has proven very productive. We also noticed that, while we’ve always had good retention rates, the more people use generative AI, the longer they retain as well.

MongoDB (NASDAQ: MDB)

MongoDB’s management is seeing a lot of large customers want to run workloads, even AI workloads, in on-premise format

We definitely see lots of large customers who are very, very committed to running workloads on-prem. We even see some customers who want to run AI workloads on-prem…

… I think you have some customers who are very committed to running a big part of the estate on-prem. So by definition, then if they’re going to build an AI workload, it has to be run on-prem, which means that they also need access to GPUs, and they’re doing that. And then other customers are leveraging basically renting GPUs from the cloud providers and building their own AI workloads.    

MongoDB’s initiative to accelerate legacy app modernisation with AI (Relational Migrator) has seen a 50% reduction in the cost to modernise in its early days; customer interest in this initiative is exceeding management’s expectations; management expects modernisation projects to include large services engagements and MongoDB is increasing its professional services delivery capabilities; management is building new tools, based on learnings from early service engagements, to accelerate future modernisation efforts; management has growing confidence that the modernisation motion will be a significant growth driver for MongoDB in the long term; there is a confluence of events, including the emergence of generative AI to significantly reduce the time needed for migration of databases, that makes the modernisation opportunity attractive for MongoDB; the buildout of MongoDB’s professional services capabilities will impact the company’s gross margin

We are optimistic about the opportunity to accelerate legacy app modernization using AI and are investing more in this area. As you recall, we ran a few successful pilots earlier this year, demonstrating that AI tooling, combined with professional services and our Relational Migrator product, can significantly reduce the time, cost and risk of migrating legacy applications onto MongoDB. While it’s early days, we have observed a more than 50% reduction in the cost to modernize. On the back of these strong early results, additional customer interest is exceeding our expectations.

Large enterprises in every industry and geography are experiencing acute pain from their legacy infrastructure and are eager for more agile, performant and cost-effective solutions. Not only are our customers excited to engage with us, they also want to focus on some of the most important applications in their enterprise, further demonstrating the level of interest in, and the size of, the long-term opportunity.

As relational applications encompass a wide variety of database types, programming languages, versions and other customer-specific variables, we expect modernization projects to continue to include meaningful services engagements in the short and medium term. Consequently, we are increasing our professional services delivery capabilities, both directly and through partners. In the long run, we expect to automate and simplify large parts of the modernization process. To that end, we are leveraging the learnings from early service engagements to develop new tools to accelerate future modernization efforts. Although it’s early days and scaling our legacy app modernization capabilities will take time, we have increased conviction that this motion will significantly add to our growth in the long term…

…We’re so excited about the opportunity to go after legacy applications because there seems to be a confluence of events happening. One, the increasing cost and tax of supporting and managing these legacy apps just keeps going up. Second, for many customers in regulated industries, the regulators are calling the fact that they’re running on these legacy apps a systemic risk, so they can no longer kick the can down the road. Third, also because they can no longer kick the can down the road, some vendors are going end-of-life, so they have to make a decision to migrate those applications to a more modern tech stack. Fourth, because GenAI is so predicated on data, and because to build a competitive advantage you need to leverage your proprietary data, people want to access that data and be able to do so easily. And so that’s another reason for them to want to modernize…

…we could always help them very easily move the data and map the schema from a relational schema to a document schema. The hardest part was essentially rewriting the application. Now with the advent of GenAI, you can significantly reduce the time. One, you can use GenAI to analyze the existing code. Two, you can use GenAI to reverse-engineer tests that test what the code does. And then three, you can use GenAI to build new code, and then use those tests to ensure that the new code produces the same results as the old code. And so all that time and effort is suddenly cut in a meaningful way…
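As an illustration of the three GenAI steps described above, here is a rough Python sketch of a test-gated migration loop. This is not MongoDB’s Relational Migrator; the `llm_rewrite` function is a hypothetical stand-in for a code-generating model call:

```python
from typing import Callable

def llm_rewrite(legacy_source: str) -> Callable[[dict], dict]:
    """Hypothetical stand-in for a GenAI call that returns rewritten code."""
    raise NotImplementedError("replace with a real code-generation model")

def migrate(legacy_fn: Callable[[dict], dict],
            legacy_source: str,
            test_inputs: list[dict],
            max_attempts: int = 3) -> Callable[[dict], dict]:
    # Step 2: reverse-engineer tests by recording the legacy code's behavior.
    expected = [legacy_fn(case) for case in test_inputs]
    for _ in range(max_attempts):
        # Step 3: generate candidate new code (step 1, analysis of the old
        # code, is folded into the model call here).
        candidate = llm_rewrite(legacy_source)
        # Accept only if the new code matches the old on every recorded case.
        if all(candidate(c) == e for c, e in zip(test_inputs, expected)):
            return candidate
    raise RuntimeError("no candidate reproduced the legacy behavior")
```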

…We’re really building out that capacity in order to meet the demand that we’re seeing relative to the opportunity. We’re calling it out in particular because it has a gross margin impact, as that’s where it will typically show up.

MongoDB’s management thinks that the company’s database is uniquely suited for querying the rich and complex data structures commonly found in AI applications; AI-powered recommendation systems have to consider complex data structures beyond a customer’s purchase history; MongoDB’s database unifies source data, metadata, operational data and vector data in one platform, providing a better developer experience; management thinks MongoDB is well-positioned for AI agents because AI agents that perform tasks need to interact with complex data structures, and MongoDB’s database is well-suited for this

MongoDB is uniquely equipped to query rich and complex data structures typical of AI applications. The ability of a database to query rich and complex data structures is crucial because AI applications often rely on highly detailed, interrelated and nuanced data to make accurate predictions and decisions. For example, a recommendation system doesn’t just analyze a single customer’s purchase, but also considers their browsing history, peer group behavior and product categories, requiring a database that can query across these complex data structures. In addition, MongoDB’s architecture unifies source data, metadata, operational data and vector data in one platform, obviating the need for multiple database systems and complex back-end architectures. This enables a more compelling developer experience than any other alternative…

…When you think about agents, there are jobs, there are projects and then there are tasks. Right now, the agents that are being rolled out are really focused on tasks, like, say, something from Sierra or some other companies that are rolling out agents. But you’re right, what they need to be able to do is deal with rich and complex data structures.

Now why is this important for AI? AI models don’t just look at isolated data points; they need to understand relationships, hierarchies and patterns within the data. They need to be able to essentially get real-time insights. For example, if you have a chatbot where a customer is trying to get an update on the order they placed 5 minutes ago, because they may not have gotten any confirmation, your chatbot needs to be able to deal with real-time information. You need to be able to handle very advanced use cases: to do things like fraud detection, or to understand behaviors in a supply chain, you need to understand intricate data relationships. All these things are consistent with what MongoDB offers. And so we believe that, at the end of the day, we are well positioned to handle this.

And the other thing that I would say is that we’ve embedded, in a very natural way, search and vector search. So we’re not just an OLTP [online transaction processing] database. We do text search and vector search, and that’s all one experience; no other platform offers that, and we think we have a real advantage.
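As a rough illustration of what “all one experience” means in practice, here is a minimal pymongo sketch that runs a vector similarity search and an ordinary operational filter in a single aggregation pipeline. The connection string, collection, index name and field names are all placeholder assumptions, and the `$vectorSearch` stage requires a MongoDB Atlas cluster with a vector search index configured:

```python
from pymongo import MongoClient

# Placeholder connection string; assumes an Atlas cluster with a vector
# search index named "vector_index" defined over the "embedding" field.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
orders = client["shop"]["orders"]

query_embedding = [0.12, -0.07, 0.33]  # normally produced by an embedding model

pipeline = [
    # Semantic similarity over stored vectors...
    {"$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_embedding,
        "numCandidates": 100,
        "limit": 5,
    }},
    # ...combined with an ordinary operational filter in the same query.
    {"$match": {"status": "open"}},
    {"$project": {"item": 1, "status": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```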

In the AI market, MongoDB’s management is seeing most customers still being in the experimental stage, but the number of AI apps in production is increasing; MongoDB has thousands of AI apps on its platform, but only a small number have achieved enterprise-scale; there’s one AI app on MongoDB’s platform that has grown 10x since the start of 2024 and is a 7-figure workload today; management believes that as AI technology matures, there will be more AI apps that attain product-market fit but it’s difficult to predict when this will happen; management remains confident that MongoDB will capture its share of successful AI applications, as MongoDB is popular with developers building sophisticated AI apps; there are no compelling AI models for smartphones at the moment because phones do not have sufficient computing power

From what we see in the AI market today, most customers are still in the experimental stage as they work to understand the effectiveness of the underlying tech stack and build early proof-of-concept applications. However, we are seeing an increasing number of AI apps in production. Today, we have thousands of AI apps on our platform.  What we don’t yet see is many of these apps actually achieving meaningful product-market fit and therefore, significant traction. In fact, as you take a step back and look at the entire universe of AI apps, a very small percentage of them have achieved the type of scale that we commonly see with enterprise-specific applications. We do have some AI apps that are growing quickly, including one that is already a 7-figure workload that has grown 10x since the beginning of the year.

Similar to prior platform shifts, as the usefulness of AI tech improves and becomes more cost-effective, we will see the emergence of many more AI apps that do nail product-market fit, but it’s difficult to predict when that will happen more broadly. We remain confident that we will capture our fair share of these successful AI applications, as we see that our platform is popular with developers building more sophisticated AI use cases…

…Today, we don’t have a very compelling model designed for our phones, right? Because today, the phones don’t have the computing horsepower to run complex models. So you don’t see a ton of very, very successful consumer apps besides, say, ChatGPT or Claude.

MongoDB’s management is building enterprise-grade Atlas Vector Search functionality into the company’s platform so that MongoDB will be in an even better position to win AI opportunities; management is bringing vector search into MongoDB’s community and EA (Enterprise Advanced, which is the company’s non-Atlas business) offerings

We continue investing in our product capabilities, including enterprise-grade Atlas Vector Search functionality, to build on this momentum and even better position MongoDB to capture the AI opportunity. In addition, as previously announced, we are bringing search and vector search to our community and EA offerings, leveraging our run-anywhere competitive advantage in the world of AI…

…We are investing in what we call our EA business. First, we’re starting by investing in Search and Vector Search in the community product. That does a couple of things for us. One, whenever anyone starts with MongoDB with the open-source product, they get all the benefits of that complete and highly integrated platform. Two, those capabilities will then migrate to EA. So EA for us is an investment strategy.

MongoDB’s management is expanding the MongoDB AI Applications Program (MAAP); the MAAP has signed on new partners, including with Meta; management expects more of the MAAP workloads to happen on Atlas initially

We are expanding our MongoDB AI Applications Program, or MAAP, which helps enterprise customers build and bring AI applications into production by providing them with reference architectures, integrations with leading tech providers, and coordinated services and support. Last week, we announced a new cohort of partners, including McKinsey, Confluent, Capgemini and Instructure, as well as a collaboration with Meta to enable developers to build AI-enriched applications on MongoDB using Llama…

…[Question] On the MAAP program, are most of those workloads going to wind up in Atlas? Or will that be a healthy combination of EA and Atlas?

[Answer] I think it’s, again, early days. I would say — I would probably say more on the side of Atlas than EA in the early days. I think once we introduce Search and Vector Search into the EA product, you’ll see more of that on-prem. Obviously, people can use MongoDB for AI workloads using other technologies as well in conjunction with MongoDB for on-prem AI use cases. But I would say you’re probably going to see that happen first in Atlas.

Tealbook consolidated from Postgres, pgvector, and Elasticsearch to MongoDB; Tealbook has seen cost efficiencies and increased scalability with Atlas Vector Search for its application that uses generative AI to collect, verify and enrich supplier data across various sources

Tealbook, a supplier intelligence platform, migrated from [ Postgres ], [ PG Vector ] and Elasticsearch to MongoDB to eliminate technical debt and consolidate its tech stack. The company experienced workload isolation and scalability issues in pgvector and was concerned with search index inconsistencies, which were all resolved with the migration to MongoDB. With Atlas Vector Search and dedicated Search Nodes, Tealbook has realized improved cost efficiency and increased scalability for its supplier data platform, an application that uses GenAI to collect, verify and enrich supplier data across various sources.

MongoDB’s partnerships with all 3 major cloud providers – AWS, Azure, and GCP – for AI workloads are going well; management expects the cloud providers to bundle their own AI-focused database offerings with their other AI offerings, but management also thinks the cloud providers realise that MongoDB has a better offering and it’s better to partner with the company

With AWS, as you said, they just had their Reinventure last week. It remains very, very strong. We closed a ton of deals this past quarter, some of them very, very large deals. We’re doing integrations to some of the new products like Q and Bedrock and the engagement in the field has been really strong.

On Azure, as I’ve shared in the past, we got off to a little bit of a slower start. But in the words of the person who runs their partner leadership, the Azure-MongoDB relationship has never been stronger. We closed a large number of deals, we’re part of what’s called the Azure Native ISV Service program, and we have a bunch of deep integrations with Azure, including Fabric, Power BI, Visual Studio, Semantic Kernel and Azure OpenAI Studio. And we’re also one of Azure’s largest marketplace partners.

And with GCP, we’ve actually seen some uptick in terms of co-sales this past quarter. GCP made some comp changes that were favorable to working with MongoDB, and we saw some results in the field; we’re focused on closing a handful of large deals with GCP in Q4. So in general, I would say things are going quite well.

And then in terms of, I guess, the implication of your question, which was whether the hyperscalers are potentially bundling things along with their AI offerings: candidly, since day 1, the hyperscalers have been bundling their database offerings with every offering that they have, and that’s been their predominant strategy. And I think we’ve executed well against that strategy, because databases are not a by-the-way decision; it’s an important decision. And I think the hyperscalers are seeing our performance and realize it’s better to partner with us. And as I said, customers understand the importance of the data layer, especially for AI applications. And so the partnership across all 3 hyperscalers is strong.

A new MongoDB AI-related capability called Atlas Search Nodes is seeing very high demand; Atlas Search is being used by one of the world’s largest banks to provide a Google-like search experience on payments data for customers; an AI-powered accounting software provider is using Atlas Search to allow end-users to perform ad-hoc analysis

On search, we introduced a new capability called Atlas Search Nodes, where you can asymmetrically scale your search nodes: if you have a search-intensive use case, you don’t have to scale all your nodes, which can become quite expensive. And we’ve seen this kind of groundbreaking capability be really well received. The demand is quite high, because customers like that they can tune the configuration to the unique needs of their search requirements.

One of the world’s largest banks is using Atlas Search to provide a Google-like search experience on payments data for massive corporate customers. It’s a customer-facing application, so performance and scalability are critical. A leading provider of AI-powered accounting software uses Atlas Search to power its invoice analytics feature, which allows end users on finance teams to perform ad hoc analysis and easily find past-due invoices and invoices that contain errors.

Vector Search is only in its first full year of being generally available; uptake of Vector Search has been very high; MongoDB released a feature on Atlas Vector Search in 2024 Q3 that reduces memory requirements by up to 96% and this helps Atlas Vector Search support larger vector workloads at a better price-performance ratio; a multinational news organisation used Vector Search to create a generative AI tool to help producers and journalists sift through vast quantities of information; a security firm is using Vector Search for AI fraud detection; a global media company replaced Elasticsearch with hybrid search and vector search for a user-recommendation engine

On Vector Search, it's been our first full year since going generally available, and the product uptake has been very, very high. In Q3, we released quantization for Atlas Vector Search, which reduces the memory requirements by up to 96%, allowing us to support larger vector workloads with vastly improved price performance.

For example, a multinational news organization created a GenAI-powered tool designed to help producers and journalists efficiently search, summarize and verify information from vast and varied data sources. A leading security firm is using Atlas Vector Search for AI fraud detection, and a leading global media company replaced Elasticsearch with a hybrid search and vector search setup for a user-recommendation engine built to suggest articles to end users.
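To ground the quantization point, here is a minimal sketch of an Atlas Vector Search query in Python. The index definition shown in the comment is where the quantization option is set; the database, collection and index names, and the 1,536-dimension embedding, are assumptions for illustration:

```python
# Minimal sketch of Atlas Vector Search with quantization enabled.
# The index is created via the Atlas UI/API; a definition along these
# lines (shown for reference) is where the memory/precision trade-off lives:
#
# {
#   "fields": [{
#     "type": "vector",
#     "path": "embedding",
#     "numDimensions": 1536,
#     "similarity": "cosine",
#     "quantization": "scalar"
#   }]
# }
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
articles = client["news"]["articles"]

query_vector = [0.01] * 1536  # placeholder; in practice, an embedding of the query text

pipeline = [
    {
        "$vectorSearch": {
            "index": "article_vectors",  # hypothetical index name
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 200,        # candidates scanned before final ranking
            "limit": 10,
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}, "_id": 0}},
]

for doc in articles.aggregate(pipeline):
    print(doc)
```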

MongoDB’s management thinks the industry is still in the very early days of shifting towards AI applications

I do think we're in the very, very early days. They're still learning and experimenting… I think as people get more sophisticated with AI, as the AI technology matures and becomes more and more useful, you'll start seeing these applications take off. I kind of chuckle that today, I see more senior leaders bragging about the chips they are using versus the apps they're building. So it just tells you that we're still in the very, very early days of this big platform shift.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue again had incredibly strong growth in 2024 Q3, driven by demand for the Hopper GPU computing platform; Nvidia’s H200 sales achieved the fastest ramp in the company’s history

Another record was achieved in Data Center. Revenue of $30.8 billion, up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company's history.

Nvidia’s H200 product has 2x faster inference speed, and 50% lower total cost of ownership (TCO)

The H200 delivers up to 2x faster inference performance and up to 50% improved TCO. 

Cloud service providers (CSPs) were half of Nvidia’s Data Centre revenue in 2024 Q3, and up more than 2x year-on-year; CSPs are installing tens of thousands of GPUs to meet rising demand for AI training and inference; Nvidia Cloud Instances with H200s are now available, or soon-to-be-available, in the major CSPs

Cloud service providers were approximately half of our Data Center sales with revenue increasing more than 2x year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon.

North America, India, and Asia Pacific regions are ramping up Nvidia Cloud Instances and sovereign clouds; management is seeing an increase in momentum of sovereign AI initiatives; India’s CSPs are building data centers containing tens of thousands of GPUs and increasing GPU deployments by 10x in 2024 compared to a year ago; Softbank is building Japan’s most powerful AI supercomputer with Nvidia’s hardware 

Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs…

…Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India's leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x…

…In Japan, SoftBank is building the nation's most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platforms that can process both 5G RAN and AI on CUDA.

Nvidia’s revenue from consumer internet companies more than doubled year-on-year in 2024 Q3

Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. 

Nvidia's management sees Nvidia as the largest inference platform in the world; Nvidia's management is seeing inference really starting to scale up for the company; models that are trained on previous generations of Nvidia chips run inference really well on those chips; management thinks that as Blackwell proliferates in the AI industry, it will leave behind a large installed base of infrastructure for inference; management's dream is that plenty of AI inference happens across the world; management thinks that inference is hard because it needs high accuracy, high throughput, and low latency

NVIDIA’s Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements…

…We're seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind it a large installed base of extraordinary infrastructure for inference. And so we're seeing inference demand go up…

… Our hopes and dreams are that someday, the world does a ton of inference. That's when AI will have really succeeded: when every single company is doing inference inside their companies, for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding of course. And so we hope that every company is doing inference 24/7…

…Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost can be as low as possible, but you also need the latency to be low. And computers that are high-throughput as well as low-latency are incredibly hard to build.

Nvidia’s management has driven a 5x improvement in Hopper inference throughput in 1 year via advancements in the company’s software; Hopper’s inference performance is set to increase by a further 2.4x shortly because of NIM (Nvidia Inference Microservices)

Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in 1 year and cut time to first token by 5x. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x. 
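For context on what using NIM looks like in practice: NIM packages a model as a container that exposes an OpenAI-compatible HTTP API, so querying a self-hosted model is a standard API call. A minimal sketch, with the local endpoint and model name being assumptions for illustration:

```python
# Minimal sketch: NIM containers expose an OpenAI-compatible API, so a
# self-hosted model can be queried with the standard OpenAI client.
# The base URL and model identifier below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM container endpoint
    api_key="not-needed-for-local",       # placeholder; a local NIM may not check it
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed model identifier
    messages=[{"role": "user", "content": "Summarize this support ticket in one line."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```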

Nvidia’s Blackwell family of chips is now in full production; Nvidia shipped 13,000 Blackwell samples to customers in 2024 Q3; the Blackwell family comes with a wide variety of customisable configurations; management sees all Nvidia customers wanting to be first to market with the Blackwell family; management sees staggering demand for Blackwell, with Oracle announcing the world’s first zetta-scale cluster with more than 131,000 Blackwell GPUs, and Microsoft being the first CSP to offer private-preview Blackwell instances; Blackwell is dominating GPU benchmarks; Blackwell performs 2.2x better than Hopper and is also 4x cheaper; Blackwell with NVLink Switch delivered up to a 30x improvement in inference speed; Nvidia’s management expects the company’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; Blackwell’s production is in full-steam ahead and Nvidia will deliver more Blackwells in 2024 Q4 than expected; demand for Blackwell exceeds supply

Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full-stack, full-infrastructure, AI data-center-scale system with customizable configurations needed to address a diverse and growing AI market: from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches, NVLink, and from liquid-cooled to air-cooled.

Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world’s first zetta-scale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.

Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, a 4x reduction in cost. The NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling, throughput and response time that is excellent for running new reasoning inference applications like OpenAI's o1 model…

…As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s…

… Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated…

…It is the case that demand exceeds our supply. And that’s expected as we’re in the beginnings of this generative AI revolution as we all know…

…In terms of how much Blackwell total systems will ship this quarter, which is measured in billions, the ramp is incredible…

…[Question] Do you think it’s a fair assumption to think NVIDIA could recover to kind of mid-70s gross margin in the back half of calendar ’25?

[Answer] Yes, I think it is a reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.  

Nvidia’s management is seeing that hundreds of AI-native companies are already delivering AI services and thousands of AI-native startups are building new services

Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor and the Bridge are seeing great success while thousands of AI-native start-ups are building new services. 

Nvidia’s management is seeing large enterprises build copilots and AI agents with Nvidia AI; management sees the potential for billions of AI agents being deployed in the years ahead; Accenture has an internal AI agent use case that reduces steps in marketing campaigns by 25%-35%

Industry leaders are using NVIDIA AI to build Copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years…

… Accenture, with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including one case that cuts manual steps in marketing campaigns by 25% to 35%.

Nearly 1,000 companies are using NIM (Nvidia Inference Microservices); management expects the Nvidia AI Enterprise platform’s revenue in 2024 to be double that from 2023; Nvidia’s software, service, and support revenue now has an annualised revenue run rate of $1.5 billion and management expects the run rate to end 2024 at more than $2 billion

Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI enterprise monetization. We expect NVIDIA AI enterprise full year revenue to increase over 2x from last year and our pipeline continues to build. Overall, our software, service and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.

Nvidia’s management is seeing an acceleration in industrial AI and robotics; Foxconn is using Nvidia Omniverse to improve the performance of its factories, and Foxconn’s management expects a reduction of over 30% in annual kilowatt hour usage in Foxconn’s Mexico facility

Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world, like NVIDIA NeMo for enterprise AI agents. We built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics…

…Foxconn, the world's largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

Nvidia saw sequential growth in Data Center revenue in China because of export of compliant Hopper products; management expects the Chinese market to be very competitive

Our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries…

…We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.

Nvidia’s networking revenue declined sequentially, but there was sequential growth in Infiniband and Ethernet switches, Smart NICs (network interface controllers), and BlueField DPUs; management expects sequential growth in networking revenue in 2024 Q4; management is seeing CSPs adopting Infiniband for Hopper clusters; Nvidia’s Spectrum-X Ethernet for AI revenue was up 3x year-on-year in 2024 Q3; xAI used Spectrum-X for its 100,000 Hopper GPU cluster and achieved zero application latency degradation and maintained 95% data throughput, compared to 60% for Ethernet

Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters.

NVIDIA Spectrum-X Ethernet for AI revenue increased over 3x year-on-year. And our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI’s Colossus 100,000 Hopper supercomputer experienced 0 application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet…

…Our ability to sell our networking with many of the systems we are shipping into data centers continues to grow and do quite well. So this quarter was just a slight dip down, and we're going to be right back up and growing. We're getting ready for Blackwell and more and more systems that will use not only our existing networking but also the networking that is going to be incorporated into a lot of the large systems we are providing.

Nvidia has begun shipping new GeForce RTX AI PCs

We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft's Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation and coding.

Nvidia's Automotive revenue had strong growth year-on-year and sequentially in 2024 Q3, driven by self-driving ramps of Nvidia Orin; Volvo's electric SUV will be powered by Nvidia Orin

Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving ramps of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.

Nvidia’s management thinks pre-training scaling of foundation AI models is intact, but it’s not enough; another way of scaling has emerged, which is inference-time scaling; management thinks that the new ways of scaling has resulted in great demand for Nvidia’s chips, but for now, most of Nvidia’s chips are used in pre-training 

Foundation model pretraining scaling is intact and it's continuing. As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we're learning, however, is that it's not enough, that we've now discovered 2 other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning from human feedback, but now we have reinforcement learning from AI feedback and all forms of synthetically generated data that assist in post-training scaling. And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI's o1, which does inference-time scaling, what's called test-time scaling. The longer it thinks, the better and higher-quality answer it produces. And it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect and so on and so forth…

… we now have 3 ways of scaling and we’re seeing all 3 ways of scaling. And as a result of that, the demand for our infrastructure is really great. You see now that at the tail end of the last generation of foundation models were at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pretraining scaling, post-training scaling, and then now very importantly, inference time scaling…

…[Question] Today, how much of the compute goes into each of these buckets? How much for the pretraining? How much for the reinforcement? And how much into inference today?

[Answer] Today, it's vastly in pretraining a foundation model because, as you know, post-training, the new technologies, are just coming online. And whatever you could do in pretraining and post-training, you would try to do, so that the inference cost can be as low as possible for everyone. However, there are only so many things that you can do a priori. And so you'll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all 3 are scaling is actually very sensible based on where we are. And in the area of foundation models, we now have multimodal foundation models, and the petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we're going to be scaling pretraining, post-training as well as inference-time scaling, which is the reason why I think we're going to need more and more compute.
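One simple way to see what "the longer it thinks, the better the answer" means in practice is self-consistency, a basic form of test-time scaling: sample several independent chain-of-thought completions and take a majority vote on the final answer. A toy sketch in Python, where `generate` is a stand-in for a real model call:

```python
# Toy sketch of inference-time (test-time) scaling via self-consistency:
# sample k independent answers and majority-vote. `generate` is a stand-in
# for a real LLM call; in practice it would hit a model API with
# temperature > 0 so each sample follows a different reasoning path.
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Toy stand-in: a noisy 'model' that is right ~70% of the time."""
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

def self_consistent_answer(question: str, k: int = 8) -> str:
    # More samples means more inference-time compute and a more reliable
    # answer, which is the trade-off described in the quote above.
    answers = [generate(question) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 x 7? Think step by step.", k=16))
```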

Nvidia's management thinks the company generates the greatest possible revenue for its customers because its products have much better performance per watt

Most data centers are now 100 megawatts to several hundred megawatts, and we’re planning on gigawatt data centers, it doesn’t really matter how large the data centers are. The power is limited. And when you’re in the power-limited data center, the best — the highest performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual road map reduces cost. But on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues. 
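Huang's power-limited argument is simple arithmetic: if a data centre's power budget is fixed, the tokens it can serve (and hence the revenue it can generate) scale linearly with performance per watt. A toy illustration in Python, with all numbers made up:

```python
# Toy illustration of the power-limited argument, with made-up numbers:
# in a fixed-power data center, revenue scales with performance per watt.
POWER_BUDGET_W = 100e6       # assumed 100 MW facility
PRICE_PER_M_TOKENS = 2.0     # assumed $ per million tokens served
SECONDS_PER_YEAR = 3600 * 24 * 365

def annual_revenue(tokens_per_sec_per_watt: float) -> float:
    tokens_per_year = tokens_per_sec_per_watt * POWER_BUDGET_W * SECONDS_PER_YEAR
    return tokens_per_year / 1e6 * PRICE_PER_M_TOKENS

# Doubling perf/watt doubles revenue for the same power bill:
print(f"${annual_revenue(0.5):,.0f} vs ${annual_revenue(1.0):,.0f}")
```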

Nvidia’s management sees Hopper demand continuing through 2025

Hopper demand will continue through next year, surely the first several quarters of the next year. 

Nvidia’s management sees 2 fundamental shifts in computing happening today: (1) the movement from code that runs on CPUs to neural networks that run on GPUs and (2) the production of AI from data centres; the fundamental shifts will drive a $1 trillion modernisation of data centres globally

We are really at the beginnings of 2 fundamental shifts in computing that are really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that's happening is that $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.

On the other hand, secondarily, on top of these systems we're going to be creating a new type of capability called AI. And when we say generative AI, we're essentially saying that these data centers are really AI factories. They're generating something. Just like we generate electricity, we're now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we're going to see this new type of system come online, and I call it an AI factory because that's really the closest description of what it is. It's unlike a data center of the past.

Nvidia’s management does not see any digestion happening for GPUs until the world’s data centre infrastructure is modernised

[Question] My main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase? Or is it just too premature to discuss that because you’re just at the start of Blackwell?

[Answer] I believe that there will be no digestion until we modernize $1 trillion worth of data centers.

Okta (NASDAQ: OKTA)

Okta AI is really starting to help newer Okta products

Second thing is that we have Okta AI, which we talked a lot about a couple of years ago, and we continue to work on that. And it's really starting to help these new products like Identity Threat Protection with Okta AI. In the model inside of Identity Threat Protection and how that works, AI is a big part of the product functionality.

Okta’s management sees the need for authentication for AI agents and has a product called Auth for Gen AI; management thinks authentication of AI agents could be a new area of growth for Okta; management sees the pricing for Auth for Gen AI as driven by a fee per monthly active machine

Some really interesting new areas: we have something we talked about at Oktane called Auth for Gen AI, which is basically an authentication platform for agents. Everyone is very excited about agents, as they should be. I mean, we used to call them bots, right? Four, five years ago, they were called bots. Now they're called agents, like what's the big deal? How different is it? Well, you can interact with them in natural language and they can do a lot more with these models. So now it's like bots are real in real time. But the problem is all of these bots and all of these platforms to build bots, they have the equivalent of sticky notes on the monitor with passwords on them; they have the equivalent of that inside the bot. There's no protocol for single sign-on for bots. They have stored passwords in the bot. And if that bot gets hacked, guess what? You signed up for that bot and it has access to your calendar and has access to your travel booking and it has access to your company e-mail and your company data. That's gone, because the hacker is going to get all those passwords out of there. So Auth for Gen AI automates that and makes sure you can have a secure protocol to build a bot around. And so that's a really interesting area. It's very new. We just announced it, and all these agent frameworks and so forth are new…

… Auth for GenAI, it's basically like, think about it as a form of machine authentication. We have this feature today called machine-to-machine, which does a similar thing, and you pay basically by the monthly active machine.
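Okta did not detail Auth for Gen AI's API on the call, but the machine-to-machine pattern referenced here is typically the OAuth 2.0 client-credentials flow, in which the agent holds its own credentials and exchanges them for a short-lived access token instead of storing a user's passwords. A minimal sketch, with the Okta domain, scope and credentials as placeholders:

```python
# Minimal sketch of machine-to-machine authentication via the OAuth 2.0
# client credentials flow, the standard pattern behind agent/bot auth.
# The Okta domain, scope, and credentials are placeholders; Auth for
# Gen AI's actual API may differ from this generic flow.
import requests

TOKEN_URL = "https://example.okta.com/oauth2/default/v1/token"  # placeholder domain

def get_agent_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "calendar.read"},
        auth=(client_id, client_secret),  # the agent's own credentials, not a user's password
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # short-lived token, revocable centrally

# The agent presents this bearer token to downstream APIs instead of
# keeping user passwords "on a sticky note" inside the bot.
```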

Salesforce (NYSE: CRM)

Salesforce’s management thinks Salesforce is at the edge of the rise of digital labour, which are autonomous AI agents; management thinks the TAM (total addressable market) for digital labour is much larger than the data management market that Salesforce was previously in; management thinks Salesforce is the largest supplier of digital labour right from the get-go; Salesforce’s AgentForce service went into production in 2024 Q3 and Salesforce has already delivered 200 AgentForce deals with more to come; management has never seen anything like AgentForce; management sees AgentForce as the next evolution of Salesforce; management thinks AgentForce will help companies scale productivity independent of workforce growth; management sees AgentForce AI agents manifesting as robots that will supplement human labour; management sees AgentForce, together with robots, as a driving force for future global economic growth even with a stagnant labour force; AgentForce is already delivering tangible value to customers; Salesforce’s customers recently built 10,000 AI agents with AgentForce in 3 days, and thousands more AI agents have been built since then; large enterprises across various industries are building AI agents with AgentForce; management sees AgentForce unlocking a whole new level of operational efficiency; management will be delivering AgentForce 2.0 in December this year

We're really at the edge of a revolutionary transformation. This is really the rise of digital labor. For the last 25 years at Salesforce, we've been helping companies to manage and share their information…

…But now we’ve really created a whole new market, a new TAM, a TAM that is so much bigger and so much more exciting than the data management market that it’s hard to get our head completely around. This is the market for digital labor. And Salesforce has become, right out of the gate here, the largest supplier of digital labor and this is just the beginning. And it’s all powered by these autonomous AI agents…

…With Salesforce Agentforce, we're not just imagining this future. We're already delivering it. And you know that in the last week of the quarter, Agentforce went into production. We delivered 200 deals, and our pipeline is incredible for future transactions. We can talk about that with you on the call, but we've never seen anything like it. We don't know how to characterize it. This is really a moment where productivity is no longer tied to workforce growth, but to this intelligent technology that can be scaled without limits. And Agentforce represents this next evolution of Salesforce. Salesforce is now a platform where AI agents work alongside humans in a digital workforce that amplifies and augments human capabilities and delivers with unrivaled speed…

…On top of the agentic layer, we'll soon see a robotic layer as well, where these agents will manifest into robots…

…These agents are not tools. They are becoming collaborators. They're working 24/7 to analyze data, make decisions and take action. We can all start to picture an enterprise managing millions of customer interactions daily, with Agentforce seamlessly resolving issues, processing transactions and anticipating customer needs, freeing up humans to focus on strategic initiatives and building meaningful relationships. And this is going to evolve for the customers we have, whether it's a large hospital or a large hotel, where not only are the agents working 24/7, but robots are also working side-by-side with humans, robots as manifestations of agents. It's all happening before our eyes, and this isn't some far-off future. It's happening right now…

…For decades, economic growth depended on expanding the human workforce. It was all about getting more labor. But with the labor force stagnating globally, Agentforce is unlocking a new path forward. It's a new level of growth for the world and for our GDP, and businesses no longer need to choose between scale and efficiency; with agents, they can achieve both…

…Our customers are already experiencing this transformation. Agentforce is deflecting service cases and resolving issues, processing and qualifying leads, helping close more deals, and creating and optimizing marketing campaigns, all at an unprecedented scale, 24/7…

…What was remarkable was the huge thirst that our customers had for this and how they built more than 10,000 agents in 3 days. And I think you know that we then unleashed a world tour of that program, and we have now built thousands and thousands of more agents in these world tours all over the world…

…So companies like FedEx, [indiscernible], Accenture, Ace Hardware, IBM, RBC Wealth Management and many more are now building their digital labor forces on the Salesforce platform with Agentforce. The largest and most important companies in the world, across all geographies and all industries, are now building and delivering agents…

…While these legacy chatbots have handled these basic tasks like password resets and other basic mundane things, Agentforce is really unlocking an entirely new level of digital intelligence and operational efficiency at this incredible scale…

…I want to invite all of you to join us for the launch of Agentforce 2.0. It is incredible what you are going to see; the advancements in the technology are already amazing, in accuracy and in the ability to deliver additional value. And we hope that you're going to join us in San Francisco. This is going to happen on December 17, when you'll see Agentforce 2.0 for the first time.

Salesforce is customer-zero for AgentForce and the service is live on Salesforce’s help-website; AgentForce is handling 60 million sessions and 2 millions support cases annually on the help-website; the introduction of AgentForce in Salesforce’s help-website has allowed management to rebalance headcount into growth-areas; users of Salesforce’s help-website will experience very high levels of accuracy because AgentForce is grounded with the huge repository of internal and customer data that Salesforce has; management sees Salesforce’s data as a huge competitive advantage for AgentForce; AgentForce can today quickly deliver personalised insights to users of Salesforce’s help-website and hand off users to support engineers for further help; management thinks AgentForce will deflect between a quarter and half of annual case volume; Salesforce is also using AgentForce internally to engage prospects and hand off prospects to SDR (sales development representative) team

We pride ourselves on being customer zero for all of our products, and Agentforce is no exception. We're excited to share that Agentforce is now live on help.salesforce.com…

… Our help portal, help.salesforce.com, is now live. This portal is our primary support mechanism for our customers. It lets them authenticate in, and it then becomes grounded with the agent. That Help portal is already handling 60 million sessions and more than 2 million support cases every year, and that is now 100% on Agentforce…

…From a human resource point of view, we can really start to look at how we are going to rebalance our headcount out of areas that are now fully automated and into areas that are critical for us to grow, like distribution…

…Now when you use help.salesforce.com, especially as authenticated users, as I mentioned, you're going to see this incredible level of accuracy and responsiveness, and you're going to see remarkably low hallucination rates, whether for solving simple queries or navigating complex service issues. That's because Agentforce is not just grounded in our Salesforce data and metadata, including the repository of 740,000 documents in 17 languages; it's also grounded in each customer's data, their purchases, their returns. It's that 200 to 300 petabytes of Salesforce data that gives us this kind of, I would say, almost unfair advantage with Agentforce, because our agents are going to be the most accurate and least hallucinatory of any, since they have access to this incredible capability. Agentforce can instantly reason over these vast amounts of data, deliver precise, personalized [indiscernible] with citations in seconds, and seamlessly hand off to support engineers, delivering them a complete summary and recommendation as well. And you can all try this today. This isn't some fantasy-land future idea; this is today's reality…

…We expect that our own transformation with Agentforce on help.salesforce.com and in many other areas of our company, it is going to deflect between a quarter and half of annual case volume and in optimistic cases, probably much, much more of that…

…We're also deploying Agentforce to engage our prospects on salesforce.com, answering their questions 24/7 as well as handing them off to our SDR team. You can see it for yourself and test it out on our home page. We'll use our new Agentforce SDR agent to further automate top-of-funnel activities: gathering leads and lead data, providing education, qualifying prospects and booking meetings.

Salesforce’s management thinks AgentForce is much better than Microsoft’s AI Copilots

I just want to compare and contrast that against other companies who say they are doing enterprise AI. You can look at even Microsoft. We all know about Copilot; it's been out, it's been touted now for a couple of years. We've heard about Copilot. We've seen the demo. In many ways, it's just repackaged ChatGPT. You can really see the difference where Salesforce now can operate its company on our platform. And I don't think you're going to find that on Microsoft's website, are you?

Vivint is using AgentForce for customer support and for technician scheduling, payment requests, and more; Adecco is using AgentForce to improve the handling of job applicants (Adecco receives 300 million job applications annually); Wiley is resolving cases 40% faster with AgentForce; Heathrow Airport is using AgentForce to respond to thousands of travelers instantly, accurately, and simultaneously; SharkNinja is using AgentForce for personalised 24/7 customer support in 28 geographies; Accenture is using AgentForce to improve deal quality and boost bid coverage by 75%

One of them is the smart home security provider, Vivint. They've struggled with a high volume of support calls and a high churn rate for service reps. It's a common story. But now, using Agentforce, Vivint has created a digital support staff to autonomously provide support through their app and their website, troubleshooting a broad variety of issues across all their customer touch points. And in addition, Vivint is planning to utilize Agentforce to further automate technician scheduling, payment requests, proactive issue resolution and the use of device telemetry, because Agentforce works across the entire Salesforce product line, including Slack…

…Another great customer example, and it's already incredible the work they've done to get this running in their company, is Adecco, the world's leading provider of talent solutions, handling 300 million job applications annually. Historically, they have just not been able to go through or respond in a timely way to the vast majority of applications that they're getting. But now Agentforce is going to operate at incredible scale, sorting through the millions of resumes 24/7, matching candidates to opportunities and proactively prequalifying them for recruiters. And in addition, Agentforce can also assess candidates, helping them to refine their resumes and giving them a better chance of qualifying for a role…

…Wiley, an early adopter, is resolving cases over 40% faster with Agentforce than with their previous chatbot. Heathrow Airport, one of the busiest airports in the world, will be able to respond to thousands of travelers' inquiries instantly, accurately and simultaneously. SharkNinja, a new logo in the quarter, chose Agentforce and Commerce Cloud to deliver 24/7 personalized support for customers across 28 international markets, unifying its service operations…

…Accenture chose Agentforce to streamline sales operations and enhance bid management for its 52,000 global sellers. By integrating sales coach and custom AI agents, Agentforce is improving deal quality and targeting a 75% boost in bid coverage. 

College Possible is using AgentForce to build virtual college counsellors as there’s a shortage of labour (for example, California has just 1 counsellor for every 500 students); College Possible built its virtual counsellors with AgentForce in under a week – basically like flipping a switch – because it has been accumulating all its data in Salesforce for years

Another powerful example is a nonprofit, College Possible. College Possible matches eligible students with counselors to help them navigate and become ready for college. And in California, for example, the statewide average stands at slightly over 1 counselor for every 500 students. It just isn’t enough. Where are we going to get all that labor…

…We’re going to get it from Agentforce. This means the vast majority of students are not getting the help they need, and now they are going to get the help they need.

College Possible created a virtual counselor built on Agentforce in under a week. They already had all the data. They have the metadata; they already knew the students. They already had all of the capabilities built into their whole Salesforce application. It was just a flip of a switch…

…  But why? It's because of all of the work and the data and the capability that College Possible has put into Salesforce over the years and years that they have had it. It's not just the week that it took to turn it on. They had done a lot of work.

Salesforce’s management’s initiative to have all of the company’s apps be rewritten into a single core platform is called More Core; the More Core initiative also involves Salesforce’s Data Cloud, which is important for AI to work; Salesforce is now layering the AI agent layer on top of More Core, and management sees this combination as a complete AI system for enterprises that also differentiates Salesforce’s AgentForce product

Over the last few years, we've really aggressively invested in integrating all of our apps on a single core platform with shared services for security, workflow, user interfaces and more. We've been rewriting all of our acquisitions onto that common platform. We're really looking at how we take all of our applications and all of our acquisitions and deliver everything on one consistent platform; we call that More Core internally inside Salesforce. And when you look at that More Core initiative, I don't think there's anyone who delivers this comprehensive a platform (sales, service, marketing, commerce, analytics, Slack, all of it) as one piece of code. And now deeply integrated in that one piece of code is also our Data Cloud. That is a key part of our strategy, which continues to have phenomenal momentum as well, to help customers unify and federate with zero-copy data access across all their data and metadata, which is crucial for AI to work.

And now that third layer is really opening up for us, which is this agentic layer. We have built this agentic layer to take advantage of all the investments our customers have made in Salesforce and in our platform. It's really these 3 layers. And these 3 layers form a complete AI system for enterprises and really uniquely differentiate Salesforce, and uniquely differentiate Agentforce, from every other AI platform, because this is one piece of code. This isn't like 3 systems. It's not a bunch of different apps all running independently. This is all one piece of code. That's why it works so well, by the way, because it is one platform.

Salesforce’s management thinks jobs and roles within Salesforce will change because of AI, especially AI agents

The transformation is not without challenges. Jobs are going to evolve, roles are going to shift and businesses will need to adapt. And listen, at Salesforce, jobs are going to evolve, roles will shift and the business will need to adapt as well. We're all going to need to rebalance our workforce as agents take on more of the work.

Salesforce’s management is hearing that a large customer of Salesforce is targeting 25% more efficiency with AI

This morning, I was on the phone with one of our large customers, and they were telling me how they’re targeting inside their company, 25% more efficiency with artificial intelligence.

Salesforce signed more than 2,000 AI deals in 2024 Q3 (FY2025 Q3), and the number of AI deals that are over $1 million more than tripled year-on-year; 75% of Salesforce's AgentForce deals, and 9 of Salesforce's top 10 deals, in 2024 Q3 involved Salesforce's global partners; more than 80,000 system integrators have completed AgentForce training; hundreds of ISVs (independent software vendors) and partners are building and selling AI agents; Salesforce has a new AgentForce partner network that allows customers to deploy customised AI agents using trusted 3rd-party extensions from Salesforce App Exchange; Salesforce's partnership with AWS Marketplace is progressing well as transactions doubled sequentially in 2024 Q3, with 10 deals exceeding $1 million

In Q3, the number of wins greater than $1 million with AI more than tripled year-over-year, and we signed more than 2,000 AI deals, including the more than 200 Agentforce wins that Marc shared…

…We’re also seeing amazing Agentforce energy across the ecosystem with our global partners involved in 75% of our Q3 Agentforce deals and 9 of our top 10 wins in the quarter. Over 80,000 system integrators have completed Agentforce training and hundreds of ISVs and technology partners are building and selling agents…

… We continue to unlock customer spend through new channels, including the Agentforce partner network that launched at Dreamforce, which allows customers to customize and deploy specialized agents using trusted third-party extensions from Salesforce App Exchange. And AWS Marketplace continues to be a growth driver. Our Q3 transactions doubled quarter-over-quarter with 10 deals exceeding $1 million. 

Veeva Systems (NYSE: VEEV)

Veeva Vault CRM has a number of new innovations coming, including two AI capabilities that will be available in late-2025 at no additional charge; one of the AI capabilities leverages Apple Intelligence; Vault CRM’s CRM Bot AI application will see Vault CRM be hooked onto customers’ own large language models, and Veeva will not be incurring compute costs

We just had our European Commercial Summit in Madrid where we announced a number of new innovations coming in Vault CRM, including two new AI capabilities – CRM Bot and Voice Control. CRM Bot is a GenAI assistant in Vault CRM. Voice Control is a voice interface for Vault CRM, leveraging Apple Intelligence. Both are included in Vault CRM for no additional charge and are planned for availability in late 2025…

…For the CRM Bot, that's where we will hook our CRM system into the customer's own large language model that they're running. That we will not charge for, and we will not incur compute costs…

Veeva has a new AI application, MLR Bot, for Vault PromoMats within Commercial Cloud; MLR Bot helps perform checks on content with a Veeva-hosted large language model (LLM); MLR Bot will be available in late-2025 and will be charged separately; management has been thinking about MLR Bot for some time; management is seeing a lot of excitement over MLR Bot; management is still working through the details of the monetisation of MLR Bot; MLR Bot’s LLM will be from one of the big tech providers but it will be Veeva who’s the one paying for the compute 

We also announced MLR Bot, an AI application in Vault PromoMats to perform content quality and compliance checks with a Veeva-hosted large language model. Planned for availability in late 2025, MLR Bot will require a separate license…

… So I was at our Europe Summit event where we announced MLR Bot, something we’ve been thinking about and evaluating for some time…

…So there’s a lot of excitement. This is a really core process for life sciences companies. So a lot of excitement there…

…In terms of sizing and the monetization, we’re still working through the details on that, but there’s a ton of excitement from our existing customers. We look forward to getting some early customers started on that as we go into next year…

…MLR Bot, we will charge for, and that’s where we will host and run a large language model. Not our own large language model, right? We’ll use one from the big tech providers, but we will be paying for the compute power for that, and so we’ll be charging for that.

CRM Bot, Voice Control, and MLR Bot are part of Veeva’s management’s overall AI strategy to provide AI applications with tangible value; another part of the AI strategy involves opening up data for customers to power all forms of AI; management’s current thinking is to charge for AI applications if Veeva is responsible for paying compute costs

These innovations are part of our overall AI strategy to deliver specific AI applications that provide tangible value and enable customers and partners with the AI Partner Program, as well as the Vault Direct Data API, for the data needed to power all forms of AI…

… So where we have to use significant compute power, we will most likely charge. And where we don’t, we most likely won’t.

Wix (NASDAQ: WIX)

More than 50% of new Wix users are using the company’s AI-powered onboarding process which was launched nearly a year ago; users who onboard using Wix’s AI process are 50% more likely to start selling on Wix and are more likely to become paid subscribers; the AI-powered onboarding process is leading to a 13% uplift in conversion rate for the most recent Self-Creator cohort; the AI website builder is free but it helps with conversions to paid subscribers

Almost one year ago, we launched our AI website builder, which is now available in 20 languages and has been a game changer in our user onboarding strategy. Today, more than 50% of new users are choosing to create their online presence through our AI-powered onboarding process. The tool is resonating particularly well with small businesses and entrepreneurs, as paid subscriptions originating from this AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix, by streamlining the website building process while offering a powerful and tailored commerce-enablement solution…

…Our most recent Self Creators cohort showed a 13% uplift in conversion rate from our AI onboarding tool…

…[Question] A lot of the commentary seems that today, AI Website Builder is helping on conversion. I wanted to ask about specifically, is there an opportunity to directly monetize the AI products within the kind of core website design funnel?

[Answer] So I think that the way we monetize, of course, during the build-up phase of the website, is by making it easier. When our customers are happy with their websites, of course, we convert better. So I don't think there is any better way to monetize than that, right? The more users finish their website, and the better the website, the higher the conversion and the higher the monetization.

Wix now has 29 AI assistants to support users

Earlier this year, we spoke about our plan to embed AI assistants across our platform and we're continuing to push that initiative forward. We now have a total of 29 assistants, spanning a wide range of use cases to support users and to service customers throughout their online journeys.

Wix has a number of AI products that are launching in the next few months that are unlike anything in the market and they will be the first AI products that Wix will be monetising directly

We have a number of AI products coming in the next few months that are unlike anything in the market today. These products will transform the way merchants manage their businesses, redefine how users interact with their customers and enhance the content creation experience. Importantly, these will also be the first AI products we plan to monetize directly. We are on the edge of unforeseen innovation, and I’m looking forward to the positive impact it will have on our users.

Zoom Communications (NASDAQ: ZM)

Zoom’s management has a new vision for Zoom, the AI-first Work Platform for Human Connection

In early October, we hosted Zoomtopia, our annual customer and innovation event, and it was an amazing opportunity to showcase all that we have been working on for our customers. We had a record-breaking virtual attendance, and unveiled our new vision, AI-first Work Platform for Human Connection. This update marks an exciting milestone as we extend our strength as a unified communication and collaboration platform into becoming an AI-first work platform. Our goal is to empower customers to navigate today’s work challenges, streamline information, prioritizing tasks and making smarter use of time.

Management has released AI Companion 2.0, which is an agentic AI technology; AI Companion 2.0 is able to see a broader window of context and gather information from internal and external sources; Zoom AI Companion monthly active users grew 59% sequentially in 2024 Q3; Zoom has over 4 million accounts that have enabled AI Companion; management thinks customers really like Zoom AI Companion; customer feedback for AI Companion has been extremely positive; management does not intend to charge customers for AI Companion

At Zoomtopia, we took meaningful steps towards that vision with the release of AI Companion 2.0…

…This release builds upon the awesome quality of Zoom AI Companion 1.0 across features like Meeting Summary, Meeting Query and Smart Compose, and brings it together in a way that evolves beyond task-specific AI towards agentic AI. This major update allows the AI Companion to see a broader window of context, synthesize the information from internal and external sources, and orchestrate action across the platform. AI Companion 2.0 raises the bar for AI and demonstrates to customers that we understand their needs…

…We saw progress towards our AI-first vision with Zoom AI Companion monthly active users growing 59% quarter-over-quarter…

…At Zoomtopia, we mentioned that there are over 4 million accounts that have already enabled AI Companion. Given the quality, ease of use and no additional cost, customers really like Zoom AI Companion…

…Feedback from our customers at Zoomtopia on Zoom AI Companion 2.0 was extremely positive because, first of all, they look at our innovation, the speed, right? And there are a lot of features built into AI Companion 2.0, again, at no additional cost. At the same time, Enterprise customers also want to have some flexibility. That's why we also introduced the Customized AI Companion and AI Companion Studio. Those will be available in the first half of next year, and those we can monetize…

…We are not going to charge the customer for AI Companion; it comes at no additional cost.

Zscaler is using Zoom AI Companion to improve productivity across the whole company; large enterprises such as HSBC and Exxon Mobil are also using Zoom AI Companion

Praniti Lakhwara, CIO of Zscaler, provided a great example of how Zoom AI Companion helped democratize AI and enhance productivity across the organization, without sacrificing security and privacy. And it wasn't just Zscaler. The RealReal, HSBC, ExxonMobil and Lake Flato Architects shared similar stories about Zoom's secure, easy-to-use solutions helping them thrive in the age of AI and flexible work.

Zoom’s management recently introduced a road map of AI products that expands Zoom’s market opportunity; Custom AI Companion add-on, including paid add-ons for healthcare and education, will be released in 2025 H1; management built the monetisable parts of AI Companion after gathering customer feedback 

Building on our vision for democratizing AI, we introduced a road map of TAM-expanding AI products that create additional business value through customization, personalization and alignment to specific industries or use cases. 

 Custom AI Companion add-on, which will be released in the first half of next year, aims to meet our customers where they are in their AI journey by plugging into knowledge bases, integrating with third-party apps and personalizing experiences like custom AI avatars and AI coaching. Additionally, we announced that we’ll also have Custom AI Companion paid add-ons for health care and education available as early as the first quarter of next year…

…The reason why we introduced the Customized AI Companion and AI Companion Studio is because, a few quarters ago, we talked to many Enterprise customers and they shared feedback with us. They like AI Companion. But some customers have already built their own AI large language models, and they want to know how to [ federate ] that into our federated AI approach. Some customers have very large content, like a knowledge base: how do they connect with that? Some customers have other business systems, right, like ServiceNow, Atlassian and Workday, a lot of Box and HubSpot: how do they connect those data sources? And also, from an employee perspective, they want to have a customized avatar, like an AI as a personal coach as well. Meaning, those customers have customization requirements. To support those requirements, we need to make sure we have the AI infrastructure and technology ready, right? That's the reason why we introduced the Customized AI Companion. The goal is really working together with our customers to tailor it for each Enterprise customer. That's the reason why it's not free.

I think the feedback from Zoomtopia is very positive because, again, those features were not built by just several product managers and engineers thinking, let's build that. We already solicited feedback from our Enterprise customers beforehand, so those features, I think, can truly satisfy their needs.

Zoom's management thinks that Zoom is very well-positioned because it is providing AI-powered tools to customers at no additional cost, unlike other competitors

Given our strength on quality plus no additional cost, Zoom is much better positioned. In particular, customers look at all the vendors when they try to consolidate, and the AI cost is not small, right? You look at some of the competitors: per user per month, $30, right? And look at Zoom: better quality at no additional cost. That's the reason why, when it comes to total cost of ownership, customers look at Zoom as, I think, much better positioned…

…Again, almost every business subscribes to multiple software services. If each software service vendor is going to charge the customer for AI, guess what: every business will have to spend more. That's the reason why they trust Zoom, and I think we are much better positioned.

Zoom’s management is seeing some customers find new budgets to invest in AI, whereas some customers are reallocating budgets from other areas towards AI

Every company, I think now they are all thinking about where they should allocate the budget, right? Where should they get more money or fund, right, to support AI? I think every company is different. And some internal customers, and they have a new budget. Some customers, they consolidated into the few vendors and some customers, they just want to say, hey, maybe actually save the money from other areas and to shift the budget towards embracing AI.

Zoom’s management thinks Zoom will need to continue investing in AI, but they are not worried about the costs because the AI features will be monetised

Look at AI, right? So we have to invest more, right? And I think a few areas, right? One is look at our Zoom Workplace platform, right? We have to [ invent ] more talent, deploy more GPUs and also use more of the cloud, basically GPUs, as well as we keep improving the AI quality and innovate on AI features. That’s for Workplace. And at the same time, we are going to introduce the customized AI Companion, also AI Studio next year. Not only do we offer the free service for AI Companion, but those Enterprise customization certainly can help us in terms of monetization. At the same time, we leverage the technology we build for the workplace, apply that to the Contact Center, like Zoom Virtual Agent, right, and also some other Contact Center features. We can share the same AI infrastructure and also a lot of technology components and also can be shared with Zoom Contact Center.

Where AI Companion is not free, the Contact Center is different, right? We also can monetize. Essentially, we build the same common AI infrastructure architecture and Workplace — Customized AI Companion, we can monetize. Contact Center, also, we can monetize. I think more and more — and like today, you see you keep investing more and more, and soon, we can also monetize more as well. That’s why I think we do not worry about the cost in the long run at all, I mean, the AI investment because with the monetization coming in, certainly can help us more. So, so far, we feel very comfortable.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google and GCP), Amazon (parent of AWS), Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Veeva Systems, Wix, and Zoom Video Communications. Holdings are subject to change at any time.

The Pitfalls of Using IRR 

IRR is a useful calculation but it has its limitations.

The internal rate of return (IRR) is a commonly used metric to estimate the profitability of an investment. It can be used to assess whether an investment is worth making or not. It is also used to assess the performance of investment funds, such as venture capital and private equity funds.

However, an IRR can be somewhat misleading and actual returns can differ significantly from what the IRR shows you. This is because the IRR only calculates the return on investment starting at the point when cash is deployed. In many funds, cash may not be deployed immediately, which results in a cash drag that is not accounted for in the IRR calculation.

The IRR also makes an assumption that the cash generated can be redeployed at the calculated IRR rate. This is often not the case.

Here are some examples to illustrate these points.

Cash drag

Venture capital and private equity funds are unique in that investors do not give the committed capital to a fund immediately. Instead, investors make a commitment to a fund. The fund only asks for the money when it has found a startup or company to invest in; this is called paid-in capital, which differs from committed capital.

To calculate returns, venture capital and private equity funds use the IRR based only on paid-in capital. This means that while the IRR of two venture funds can look the same, the actual returns can be very different. Let’s look at two IRR scenarios below:

|        | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR |
| Fund A | -$1000 | $0     | $0     | $2000  | $0     | $0     | 26% |
| Fund B | $0     | $0     | -$1000 | $0     | $0     | $2000  | 26% |

Both Fund A and Fund B have an IRR of 26%. The difference is that Fund A deployed its capital straight away while Fund B only deployed it in Year 2. Investors in Fund A are actually much better off, as they can redeploy the $2000 received in Year 3 into another investment vehicle to compound returns. Fund B’s investors, meanwhile, suffered a cash drag: their committed capital sat idle in Years 0 and 1, and this drag is not captured in the IRR calculation.
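To make the arithmetic concrete, here is a minimal Python sketch that solves for both funds’ IRRs and then quantifies the cash drag. The bisection solver and the 10% reinvestment rate are my own illustrative assumptions, not anything a fund would report:

```python
# The IRR is the rate r at which the net present value (NPV) of a series
# of cash flows is zero: sum(cf_t / (1 + r)**t) == 0.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    # Simple bisection; fine for cash flows with a single sign change.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

fund_a = [-1000, 0, 0, 2000, 0, 0]  # capital deployed immediately
fund_b = [0, 0, -1000, 0, 0, 2000]  # capital sits idle until Year 2

print(f"Fund A IRR: {irr(fund_a):.0%}")  # ~26%
print(f"Fund B IRR: {irr(fund_b):.0%}")  # ~26%

# Same IRR, but Fund A's investors get $2000 back in Year 3 and can
# redeploy it. At an assumed 10% reinvestment rate, by Year 5 they hold:
print(f"Fund A investor's Year 5 wealth: ${2000 * 1.10**2:,.0f}")  # $2,420 vs Fund B's $2,000
```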

Wrong assumptions

The IRR formula also assumes that the cash returned to investors can be redeployed at the IRR rate. As mentioned above, this is not always the case. Take the example below:

|              | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR   |
| Investment A | -$1000 | $300   | $300   | $300   | $300   | $300   | 15.2% |
| Investment B | -$1000 | $0     | $0     | $0     | $0     | $2025  | 15.2% |

In the above scenario, both Investment A and Investment B provide a 15.2% IRR. However, there is a difference in the timing of cash flows. Investment A provides cash flow of $300 per year while Investment B provides a one-time $2025 cash flow at the end of Year 5. While the IRR is the same, investors should opt for Investment B.

This is because the IRR calculation assumes that the cash flow generated can be deployed at similar rates as the IRR. But the reality is that oftentimes, the cash flow can neither be redeployed immediately, nor at similar rates to the investment.

For instance, suppose the cash flow generated can only be reinvested at a 10% return. Here are the adjusted returns at the end of Year 5 for Investment A:

|                         | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR   |
| Investment A            | -$1000 | $300   | $300   | $300   | $300   | $300   | 15.2% |
| Investment A (adjusted) | -$1000 | $0     | $0     | $0     | $0     | $1832  | 12.9% |
| Investment B            | -$1000 | $0     | $0     | $0     | $0     | $2025  | 15.2% |

I calculated $1832 by compounding each $300 cash flow at a 10% rate from the year it was received until the end of Year 5. As you can see, after doing this, the return generated from Investment A falls to just 12.9%, versus the 15.2% calculated previously.
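Continuing the sketch above, the adjustment takes only a few lines (the 10% reinvestment rate is the assumption from the example):

```python
reinvest_rate = 0.10  # assumed return on redeployed cash flows

# Compound each $300 payment at 10% from the year it is received to Year 5.
future_value = sum(300 * (1 + reinvest_rate) ** (5 - t) for t in range(1, 6))
print(f"Value at Year 5: ${future_value:,.0f}")  # ~$1,832

# The realised annual return is the rate that grows $1000 into that sum.
realised = (future_value / 1000) ** (1 / 5) - 1
print(f"Adjusted return: {realised:.1%}")  # ~12.9%, down from the 15.2% IRR
```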

The bottom line

Using the IRR to calculate investment returns is a good starting point to assess an investment opportunity. This can be used for investments such as real estate or private equity funds.

But it is important to note the limitations of the IRR calculation. It can overstate or understate actual returns, depending on the timing of the cash flows as well as the actual returns on the cash generated.

A key rule of thumb is that the IRR is best used when cash can be deployed quickly so that there is minimal cash drag, and when the cash generated can be redeployed at close to the IRR of the investment. If these assumptions do not hold, then the returns of the investment need to be calculated manually, using the actual returns earned on the cash generated.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

The Best Investment Theme For The New Trump Presidency

There is no shortage of investing ideas being thrown around that could potentially do well under the new Trump administration – but what would actually work?

Last week, Donald Trump won the latest US Presidential Elections, which will see him sworn in as the USA’s new President on 20 January 2025. Often, there’s a huge rush of investment themes that accompany the inauguration of a new political leader in a country. It’s no exception this time.

For my own investment activities, the only theme I’m in favour of with the new Trump presidency – in fact, with any new presidency – is to look at a stock as a piece of a business, and assess the value of that business. Why? Because there’s a long history of investment themes accompanying shifts in political leadership that have soured. In a November 2014 article for The Motley Fool, Morgan Housel shared some examples:

“During the 1992 election, a popular argument was that Bill Clinton’s proposed remake of the U.S. healthcare system would be disastrous for pharmaceutical stocks… by the end of Clinton’s presidency pharmaceutical companies were some of the most valuable companies in the world. Pfizer increased 791% during Clinton’s presidency. Amgen surged 611%. Johnson & Johnson popped 385%. Merck jumped 299%. Those crushed the market, with the S&P 500 rising 251% from January 1993 to January 2001…

…During the 2000 election, Newsweek wrote that if George W. Bush wins, the ensuing tax changes could “help banks, brokers and other investment firms.” By the end of Bush’s second term, the KBW Bank Index had dropped almost 80%. The article also recommended pharmaceutical stocks thanks to Bush’s light touch on regulation. The NYSE Pharmaceutical Index lost nearly half its value during Bush’s presidency…

…During the 2008 election, many predicted that an Obama victory would be a win for green energy like solar and wind and a loss for big oil… The opposite happened: The iShares Clean Energy ETF is down 51% since then, while Chevron (CVX) is up 110%.

During the 2012 election, Fox Business wrote that if Obama wins, “home builders such as Pulte and Toll Brothers could see increased demand for new homes due to a continuation of the Obama Administration’s efforts to limit foreclosures, keeping homeowners in their existing properties.” Their shares have underperformed the S&P 500 by 26 percentage points and 40 percentage points since then, respectively.”

It was more of the same in the presidential elections that came after Housel’s article.

When Trump won the 2016 US elections for his first term as President, CNBC proclaimed the banking sector as a strong beneficiary because of his promises to ease banking regulations. But from the day Trump was sworn into office (Presidents-elect are typically sworn in on 20 January of the year following the elections) till the time he stepped down four years later, the KBW Nasdaq Bank Index was up by less than 20%, whereas the S&P 500 was up by nearly 70%. The KBW Nasdaq Bank Index tracks the stock market performance of 24 of America’s largest banks.

CNBC surveyed more than 100 investment professionals shortly after Joe Biden won the 2020 elections. They thought that “consumer discretionary, industrials and financials will perform the best under a Biden administration.” From Biden’s first day as President till today, the S&P 500 is up by slightly under 60%. Meanwhile, the S&P 500 Consumer Discretionary Index, which comprises consumer discretionary companies within the S&P 500 index, has gained just around 30%. The Dow Jones Industrials Index (a collection of American industrial companies) and the KBW Nasdaq Bank Index are both also trailing the S&P 500 with their respective gains of around 40% and 20%.

I have no idea if the hot themes for Trump’s second term as President would end up performing well. But given the weight of the historical evidence, I have no interest in participating in them. Politics and investing seldom mix well.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Problems With China’s Economy And How To Fix Them

An analysis of China’s balance sheet recession, and what can be done about it.

Economist Richard Koo (Gu Chao Ming) is the author of the book The Other Half of Macroeconomics and the Fate of Globalization. Investor Li Lu published a Mandarin review of the book in November 2019, which I translated into English in March 2020. When I translated Li’s review, I found myself nodding in agreement with Koo’s unique concept of a balance sheet recession, as well as his analyses of Japan’s economic collapse in the late 1980s and early 1990s and the Japanese government’s responses to the crash.

When I realised that Koo was interviewed last week in an episode of the Bloomberg Odd Lots podcast to discuss the Chinese government’s recent flurry of stimulus measures, I knew I had to tune in – and I was not disappointed. In this article, I want to share my favourite takeaways (the paragraphs in italics are transcripts from the podcast).

Takeaway #1: China is currently facing a balance sheet recession, and in a balance sheet recession, the economy can shrink very rapidly and be stuck for a long time

I think China is facing balance sheet recession and balance sheet recession happens when a debt-financed bubble bursts, asset prices collapse, liabilities remain, people realise that their balance sheets’ under water or nearly so, and they all try to repair their balance sheets all at the same time…

…Suppose I have $1000 of income and I spend $900 myself. The $900 is already someone else’s income so that’s not a problem. But the $100 that I saved will go through people like us, our financial institutions, and will be lent to someone who can use it. That person borrows and spends it, then total expenditure in economy will be $900 that I spent, plus $100 that this guy spent, to get $1000 against original income of $1000. That’s how economy moves forward, right? If there are too many borrowers and economy is doing well, central banks will raise rates. Too few, central bank will lower rates to make sure that this cycle is maintained. That’s the usual economy.

But what happens in the balance sheet recession is that when I have $1000 in income and I spend $900 myself, that $900 is not a problem. But the $100 I decide to save ends up stuck in the financial system because no one’s borrowing money. And China, so many people are refusing to borrow money these days because of that issue. Then economy shrinks from $1000 to $900, so 10% decline. The next round, the $900 is someone else’s income, when that person decides to save 10% and spends $810 and decides to save $90, that $90 gets stuck in the financial system again, because repairing financial balance sheets could take a very long time. I mean, Japanese took nearly 20 years to repair their balance sheets.

But in the meantime, economy can go from $1000, $900, $810, $730, very, very quickly. That actually happened in United States during the Great Depression. From 1929 to 1933, the United States lost 46% of its nominal GDP. Something quite similar actually happened in Spain after 2008, when unemployment rates skyrocketed to 26% in just three and a half years or so. That’s the kind of danger we face in the balance sheet recession.
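Koo’s arithmetic is easy to reproduce. Here is a minimal sketch of the spiral he describes, assuming a constant 10% saving rate and a financial system that recycles none of the savings; the numbers are illustrative, not a forecast:

```python
income = 1000.0
saving_rate = 0.10

for year in range(1, 6):
    saved = income * saving_rate  # trapped in the financial system: no borrowers
    income -= saved               # only the spent portion becomes next period's income
    print(f"Year {year}: aggregate income falls to ${income:,.0f}")

# Output: $900, $810, $729, $656, $590 -- a 41% contraction in five rounds,
# unless a borrower of last resort (the government) re-spends the savings.
```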

Takeaway #2: Monetary policy (changing the level of interest rates) is not useful in dealing with a balance sheet recession – what’s needed is fiscal policy (government spending), but it has yet to arrive for China

I’m no great fan of using monetary policy, meaning policies from the central bank to fight what I call a balance sheet recession…

…Repairing balance sheets of course is the right thing to do. But when everybody does it all at the same time, we enter the problem of fallacy of composition, in that even though everybody’s doing the right things, collectively we get the wrong results. And we get that problem in this case because in the national economy, if someone is repairing balance sheets, meaning paying down debt or increasing savings, someone has to borrow those funds to keep the economy going. But in usual economies, you bring interest rates down, there’ll be people out there willing to borrow the money and spend it. That’s how you keep the economy going.

But in the balance sheet recession, you bring interest rates down to very low levels – and Chinese interest rates are already pretty low. But even if you bring it down to zero, people will be still repairing balance sheets because if you are in negative equity territory, you have to come out of that as quickly as possible. So when you’re in that situation, you cannot expect private sector to respond to lowering of interest rates or quantitative easing, forward guidance, and all of those monetary policy, to get this private sector to borrow money again because they are all doing the right things, paying down debt. So when you’re in that situation, the economy could weaken very, very quickly because all the saved funds that are returned to the banking system cannot come out again. That’s how you end up with economy shrinking very, very rapidly.

The only way to stop this is for the government, which is outside of the fallacy of composition, to borrow money. And that’s the fiscal policy of course, but that hasn’t come out yet. And so yes, they did the quick and easy part with big numbers on the monetary side. But if you are in balance sheet recession, monetary policy, I’m afraid is not going to be very effective. You really need a fiscal policy to get the economy moving and that hasn’t arrived yet.

Takeaway #3: China’s fiscal policy for dealing with the balance sheet recession needs to be targeted, and a good place to start would be to complete all unfinished housing projects in the country, followed by developing public works projects with a social rate of return that’s higher than Chinese government bond yields

If people are all concerned about repairing their balance sheets, you give them money to spend and too often they just use it to pay down debt. So even within fiscal stimulus, you have to be very careful here because tax cuts I’m afraid, are not very effective during balance sheet recessions because people use that money to repair their balance sheets. Repairing balance sheets is of course the right thing to do, but it will not add to GDP when they’re using that tax cuts to pay down debt or rebuild their savings. So that will not add to consumption as much as you would expect under ordinary circumstances. So I would really like to see government just borrow and spend the money because that will be the most effective way to stop the deflationary spiral…

… I would use money first to complete all the apartments that were started but are not yet complete. In that case you might have to take some heavy handed actions, but basically the government should take over these companies and the projects, and start putting money so that they’ll complete the projects. That way, you don’t have to decide what to make, because the things that are already in the process of being built – or the construction drawings are there, workers are there, where to get the materials. And in many cases, potential buyers already know. So in that case, you don’t waste time thinking about what to build, who’s to design, and who the order should go to.

Remember President Obama, when he took over 2009, US was in a balance sheet recession after the collapse of the housing bubble. But he was so careful not to make the Japanese mistake of building bridges to nowhere and roads to nowhere. He took a long time to decide which projects should be funded. But that year-and-a-half or so, I think the US lost quite a bit of time because during that time, economy continued to weaken. There were no shovel-ready projects.

But in the Chinese case, I would argue that these uncompleted apartments are the shovel-ready projects. You already know who wants them, who paid their down payments and all of that. So I will spend the money first on those projects, complete those projects, and use the time while the money is used to complete these apartments.

I would use the magic wand to get the brightest people in China to come into one room and ask them to come up with public works projects with a social rate of return higher than 2.0%. The reason is that Chinese government bond is about 2.00-something. If these people can come up with public works projects with a social rate of return higher than let’s say 2.1%, then those projects will be basically self-financing. It won’t be a burden on future taxpayers. Then once apartments are complete, then the economy still is struggling from balance sheet recession, then I would like to spend the money on those projects that these bright people might come up with.

Takeaway #4: The central government in China actually has a budget deficit that is a big part of the country’s GDP, unlike what official statistics say

But in China, even though same rules should have applied, local governments were able to sell lots of land, make a lot of money in the process, and then they were able to do quite a bit of fiscal stimulus, which also of course added to their GDP. That model will have to be completely revised now because no one wants to buy land anymore. So the big source of revenue of local governments are gone and as a result, many of them are very close to bankrupt. Under the circumstances, I’m afraid central government will have to take over a lot of these problems from the local government. So this myth that Chinese central government, the budget deficit is not a very big part of GDP, that myth will have to be thrown out. Central government will have to take on, not all of it perhaps, but some of the liabilities of the local governments so that local governments can move forward.

Takeaway #5: There’s plenty of available capital for the Chinese central government to borrow, and the low yields of Chinese government bonds are a sign of this

So even though budget deficit of China might be very large, the money is there for government to borrow. If the money is not there for the government to borrow, Chinese government bond yields should have gone up higher and higher. But as you know, Chinese government 10-year government bond yields almost down to 2.001% or 2%. It went that low because there are not enough borrowers out there. Financial institutions have to place this money somewhere, all these deleveraged funds coming back into the financial institutions, newly generated savings, all the money that central bank put in, all comes to basically people like us in the financial institutions, the fund managers. But if the private sector is not borrowing money, the only borrower left is the government.

So even if the required budget deficit might be very large to stabilize the economy, the funds are available in the financial market. Only the government just have to borrow that and spend it. So financing should not be a big issue for governments in balance sheet recession. Japan was running huge budget deficits and a lot of conventional minded economists who never understood the dynamics of balance sheet recession was warning about Japan’s budget deficit growing sky high, and then interest rates going sky high. Well, interest rates kept on coming down because of the mechanism that I just described to you, that all those funds coming into the financial sector cannot go to the private sector, end up going to our government bond market. And I see the same pattern developing in China today.

Takeaway #6: Depending on exports is a great way for a country to escape from a balance sheet recession, but this route is not available for China because its economy is already running the largest trade surplus in the world

Export is definitely one of the best ways if you can use it, to come out of balance sheet recession. But China, just like Japan 30 years ago, is the largest trade surplus country in the world. And if the world’s largest trade surplus country in the world tries to export its way out, very many trading partners will complain. You are already such a large destabilizing factor on the world trade, now you’re going to destabilize it even more.

I remember 30 years ago that United States, Europe, and others were very much against Japan trying to export its way out. Because of their displeasure, particularly the US displeasure, Japanese yen, which started at 160 yen when the bubble burst in 1990, ended up 80 yen to the dollar, five years later, 1995. What that indicated to me was that if you’re running trade deficit, you can probably export your way out and no one can really complain because you are a deficit country to begin with. But if you are the surplus country, and if you’re the largest trade surplus country in the world, there will be huge pushback against that kind of move by the Chinese. We already seeing that, in very many countries complaining that China should not export its problems.

Takeaway #7: Regulatory uncertainties for businesses that are caused by the Chinese central government may have played a role in the corporate sector’s unwillingness to borrow

Aside from a balance sheet recession, which is a very, very serious disease to begin with, we have those other factors that started hurting the Chinese economy, I would say, starting as early as 2016.

When you look at the flow of funds data for the Chinese economy, you notice that the Chinese corporate sector started reducing their borrowings, starting around 2016. So until 2016, Chinese companies were borrowing all the household sector savings generated, which is of course the ideal world. The household sector saving money, the corporate sector borrowing money. But starting around 2016, you see corporate sector borrowing less and less. And at around the Covid time, corporate sector was actually a net saver, not a net borrower. So that trend, I think has to do with what you just described, that regulatory uncertainties got bigger and bigger under the current leadership and I think people began to realize that even after you make these big investments in the new projects, they may not be able to expect the same revenue stream that they expected earlier because of this regulatory uncertainty.

Takeaway #8: China’s economy was already running a significant budget deficit prior to the bubble bursting, and this may have made the central government reluctant to step in as borrower of last resort now to fix the balance sheet recession

If the household sector is saving money, but the corporate sector is not borrowing money, you need someone else to fill that gap. And actually that gap was filled by Chinese government, mostly decentralized local governments. But if that temporary fiscal jolt of fiscal stimulus then turn the economy around, then those local government interventions would’ve been justified. But because this was a much more deeply rooted – here, I would use structural problems, this regulatory uncertainties and middle income trap and so forth – local government just had to keep on borrowing and spending money to keep the economy going. That was happening long before the bubble burst. So if you look at total, or what I call general government spending – not just the central government, but the general government – they were financial deficit to the tune of almost 7% of GDP by 2022. This is before the bubble bursting.

So if you are already running a budget deficit, 7% of GDP before the onset of balance sheet recession, then whatever you have to do to stop balance sheet recession, we have to be on top of the 7%. Suppose you need 5% GDP equivalent to keep the economy going, then you’re talking about 12% of GDP budget deficit. I think that’s one of the reasons why Chinese policy makers, even though many of them are fully aware that in the balance sheet recession, you need the government to come in, they haven’t been able to come to a full consensus yet because even before the bubble burst, Chinese government was writing a large budget deficit.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

How Recessions and Interest Rate Changes Affect Stocks

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future.

After years of investing in stocks, I’ve noticed that stock market participants place a lot of emphasis on how recessions and changes in interest rates affect stocks. This topic is even more important right now for investors in US stocks, given fears that a recession could happen soon in the country, and the interest rate cut last month by the Federal Reserve, the country’s central bank. I have no crystal ball, so I have no idea how the US stock market would react if a recession were to arrive in the near future and/or the Federal Reserve continues to lower interest rates.   

What I have is historical context. History is of course not a perfect indicator of the future, but it can give us context for possible future outcomes. I’ve written a few articles over the years in this blog discussing the historical relationships between stocks, recessions, and movements in interest rates, some of which are given below (from oldest to the most recent):

I thought it would be useful to collect the information from these separate pieces into a single place, so here goes!

The history of recessions and stocks

These are the important historical relationships between recessions and stocks:

  • It’s not a given that stocks will definitely fall during a recession. According to a June 2022 article by Ben Carlson, Director of Institutional Asset Management at Ritholtz Wealth Management, there have been 12 recessions in the USA since World War II (WWII). The average return for the S&P 500 (a broad US stock market benchmark) when all these recessions took place was 1.4%. There were some horrible returns within the average. For example, the recession that stretched from December 2007 to June 2009 saw the S&P 500 fall by 35.5%. But there were also decent returns. For the recession between July 1981 and November 1982, the S&P 500 gained 14.7%.
  • Holding onto stocks in the lead up to, through, and in the years after a recession, has mostly produced good returns. Carlson also showed in his aforementioned article that if you had invested in the S&P 500 six months prior to all of the 12 recessions since WWII and held on for 10 years after each of them, you would have earned a positive return on every occasion. Furthermore, the returns were largely rewarding. The worst return was a total gain of 9.4% for the recession that lasted from March 2001 to November 2001. The best was the first post-WWII recession that happened from November 1948 to October 1949, a staggering return of 555.7%. After taking away the best and worst returns, the average was 257.2%. 
  • Avoiding recessions flawlessly would have caused your return to drop significantly. Data from Michael Batnick, Carlson’s colleague at Ritholtz Wealth Management, showed that a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and did nothing. But if you invested the same dollar in US stocks at the start of 1980 and expertly side-stepped the ensuing recessions to perfection, you would have less than $32 at the same endpoint.
  • Stocks tend to bottom before the economy does. The three most recent recessions in the USA prior to COVID-19 would be the recessions that lasted from July 1990 to March 1991, from March 2001 to November 2001, and from December 2007 to June 2009. During the first recession in this sample, data on the S&P 500 from Yale economist Robert Shiller, who won a Nobel Prize in 2013, showed that the S&P 500 bottomed in October 1990. In the second episode, the S&P 500 found its low 15 months after the end of the recession, in February 2003. This phenomenon was caused by the aftermath of the dotcom bubble’s bursting. For the third recession, the S&P 500 reached a trough in March 2009, three months before the recession ended. Moreover, after the December 2007 – June 2009 recession ended, the US economy continued to worsen in at least one important way over the next few months. In March 2009, the unemployment rate was 8.7%. By June, it rose to 9.5% and crested at 10% in October. But by the time the unemployment rate peaked at 10%, the S&P 500 was 52% higher than its low in March 2009. Even if we are right today that the economy would be in worse shape in the months ahead, stocks may already have bottomed or be near one – only time can tell.
  • The occurrence of multiple recessions has not stopped the upward march of stocks. The logarithmic chart below shows the performance of the S&P 500 (including dividends) from January 1871 to February 2020. It turns out that US stocks have done exceedingly well over these 149 years (up 46,459,412% in total including dividends, or 9.2% per year) despite the US economy having encountered numerous recessions. If you’re investing for the long run, recessions are nothing to fear.
Figure 1; Source: Robert Shiller data; National Bureau of Economic Research
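As a quick sanity check on the annualised figure in the last bullet point, the 46,459,412% total return over 149 years can be converted to a yearly rate in two lines of Python (my own verification, using the numbers quoted above):

```python
growth_multiple = 1 + 46_459_412 / 100  # total return of 46,459,412%, Jan 1871 to Feb 2020
annualised = growth_multiple ** (1 / 149) - 1
print(f"{annualised:.1%}")  # ~9.2% per year, dividends included
```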

The history of interest rates and stocks

These are the important historical relationships between interest rates and stocks:

  • Rising interest rates have been met with rising valuations. According to Robert Shiller’s data, the US 10-year Treasury yield was 2.3% at the start of 1950. By September 1981, it had risen to 15.3%, the highest rate recorded in Shiller’s dataset. In that same period, the S&P 500’s price-to-earnings (P/E) ratio moved from 7 to 8. In other words, the P/E ratio for the S&P 500 increased slightly despite the huge jump in interest rates. It’s worth noting too that the S&P 500’s P/E ratio of 7 at the start of 1950 was not a result of earnings that were temporarily inflated. Yes, there’s cherry picking with the dates. For example, if I had chosen January 1946 as the starting point, when the US 10-year Treasury yield was 2.2% and the P/E ratio for the S&P 500 was 19, then it would be a case of valuations falling alongside rising interest rates. But this goes to show that while interest rates have a role to play in the movement of stocks, it is far from the only thing that matters.
  • Stocks have climbed in rising interest rate environments. In a September 2022 piece, Carlson showed that the S&P 500 climbed by 21% annually from 1954 to 1964 even when the yield on 3-month Treasury bills (a good proxy for the Fed Funds rate, which is the key interest rate set by the Federal Reserve) surged from around 1.2% to 4.4% in the same period. In the 1960s, the yield on the 3-month Treasury bill doubled from just over 4% to 8%, but US stocks still rose by 7.7% per year. And then in the 1970s, rates climbed from 8% to 12% and the S&P 500 still produced an annual return of nearly 6%.
  • Stocks have done poorly in both high and low interest rate environments, and have also done well in both high and low interest rate environments. Carlson published an article in February 2023 that looked at how the US stock market performed in different interest rate regimes. It turns out there’s no clear link between the two. In the 1950s, the 3-month Treasury bill (which is effectively a risk-free investment, since it’s a US government bond with one of the shortest maturities around) had a low average yield of 2.0%; US stocks returned 19.5% annually back then, a phenomenal gain. In the 2000s, US stocks fell by 1.0% per year when the average yield on the 3-month Treasury bill was 2.7%. Meanwhile, a blockbuster 17.3% annualised return in US stocks in the 1980s was accompanied by a high average yield of 8.8% for the 3-month Treasury bill. In the 1970s, the 3-month Treasury bill yielded a high average of 6.3% while US stocks returned just 5.9% per year. 
  • A cut in interest rates by the Federal Reserve is not guaranteed to be a good or bad event for stocks. Josh Brown, CEO of Ritholtz Wealth Management, shared fantastic data in an August 2024 article on how US stocks have performed in the past when the Federal Reserve lowered interest rates. His data, in the form of a chart, goes back to 1957 and I reproduced them in tabular format in Table 1; it shows how US stocks did in the next 12 months following a rate cut, as well as whether a recession occurred in the same window. I also split the data in Table 1 according to whether a recession had occurred shortly after a rate cut, since eight of the 21 past rate-cut cycles from the Federal Reserve since 1957 took place without an impending recession. Table 2 shows the same data as Table 1 but for rate cuts with a recession; Table 3 is for rate cuts without a recession. What the data show is that US stocks have historically done well, on average, in the 12 months following a rate-cut. The overall record, seen in Table 1, is an average 12-month forward return of 9%. When a recession happened shortly after a rate-cut, the average 12-month forward return is 8%; when a recession did not happen shortly after a rate-cut, the average 12-month forward return is 12%. A recession is not necessarily bad for stocks. As Table 2 shows, US stocks have historically delivered an average return of 8% over the next 12 months after rate cuts that came with impending recessions. It’s not a guarantee that stocks will produce good returns in the 12 months after a rate cut even if a recession does not occur, as can be seen from the August 1976 episode in Table 3.
Table 1; Source: Josh Brown
Table 2; Source: Josh Brown
Table 3; Source: Josh Brown

Conclusion

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future. But it’s also worth bearing in mind that anything can happen in the financial markets. Things that have never happened before do happen, so there are limits to learning from history. Nonetheless, there’s a really important lesson from all the data seen above that I think is broadly applicable even far into the future, and it is that one-factor analysis in finance – “if A happens, then B will occur” – should be largely avoided because clear-cut relationships are rarely seen.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time. 

The Federal Reserve Has Much Less Power Over Financial Markets Than You Think 

It makes sense to mostly ignore the Federal Reserve’s actions when assessing opportunities in the stock market.

Last week, the Federal Reserve, the USA’s central bank, opted to lower the federal funds rate (the key interest rate controlled by it) by 50 basis points, or 0.5%. The move, both before and after it was announced, was heavily scrutinised by market participants. There’s a wide-held belief that the Federal Reserve wields tremendous influence over nearly all aspects of financial market activity in the USA.

But Aswath Damodaran, the famed finance professor from New York University, made an interesting observation in a recent blog post: The Federal Reserve actually does not have anywhere close to the level of influence over America’s financial markets as many market participants think.

In his post, Damodaran looked at the 249 calendar quarters from 1962 to 2024, classified them according to how the federal funds rate changed, and compared the changes to how various metrics in the US financial markets moved. There were 96 quarters in the period where the federal funds rate was raised, 132 quarters where it was cut, and 21 quarters where it was unchanged. Some examples of what he found:

  • A median change of -0.01% in the 10-year Treasury rate was seen in the following quarter after the 96 quarters where the federal funds rate increased, whereas a median change of 0.07% was seen in the following quarter after the 132 quarters where the federal funds rate was lowered. Put another way, the 10-year Treasury rate has historically tended to (1) decrease when the federal funds rate increased, and (2) increase when the federal funds rate decreased. This means that the Federal Reserve has very little control over longer-term interest rates. 
  • A median change of -0.13% in the 15-year mortgage rate was seen in the following quarter after the quarters where the federal funds rate increased, whereas a median change of -0.06% was seen in the following quarter after the quarters where the federal funds rate was lowered. It turns out that the Federal Reserve also exerts little control over the types of interest rates that consumers directly interact with on a frequent basis.
  • A median change of 2.85% in US stocks was seen in the following quarter after the quarters where the federal funds rate increased, a median change of 3.07% was seen in the following quarter after the quarters where the federal funds rate was lowered, and a median change of 5.52% was seen in the following quarter after the quarters where the federal funds rate was unchanged. When discussing the stock-market related data, Damodaran provided a provocative question and answer: 

“At the risk of disagreeing with much of conventional wisdom, is it possible that the less activity there is on the part of the Fed, the better stocks do? I think so, and stock markets will be better served with fewer interviews and speeches from members of the FOMC and less political grandstanding (from senators, congresspeople and presidential candidates) on what the Federal Reserve should or should not do.”

I have always paid scant attention to what the Federal Reserve is doing when making my investing decisions. My view, born from observations of financial market history* and a desire to build a lasting investment strategy, is that business fundamentals trump macro-economics. Damodaran’s data lends further support for my stance to mostly ignore the Federal Reserve’s actions when I assess opportunities in the stock market. 

*A great example can be found in Berkshire Hathaway, Warren Buffett’s investment conglomerate. Berkshire produced an 18.7% annual growth rate in its book value per share from 1965 to 2018, which drove a 20.5% annual increase in its stock price. Throughout those 53 years, Berkshire endured numerous macro worries, such as the Vietnam War, the Black Monday stock market crash, the “breaking” of the Bank of England, the Asian Financial Crisis, the bursting of the Dotcom Bubble, the Great Financial Crisis, Brexit, and the US-China trade war. Damodaran’s aforementioned blog post also showed that the federal funds rate moved from around 5% in the mid-1960s to more than 20% in the early-1980s and then to around 2.5% in 2018. And yet, an 18.7% input (Berkshire’s book value per share growth) still resulted in a 20.5% output (Berkshire’s stock price growth).
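Damodaran’s classification exercise is also straightforward to replicate if you have a quarterly series of the federal funds rate and the market metric you care about. Below is a minimal pandas sketch; the CSV file and its column names (quarter, fed_funds, ten_year) are my own placeholders, not Damodaran’s actual dataset:

```python
import pandas as pd

# Hypothetical input: one row per calendar quarter.
df = pd.read_csv("quarterly_rates.csv")  # columns: quarter, fed_funds, ten_year

# Classify each quarter by how the federal funds rate changed.
df["ff_change"] = df["fed_funds"].diff()
df["regime"] = pd.cut(
    df["ff_change"],
    bins=[-float("inf"), -1e-9, 1e-9, float("inf")],
    labels=["cut", "unchanged", "hike"],
)

# Change in the 10-year Treasury rate over the *following* quarter.
df["next_q_change"] = df["ten_year"].diff().shift(-1)

# Median next-quarter move within each regime, as in Damodaran's comparisons.
print(df.groupby("regime", observed=True)["next_q_change"].agg(["median", "count"]))
```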


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.