The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are software products from OpenAI that use AI to generate art and writing, respectively, often at astounding quality. Since then, developments in AI have progressed at a breathtaking pace.
With the latest earnings season for the US stock market – for the fourth quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:
- 2023 Q1 – here and here
- 2023 Q2 – here and here
- 2023 Q3 – here and here
- 2023 Q4 – here and here
- 2024 Q1 – here and here
- 2024 Q2 – here and here
- 2024 Q3 – here and here
I’ve split the latest commentary into two parts for the sake of brevity. This is Part 2, and you can find Part 1 here. With that, I’ll let the management teams take the stand…
Microsoft (NASDAQ: MSFT)
Microsoft’s management is seeing enterprises move to enterprise-wide AI deployments
Enterprises are beginning to move from proof of concepts to enterprise-wide deployments to unlock the full ROI of AI.
Microsoft’s AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-on-year; Microsoft’s AI business did better than expected because of Azure and Microsoft Copilot; within Copilot, seat growth and price per seat were strengths, and price per seat remains a good signal for value
Our AI business has now surpassed an annual revenue run rate of $13 billion, up 175% year-over-year…
…[Question] Can you give more color on what drove the far larger-than-expected Microsoft AI revenue? We talked a bit about the Azure AI component of it. But can you give more color on that? And our estimates are that the Copilot was much bigger than we had expected and growing much faster. Any more details on the breakdown of what that Microsoft AI beat would be great.
[Answer] A couple of pieces to that, which you correctly identified, number one is the Azure component we just talked about. And the second piece, you’re right, Microsoft Copilot was better. And what was important about that, we saw strength both in seats, both new seats and expansion seats, as Satya talked about. And usage doesn’t directly impact revenue, but of course, indirectly does as people get more and more value added. And also price per seat was actually quite good. We still have a good signal for value.
Microsoft’s management is seeing AI scaling laws continue to show up in both pre-training and inference-time compute, and both phenomena have been observed internally at Microsoft for years; management has seen gains of 2x in price performance for each new hardware generation, and 10x for each new model generation
AI scaling laws continue to compound across both pretraining and inference time compute. We ourselves have been seeing significant efficiency gains in both training and inference for years now. On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimizations.
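To make those two compounding effects concrete, here is a quick back-of-envelope sketch in Python (my own illustrative arithmetic, not Microsoft's figures):

```python
# Back-of-envelope arithmetic (mine, not Microsoft's): compounding the
# claimed >2x price-performance gain per hardware generation with the
# >10x gain per model generation from software optimizations.
hw_gain_per_gen = 2      # price-performance multiple per hardware generation
model_gain_per_gen = 10  # price-performance multiple per model generation

for hw_gens, model_gens in [(1, 1), (2, 2), (3, 2)]:
    combined = hw_gain_per_gen ** hw_gens * model_gain_per_gen ** model_gens
    print(f"{hw_gens} hardware gen(s) + {model_gens} model gen(s) "
          f"-> ~{combined}x cheaper per unit of inference")
# 1+1 -> ~20x, 2+2 -> ~400x, 3+2 -> ~800x
```

Even with just a couple of generations on each axis, the combined effect is measured in the hundreds, which is why management frames these as compounding rather than additive gains.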
Microsoft’s management is balancing across training and inference in the buildout of Microsoft’s AI capacity; the buildout going forward will be governed by revenue growth and capability growth; Microsoft’s Azure data center capacity is expanding in line with both near-term and long-term demand signals; Azure has more than doubled its capacity in the last 3 years, and added a record amount of capacity in 2024; Microsoft’s data centres use both in-house and 3rd-party chips
Much as we have done with the commercial cloud, we are focused on continuously scaling our fleet globally and maintaining the right balance across training and inference as well as geo distribution. From now on, it’s a more continuous cycle governed by both revenue growth and capability growth thanks to the compounding effects of software-driven AI scaling laws and Moore’s Law…
…Azure is the infrastructure layer for AI. We continue to expand our data center capacity in line with both near-term and long-term demand signals. We have more than doubled our overall data center capacity in the last 3 years, and we have added more capacity last year than any other year in our history. Our data centers, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next-generation AI workloads. We continue to take advantage of Moore’s Law and refresh our fleet as evidenced by our support of the latest from AMD, Intel, NVIDIA, as well as our first-party silicon innovation from Maia, Cobalt, Boost and HSM.
Microsoft’s management is seeing growth in raw storage, database services, and app platform services as AI apps scale, with an example being Azure OpenAI apps that run on Azure databases and Azure App Services
We are seeing new AI-driven data patterns emerge. If you look underneath ChatGPT or Copilot or enterprise AI apps, you see the growth of raw storage, database services and app platform services as these workloads scale. The number of Azure OpenAI apps running on Azure databases and Azure App Services more than doubled year-over-year, driving significant growth in adoption across SQL, Hyperscale and Cosmos DB.
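To illustrate the data pattern management is describing, here is a minimal sketch using the Azure OpenAI and Cosmos DB Python SDKs; the endpoint, key, deployment, database, and container names are placeholders I've made up, and this is just one plausible shape of such an app, not Microsoft's reference design:

```python
# Minimal sketch of the data pattern described above: an AI app whose model
# calls also generate database traffic. Endpoint, key, deployment, database
# and container names below are hypothetical placeholders.
from openai import AzureOpenAI
from azure.cosmos import CosmosClient

ai = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # placeholder
    api_key="<api-key>",
    api_version="2024-06-01",
)
cosmos = CosmosClient("https://example.documents.azure.com", credential="<key>")
container = cosmos.get_database_client("appdb").get_container_client("chats")

reply = ai.chat.completions.create(
    model="gpt-4o",  # name of an Azure OpenAI deployment (assumed)
    messages=[{"role": "user", "content": "Summarize my open support tickets."}],
)

# Every model call produces records: the prompt, the completion, and app
# state all land in database and storage services, which is the growth in
# raw storage and database usage being described.
container.upsert_item({
    "id": reply.id,
    "prompt": "Summarize my open support tickets.",
    "completion": reply.choices[0].message.content,
})
```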
OpenAI has made a new large Azure commitment; OpenAI’s APIs run exclusively on Azure; management is still very happy with the OpenAI partnership; Microsoft has ROFR (right of first refusal) on OpenAI’s Stargate project
As we shared last week, we are thrilled OpenAI has made a new large Azure commitment…
… And with OpenAI’s APIs exclusively running on Azure, customers can count on us to get access to the world’s leading models…
…[Question] I wanted to ask you about the Stargate news and the announced changes in the OpenAI relationship last week. It seems that most of your investors have interpreted this as Microsoft, for sure, remaining very committed to OpenAI’s success, but electing to take more of a backseat in terms of funding OpenAI’s future training CapEx needs. I was hoping you might frame your strategic decision here around Stargate.
[Answer] We remain very happy with the partnership with OpenAI. And as you saw, they have committed in a big way to Azure. And even in the bookings, what we recognized is just the first tranche of it. And so you’ll see, given the ROFR we have, more benefits of that even into the future.
Microsoft’s management thinks Azure AI Foundry has best-in-class tooling and runtimes for users to build AI agents and access thousands of AI models; Azure AI Foundry already has 200,000 monthly active users after just 2 months; the models available on Azure AI Foundry include DeepSeek’s R1 model, and more than 30 industry-specific models from partners; Microsoft’s Phi family of SLMs (small language models) has over 20 million downloads
Azure AI Foundry features best-in-class tooling runtimes to build agents, multi-agent apps, AIOps, API access to thousands of models. Two months in, we already have more than 200,000 monthly active users, and we are well positioned with our support of both OpenAI’s leading models and the best selection of open source models and SLMs. DeepSeek’s R1 launched today via the model catalog on Foundry and GitHub with automated red teaming, content safety integration and security scanning. Our Phi family of SLM has now been downloaded over 20 million times. And we also have more than 30 models from partners like Bayer, Paige.ai, Rockwell Automation, Siemens to address industry-specific use cases.
Microsoft’s management thinks Microsoft 365 Copilot is the UI (user interface) for AI; management is seeing accelerated adoption of Microsoft 365 Copilot across all deal sizes; majority of Microsoft 365 Copilot customers purchase more seats over time; daily users of Copilot more than doubled sequentially in 2024 Q4, while usage intensity grew 60% sequentially; more than 160,000 organisations have used Copilot Studio, creating more than 400,000 custom agents in 2024 Q4, up 2x sequentially; Microsoft’s data cloud drives Copilot as the UI for AI; management is seeing Copilot plus AI agents disrupting business applications; the initial seats for Copilot were for departments that could see immediate productivity benefits, but the use of Copilot then spreads across the enterprise
Microsoft 365 Copilot is the UI for AI. It helps supercharge employee productivity and provides access to a swarm of intelligent agents to streamline employee workflow. We are seeing accelerated customer adoption across all deal sizes as we win new Microsoft 365 Copilot customers and see the majority of existing enterprise customers come back to purchase more seats. When you look at customers who purchased Copilot during the first quarter of availability, they have expanded their seat collectively by more than 10x over the past 18 months. To share just one example, Novartis has added thousands of seats each quarter over the past year and now have 40,000 seats. Barclays, Carrier Group, Pearson and University of Miami all purchased 10,000 or more seats this quarter. And overall, the number of people who use Copilot daily, again, more than doubled quarter-over-quarter. Employees are also engaging with Copilot more than ever. Usage intensity increased more than 60% quarter-over-quarter and we are expanding our TAM with Copilot Chat, which was announced earlier this month. Copilot Chat, along with Copilot Studio, is now available to every employee to start using agents right in the flow of work…
…More than 160,000 organizations have already used Copilot Studio, and they collectively created more than 400,000 custom agents in the last 3 months alone, up over 2x quarter-over-quarter…
…What is driving Copilot as the UI for AI as well as our momentum with agents is our rich data cloud, which is the world’s largest source of organizational knowledge. Billions of e-mails, documents and chats, hundreds of millions of Teams meetings and millions of SharePoint sites are added each day. This is the enterprise knowledge cloud. It is growing fast, up over 25% year-over-year…
…What we are seeing is Copilot plus agents disrupting business applications, and we are leaning into this. With Dynamics 365, we took share as organizations like Ecolab, Lenovo, RTX, TotalEnergies and Wyzant switched to our AI-powered apps from legacy providers…
…[Question] Great to hear about the strength you’re seeing in Copilot… Would love to get some color on just the common use cases that you’re seeing that give you that confidence that, that will ramp into monetization later.
[Answer] I think the initial sort of set of seats were for places where there’s more belief in immediate productivity, a sales team, in finance or in supply chain where there is a lot of, like, for example, SharePoint grounded data that you want to be able to use in conjunction with web data and have it produce results that are beneficial. But then what’s happening very much like what we have seen in these previous generation productivity things is that people collaborate across functions, across roles, right? For example, even in my own daily habit, it’s I go to chat, I use Work tab and get results, and then I immediately share using Pages with colleagues. I sort of call it think with AI and work with people. And that pattern then requires you to make it more of a standard issue across the enterprise. And so that’s what we’re seeing.
Azure grew revenue by 31% in 2024 Q4 (was 33% in 2024 Q3), with 13 points of growth from AI services (was 12 points in 2024 Q3); Azure AI services was up 157% year-on-year, with demand continuing to be higher than capacity; Azure’s non-AI business had weaker-than-expected growth because of go-to-market execution challenges
Azure and other cloud services revenue grew 31%. Azure growth included 13 points from AI services, which grew 157% year-over-year, and was ahead of expectations even as demand continued to be higher than our available capacity. Growth in our non-AI services was slightly lower than expected due to go-to-market execution challenges, particularly with our customers that we primarily reach through our scale motions as we balance driving near-term non-AI consumption with AI growth.
For Azure’s expected growth of 31%-32% in 2025 Q1 (FY2025 Q3), management expects contribution from AI services to grow from increased AI capacity coming online; management expects Azure’s non-AI services to still post healthy growth, but there are still impacts from execution challenges; management expects Azure to no longer be capacity-constrained by the end of FY2025 (2025 Q2); Azure’s capacity constraint has been in power and space
In Azure, we expect Q3 revenue growth to be between 31% and 32% in constant currency driven by strong demand for our portfolio of services. As we shared in October, the contribution from our AI services will grow from increased AI capacity coming online. In non-AI services, healthy growth continues, although we expect ongoing impact through H2 as we work to address the execution challenges noted earlier. And while we expect to be AI capacity constrained in Q3, by the end of FY ’25, we should be roughly in line with near-term demand given our significant capital investments…
…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year.
More than half of Microsoft’s cloud and AI-related capex in 2024 Q4 (FY2025 Q2) was for long-lived assets that will support monetisation over the next 15 years and more, while the other half was for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q1 (FY2025 Q3) and 2025 Q2 (FY2025 Q4) to be at similar levels as 2024 Q4 (FY2025 Q2); FY2026’s capex will grow at a lower rate than in FY2025; the mix of spend will shift to short-lived assets in FY2026; Microsoft’s long-lived infrastructure investments are fungible; the long-lived assets are the infrastructure and land; the presence of Moore’s Law means that management does not want to invest too much capex in any one year because the hardware and software will become much better in just 1 year; management thinks Microsoft’s AI infrastructure should be continuously upgraded to take advantage of Moore’s Law; Microsoft’s AI capex growth going forward will be tagged to customer contract delivery; the fungibility of Microsoft’s AI infrastructure investments relates to not just inference (which is the primary use case), but also training, post training, and running the commercial cloud business
More than half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog…
…Next, capital expenditures. We expect quarterly spend in Q3 and Q4 to remain at similar levels as our Q2 spend. In FY ’26, we expect to continue investing against strong demand signals, including customer contracted backlog we need to deliver against across the entirety of our Microsoft Cloud. However, the growth rate will be lower than FY ’25 and the mix of spend will begin to shift back to short-lived assets, which are more correlated to revenue growth. As a reminder, our long-lived infrastructure investments are fungible, enabling us to remain agile as we meet customer demand globally across our Microsoft Cloud, including AI workloads…
…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year…
…You don’t want to buy too much of anything at one time because, in Moore’s Law, every year is going to give you 2x, your optimization is going to give you 10x. You want to continuously upgrade the fleet, modernize the fleet, age the fleet and, at the end of the day, have the right ratio of monetization and demand-driven monetization to what you think of as the training expense…
…I do think the way I want everyone to internalize it is that the CapEx growth is going through that cycle pivot, which is far more correlated to customer contract delivery, no matter who the end customer is…
… the other thing that’s sometimes missing is when we say fungible, we mean not just the primary use, which we’ve always talked about, which is inference. But there is some training, post training, which is a key component. And then they’re just running the commercial cloud, which at every layer and every modern AI app that’s going to be built will be required. It will be required to be distributed, and it will be required to be global. And all of those things are really important because it then means you’re the most efficient. And so the investment you see us make in CapEx, you’re right, the front end has been this sort of infrastructure build that lets us really catch up not just on the AI infrastructure we needed, but think about that as the building itself, data centers, but also some of the catch-up we need to do on the commercial cloud side. And then you’ll see the pivot to more CPU and GPU.
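The "don't buy too much of anything at one time" argument is really about compounding price-performance. A quick illustration with made-up numbers (mine, not Microsoft's):

```python
# Quick illustration (my numbers, not Microsoft's) of why front-loading GPU
# purchases is costly when price-performance improves ~2x per year: the
# same budget buys far more compute if part of it is spent later.
budget_per_year = 100  # arbitrary units of dollars
perf_per_dollar = 1.0  # year-0 baseline price-performance

# Spend 3 years' budget immediately at year-0 prices:
total_all_upfront = 3 * budget_per_year * perf_per_dollar  # 300 units

# Spend the same total, one year's budget at a time, as price-performance doubles:
total_staggered = sum(budget_per_year * perf_per_dollar * 2**year for year in range(3))

print(f"all upfront: {total_all_upfront:.0f} units of compute")  # 300
print(f"staggered:   {total_staggered:.0f} units of compute")    # 100+200+400 = 700
```

Same spend, more than double the compute, which is why the capex cadence is pegged to contracted demand rather than bought all at once.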
Microsoft’s management thinks DeepSeek had real innovations, but those are going to be commoditized and become broadly used; management thinks that innovations in AI that reduce the cost of inference will drive more consumption and more apps being developed, and make AI more ubiquitous, which are all positive forces for Microsoft
I think DeepSeek has had some real innovations. And that is some of the things that even OpenAI found in o1. And so we are going to — obviously, now that all gets commoditized and it’s going to get broadly used. And the big beneficiaries of any software cycle like that is the customers, right? Because at the end of the day, if you think about it, right, what was the big lesson learned from client server to cloud? More people bought servers, except it was called cloud. And so when token prices fall, inference computing prices fall, that means people can consume more, and there will be more apps written. And it’s interesting to see that when I referenced these models that are pretty powerful, it’s unimaginable to think that here we are in sort of beginning of ’25, where on the PC, you can run a model that required pretty massive cloud infrastructure. So that type of optimization means AI will be much more ubiquitous. And so therefore, for a hyperscaler like us, a PC platform provider like us, this is all good news as far as I’m concerned.
Microsoft has been reducing prices of GPT models over the years through inference optimizations
We are working super hard on all the software optimizations, right? I mean, just not the software optimizations that come because of what DeepSeek has done, but all the work we have done to, for example, reduce the prices of GPT models over the years in partnership with OpenAI. In fact, we did a lot of the work on the inference optimizations on it, and that’s been key to driving, right?
Microsoft’s management is aware that launching a frontier AI model that is too expensive to serve is useless
One of the key things to note in AI is you just don’t launch the frontier model, but if it’s too expensive to serve, it’s no good, right? It won’t generate any demand.
Microsoft’s management is seeing many different AI models being used for any one application; management thinks that there will always be a combination of different models used in any one application
What you’re seeing is effectively lots of models that get used in any application, right? When you look underneath even a Copilot or a GitHub Copilot or what have you, you already see lots of many different models. You build models. You fine-tune models. You distill models. Some of them are models that you distill into an open source model. So there’s going to be a combination…
….There’s a temporality to it, right? What you start with as a given COGS profile doesn’t need to be the end because you continuously optimize for latency and COGS and putting in different models.
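A toy sketch of the multi-model pattern being described: one application routing between a cheap distilled model and a frontier model as latency and COGS dictate. The model names, prices, and routing rule here are all hypothetical:

```python
# Hypothetical sketch of the multi-model pattern described above: one
# application routing between a cheap distilled model and a frontier model,
# tuned for latency and COGS. Model names and prices are invented.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # dollars, hypothetical
    p95_latency_ms: int        # hypothetical

DISTILLED = Model("distilled-slm", 0.0002, 80)
FRONTIER = Model("frontier-llm", 0.01, 900)

def route(prompt: str, needs_reasoning: bool) -> Model:
    """Send easy traffic to the cheap model; escalate hard cases."""
    if needs_reasoning or len(prompt) > 4000:
        return FRONTIER
    return DISTILLED

model = route("What is my order status?", needs_reasoning=False)
print(f"routing to {model.name} "
      f"(~${model.cost_per_1k_tokens}/1k tokens, p95 {model.p95_latency_ms}ms)")
```

The "temporality" point maps onto this directly: the routing table and the models behind it keep changing as cheaper distilled models catch up to the frontier.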
NVIDIA (NASDAQ: NVDA)
NVIDIA’s Data Center revenue again had incredibly strong growth in 2024 Q4, driven by demand for the Hopper GPU computing platform and the ramping of the Blackwell GPU platform
In the fourth quarter, Data Center revenue of $35.6 billion was a record, up 16% sequentially and 93% year-on-year, as the Blackwell ramp commenced and Hopper 200 continued sequential growth.
Blackwell’s sales exceeded management’s expectations and is the fastest product ramp in NVIDIA’s history; it is common for Blackwell clusters to start with 100,000 GPUs or more and NVIDIA has started shipping for multiple such clusters; management architected Blackwell for inference; Blackwell has 25x higher token throughput and 20x lower cost for AI reasoning models compared to the Hopper 100; Blackwell has a NVLink domain that handles the growing complexity of inference at scale; management is seeing great demand for Blackwell for inference, with many of the early GB200 (GB200 is based on the Blackwell family of GPUs) deployments earmarked for inference; management expects NVIDIA’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; management expects a significant ramp of Blackwell in 2025 Q1; the Blackwell Ultra, the next generation of GPUs within the Blackwell family, is slated for introduction in 2025 H2; the system architecture between Blackwell and Blackwell Ultra is exactly the same
In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company’s history, unprecedented in its speed and scale…
…With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size…
…Blackwell was architected for reasoning AI inference. Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus Hopper 100. Its revolutionary transformer engine is built for LLM and mixture of experts inference. And its NVLink domain delivers 14x the throughput of PCIe Gen 5, ensuring the response time, throughput and cost efficiency needed to tackle the growing complexity of inference at scale…
…Blackwell has great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture…
…As Blackwell ramps, we expect gross margins to be in the low 70s. Initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as they race to build out Blackwell infrastructure. When fully ramped, we have many opportunities to improve the cost and gross margin will improve and return to the mid-70s, late this fiscal year…
…Continuing with its strong demand, we expect a significant ramp of Blackwell in Q1…
…Blackwell Ultra is second half…
…The next train is on an annual rhythm and Blackwell Ultra with new networking, new memories and of course, new processors, and all of that is coming online…
…This time between Blackwell and Blackwell Ultra, the system architecture is exactly the same. It’s a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to a NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. This was quite a challenging transition. But the next transition will slot right in. Blackwell Ultra will slot right in.
NVIDIA’s management sees post-training and model customisation as demanding orders of magnitude more compute than pre-training
The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pretraining.
NVIDIA’s management is seeing accelerating demand for NVIDIA GPUs for inference, driven by test-time scaling and new reasoning models; management thinks reasoning models require 100x more compute per task than one-shot inference models; management is hopeful that future generation of reasoning models will require millions of times more compute; management is seeing that the vast majority of NVIDIA’s compute today is inference
Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI o3, DeepSeek-R1 and Grok 3. Long thinking reasoning AI can require 100x more compute per task compared to one-shot inferences…
…The amount of tokens generated, the amount of inference compute needed is already 100x more than the one-shot examples and the one-shot capabilities of large language models in the beginning. And that’s just the beginning. This is just the beginning. The idea that the next generation could have thousands of times and even hopefully, extremely thoughtful and simulation-based and search-based models that could be hundreds of thousands, millions of times more compute than today is in our future…
…The vast majority of our compute today is actually inference and Blackwell takes all of that to a new level.
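The 100x claim is easy to sanity-check with token counts. Here is some illustrative arithmetic (my numbers, not NVIDIA's):

```python
# Illustrative arithmetic (my numbers, not NVIDIA's) for why reasoning
# models multiply inference compute: a one-shot answer might emit a few
# hundred tokens, while a long-thinking model emits tens of thousands of
# chain-of-thought tokens before its final answer.
one_shot_tokens = 300       # typical short answer (assumed)
reasoning_tokens = 30_000   # chain-of-thought budget (assumed)

multiple = reasoning_tokens / one_shot_tokens
print(f"~{multiple:.0f}x more tokens generated per task")  # ~100x

# At a hypothetical $5 per million output tokens:
price_per_million = 5.0
print(f"one-shot:  ${one_shot_tokens / 1e6 * price_per_million:.4f} per task")
print(f"reasoning: ${reasoning_tokens / 1e6 * price_per_million:.4f} per task")
```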
Companies such as ServiceNow, Perplexity, Microsoft, and Meta are using NVIDIA’s software and GPUs to achieve lower costs and/or better performance with their inference workloads
ServiceNow tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature. Perplexity sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM. Microsoft Bing achieved a 5x speed up at major TCO savings for Visual Search across billions of images with NVIDIA TensorRT and acceleration libraries…
…Meta’s cutting-edge Andromeda advertising engine runs on NVIDIA’s Grace Hopper Superchip serving vast quantities of ads across Instagram, Facebook applications. Andromeda harnesses Grace Hopper’s fast interconnect and large memory to boost inference throughput by 3x, enhanced ad personalization and deliver meaningful jumps in monetization and ROI.
NVIDIA has driven a 200x reduction in inference costs in the last 2 years
We’ve driven a 200x reduction in inference costs in just the last 2 years.
Large cloud service providers (CSPs) were half of NVIDIA’s Data Centre revenue in 2024 Q4, and up nearly 2x year-on-year; large CSPs were the first to stand up Blackwell systems
In Q4, large CSPs represented about half of our data center revenue, and these sales increased nearly 2x year-on-year. Large CSPs were some of the first to stand up Blackwell with Azure, GCP, AWS and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI.
Regional clouds increased as a percentage of NVIDIA’s Data Center revenue in 2024 Q4, driven by AI data center build outs globally; management is seeing countries across the world building AI ecosystems
Regional clouds hosting NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory build-outs globally and rapidly rising demand for AI reasoning models and agents where we’ve launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand…
…Countries across the globe are building their AI ecosystems and demand for compute infrastructure is surging. France’s EUR 200 billion AI investment and the EU’s EUR 200 billion InvestAI initiatives offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years.
NVIDIA’s revenue from consumer internet companies tripled year-on-year in 2024 Q4
Consumer Internet revenue grew 3x year-on-year, driven by an expanding set of generative AI and deep learning use cases. These include recommender systems, vision-language understanding, synthetic data generation, search and agentic AI.
NVIDIA’s revenue from enterprises nearly doubled year-on-year in 2024 Q4, partly with the help of agentic AI demand
Enterprise revenue increased nearly 2x year-on-year on accelerating demand for model fine-tuning, RAG and agentic AI workflows and GPU accelerated data processing.
NVIDIA’s management has introduced NIMs (NVIDIA Inference Microservices) focused on AI agents and leading AI agent platform providers are using these tools
We introduced NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection and product supply chain and inventory management. Leading AI agent platform providers, including SAP and ServiceNow are among the first to use new models.
Healthcare companies are using NVIDIA’s AI products to power healthcare innovation
Health care leaders, IQVIA, Illumina and Mayo Clinic as well as ARC Institute are using NVIDIA AI to speed drug discovery, enhance genomic research and pioneer advanced health care services with generative and agentic AI.
Hyundai will be using NVIDIA’s technologies for the development of AVs (autonomous vehicles); NVIDIA’s automotive revenue had strong growth year-on-year and sequentially in 2024 Q4, driven by the ramp in AVs; automotive companies such as Toyota, Aurora, and Continental are working with NVIDIA to deploy AV technologies; NVIDIA’s AV platform has passed safety assessments from 2 of the automotive industry’s foremost authorities for safety and cybersecurity
At CES, Hyundai Motor Group announced it is adopting NVIDIA technologies to accelerate AV and robotics development and smart factory initiatives…
…Now moving to Automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year-on-year…
…Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis. At CES, we announced Toyota, the world’s largest auto maker will build its next-generation vehicles on NVIDIA Orin running the safety certified NVIDIA DriveOS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA DRIVE Thor. Finally, our end-to-end autonomous vehicle platform NVIDIA DRIVE Hyperion has passed industry safety assessments like TÜV SÜD and TÜV Rheinland, 2 of the industry’s foremost authorities for automotive-grade safety and cybersecurity. NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.
NVIDIA’s management has introduced the NVIDIA Cosmos World Foundation Model platform for the continued development of autonomous robots; Uber is one of the first major technology companies to adopt the NVIDIA Cosmos World Foundation Model platform
At CES, we announced the NVIDIA Cosmos World Foundation Model Platform. Just as language foundation models have revolutionized language AI, Cosmos is a physical AI to revolutionize robotics. Leading robotics and automotive companies, including ridesharing giant Uber, are among the first to adopt the platform.
As a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China is well below levels seen prior to the US government’s export controls; management expects the Chinese market to be very competitive
Now as a percentage of total data center revenue, data center sales in China remained well below levels seen prior to the onset of export controls. Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive.
NVIDIA’s networking revenue declined sequentially in 2024 Q4, but the networking-attach-rate to GPUs remains robust at over 75%; NVIDIA is transitioning to NVLink 72 with Spectrum-X (Spectrum-X is NVIDIA’s Ethernet networking solution); management expects networking revenue to resume growing in 2025 Q1; management sees AI requiring a new class of networking, which the company’s NVLink, Quantum InfiniBand, and Spectrum-X networking solutions are able to provide; large AI data centers, including OpenAI’s Stargate project, will be using Spectrum-X
Networking revenue declined 3% sequentially. Our networking attached to GPU compute systems is robust at over 75%. We are transitioning from small NVLink 8 with InfiniBand to large NVLink 72 with Spectrum-X. Spectrum-X and NVLink Switch revenue increased and represents a major new growth vector. We expect networking to return to growth in Q1. AI requires a new class of networking. NVIDIA offers NVLink Switch systems for scale-up compute. For scale out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments. Spectrum-X enhances the Ethernet for AI computing and has been a huge success. Microsoft Azure, OCI, CoreWeave and others are building large AI factories with Spectrum-X. The first Stargate data centers will use Spectrum-X. Yesterday, Cisco announced integrating Spectrum-X into their networking portfolio to help enterprises build AI infrastructure. With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.
NVIDIA’s management is seeing 3 scaling laws at play in the development of AI models, namely pre-training scaling, post-training scaling, and test-time compute scaling
There are now multiple scaling laws. There’s the pre-training scaling law, and that’s going to continue to scale because we have multimodality, we have data that came from reasoning that are now used to do pretraining. And then the second is post-training scaling law, using reinforcement learning human feedback, reinforcement learning AI feedback, reinforcement learning, verifiable rewards. The amount of computation you use for post training is actually higher than pretraining. And it’s kind of sensible in the sense that you could, while you’re using reinforcement learning, generate an enormous amount of synthetic data or synthetically generated tokens. AI models are basically generating tokens to train AI models. And that’s post-training. And the third part, this is the part that you mentioned is test-time compute or reasoning, long thinking, inference scaling. They’re all basically the same ideas. And there you have chain of thought, you have search.
NVIDIA’s management thinks the popularity of NVIDIA’s GPUs stems from its fungibility across all kinds of AI model architectures and use cases; NVIDIA’s management thinks that NVIDIA GPUs have an advantage over the ASIC (application-specific integrated circuit) AI chips developed by others because of (1) the general-purpose nature of NVIDIA GPUs, (2) NVIDIA’s rapid product development roadmap, (3) the software stack developed for NVIDIA GPUs that is incredibly hard to replicate
The question is how do you design such an architecture? Some of it — some of the models are auto regressive. Some of the models are diffusion-based. Some of it — some of the times you want your data center to have disaggregated inference. Sometimes it is compacted. And so it’s hard to figure out what is the best configuration of a data center, which is the reason why NVIDIA’s architecture is so popular. We run every model. We are great at training…
…When you have a data center that allows you to configure and use your data center based on are you doing more pretraining now, post training now or scaling out your inference, our architecture is fungible and easy to use in all of those different ways. And so we’re seeing, in fact, much, much more concentration of a unified architecture than ever before…
…[Question] We heard a lot about custom ASICs. Can you kind of speak to the balance between custom ASIC and merchant GPU?
[Answer] We build very different things than ASICs, in some ways, completely different in some areas we intercept. We’re different in several ways. One, NVIDIA’s architecture is general whether you’re — you’ve optimized for auto regressive models or diffusion-based models or vision-based models or multimodal models or text models. We’re great in all of it. We’re great at all of it because our software stack is so — our architecture is flexible, our software stack ecosystem is so rich that we’re the initial target of most exciting innovations and algorithms. And so by definition, we’re much, much more general than narrow…
…The third thing I would say is that our performance and our rhythm is so incredibly fast. Remember that these data centers are always fixed in size. They’re fixed in size or they’re fixed in power. And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues. And so if you have a 100-megawatt data center, if the performance or the throughput in that 100-megawatt or the gigawatt data center is 4x or 8x higher, your revenues for that gigawatt data center is 8x higher. And the reason that is so different than data centers of the past is because AI factories are directly monetizable through its tokens generated. And so the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation reasons and capturing the fast ROI…
…The last thing that I would say is the software stack is incredibly hard. Building an ASIC is no different than what we do. We build a new architecture. And the ecosystem that sits on top of our architecture is 10x more complex today than it was 2 years ago. And that’s fairly obvious because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly. So bringing that whole ecosystem on top of multiple chips is hard.
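The power-constrained revenue argument in the middle of that answer is worth working through numerically. A sketch with hypothetical numbers (mine, not NVIDIA's):

```python
# Working through the power-constrained argument with illustrative numbers
# (mine, not NVIDIA's). A data center is fixed in power, so revenue from
# token generation scales directly with tokens per watt.
site_power_watts = 100e6          # a "100-megawatt data center"
tokens_per_sec_per_watt = 0.5     # hypothetical baseline efficiency
price_per_million_tokens = 5.0    # hypothetical dollars

def annual_revenue(perf_multiple: float) -> float:
    tokens_per_year = (site_power_watts * tokens_per_sec_per_watt
                       * perf_multiple * 365 * 24 * 3600)
    return tokens_per_year / 1e6 * price_per_million_tokens

base = annual_revenue(1)
for mult in (1, 2, 4, 8):
    print(f"{mult}x perf/watt -> ${annual_revenue(mult)/1e9:.1f}B/yr "
          f"({annual_revenue(mult)/base:.0f}x baseline)")
```

Because the power envelope is the fixed constraint, every multiple of performance per watt passes straight through to token revenue, which is the point being made about why the product cadence matters so much.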
NVIDIA’s management thinks that only consumer AI and search currently have well-developed AI use cases, and the next wave will be agentic AI, robotics, and sovereign AI
We’ve really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software. The next wave is coming, agentic AI for enterprise, physical AI for robotics and sovereign AI as different regions build out their AI for their own ecosystems. And so each one of these are barely off the ground, and we can see them.
NVIDIA’s management sees the upcoming Rubin family of GPUs as being a big step-up from the Blackwell family
The next transition will slot right in. Blackwell Ultra will slot right in. We’ve also already revealed and been working very closely with all of our partners on the click after that. And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition. And again, we’re going to provide a big, huge step-up.
NVIDIA’s management sees AI as having the opportunity to address a larger part of the world’s GDP than any other technology has ever had
No technology has ever had the opportunity to address a larger part of the world’s GDP than AI. No software tool ever has. And so this is now a software tool that can address a much larger part of the world’s GDP more than any time in history.
NVIDIA’s management sees customers still actively using older families of NVIDIA GPUs because of the high level of programmability that CUDA has
People are still using Voltas and Pascals and Amperes. And the reason for that is because there are always things that — because CUDA is so programmable you could use it — one of the major use cases right now is data processing and data curation. You find a circumstance that an AI model is not very good at. You present that circumstance to a vision language model, let’s say, it’s a car. You present that circumstance to a vision language model. The vision language model actually looks at the circumstances and said, “This is what happened and I wasn’t very good at it.” You then take that response — the prompt and you go and prompt an AI model to go find in your whole lake of data, other circumstances like that, whatever that circumstance was. And then you use an AI to do domain randomization and generate a whole bunch of other examples. And then from that, you can go train the model. And so you could use the Amperes to go and do data processing and data curation and machine learning-based search. And then you create the training data set, which you then present to your Hopper systems for training. And so each one of these architectures are completely — they’re all CUDA-compatible and so everything runs on everything. But if you have infrastructure in place, then you can put the less intensive workloads onto the installed base of the past. All of our GPUs are very well employed.
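The curation loop being described here (caption a failure with a vision-language model, mine the data lake for similar scenes, then augment) has a simple pipeline shape. Below is a toy sketch where every helper is a stand-in stub I've invented, not NVIDIA's actual tooling; the point is the shape of the workflow that older, CUDA-compatible GPUs can run while the newest GPUs handle training:

```python
# Toy sketch of the data curation loop described above. Every helper is a
# hypothetical stand-in for real systems (the VLM, the data lake, and the
# augmentations); only the pipeline shape matters.

def describe_failure(scenario: str) -> str:
    # Stand-in for a vision-language model captioning a hard case.
    return f"failure mode observed in: {scenario}"

def mine_similar(data_lake: list[str], description: str, k: int = 3) -> list[str]:
    # Stand-in for embedding search over a fleet-scale data lake.
    keyword = description.split()[-1]
    return [s for s in data_lake if keyword in s][:k]

def domain_randomize(examples: list[str]) -> list[str]:
    # Stand-in for synthetic variation (lighting, weather, actors).
    variants = ["night", "rain", "glare"]
    return [f"{e} [{v}]" for e in examples for v in variants]

data_lake = ["merge on highway", "unprotected left on highway", "parking lot"]
hard_case = "highway"
desc = describe_failure(hard_case)
training_set = domain_randomize(mine_similar(data_lake, desc))
print(training_set)  # curated + augmented examples, ready for the training GPUs
```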
Paycom Software (NYSE: PAYC)
Paycom’s management rolled out an AI agent six months ago to its service team, and has seen higher immediate response rates to clients and eliminated service tickets by over 25% from a year ago; Paycom’s AI agent is driving internal efficiencies, higher client satisfaction, and higher Net Promoter Scores
Paycom’s AI agent, which was rolled out to our service team 6 months ago, utilizes our own knowledge-based semantic search model to provide faster responses and help our clients more quickly and consistently than ever before. As responses continuously improve over time, our client interactions become more valuable, and we connect them faster to the right solution. As a result, we are seeing improved immediate response rates and have eliminated service tickets by over 25% compared to a year ago…
…With automations like AI agent, we are realizing internal efficiencies, driving increasing client satisfaction and seeing higher Net Promoter Scores.
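Paycom hasn't disclosed its implementation, but knowledge-base semantic search of the kind described can be sketched with off-the-shelf embeddings; the model choice and knowledge-base articles below are my own illustrative picks:

```python
# Generic sketch of knowledge-base semantic search (Paycom's actual stack
# is not disclosed; the model and articles here are illustrative).
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

kb_articles = [
    "How to correct an employee's tax withholding",
    "Resetting a locked employee self-service account",
    "Setting up direct deposit for a new hire",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
kb_embeddings = model.encode(kb_articles, convert_to_tensor=True)

# A client question is embedded and matched against the knowledge base,
# so the agent can answer immediately instead of opening a ticket.
query = "employee can't log in to the portal"
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, kb_embeddings)[0]
best = scores.argmax().item()
print(f"best match: {kb_articles[best]} (score {scores[best]:.2f})")
```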
PayPal (NASDAQ: PYPL)
One of the focus areas for PayPal’s management in 2025 will be on raising efficiency with the help of AI
Fourth is efficiency and effectiveness. In 2024, we reduced headcount by 10%. We made deliberate investments in AI and automation, which are critical to our future. This year, we are prioritizing the use of AI to improve the customer experience and drive efficiency and effectiveness within PayPal.
PayPal’s management sees AI being a huge opportunity for PayPal given the company’s volume of data; PayPal is using AI on its customer-facing side to more efficiently process customer support cases and interactions with customers (PayPal Assistant has been rolled out and it has cut down phone calls and active events for PayPal); PayPal is using AI to personalise the commerce journey for consumers; PayPal is also using AI for back-office productivity and risk decisions
[Question] The ability to use AI for more operating efficiency. And are those initiatives that are requiring some incremental investment near term? Or are you already seeing sort of a positive ROI from that?
[Answer] AI is opening up a huge opportunity for us. First, at our scale, we saw 26 billion transactions on our platform last year. We have a massive data set that we are actively working and investing in to be able to drive our effectiveness and efficiency…
…First, on the customer-facing side, we’re leveraging AI to really become more efficient in our support cases and how we interact with our customers. We see tens of millions of support cases every year, and we’ve rolled out our PayPal Assistant, which is now really cutting down phone calls and active events that we have.
We also are leveraging AI to personalize the commerce journey, and so working with our merchants to be able to understand and create this really magical experience for consumers. When they show up at a checkout, it’s not just a static button anymore. This really can become a dynamic, personalized button that starts to understand the profile of the consumer, the journey that they’ve been on, perhaps across merchants, and be able to enable a reward or a cash-back offer in the moment or even a Buy Now, Pay Later offer in a dynamic experience…
…In addition, we also are looking at our back office and ensuring that not just on the engineering and employee productivity side, but also in things like our risk decisions. We see billions and billions of risk decisions that often, to be honest, were very manual in the past. We’re now leveraging AI to be able to understand globally what are the nature of these risk decisions and how do we automate these across both risk models as well as even just ensuring that customers get the right response at the right time in an automated fashion.
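PayPal hasn't described its models, but the "automate the routine, escalate the ambiguous" pattern for risk decisions can be sketched with any probabilistic classifier; the features, labels, and thresholds below are invented for illustration:

```python
# Illustrative sketch of automated risk decisioning with a human-escalation
# band (PayPal's actual models are not disclosed; the data and thresholds
# here are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [transaction amount (normalized), account age (years)]
X = np.array([[0.1, 5], [0.9, 0.1], [0.2, 3], [0.95, 0.2], [0.05, 8], [0.8, 0.5]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = fraud in historical labels

clf = LogisticRegression().fit(X, y)

def decide(features) -> str:
    """Automate the confident decisions; route the ambiguous ones to humans."""
    p = clf.predict_proba([features])[0, 1]
    if p < 0.2:
        return f"auto-approve (risk {p:.2f})"
    if p > 0.8:
        return f"auto-decline (risk {p:.2f})"
    return f"escalate to human review (risk {p:.2f})"

print(decide([0.15, 4]))   # low amount, old account: routine
print(decide([0.6, 1.0]))  # middling signals: likely lands in the review band
```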
Salesforce (NYSE: CRM)
Salesforce ended 2024 (FY2025) with $900 million in Data Cloud and AI ARR (annual recurring revenue), up 120% from a year ago; management has never seen products grow at this rate before, especially Agentforce
We ended this year with $900 million in Data Cloud and AI ARR. It grew 120% year-over-year. We’ve never seen products grow at these levels, especially Agentforce.
Salesforce’s management thinks building digital labour (AI agents) is a much bigger market than just building software
I’m sure you saw those ARK slides that got released over the weekend where she said that she thought this digital labor revolution, which is really like kind of what we’re in here now, this digital labor revolution, this looks like it’s anywhere from a few trillion to $12 trillion. I mean, I kind of agree with her. I think this is much, much bigger than software. I mean, for the last 25 years, we’ve been doing software to help our customers manage their data. That’s very exciting. I think building software that kind of prints and deploys digital workers is more exciting.
Salesforce’s unified platform, under one piece of code, combining customer data and an agentic platform, is what gives Agentforce its accuracy; Agentforce already has 3,000 paying customers just 90 days after going live; management thinks Agentforce is unique in the agentic capabilities it is delivering; Salesforce is Customer Zero for Agentforce; Agentforce has already resolved 380,000 service requests for Salesforce, with an 84% resolution rate, and just 2% of requests require human escalation; Agentforce has accelerated Salesforce’s sales-quoting cycles by 75% and increased AE (account executive) capacity while driving productivity up 7% year-on-year; Agentforce is helping Salesforce engage more than 50 leads per day, freeing up the sales team for higher-value conversations; management wants every Salesforce customer to be using Agentforce; Data Cloud is at the heart of Agentforce; management is seeing customers across every industry deploying Agentforce; management thinks Salesforce’s agentic technology works better than that of many other providers, and that other providers are just whitewashing their technology with the “agent” label; Agentforce is driving growth across Salesforce’s portfolio; Salesforce has prebuilt 170 specialised Agentforce industry skills; Agentforce’s 3,000 customers come from a diverse set of industries
Our formula now really for our customers is this idea that we have these incredible Customer 360 apps. We have this incredible Data Cloud, and this incredible agentic platform. These are the 3 layers. But that it is a deeply unified platform, it’s a deeply unified platform, it’s just one piece of code, that’s what makes it so unique in this market…
…It’s this idea that it’s a deeply unified platform with one piece of code all wrapped in a beautiful layer of trust. And that’s what gives Agentforce this incredible accuracy that we’re seeing…
…Just 90 days after it went live, we already have 3,000 paying Agentforce customers who are experiencing unprecedented levels of productivity, efficiency and cost savings. No one else is delivering at this level of capability…
…We’re seeing some amazing results on Salesforce as Customer Zero for Agentforce. Our digital labor force is resolving tens of thousands of customer service inquiries, freeing our human employees to focus on the most nuanced issues and customer relationships. We’re seeing tremendous momentum and success stories emerge as we execute our vision to make every company, every single company, every customer of ours, an Agentforce company, that is, we want every customer to be an Agentforce customer…
…We also continued phenomenal growth with Data Cloud this year, which is the heart of Agentforce. Data Cloud is the fuel that powers Agentforce and our customers are investing in it…
…We’re seeing customers deploy Agentforce across every industry…
…You got to be aware of the false agent because the false agent is out there where people can use the word agent or they kind of — they’re trying to whitewash all the agent, the thing, everywhere. But the reality is there is the real agents and there are the false agents, and we’re very fortunate to have the real stuff going on here. So we’ve got a lot more groundbreaking AI innovation coming…
…Today, we’re live on Agentforce across service and sales, our business technology organization, customer support and more. And the results are phenomenal. Since launching on our Salesforce help portal in October, Agentforce has autonomously handled 380,000 service requests, achieving an incredible 84% resolution rate and only 2% of the requests require human escalation. And we’re using Agentforce for quoting, accelerating our quoting cycles by more than 75%. In Q4, we increased our AE [account executive] capacity while still driving productivity up 7% year-over-year. Agentforce is transforming how we do outbound prospecting, already engaging more than 50 leads per day with personalized outreach and timely follow-ups, freeing up our teams to focus on high-value conversation. Our reps are participating in thousands of sales coaching training sessions each month…
…Agentforce is revolutionizing how our customers work by bringing AI-powered insights and actions directly into the workflows across the Customer 360 applications. This is driving strong growth across our portfolio. Sales Cloud and Service Cloud both achieved double-digit growth again in Q4. We’re seeing fantastic momentum with Slack, with customers like ZoomInfo, Remarkable and MIMIT Health using Agentforce and Slack to boost productivity…
…We’ve prebuilt over 170 specialized Agentforce industry skills and a team of 400 specialists, supporting transformations across sectors and geographies…
…We closed more than 3,000 paid Agentforce deals in the quarter. As customers continue to harness the value of AI deeply embedded across our unified platform, it is no surprise that these customers average nearly 4 clouds. And these customers came from a diverse set of industries with more than half in technology, manufacturing, financial services and HLS.
Lennar, the USA’s largest homebuilder, has been a Salesforce customer for 8 years and is deploying Agentforce to fulfill its management’s vision of selling all kinds of new products; jewelry company Pandora, an existing Salesforce customer, is deploying Agentforce with the aim of handling 30%-60% of its service cases through it; pharmaceutical giant Pfizer is using Agentforce to augment its sales teams; Singapore Airlines is now an Agentforce customer and wants to deliver service through it; Goodyear is using Agentforce to automate and increase the effectiveness of its sales efforts; Accenture is using Agentforce to coach its sales team and expects to achieve higher win rates; Deloitte is using Agentforce and expects to achieve significant productivity gains
We’ve been working with Lennar, the nation’s largest homebuilder. And most of you know Lennar is really an incredible company, and they’ve been a customer of ours for about 8 years…
…You probably know Stuart Miller, Jon Jaffe, amazing CEOs. And those co-CEOs called me and said, “Listen, these guys have done a hackathon around Agentforce. We’ve got 5 use cases. We see incredible opportunities on our margin, incredible opportunities in our revenue. And do you have our back if we’re going to deploy this?” And we said, “Absolutely. We’ve deployed it ourselves,” which is the best evidence that this is real. And they are just incredible, their vision as a homebuilder providing 24/7 support, sales leads through all their digital channels. They’re able to sell all kinds of new products. I think they’re going to sell mortgages and insurance and all kinds of things to their customers. And the cool thing is they’re using our sales product, our service product, marketing, MuleSoft, Slack, Tableau, they use everything. But they are able to leverage it all together by realizing that just by turning it on, they get this incredible Agentforce capability…
…I don’t know how many of you know about Pandora. If you’ve been to a shopping center, you will see the Pandora store. You walk in, they have this gorgeous jewelry. They have these cool charm bracelets. They have amazing products. And if you know their CEO, Alex, he’s absolutely phenomenal…
…They’re in 100 countries. They employ 37,000 people worldwide. And Alex has this great vision to augment their employees with digital labor. And this idea that whether you’re on their website or in their store, or whatever it is, that they’re going to be able to do so much more with Agentforce. They already use — first of all, they already use Commerce Cloud. So if you’ve been to pandora.com and bought their products — and if you have it, by the way, it’s completely worthwhile. It’s great. And you can experience our Commerce Cloud, but it’s deeply integrated with our Service Cloud, with Data Cloud. It’s the one unified platform approach. And now they’re just flipping the switch, turning agents on, and they’re planning to deliver 30% to 60% of their service cases with Agentforce. That is awesome. And I really love Alex’s vision of what’s possible….
…The last customer I really want to hit on, which I’m so excited about, is Pfizer. And Albert is an incredible CEO. They are doing unbelievable things. They’ve been a tremendous customer. But now they’re really going all in on our Life Sciences Cloud…
…And with Agentforce, sales agents, for example, with Pfizer, that’s — they’ve got 20,000 customer-facing employees and customer-facing folks. That is just a radical extension for them with agents…
…I’m sure a lot of you — like, I have flown in Singapore Air. You know what? It’s a great airline. The CEO, Goh, is amazing. And he has a huge vision that also came out of Dreamforce, where — they’ve already delivered probably the best service of any airline in the world — they want to deliver it through agents. So whether you’re doing it with service or sales or marketing or commerce or all the different things that Singapore Air is doing with us, you’re going to be able to do this right on Singapore Air…
…Goodyear is partnering with us on their transformation, using Agentforce to automate and increase the effectiveness of their sales efforts. With Agentforce for Field Service, Goodyear will be able to reduce repair time by assisting technicians with answers to vehicle-related questions and autonomously scheduling field tech appointments…
…Accenture is using Agentforce Sales Coach, which provides personalized coaching and recommendations for sales teams, which is expected to lead to higher win rates. And Deloitte is projecting significant productivity gains and saved workforce hours as they roll out Agentforce over the next few years.
Salesforce’s management expects modest revenue contribution from Agentforce in 2025 (FY2026); contribution from Agentforce is expected to be more meaningful in 2026 (FY2027)
Starting with full fiscal year ’26. We expect revenue of $40.5 billion to $40.9 billion, growth of approximately 7% to 8% year-over-year in nominal and constant currency. And for subscription and support revenue, we expect growth of approximately 9% year-over-year in constant currency…
…On Agentforce, we are incredibly excited about the customer momentum we are seeing. However, the adoption cycle is still early as we focus on deployment with our customers. As a result, we are assuming a modest contribution to revenue in fiscal ’26. We expect the momentum to build throughout the year, driving a more meaningful contribution in fiscal ’27.
Salesforce has long had a mix of per-seat and consumption pricing models; for now, Agentforce is a consumption product, but management sees its pricing evolving into a mix of per-seat and consumption models; in 2024 Q4 (FY2025 Q4), a customer signed a $7 million Agentforce contract alongside a $13 million contract for other Salesforce products; based on early engagement with Agentforce customers, management sees significant future upside to Salesforce’s pricing structure; Agentforce’s pricing will also take into account whether Agentforce brings other human-based clouds into the customer’s deployment; Agentforce is already creating something of a halo around Salesforce’s other products
We’ve kind of started the company out with the per user pricing model, and that’s about humans. We price per human, so you’re kind of pricing per human. And then we have products, though, that are also in the consumption world as well. And of course, those started in the early days, things like our sandboxes, even things like our Commerce Cloud, even our e-mail marketing product, our Marketing Cloud. These are consumption-based products we’ve had for years…
…Now we have these kind of products that are for agents also, and agents are also a consumption model. So when we look at our Data Cloud, for example, that’s a consumption product. Agentforce is a consumption product. But it’s going to be a mix. It’s going to be a mix between what’s going on with our customers with how many humans do they have and then how many agents are they deploying…
…In the quarter, we did a large transaction with a large telecommunications company… we’re rebuilding this telecommunications company. So it’s Sales Cloud, it’s Service Cloud, it’s Marketing Cloud. It’s all of our core clouds, but then also it’s Agentforce. And the Agentforce component, I think, was maybe $7 million in the transaction. So she was buying $7 million of Agentforce. She bought $13 million in our products for humans, and I think that was about $20 million in total…
…We will probably move into the near future from conversations as we price most of our initial deals to universal credit. It will allow our customers far more flexibility in the way they transact with us. But we see this as a significant upside to our pricing structures going forward. And that’s what we’ve seen in the early days with our engagement with customers…
…Here’s a transaction that you’re doing, let’s say, a customer comes in, they’re very interested in building an agentic layer on their company, is that bringing other human-based clouds along with it?…
…[Question] Is Agentforce having a bit of a halo effect around some of your other products, meaning, as we are on the journey to get more monetization from Agentforce, are you seeing pickups or at least higher activity levels in some of your other products?
[Answer] That’s exactly right. And we’re seeing it in the way that our customers are using our technology, new ideas, new workflows, new engagements. We talked about Lennar as an example, their ability to handle leads after hours that they weren’t able to get back to or respond to in a quick time frame are now able to touch and engage with those leads. And that, of course, flows into their Salesforce automation system. And so we are seeing this halo effect with our core technology. It is making every single one of our core apps better as they deliver intelligence, underpinning these applications.
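To make the per-seat-plus-consumption mix concrete, here is a minimal sketch of how such a blended contract could be valued. The seat count, seat price, and per-conversation rate below are hypothetical numbers chosen to reproduce the $13 million/$7 million split in the telecommunications deal described above; Salesforce has not disclosed its actual unit pricing.

```python
# Hypothetical blended SaaS contract: per-seat pricing for humans plus
# consumption pricing for agents. All unit prices are made up; only the
# $13M/$7M split comes from the deal described in the earnings call.

def seat_revenue(seats: int, price_per_seat_per_year: float) -> float:
    """Classic per-human pricing."""
    return seats * price_per_seat_per_year

def consumption_revenue(conversations: int, price_per_conversation: float) -> float:
    """Agentic pricing: pay per unit of agent work, e.g. a resolved conversation."""
    return conversations * price_per_conversation

human_side = seat_revenue(seats=10_000, price_per_seat_per_year=1_300)  # $13M
agent_side = consumption_revenue(conversations=3_500_000, price_per_conversation=2.0)  # $7M

total = human_side + agent_side
print(f"Total contract: ${total:,.0f}, agent share: {agent_side / total:.0%}")
# Total contract: $20,000,000, agent share: 35%
```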
Salesforce’s management sees the combination of apps, data, and agents as the winning combination in an AI-world; management disputes Microsoft’s narrative that software apps will become a dumb database layer in an AI-dominated world, because it is the combination of apps, data, and agents that is important
I don’t know any company that’s 100% agents. I don’t know of any company that doesn’t need automation for its humans. I don’t know of any company that doesn’t need a data cloud where it needs a consistent common data repository for all of its agents to gain their intelligence. And I don’t know of any company that’s not going to need an agentic layer. And that idea of having apps, data and agents, I think, is going to be the winning combination…
…[Question] As part of that shift to agentic technology, there’s been a lot of debate about the SaaS technology and the business model. The SaaS tech stack that you built and pioneered, how does that fit into the agentic world? Is there a risk that SaaS just becomes a CRUD database?
[Answer] I’ve heard that Microsoft narrative, too. So I watched the podcast you watched, and that’s a very interesting idea. Here’s how I look at it, which is, I believe there is kind of a holy trinity here of AI CRM, which is the apps, the data and the agents. And these three things have to kind of work together. And I kind of put my money where our mouth is where we kind of built it and we delivered it. And you can see the 380,000 conversations that we had as point of evidence here in the last 90 days on our service and with a very high resolution rate of 84%. You can go to help.salesforce.com, and you can see that today.
Now Microsoft has had Copilot available for, I think, about 2 years or more than 2 years. And I know that they’re the reseller of OpenAI and they’ve invested, they kind of repackaged this ChatGPT, whatever. But where on their side are they delivering agents? Where in their company have they done this? Are they a best practice? Because I think that while they can say such a thing, do they have humans and agents working together to create customer success? Are they rebalancing their workforce with humans and agents? I think that it’s a very interesting point that, yes, the agentic layer is very important, but it doesn’t operate by itself. It operates with data, with a Data Cloud that has to be federated through your company, to all your data sources. And humans, we’re still here.
Salesforce’s management is seeing Agentforce deliver such tremendous efficiency in Salesforce’s customer support function that they may rebalance some customer-support roles into other roles; management is currently seeing AI coding tools improve the productivity of Salesforce’s engineering team by 30% and thinks even more productivity can be found; management will not be expanding Salesforce’s engineering team this year, but will grow the sales team
We really are seeing tremendous efficiency with help.salesforce.com. So we may see the opportunity to rebalance some of those folks into sales and marketing and other functions…
…We definitely have seen a lot of efficiency with engineering and with some of the new tools that I’ve seen, especially some of these high-performance coding tools. One of the key members of my staff who’s here in the room with us has just showed me one of his new examples of what we’re able to do with these coding tools, pretty awesome. And we’re not going to hire any new engineers this year. We’re seeing 30% productivity increase on engineering. And we’re going to really continue to ride that up…
…We’re going to grow sales pretty dramatically this year. Brian has got a big vision for how to grow the sales organization, probably another 10% to 20%, I hope, this year because we’re seeing incredible levels of demand.
Salesforce’s management thinks that AI agents are one of the catalysts that will drive GDP growth
So if you want productivity to go up and you want GDP to grow up and you want growth, I think that digital labor is going to be one of the catalysts to make that happen.
Shopify (NASDAQ: SHOP)
Shopify launched its first AI-powered search integration with Perplexity in 2024
Last year, we… launched our first AI-powered search integration with Perplexity, enabling new ways for buyers to find merchants.
One of Shopify’s management’s focus areas in 2025 is to continue embracing AI by investing more in Sidekick and other AI capabilities that help merchants launch and grow faster; management wants to shift Shopify towards producing goal-oriented software; management believes Shopify is well-positioned as a leader for commerce in an AI-driven world
We will continue to embrace the transformative potential of AI. This technology is not just a part of the future, it is redefining it. We’ve anticipated this. So we’re already transforming Shopify into a platform where users and machines work seamlessly together. We plan to deepen our investment in Sidekick and other AI capabilities to help not just brand-new merchants to launch, but also to help larger merchants scale faster and drive greater productivity. Our efforts to shift towards more goal-oriented software will further help to streamline operations and improve decision-making. This focus on embracing new ways of thinking and working positions us not only as the platform of choice today, but also as a leader for commerce in the AI-driven era with a relentless focus on cutting-edge technology.
Shopify’s management believes Shopify will be one of the major net beneficiaries in the AI era as the company is leveraging AI really well, such as its partnerships with Perplexity and OpenAI
I actually think Shopify will very much be one of the major net beneficiaries in this new AI era. I think we are widely recognized as one of the best companies that foster long-term partnership. And so when it comes to partnership in AI, whether it’s Perplexity, where we’re now powering their search results with incredible product across the Shopify product catalog or OpenAI where we’re using — we have a direct set of their APIs to help us internally, we are really leveraging it as best as we can.
In terms of utilising AI, Shopify’s management sees 2 angles; the 1st angle is Shopify using AI to help merchants with mundane tasks and allow merchants to focus only on the things they excel at; the 2nd angle is Shopify using AI internally to make developers and customer-support teams more effective (with customer-support teams, Shopify is using AI to handle low-quality conversations with customers)
[Question] A question in regards to AI and the use of AI internally. Over the last year or so, you’ve made significant investments. Where are you seeing it operationally having the most impact? And then what has been the magnitude of productivity gains that you’ve seen?
[Answer] We think about it in sort of 2 ways. The first is from a merchant perspective, how can we make our merchants way more successful, get them to do things faster, more effectively. So things like Sidekick or media editor or Shopify Inbox, Semantic Search, Sidekick, these are things that now — that every merchant should want when they’re not just getting started, but also scaling their business. And those are things that are only available from Shopify. So we’re trying to make some of the more mundane tasks far more easy to do and get them to focus on things that only they can — only the merchants can do. And I think that’s an important aspect of what Shopify will bring…
…Internally, however, this is where it gets really interesting, because not only can we use it to make our developers more effective, but also, if you think about our support organization, now we can ensure that our support team is actually having very high-quality conversations with merchants, whereas a lot of low-quality conversations, things like configuring a domain or a C name or a user name and password issue, that can be handled really elegantly by AI.
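As a rough illustration of the triage Shopify’s management is describing, here is a minimal sketch that routes routine “low-quality” requests (domain or CNAME configuration, username/password issues) to an AI responder and escalates everything else to a human. The keyword heuristic and routing labels are assumptions for illustration, not Shopify’s actual system.

```python
# Hypothetical support-ticket triage: routine requests go to an AI
# responder, everything else escalates to a human agent.

ROUTINE_KEYWORDS = {
    "domain", "cname", "dns", "password", "username", "login", "reset",
}

def is_routine(ticket_text: str) -> bool:
    """Crude keyword heuristic standing in for a real intent classifier."""
    words = set(ticket_text.lower().split())
    return bool(words & ROUTINE_KEYWORDS)

def route(ticket_text: str) -> str:
    return "ai_responder" if is_routine(ticket_text) else "human_agent"

if __name__ == "__main__":
    print(route("How do I configure a CNAME record for my store?"))   # ai_responder
    print(route("My supplier shipped the wrong inventory, need help"))  # human_agent
```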
Taiwan Semiconductor Manufacturing Company (NYSE: TSM)
TSMC’s AI accelerator revenue more than tripled in 2024 to close to a mid-teens percentage of overall revenue, and management expects it to double again in 2025; management sees really strong AI-related demand in 2025
Revenue from AI accelerators, which we now define as AI GPU, AI ASICs and HBM controller for AI training and inference in the data center, accounted for close to mid-teens percent of our total revenue in 2024. Even after more than tripling in 2024, we forecast our revenue from AI accelerator to double in 2025 as the strong surge in AI-related demand continues…
…[Question] Try to get a bit more clarity on the cloud growth for 2025. I think, longer term, without a doubt, the technology definitely has lots of potential for demand opportunities, but I think — if we look at 2025 and 2026, I think there could be increasing uncertainties coming from maybe [indiscernible] spending, macro or even some of the supply chain challenges. And so I understand the management just provided a pretty good guidance for this year for sales to double. And so if you look at that number, do you think there is still more upside than downside as we go through 2025?
[Answer] I certainly hope there is upside, but I hope I get — my team can supply enough capacity to support it. Does that give you enough hint?
TSMC’s management saw a mixed year of recovery for the global semiconductor industry in 2024 with strong AI-related demand but mild recovery in other areas
2024 was a mixed year of recovery for the global semiconductor industry. AI-related demand was strong, while other applications saw only a very mild recovery as macroeconomics condition weighed on consumer sentiment and the end market demand.
TSMC’s management expects a mid-40% revenue CAGR from AI accelerators over the 5 years starting from 2024 (the previous forecast was for a 50% CAGR over the same 5-year period, but off a lower 2024 base); management expects AI accelerators to be the strongest growth driver for TSMC’s overall HPC platform and overall revenue over the next few years
Underpinned by our technology leadership and broader customer base, we now forecast the revenue growth from AI accelerators to approach a mid-40% CAGR for the 5-year period starting off the already higher base of 2024. We expect AI accelerators to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.
TSMC’s management expects a 20% revenue CAGR in USD terms over the 5 years starting from 2024, driven by growth across all its platforms; management thinks that in the next few years, TSMC’s smartphone and PC end-markets will have higher silicon content and faster replacement cycles, driven by AI-related demand, which will in turn drive robust demand for TSMC’s chip manufacturing services; the AI-related demand in the smartphone and PC end-markets is related to edge AI
Looking ahead, as the world’s most reliable and effective capacity provider, TSMC is playing a critical and integral role in the global semiconductor industry. With our technology leadership, manufacturing excellence and customer trust, we are well positioned to address the growth from the industry megatrend of 5G, AI and HPC with our differentiated technologies. For the 5-year period starting from 2024, we expect our long-term revenue growth to approach a 20% CAGR in U.S. dollar term, fueled by all 4 of our growth platform, which are smartphone, HPC, IoT and automotive…
…[Question] I believe that 20% starting from a very — already very high base in 2024 is a really good long-term objective but just wondering that, aside from the strong AI demand, what’s your view on the traditionals, applications like PC and the smartphone, growth and particularly for this year.
[Answer] This year is still mild growth for PC and smartphone, but everything is AI related, all right, so you can start to see why we have confidence to give you a close to 20% CAGR in the next 5 years. AI: You look at a smartphone. They will put AI functionality inside, and not only that. So the silicon content will be increased. In addition to that, actually the replacement cycle will be shortened. And also they need to go into the very advanced technology because of, if you want to put a lot of functionality inside a small chip, you need a much more advanced technology to put those [indiscernible]. Put all together, that even smartphone, the unit growth is almost low single digit, but then the silicon and the replacement cycle and the technology migration, that give us more growth than just unit growth; similar reason for PC…
…On the edge AI, in our observation, we found out that our customers start to put up more neural processing inside. And so we estimated the 5% to 10% more silicon being used. [ Can it be ] every year 5% to 10%? Definitely it is no, right? So they will move to next node, the technology migration. That’s also to TSMC’s advantage. Not only that, I also say that, the replacement cycle, I think it will be shortened because of, when you have a new toy that — with AI functionality inside it, everybody want replacing, replace their smartphone, replace their PCs. And [ I count that one ] much more than the — a mere 5% increase.
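To put the two forecasts above in perspective, a quick compounding check (taking “mid-40%” as 45% and “approach 20%” as 20%, so the exact multiples are illustrative):

```python
# What the stated CAGRs imply for the 5-year period starting from 2024.

def multiple_after(cagr: float, years: int = 5) -> float:
    return (1 + cagr) ** years

print(f"AI accelerators at ~45% CAGR: {multiple_after(0.45):.1f}x 2024 revenue")  # ~6.4x
print(f"Overall revenue at ~20% CAGR: {multiple_after(0.20):.1f}x 2024 revenue")  # ~2.5x
```

If both forecasts hold, AI accelerators (close to mid-teens percent of revenue in 2024) would grow to roughly 35% to 40% of TSMC’s total revenue by 2029.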
TSMC’s upcoming A16 process technology is best suited for specific HPC (high-performance computing) products with complex signal routing and dense power delivery networks, which in practice means AI-related workloads
We will also introduce A16 featuring Super Power Rail or SPR as separate offering. TSMC’s SPR is a innovative, best-in-class backside power delivery solution that is first in the industry to incorporate a novel backside metal scheme that preserves gate density and device width flexibility to maximize product benefit. Compared with N2P, A16 provide a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed, and additional 7% to 10% chip density gain. A16 is the best suitable for specific HPC product with a complex signal route and dense power delivery network. Volume production is scheduled for second half 2026.
TSMC’s management thinks that the US government’s latest AI restrictions will only have a minimal impact on the company’s business
[Question] Overnight, the U.S. seems to put a new framework on restricting China’s AI business, right? So I’m wondering whether that will create some business impact to your China business.
[Answer] We don’t have all analysis yet, but the first look is not significantly. It’s manageable. So that meaning that, my customers who are being restricted [ or something ], we are applying for the special permit for them. And we believe that we have confidence that they will get some permission, so long as they are not in the AI area, okay, especially automotive industry. Or even you talk about crypto mining, yes.
TSMC’s management does not want to reveal the level of demand for AI-related ASICs (application-specific integrated circuits) from the cloud hyperscalers, but they are confident that the demand is real, and that the cloud hyperscalers will be working with TSMC as they all need leading-edge technology for their AI-related ASICs
[Question] Broadcom’s CEO recently laid out a large SAM for AI hyperscalers building out custom silicon. I think he was talking about million clusters from each of the customers he has in the next 2 or 3 years. What’s TSMC’s perspective on all this?
[Answer] I’m not going to answer the question of the specific number, but let me assure you that, whether it’s ASIC or it’s graphic, they all need a very leading-edge technology. And they’re all working with TSMC, okay, so — and the second one is, is the demand real. Was — is — as a number that my customers said. I will say that the demand is very strong.
AI makes up all of the current demand for CoWoS (chip on wafer on substrate) capacity that TSMC’s management is seeing, but they think non-AI-related demand for CoWoS will come in the near future from CPUs and servers; there are rumours of a cut in orders for CoWoS, but management is not seeing any cuts; when asked whether HBM (high bandwidth memory), rather than CoWoS, is the key constraint on AI demand, management declined to comment on other suppliers but said TSMC’s own capacity remains very tight; advanced packaging (which includes CoWoS) was over 8% of TSMC’s revenue in 2024 and will be over 10% in 2025; advanced packaging’s gross margin is better than before, but still lower than the corporate average
[Question] When can we see non-AI application such as server, smartphone or anything else can be — can start to adopt CoWoS capacity in case there is any fluctuation in the AI demand?
[Answer] Today is all AI focused. And we have a very tight capacity and cannot even meet customers’ need, but whether other products will adopt this kind of CoWoS approach, they will. It’s coming and we know that it’s coming.
[Question] When?
[Answer] It’s coming… On the CPU and on the server chip. Let me give you a hint…
…[Question] About your CoWoS and SoIC capacity ramp. Can you give us more color this year? Because recently there seemed to be a lot of market noises. Some add orders. Some cut orders, so I would like to see your view on the CoWoS ramp.
[Answer] That’s a rumor. I assure you. We are working very hard to meet the requirement of my customers’ demand, so “cut the order,” that won’t happen. We actually continue to increase, so we are — again I will say that. We are working very hard to increase the capacity…
…[Question] A question on AI demand. Is there a scenario where HBM is more of a constraint on the demand, rather than CoWoS which seems to be the biggest constraint at the moment?
[Answer] I don’t comment on other supplier, but I know that we have a very tight capacity to support the AI demand. I don’t want to say I’m the bottleneck. TSMC, always working very hard with customer to meet their requirement…
…[Question] So we have observed an increasing margin of advanced packaging. Could you remind us the CoWoS contribution of last year? And do you expect the margin to kind of approach the corporate average or even exceed it after the so-called — the value reflection this year?
[Answer] Overall speaking, advanced packaging accounted for over 8% of revenue last year. And it will account for over 10% this year. In terms of gross margins, it is better. It is better than before but still below the corporate average.
AI makes up all of the current demand for SoIC (system on integrated chips) that TSMC’s management is seeing, but they think non-AI-related demand for SoIC will come in the future
Today, SoIC’s demand is still focused on AI applications, okay? For PC or for other area, it’s coming but not right now.
Tesla (NASDAQ: TSLA)
Tesla’s management thinks Tesla’s FSD (Full Self Driving) technology has grown up a lot in the past few years; management thinks that car use can grow from 10 hours per week to 55 hours per week with autonomous vehicles; autonomous vehicles can be used for both cargo and people delivery; FSD currently works very well in the USA, and will soon work well everywhere else; the constraint Tesla is currently experiencing with autonomous vehicles is in battery packs; FSD makes driving safer; FSD is currently on Version 13, and management believes Version 14 will be a significant step-improvement; Tesla has launched the Cortex training cluster at Gigafactory Austin, and it has played a big role in advancing FSD; Tesla will launch unsupervised FSD in Austin in June 2025; Tesla already has thousands of its cars driving autonomously daily in its factories in Fremont and Texas, and will soon do the same in Austin and elsewhere in the world; Tesla’s solution for autonomous vehicles is a generalised AI solution that does not need high-precision maps; Tesla’s unsupervised FSD already works outside of Austin even though it is launching only in Austin in June 2025, but management just wants to be cautious; management thinks Tesla will release unsupervised FSD in many parts of the USA by end-2025; management’s safety standard for FSD is for it to be far, far superior to human drivers; management thinks Tesla will have unsupervised FSD in almost every market this year
For a lot of people, like their experience of Tesla autonomy is like if it’s even a year old, if it’s even 2 years old, it’s like meeting someone when they’re like a toddler and thinking that they’re going to be a toddler forever. But obviously not going to be a toddler forever. They grow up. But if their last experience was like, “Oh, FSD was a toddler.” It’s like, well, it’s grown up now. Have you seen it? It’s like walks and talks…
…My #1 recommendation for anyone who doubts is simply try it. Have you tried it? When is the last time you tried it? And the only people who are skeptical, the only people who are skeptical are those who have not tried it.
So a car goes — a passenger car typically has only about 10 hours of utility per week out of 168, a very small percentage. Once that car is autonomous, my rough estimate is that it is in use for at least 1/3 of the hours per week, so call it, 50, maybe 55 hours of the week. And it can be used for both cargo delivery and people delivery…
…That same asset, the thing that — these things that already exist with no incremental cost change, just a software update, now have 5x or more the utility than they currently have. I think this will be the largest asset value increase in human history…
…So look, the reality of autonomy is upon us. And I repeat my advice, try driving the car or let it drive you. So now it works very well in the U.S., but of course, it will, over time, work just as well everywhere else…
…Our current constraint is battery packs this year but we’re working on addressing that constraint. And I think we will make progress in addressing that constraint…
…So a bit more on full self-driving. Our Q4 vehicle safety report shows continued year-over-year improvement in safety for vehicles. So the safety numbers, if somebody has supervised full self-driving turn on or not, the safety differences are gigantic…
…People have seen the immense improvement with version 13, and with incremental versions in version 13 and then version 14 is going to be yet another step beyond that, that is very significant. We launched the Cortex training cluster at Gigafactory Austin, which was a significant contributor to FSD advancement…
…We’re going to be launching unsupervised full self-driving as a paid service in Austin in June. So I talked to the team. We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June…
…We already have Teslas operating autonomously unsupervised full self-driving at our factory in Fremont, and we’ll soon be doing that at our factory in Texas. So thousands of cars every day are driving with no one in them at our Fremont factory in California, and we’ll soon be doing that in Austin and then elsewhere in the world with the rest of our factories, which is pretty cool. And the cars aren’t just driving to exactly the same spot because, obviously, it all — [ implied ] at the same spot. The cars are actually programmed with where — with what lane they need to park into to be picked up for delivery. So they drive from the factory end of line to their destination parking spot and to be picked up for delivery to customers and then doing this reliably every day, thousands of times a day. It’s pretty cool…
…Our solution is a generalized AI solution. It does not require high-precision maps of locality. So we just want to be cautious. It’s not that it doesn’t work beyond Austin. In fact, it does. We just want to be — put our toe in the water, make sure everything is okay, then put a few more toes in the water, then put a foot in the water with safety of the general public as and those in the car as our top priority…
…I think we will most likely release unsupervised FSD in many regions of the country of the U.S. by the end of this year…
…We’re looking for a safety level that is significantly above the average human driver. So it’s not anywhere like much safer, not like a little bit safer than human, way safer than human. So the standard has to be very high because the moment there’s any kind of accident with an autonomous car, that immediately gets worldwide headlines, even though about 40,000 people die every year in car accidents in the U.S., and most of them don’t even get mentioned anywhere. But if somebody [ scrapes a shed ] within autonomous car, it’s headline news…
…But I think we’ll have unsupervised FSD in almost every market this year, limited simply by regulatory issues, not technical capability.
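A quick check of the utilisation arithmetic in the quotes above (the 10 hours per week and “at least 1/3 of the hours” are management’s own rough figures):

```python
# Musk's utilisation arithmetic: a passenger car is driven ~10 of the
# 168 hours in a week; an autonomous car could be in use at least 1/3 of the time.

HOURS_PER_WEEK = 168
utilisation_today = 10 / HOURS_PER_WEEK  # ~6% of the week
autonomous_hours = HOURS_PER_WEEK / 3    # 56 hours, which he rounds to 50-55
uplift = autonomous_hours / 10           # ~5.6x, hence "5x or more the utility"

print(f"Utilisation today: {utilisation_today:.0%}")         # 6%
print(f"Autonomous hours per week: {autonomous_hours:.0f}")  # 56
print(f"Utility uplift: {uplift:.1f}x")                      # 5.6x
```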
Tesla’s management thinks the compute needed for Optimus will be 10x that of autonomous vehicles, even though a humanoid robot has 1,000x more uses than an autonomous vehicle; management has seen the cost of training Optimus (or AI in general) dropping dramatically over time; management thinks Optimus can eventually produce north of $10 trillion in revenue, which would make even $500 billion of training compute a good investment; management realises their revenue projections for Optimus sound insane, but they believe in them (sounds like a startup founder trying to get funding from VCs); it’s impossible for management to predict the exact timing for Optimus because everything about the robot has to be designed and built from the ground up by Tesla (nothing could be bought off the shelf), but management thinks Tesla will build a few thousand Optimus robots by end-2025, and that these robots will be doing useful work in Tesla’s factories in the same timeframe; management’s goal is to ramp Optimus production at a far faster rate than anything has ever been ramped; management says Optimus will even be able to do delicate things such as play the piano and thread a needle; Optimus is still not design-locked for production; Tesla might be able to deliver Optimus to external clients by 2026 H2; management is confident that at scale, Optimus will be cheaper to produce than a car
The training needs for Optimus, our Optimus humanoid robot are probably at least ultimately 10x what is needed for the car, at least to get to the full range of useful role. You can say like how many different roles are there for a humanoid robot versus a car? A humanoid robot has probably 1,000x more uses and more complex things than in a car. That doesn’t mean the training scales by 1,000 but it’s probably 10x…
…It doesn’t mean like — or Tesla’s going to spend like $500 billion in training compute because we will obviously train Optimus to do enough tasks to match the output of Optimus robots. And obviously, the cost of training is dropping dramatically with time. But it is — it’s one of those things where I think long-term, Optimus will be — Optimus has the potential to be north of $10 trillion in revenue, like it’s really bananas. So that you can obviously afford a lot of training compute in that situation. In fact, even $500 billion training compute in that situation will be quite a good deal…
…With regard to Optimus, obviously, I’m making these revenue predictions that sound absolutely insane, I realize that. But they are — I think they will prove to be accurate…
…There’s a lot of uncertainty on the exact timing because it’s not like a train arriving at the station for Optimus. We are designing the train and the station and in real time while also building the tracks. And sort of like, why didn’t the train arrive exactly at 12:05? And like we’re literally designing the train and the tracks and the station in real-time while like how can we predict this thing with absolute precision? It’s impossible. The normal internal plan calls for roughly 10,000 Optimus robots to be built this year. Will we succeed in building 10,000 exactly by the end of December this year? Probably not. But will we succeed in making several thousand? Yes, I think we will. Will those several thousand Optimus robots be doing useful things by the end of the year? Yes, I’m confident they will do useful things…
…Our goal is to run Optimus production faster than maybe anything has ever been ramped, meaning like aspirationally in order of magnitude, ramp per year. Now if we aspire to an order of magnitude ramp per year, perhaps, we only end up with a half order of magnitude per year. But that’s the kind of growth that we’re talking about. It doesn’t take very many years before we’re making 100 million of these things a year, if you go up by let’s say, a factor by 5x per year…
…This is an entirely new supply chain, it’s entirely new technology. There’s nothing off the shelf to use. We tried desperately with Optimus to use any existing motors, any actuators, sensors. Nothing worked for a humanoid robot at any price. We had to design everything from physics-first principles to work for humanoid robot and with the most sophisticated hand that has ever been made before by far. Optimus will be also able to play the piano and be able to thread a needle. I mean this is the level of precision no one has been able to achieve…
…Optimus is not design-locked. So when I say like we’re designing the train as it’s going — we’re redesigning the train as it’s going down the tracks while redesigning the tracks and the train stations…
…I think probably with version 2, it is a very rough guess because there’s so much uncertainty here, very rough guess that we start delivering Optimus robots to companies that are outside of Tesla in maybe the second half of next year, something like that…
…I’m confident at 1 million units a year, that the production cost of Optimus will be less than $20,000. If you compare the complexity of Optimus to the complexity of a car, so just the total mass and complexity of Optimus is much less than a car.
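A quick check of the ramp arithmetic (the starting point of a few thousand units in 2025 is management’s rough figure; the 5x-per-year rate is Musk’s own illustrative number):

```python
# Checking the ramp arithmetic: starting from a few thousand units in 2025,
# how many years of 5x/year growth until ~100 million Optimus robots a year?

units, year = 5_000, 2025  # "several thousand" this year is management's rough figure
while units < 100_000_000:
    units *= 5
    year += 1
print(year, f"{units:,}")  # 2032 390,625,000 -> crosses 100M in roughly 7 years
```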
The buildout of Cortex accelerated the rollout of FSD Version 13; Tesla has invested $5 billion so far in total AI-related capex
The build-out of Cortex was accelerated because of the role — actually accelerate the rollout of FSD Version 13. Our cumulative AI-related CapEx, including infrastructure, so far has been approximately $5 billion.
Tesla’s management is seeing significant interest from some car manufacturers in licensing Tesla’s FSD technology; management thinks that car manufacturers without FSD technology will go bust; management will only entertain situations where the volume would be very high
What we’re seeing is at this point, significant interest from a number of major car companies about licensing Tesla full self-driving technology…
…We’re only going to entertain situations where the volume would be very high. Otherwise, it’s just not worth the complexity. And we will not burden our engineering team with laborious discussions with other engineering teams until we obviously have unsupervised full self-driving working throughout the United States. I think the interest level from other manufacturers to license FSD will be extremely high once it is obvious that unless you have FSD, you’re dead.
Compared to Version 13, Version 14 of FSD will have a larger model size, longer context length, more memory, more driving-context, and more data on tricky corner cases
[Question] What technical breakthroughs will define V14 of FSD, given that V13 already covered photon to control?
[Answer] So continuing to scale the model size a lot. We scale a bunch in V13 but then there’s still room to grow. So we’re going to continue to scale the model size. We’re going to increase the context length even more. The memory sort of like limited right now. We’re going to increase the amount of memory [indiscernible] minutes of context for driving. They’re going to add audio and emergency vehicles better. Add like data of the tricky corner cases that we get from the entire fleet, any interventions or any kind of like user intervention. We just add to the data set. So scaling in basically every access, training compute, [ asset ] size, model size, model context and also all the reinforcement learning objectives.
Tesla has difficulties training AI models for autonomous vehicles in China because China does not allow Tesla to transfer training videos out of the country, while the US government does not allow Tesla to do the training in China; a workaround Tesla found was to train on publicly available videos of streets in China; Tesla also had to build a simulator for its AI models to train on bus lanes in China because they are complicated
In China, which is a gigantic market, we do have some challenges because they weren’t — they currently allow us to transfer training video outside of China. And then the U.S. government won’t let us do training in China. So we’re in a bit of a buying there. It’s like a bit of a quandary. So what we’re really solving then is by literally looking at videos of streets in China that are available on the Internet to understand and then feeding that into our video training so that publicly available video of street signs and traffic rules in China can be used for training and then also putting it in a very accurate simulator. And so it will train using SIM for bus lanes in China. Like bus lanes in China, by the way, one of our biggest challenges in making FSD work in China is the bus lanes are very complicated. And there’s like literally like hours of the day that you’re allowed to be there and not be there. And then if you accidently go in at a bus lane at the wrong time, you get an automatic ticket instantly. And so it was kind of a big deal, bus lanes in China. So we put that into our simulator train on that, the car has to know what time of the day it is, read the sign. We’ll get this solved.
Elon Musk knows LiDAR technology really well because he oversaw the development of a LiDAR system for SpaceX that is in use today, but he thinks LiDAR is simply the wrong technology for autonomous vehicles because it has issues, and because humans drive vehicles with just our eyes and our biological neural nets
[Question] You’ve said in the past about LiDAR, for EVs, that LiDAR is a crutch, a fool’s errand. I think you even told me once, even if it was free, you’d say you wouldn’t use it. Do you still feel that way?
[Answer] Obviously humans drive without shooting lasers out of their eyes. I mean unless you’re superman. But like humans drive just with passive visual — humans drive with eyes and a neural net — and a brain neural net, sort of biological — so the digital equivalent of eyes and a brain are cameras and digital neural nets or AI. So that’s — the entire road system was designed for passive optical neural nets. That’s how the whole real system was designed and what everyone is expecting, that’s how we expect other cars to behave. So therefore, that is very obviously a solution for full self-driving as a generalized — but the generalized solution for full self-driving as opposed to the very specific neighborhood-by-neighborhood solution, which is very difficult to maintain, which is what our competitors are doing…
…LiDAR has a lot of issues. Like SpaceX Dragon docks with the space station using LiDAR, that’s a program that I personally spearheaded. I don’t have some fundamental bizarre dislike of LiDAR. It’s simply the wrong solution for driving cars on roads…
…I literally designed and built our own red LiDAR. I oversaw the project, the engineering thing. It was my decision to use LiDAR on Dragon. And I oversaw that engineering project directly. So I’m like we literally designed and made a LiDAR to dock with the space station. If I thought it was the right solution for cars, I would do that, but it isn’t.
The Trade Desk (NASDAQ: TTD)
Trade Desk’s management continues to invest in AI and thinks that AI is game-changing for forecasting and insights on identity and measurement; Trade Desk’s AI efforts started in 2017 with Koa, but management sees much bigger opportunities today; management is asking every development team in Trade Desk to look for opportunities to introduce AI into Trade Desk’s platform; there are already hundreds of AI enhancements to Trade Desk’s platform that have been shipped or are going to be shipped in 2025
AI is providing next-level performance in targeting and optimization, but it is also particularly game-changing in forecasting and identity and measurement. We continue to look at our technology stack and ask, where can we inject AI and enhance our product and client outcomes? Over and over again, we are finding new opportunities to make AI investments…
…We started our ML and AI efforts in 2017 with the launch of Koa, but today, the opportunities are much bigger. We’re asking every scrum inside of our company to look for opportunities to inject AI into our platform. Hundreds of enhancements recently shipped and coming in 2025 would not be possible without AI. We must keep the pedal to the metal, not to chest them on stages, which everyone else seems to be doing, but instead to produce results and win share.
Wix (NASDAQ: WIX)
Wix’s AI Website Builder was launched in 2024 and has driven stronger conversion and purchase behaviour from users; more than 1 million sites have been created and published with AI Website Builder; most new Wix users today are creating their websites through Wix’s AI tools and AI Website Builder and these users have higher rates of converting to paid subscribers
2024 was also the year of AI innovation. In addition to the significant number of AI tools introduced, we notably launched our AI Website Builder, the new generation of our previous AI site builder introduced in 2016. The new AI Website Builder continues to drive demonstrably stronger conversion and purchase behavior…
…Over 1 million sites have been created and published with the Website Builder…
…Most new users today are creating their websites through our AI-powered onboarding process and Website Builder which is leading to a meaningful increase in conversion of free users to paid subscriptions, particularly among Self Creators.
Wix’s management launched Wix’s first directly monetised AI product – AI Site Chat – in December 2024; AI Site Chat will help businesses converse with customers round the clock; users of AI Site Chat have free limited access, with an option to pay for additional usage; AI Site Chat’s preliminary results look very promising
In December, we also rolled out our first directly monetized AI product – the AI Site Chat…
…The AI Site-Chat was launched mid-December to Wix users in English, providing businesses with the ability to connect with visitors 24/7, answer their questions, and provide relevant information in real time, even when business owners are unavailable. By enhancing availability and engagement on their websites, the feature empowers businesses to meet the needs of their customers around the clock, ultimately improving the customer experience and driving potential sales. Users have free limited access with the option to upgrade to premium plans for additional usage…
…So if you’re a Wix customer, you can now install a chat, AI-powered chat on your website, and this will handle customer requests, product inquiries and support request. And from — and again, it’s very early in days and the preliminary results, but it looks very promising.
AI agents and assistants are an important part of management’s product roadmap for Wix in 2025; Wix is testing (1) an AI assistant for its Wix Business Manager dashboard, and (2) Marketing Agent, a directly monetizable AI agent that helps users accomplish marketing tasks; Marketing Agent is the first of a number of specialised AI agents management will roll out in 2025; management intends to test monetisation opportunities with the new AI agents
AI remains a major part of our 2025 product roadmap with particular focus on AI-powered agents and assistants…
… Currently, we are testing our AI Assistant within the Wix Business Manager as well as our AI Marketing Agent.
The AI Assistant in the Wix Business Manager is a seamlessly integrated chat interface within the dashboard. Acting as a trusted aide, this assistant guides users through their management journey by providing answers to questions and valuable insights about their site. With its comprehensive knowledge, the AI Assistant empowers users to better understand and leverage available resources, assisting with site operations and business tasks. For instance, it can suggest content options, address support inquiries, and analyze analytics—all from a single entry point.
The AI Marketing Agent helps businesses to market themselves online by proactively generating tailored marketing plans that align with users’ goals and target audiences. By analyzing data from their website, the AI delivers personalized strategies to enhance SEO, create engaging content, manage social media, run email campaigns and optimize paid advertising—all with minimal effort from the user. This solution not only simplifies marketing but also drives Wix’s monetization strategy, seamlessly guiding users toward high-impact paid advertising and premium marketing solutions. As businesses invest in growing their online presence, Wix benefits through a share of ad spend and premium feature adoption—fueling both user success and revenue growth.
We will continue to release and optimize specialized AI agents that assist our users in building the online presence they envision. We are exploring various monetization strategies as we fully roll out these agents and adoption increases.
Wix’s management is seeing Wix’s gross margin improve because of AI integration in customer care
Creative Subscriptions non-GAAP gross margin improved to 85% in Q4’24 and to 84% for the full year 2024, up from 82% in 2023. Business Solutions non-GAAP gross margin increased to 32% in Q4’24 and to slightly above 30% for the full year 2024. Continued gross margin expansion is the product of multiple years of cost structure optimization and efficiencies from AI integration across our Customer Care operations.
Wix’s management believes the opportunity for Wix in the AI era is bigger than what came before
There’s a lot of discussions about a lot of theories about it. But I really believe that the opportunity there is bigger than anything else because what we have today are going to continue to dramatically evolve into something that is probably more powerful and more enabling for small businesses to be successful. Overall, the Internet has a tendency to do it every 10 years or so, right, in the ’90s, the Internet started and became HTML, then it became images and then later on videos and then it became mobile, right? And I think they became interactive, everything become an application, kind of an application. And I think how website will look at the AI universe is the next step, and I think there’s a lot of exciting things we can offer our users there.
Visa (NYSE: V)
Visa is an early adopter of AI and management continues to drive adoption; Visa has seen material gains in engineering productivity; Visa has deployed AI in many functions, such as analytics, sales, finance, and marketing
We were very early adopters of artificial intelligence, and we continue to drive hard at the adoption of generative AI as we have for the last couple of years. So we’ve been working to embed AI and AI tooling into our company’s operations, I guess, broadly. We’ve seen material gains in productivity, particularly in our engineering teams. We’ve deployed AI tooling in client services, sales, finance, marketing, really everywhere across the company. And we were a very early adopter of applied AI in the analytics and modeling space, very early by like decades, we’ve been using AI in that space. So our data science and risk management teams have, at this point, decades of applied experience with AI, and they’re aggressively adopting the current generations of AI technology to enhance both our internal and our market-facing predictive and detective modeling capabilities. Our product teams are also aggressively adopting gen AI to build and ship new products.
Zoom Communications (NASDAQ: ZM)
Zoom AI Companion’s monthly active users (MAUs) grew 68% quarter-on-quarter; management has added new agentic AI capabilities to Zoom AI Companion; management will launch the Custom AI Companion add-on in April 2025; management will launch AI Companion for clinicians in March 2025; Zoom AI Companion is added into a low-end Zoom subscription plan at no added cost, and customers do not want to leave their subscriptions because of the added benefit of Zoom AI Companion; Zoom will be monetising Zoom AI Companion from April 2025 onwards through the Custom AI Companion add-on; the Custom AI Companion add-on would be $12 a seat when it’s launched in April 2025 and management thinks this price would provide a really compelling TCO (total cost of ownership) for customers; management thinks Custom AI Companion would have a bigger impact on Zoom’s revenue in 2026 (FY2027) than in 2025 (FY2026); see Point 28 for use cases for Custom AI Companion
Growth in monthly active users of Zoom AI Companion has accelerated to 68% quarter-over-quarter, demonstrating the real value AI is providing customers…
…As part of AI Companion 2.0, we added advanced agentic capabilities, including memory, reasoning, orchestration and a seamless integration with Microsoft and Google services. In April, we’re launching Custom AI Companion add-on to automate workplace tasks through custom agents. This will personalize AI to fit customer needs, connect with their existing data, and work seamlessly with their third-party tools. We’re also enhancing Zoom Workplace for Clinicians with an upgraded AI Companion that will enable clinical note-taking capabilities and specialized medical features for healthcare providers starting in March…
…If you look at our low SMB customer online buyers, AI Companion is part of that at no additional cost, made our service very sticky and also the customers give a very basic example, like meeting summary, right? It works so well, more and more customers follow the value…
…For high end, for sure, and we understand that today’s AI Companion and additional cost we cannot monetize. However, in April, we are going to announce the customized Companion for interested customers. We can monetize…
…[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?
[Answer] In regards to your question about what are sort of the assumptions or what’s the targeting in our [ head ] with the $12 Custom AI Companion SKU. I would say, starting with enterprise customers, obviously, the easiest place to sort of pounce on them is our own customer base and talk about that, but certainly not just limited to that. But we’ll be probably giving a lot more, I would say, at Enterprise Connect, which you can see on the thing there. But I would say we’ve assumed some degree of monetization in FY ’26, I think you’ll see more of it in ’27. And we think that the $12 price point is going to be a really compelling TCO story for our customers, it’s differentiated from what others in the market are pricing now.
The Zoom Virtual Agent feature will soon be able to handle complex tasks
Zoom Virtual Agent will soon expand reasoning abilities to handle complex tasks while maintaining conversational context for more natural and helpful outcomes.
Zoom’s management believes Zoom is uniquely positioned to win in agentic AI for a few reasons, including Zoom having exceptional context from users’ ongoing conversations, and Zoom’s federated AI approach where the company can use the best models for each task
We’re uniquely positioned to succeed in agentic AI for several reasons:
● Zoom is a system of engagement for our users with recent information in ongoing conversations. This exceptional context along with user engagement allows us to drive greater value for customers.
● Our federated AI approach lets us combine the best models for each task. We can use specialized small language models where appropriate, while leveraging larger models for more complex reasoning – driving both quality and cost efficiency.
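For illustration, here is a minimal sketch of what a federated model-routing layer like the one described above could look like: cheap small language models for routine tasks, with a larger model reserved for complex reasoning. The model names, costs, and task taxonomy are assumptions; Zoom has not disclosed its internal implementation.

```python
# Hypothetical federated-AI router: a small language model handles routine
# tasks cheaply, while complex reasoning goes to a larger model.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    relative_cost: float  # made-up relative cost per task

SMALL_SLM = Model("small-slm", relative_cost=1.0)   # e.g. meeting summaries
LARGE_LLM = Model("large-llm", relative_cost=50.0)  # e.g. agentic orchestration

COMPLEX_TASKS = {"agent_orchestration", "multi_step_reasoning"}

def route(task_type: str) -> Model:
    """Send complex reasoning to the large model, everything else to the SLM."""
    return LARGE_LLM if task_type in COMPLEX_TASKS else SMALL_SLM

print(route("meeting_summary").name)      # small-slm
print(route("agent_orchestration").name)  # large-llm
```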
Zoom’s management is seeing large businesses choose Zoom because of the AI features in its products
You take a Contact Center, for example, why we are winning? Because a lot of AI features like AI Expert Assist. AI, a lot of features built into our quality management and so on and so forth.
Zoom’s management sees Zoom’s AI business services as a great way to monetise AI
You take a Contact Center, for example, why we are winning? Because a lot of AI features like AI Expert Assist. AI, a lot of features built into our quality management and so on and so forth. But all those business services, that’s another great way for us to monetize AI.
Zoom’s management thinks Zoom’s cost of ownership with AI is lower than what competitors are offering
And I look at our AI Companion, all those AI Companion core features today at no additional cost, right? And customer really like it because of the quality, they’re getting better and better every quarter and very useful, right? Not like some other competitors, right? They talk about their AI strategy and when customers realize that, wow, it’s very expensive. And the total cost of ownership is not getting better because cost of the value is not [ great ], but also it’s not [ free ] and they always try to increase price.
A good example of a use case for Custom AI Companion
[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?
[Answer] So regarding the Custom AI Combined on use cases, high levels, we give a customer ability to customize their needs. I’ll give a few examples. One feature like we have a Zoom Service Call video clip, and we are going to support the standard template, right? How to support every customer? They have a customized template for each of the users, and this is a part of combining AI Studio, right? And also all kinds of third-party integration, right? And they like they prefer, right, some of those kind of sort of third-party application integration. With their data, with the knowledge, whether the [ big scenery ], a lot of things, right? Each company is different, they would not customized, so we can leverage our combining studio to work together with the customer to support their needs and also at same time commodities.
Zoom’s management expects the cost from AI usage to increase and so that will impact Zoom’s margins in the future, but management is also building efficiencies to offset the higher cost of AI
[Question] As we think about a shift more towards AI contribution, aren’t we shifting more towards a consumption model rather than a seat model over time, why wouldn’t we see margin compression longer term?
[Answer] Around how to think about margins and business models and why we don’t see compression. And what I would say is that — what we expect to see is similar to what you saw in FY ’25, which is we’re seeing obvious increase in cost from AI. And that we have an ongoing methodical kind of efficiency list to offset, and we certainly expect that broadly to continue into FY ’26. So I think we feel good about our ability to kind of moderate that. There’s other things we do more holistically where we can offset stuff that’s maybe not in AI in our margins, things like [ colos ], et cetera, that we’ve talked about previously.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Microsoft, Paycom Software, PayPal, Salesforce, Shopify, TSMC, Tesla, The Trade Desk, Wix, Visa, and Zoom. Holdings are subject to change at any time.