Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q3). In it, I shared commentary from the third-quarter 2024 earnings conference calls of the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2024’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:
- 2023 Q1 – here and here
- 2023 Q2 – here and here
- 2023 Q3 – here and here
- 2023 Q4 – here and here
- 2024 Q1 – here and here
- 2024 Q2 – here and here
Here they are, in no particular order:
Adobe (NASDAQ: ADBE)
Adobe’s management introduced multiple generative AI models in the Firefly family in 2024 and now has a generative video model; Adobe’s generative AI models are designed to be safe for commercial usage; the Firefly models are integrated across Adobe’s software products, which brings value to creative professionals across the world; Firefly has powered 16 billion generations (12 billion in 2024 Q2) since its launch in March 2023 and each month in 2024 Q3 set a new record in generations; the new Firefly video model is in limited beta, but has already gathered massive customer interest (the model has driven a 70% increase in Premiere Pro beta users since its introduction) and will be generally available in early 2025; recent improvements to the Firefly models include 4x faster image generation; enterprises such as Tapestry and Pepsi are using Firefly Services to scale content production; Firefly is the foundation of Adobe’s AI-related innovation; management is using Firefly to drive top-of-funnel user-acquisition for Adobe
2024 was also a transformative year of product innovation, where we delivered foundational technology platforms. We introduced multiple generative AI models in the Adobe Firefly family, including imaging, vector design and, most recently, video. Adobe now has a comprehensive set of generative AI models designed to be commercially safe for creative content, offering unprecedented levels of output quality and user control in our applications…
…The deep integration of Firefly across our flagship applications in Creative Cloud, Document Cloud, and Experience Cloud is driving record customer adoption and usage. Firefly-powered generations across our tools surpassed 16 billion, with every month this past quarter setting a new record…
…We have made major strides with our generative AI models with the introduction of Firefly Image Model 3 enhancements to our vector models, richer design models, and the all-new Firefly Video Model. These models are incredibly powerful on their own and their deep integration into our tools like Lightroom, Photoshop, Premiere, InDesign and Express have brought incredible value to millions of creative professionals around the world…
…The launch of the Firefly Video Model and its unique integration in Premiere Pro and limited public beta garnered massive customer interest, and we look forward to making it more broadly available in early 2025. This feature drove a 70% increase in the number of Premiere Pro beta users since it was introduced at MAX. Enhancements to Firefly image, vector, and design models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Premiere Pro and Adobe Express…
…Firefly Services adoption continued to ramp as enterprises such as Pepsi and Tapestry use it to scale content production, given the robust APIs and ease of creating custom models that are designed to be commercially safe…
…This year, we introduced Firefly Services. That’s off to a great start. We have a lot of customers that are using that. A couple we talked about on the call include Tapestry. They’re using it for scaled content production. Pepsi, for their Gatorade brand, is enabling their customers to personalize the merchandise they’re buying, starting with Gatorade bottles. And these have been very, very productive for them, and we are seeing this leveraged by a host of other companies for everything from localization at scale to personalization at scale to user engagement or just raw content production at scale as well…
…You’re exactly right in terms of Firefly is a platform and a foundation that we’re leveraging across many different products. As we talked about, everything from Express and Lightroom and even in Acrobat on mobile for a broad base, but then also in our core Creative products, Photoshop, Illustrator, Premiere. And as we’ve alluded to a number of times on this call, with the introduction of video, even a stand-alone offer for Firefly that we think will be more valuable from a tiering perspective there. And then into Firefly Services through APIs in connection to GenStudio. So we are looking at leveraging the power of this AI foundation in all the activities…
…We see that when we invest in mobile and web, we are getting some very positive signals in terms of user adoption and user conversion rate. So we’re using Firefly very actively to do that.
Adobe’s management has combined content and data in Adobe GenStudio to integrate content creation with marketing, leading to an end-to-end content supply chain solution; the Adobe GenStudio portfolio has a new addition in Adobe GenStudio for Performance Marketing, which has seen strong customer demand since becoming generally available recently; management is expanding the go-to-market teams to sell GenStudio solutions that cut across the Digital Media and Digital Experience segments and early success has been found, with management expecting acceleration in this pipeline throughout FY2025 and beyond
We set the stage to drive an AI content revolution by bringing content and data together in Adobe GenStudio integrating high-velocity creative expression with enterprise activation. The release of Adobe GenStudio for Performance Marketing integrates Creative Cloud, Express, and Experience Cloud and extends our end-to-end content supply chain solution, empowering freelancers, agencies, and enterprises to accelerate the delivery of content, advertising and marketing campaigns…
…We have brought our Creative and Experience Clouds together through the introduction of Firefly Services and GenStudio, addressing the growing need for scaled content production in enterprises…
… GenStudio enables agencies and enterprises to unlock new levels of creativity and efficiency across content creation and production, workflow and planning, asset management, delivery and activation and reporting and insights.
Adobe GenStudio for Performance Marketing is a great addition to the GenStudio portfolio, offering an integrated application to create paid social ads, display ads, banners, and marketing e-mails by leveraging preapproved on-brand content. It brings together creative teams that define the foundational requirements of a brand, including guidelines around brand voice, channels, and images with marketing teams that need to deliver numerous content variations with speed and agility. We are seeing strong customer demand for Adobe GenStudio for Performance Marketing since its general availability at MAX…
… We’re expanding our enterprise go-to-market teams to sell these integrated solutions that cut across Digital Media and Digital Experience globally under the new GenStudio umbrella. We have seen early success for this strategy that included Express and Firefly Services in Q4. As we enable our worldwide field organization in Q1, we anticipate acceleration of this pipeline throughout the rest of the year and beyond.
Adobe’s management introduced AI Assistant in Acrobat and Reader in FY2024; users of AI Assistant completed their document tasks 4x faster on average; AI Assistant is now available across desktop, web, and mobile; management introduced specialised AI for specific document types and tasks in 2024 Q3 (FY2024 Q4); management saw AI Assistant conversations double sequentially in 2024 Q3; AI Assistant is off to an incredibly strong start and management sees it continuing to accelerate; AI Assistant allows users to have conversations with multiple documents, some of which are not even PDFs, and it turns Acrobat into a general-purpose productivity platform; the rollout of AI Assistant in more languages and documents gives Acrobat’s growth more durability
We took a major step forward in FY ’24 with the introduction of AI Assistant in Acrobat and Reader. AI Assistant and other AI features like Liquid Mode and Firefly are accelerating productivity through faster insights, smarter document editing and integrated image generation. A recent productivity study found that users leveraging AI Assistant completed their document-related tasks 4x faster on average. AI Assistant is now available in Acrobat across desktop, web, and mobile and integrated into our Edge, Chrome, and Microsoft Teams extensions. In Q4, we continued to extend its value with specialized AI for contracts and scanned documents, support for additional languages, and the ability to analyze larger documents…
… We saw AI Assistant conversations double quarter-over-quarter, driving deeper customer value…
… AI Assistant for Acrobat is off to an incredibly strong start and we see it continuing to accelerate…
…One of the big things that I think has been unlocked this year is moving from not just looking at a PDF that you happen to be viewing, to being able to look at and have a conversation with multiple documents, some of which don’t even have to be PDFs. So that transition gives us the ability to really take Acrobat and make it more of a general-purpose productivity platform…
…The thing I’ll add to that is the durability of that, to your point, in languages, as we roll that out in languages, as we roll it out across multiple documents and as we roll it out in enterprises and B2B specifically. So again, significant headroom in terms of the innovation agenda of how Acrobat can be made even more meaningful as a knowledge tool within the enterprise.
Adobe’s management will soon introduce a new higher-priced Firefly offering that includes the video models; management thinks the higher-priced Firefly offering will help to increase ARPU (average revenue per user); management sees video generation as a high-value activity, which gives Adobe the ability to introduce higher subscription tiers that come with video generation; management sees consumption of AI services adding to Adobe’s ARR (annual recurring revenue) in 2 ways in FY2025, namely, (1) pure consumption-based pricing, and (2) consumption leading to a higher pricing-tier; management has learnt from pricing experiments for AI services and found that the right model for Adobe is a combination of access to features and usage-limits
We will soon introduce a new higher-priced Firefly offering that includes our video models as a comprehensive AI solution for creative professionals. This will allow us to monetize new users, provide additional value to existing customers, and increase ARPU…
…Video generation is a much higher-value activity than image generation. And as a result, it gives us the ability to start to tier Creative Cloud more actively there…
…You’re going to see “consumption” add to ARR in 2 or maybe 3 ways more so in ’25 than in ’24. The first, and David alluded to this, is if you have a video offering and that video offering, that will be a pure consumption pricing associated with it. I think the second is in GenStudio and for enterprises and what they are seeing. With respect to Firefly Services, which, again, I think David touched on how much momentum we are seeing in that business. So that is, in effect, a consumption business as it relates to the enterprise so I think that will also continue to increase. And then I think you’ll see us with perhaps more premium price offering. So the intention is that consumption is what’s driving the increased ARR, but it may be as a result of a tier in the pricing rather than a consumption model where people actually have to monitor it. So it’s just another way, much like AI Assistant is of monetizing it, but it’s not like we’re going to be tracking every single generation for the user, it will just be at a different tier…
… What we’ve done over the last year, there’s been a bit of experimentation, obviously, in the core Creative applications. We’ve done the generative credits model. What we saw with Acrobat was this idea of a separate package and a separate SKU that created a tier that people were able to access the feature through. And as we learn from all of these, we think, as Shantanu had mentioned earlier, that the right tiering model for us is going to be a combination of feature, access to certain features and usage limits on it. So the higher the tier, the more features you get and the more usage you get of it.
The Adobe Experience Platform (AEP) AI Assistant helps marketers automate tasks and generate new audiences and journeys
Adobe Experience Platform AI Assistant empowers marketers to automate tasks and generate new audiences and journeys. Adobe Experience Manager’s Generate Variations provides dynamic and personalized content creation natively through AEM, enabling customers to deliver more compelling and engaging experiences on their websites.
Adobe’s management thinks there are 3 foundational differences in the company’s AI models and what the rest are doing, namely, (1) commercially safe models, (2) incredible control of the models, and (3) the integration of the models into products
The foundational difference between what we do and what everyone else does in the market really comes down to 3 things: one is commercially safe, the way we train the models; two is the incredible control we bake into the model; and three is the integration that we make with these models into our products, increasingly, of course, in our CC flagship applications but also in Express and Lightroom and these kinds of applications but also in Anil’s DX products as well. So that set of things is a critical part of the foundation and a durable differentiator for us as we go forward.
Adobe’s management is seeing that users are onboarded to products faster when using generative AI capabilities; management is seeing that users who use generative AI features have higher retention rates
We are seeing in the core Creative business, when people try something like Photoshop, the onboarding experience is faster to success because of the use of generative AI and generative capabilities. So you’ll start to see us continuing to drive more proliferation of those capabilities earlier in the user journeys, and that has been proven very productive. We also noticed something else: we’ve always had good retention rates, but the more people use generative AI, the longer they retain as well.
MongoDB (NASDAQ: MDB)
MongoDB’s management is seeing a lot of large customers want to run workloads, even AI workloads, in on-premise format
We definitely see lots of large customers who are very, very committed to running workloads on-prem. We even see some customers who want to run AI workloads on-prem…
… I think you have some customers who are very committed to running a big part of the estate on-prem. So by definition, then if they’re going to build an AI workload, it has to be run on-prem, which means that they also need access to GPUs, and they’re doing that. And then other customers are basically renting GPUs from the cloud providers and building their own AI workloads.
MongoDB’s initiative to accelerate legacy app modernisation with AI (Relational Migrator) has seen a 50% reduction in the cost to modernise in its early days; customer interest in this initiative is exceeding management’s expectations; management expects modernisation projects to include large services engagements and MongoDB is increasing its professional services delivery capabilities; management is building new tools, informed by early service engagements, to accelerate future modernisation efforts; management has growing confidence that this modernisation motion will be a significant growth driver for MongoDB in the long term; there are a confluence of events, including the emergence of generative AI to significantly reduce the time needed for migration of databases, that make the modernisation opportunity attractive for MongoDB; the buildout of MongoDB’s professional services capabilities will impact the company’s gross margin
We are optimistic about the opportunity to accelerate legacy app modernization using AI and are investing more in this area. As you recall, we ran a few successful pilots earlier in this year, demonstrating that AI tooling combined with professional services and our relational migrator product, can significantly reduce the time, cost and risk of migrating legacy applications on to MongoDB. While it’s early days, we have observed a more than 50% reduction in the cost to modernize. On the back of these strong early results, additional customer interest is exceeding our expectations.
Large enterprises in every industry and geography are experiencing acute pain from their legacy infrastructure and are eager for more agile, performant and cost-effective solutions. Not only are our customers excited to engage with us, they also want to focus on some of the most important applications in their enterprise, further demonstrating the level of interest and size of the long-term opportunity.
As relational applications encompass a wide variety of database types, programming languages, versions and other customer-specific variables, we expect modernization projects to continue to include meaningful services engagements in the short and medium term. Consequently, we are increasing our professional services delivery capabilities, both directly and through partners. In the long run, we expect to automate and simplify large parts of the modernization process. To that end, we are leveraging the learnings from early service engagements to develop new tools to accelerate future modernization efforts. Although it’s early days and scaling our legacy app modernization capabilities will take time, we have increased conviction that this motion will significantly add to our growth in the long term…
…We’re so excited about the opportunity to go after legacy applications because it seems like there’s a confluence of events happening. One is that the increasing cost and tax of supporting and managing these legacy apps are just going up. Second, for many customers who are in regulated industries, the regulators are calling the fact that they’re running on these legacy apps a systemic risk, so they can no longer kick the can down the road. Third, also because they can no longer kick the can down the road, some vendors are going end of life, so they have to make a decision to migrate those applications to a more modern tech stack. Fourth, because GenAI is so predicated on data, and to build a competitive advantage you need to leverage your proprietary data, people want to access that data and be able to do so easily. And so that’s another reason for them to want to modernize…
…we always could help them very easily move the data and map the schema from a relational schema to a document schema. The hardest part was essentially rewriting the application. Now with the advent of GenAI, you can now significantly reduce the time. One, you can use GenAI to analyze the existing code. Two, you can use GenAI to reverse engineer tests to test what the code does. And then three, you can use GenAI to build new code and then use this test to ensure that the new code produce the same results as the old code. And so all that time and effort is suddenly cut in a meaningful way…
…We’re really building out that capacity in order to meet the demand that we’re seeing relative to the opportunity. We’re calling it out in particular because it has a gross margin impact, as that’s where it will typically show up.
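The three-step, GenAI-assisted migration process described above (analyse the legacy code, reverse-engineer tests that capture its behaviour, then generate new code and verify it against those tests) is, at its core, behavioural-equivalence testing. A minimal sketch in Python, where the two implementations and the test inputs stand in for what the AI tooling would produce (all names and logic here are hypothetical, not MongoDB's actual tooling):

```python
# Hypothetical sketch of the equivalence check at the heart of an
# AI-assisted migration: generated tests assert that the rewritten
# code reproduces the legacy code's behaviour.

def legacy_order_total(items):
    # Stand-in for business logic recovered from the legacy application.
    total = 0.0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total *= 0.9  # bulk discount
    return round(total, 2)

def modern_order_total(items):
    # Stand-in for the AI-generated rewrite targeting the new stack.
    subtotal = sum(price * qty for price, qty in items)
    discount = 0.9 if subtotal > 100 else 1.0
    return round(subtotal * discount, 2)

# Reverse-engineered test inputs: the generated suite replays these
# against both versions and flags any divergence.
cases = [
    [],
    [(10.0, 2)],
    [(60.0, 1), (45.0, 1)],  # crosses the discount threshold
    [(0.99, 101)],
]

for case in cases:
    assert modern_order_total(case) == legacy_order_total(case), case
print("rewrite matches legacy behaviour on all cases")
```

The point of the sketch: once behaviour is pinned down by generated tests, the risky part of a rewrite (does the new code do what the old code did?) becomes mechanical to check, which is where the time and cost savings come from.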
MongoDB’s management thinks that the company’s database is uniquely suited for querying the rich and complex data structures commonly found in AI applications; AI-powered recommendation systems have to consider complex data structures, beyond a customer’s purchase history; MongoDB’s database unifies source data, metadata, operational data and vector data in one platform, providing a better developer experience; management thinks MongoDB is well-positioned for AI agents because AI agents that perform tasks need to interact with complex data structures, and MongoDB’s database is well-suited for this
MongoDB is uniquely equipped to query rich and complex data structures typical of AI applications. The ability of a database to query rich and complex data structures is crucial because AI applications often rely on highly detailed, interrelated and nuanced data to make accurate predictions and decisions. For example, a recommendation system doesn’t just analyze a single customer’s purchase but also considers their browsing history, peer group behavior and product categories, requiring a database that can query and analyze these complex data structures. In addition, MongoDB’s architecture unifies source data, metadata, operational data and vector data in one platform, obviating the need for multiple database systems and complex back-end architectures. This enables a more compelling developer experience than any other alternative…
…When you think about agents, there’s a job, there’s a project, and then there’s a task. Right now, the agents that are being rolled out are really focused on tasks, like, say, something from Sierra or some other companies that are rolling out agents. But you’re right, what they need to do is be able to deal with rich and complex data structures.
Now why is this important in AI? AI models don’t just look at isolated data points; they need to understand relationships, hierarchies and patterns within the data. They need to be able to essentially get real-time insights. For example, if you have a chatbot where a customer is trying to get an update on the order they placed 5 minutes ago because they may not have gotten any confirmation, your chatbot needs to be able to deal with real-time information. You need to be able to handle very advanced use cases: to do things like fraud detection, or to understand behaviors in a supply chain, you need to understand intricate data relationships. All these things are consistent with what MongoDB offers. And so we believe that, at the end of the day, we are well positioned to handle this.
And the other thing that I would say is that we’ve embedded, in a very natural way, search and vector search. So we’re not just an OLTP [online transaction processing] database. We do text search and vector search, and that’s all one experience, and no other platform offers that, and we think we have a real advantage.
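As an illustration of what “all one experience” means in practice: in MongoDB Atlas, vector retrieval is expressed as a stage in the same aggregation pipeline as ordinary operational queries. A sketch of such a pipeline, built as plain Python dictionaries so no cluster is needed to follow along (the index name, field names, and query vector are hypothetical):

```python
# Illustrative Atlas aggregation pipeline: a $vectorSearch stage followed
# by ordinary query operators, all in one pipeline. Built as plain dicts;
# the index name, field names, and vector values are hypothetical.

query_vector = [0.12, -0.07, 0.33, 0.91]  # would come from an embedding model

pipeline = [
    {
        "$vectorSearch": {
            "index": "product_vector_index",  # hypothetical vector index
            "path": "embedding",              # field holding stored vectors
            "queryVector": query_vector,
            "numCandidates": 200,             # ANN candidates to consider
            "limit": 10,                      # results to return
        }
    },
    # Downstream stages are ordinary MongoDB operators, so operational
    # filters and vector retrieval live side by side.
    {"$match": {"in_stock": True}},
    {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# With pymongo, this would run as: db.products.aggregate(pipeline)
print(f"pipeline has {len(pipeline)} stages")
```

This is the architectural contrast with bolting a separate search engine onto an OLTP database: there is no second system to keep in sync and no second query language to learn.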
In the AI market, MongoDB’s management is seeing most customers still being in the experimental stage, but the number of AI apps in production is increasing; MongoDB has thousands of AI apps on its platform, but only a small number have achieved enterprise-scale; there’s one AI app on MongoDB’s platform that has grown 10x since the start of 2024 and is a 7-figure workload today; management believes that as AI technology matures, there will be more AI apps that attain product-market fit but it’s difficult to predict when this will happen; management remains confident that MongoDB will capture its share of successful AI applications, as MongoDB is popular with developers building sophisticated AI apps; there are no compelling AI models for smartphones at the moment because phones do not have sufficient computing power
From what we see in the AI market today, most customers are still in the experimental stage as they work to understand the effectiveness of the underlying tech stack and build early proof-of-concept applications. However, we are seeing an increasing number of AI apps in production. Today, we have thousands of AI apps on our platform. What we don’t yet see is many of these apps actually achieving meaningful product-market fit and therefore, significant traction. In fact, as you take a step back and look at the entire universe of AI apps, a very small percentage of them have achieved the type of scale that we commonly see with enterprise-specific applications. We do have some AI apps that are growing quickly, including one that is already a 7-figure workload that has grown 10x since the beginning of the year.
Similar to prior platform shifts as the usefulness of AI tech improves and becomes more cost-effective we will see the emergence of many more AI apps that do nail product market fit, but it’s difficult to predict when that will happen more broadly. We remain confident that we will capture our fair share of these successful AI applications as we see that our platform is popular with developers building more sophisticated AI use cases…
…Today, we don’t have a very compelling model designed for our phones, right? Because today, the phones don’t have the computing horsepower to run complex models. So you don’t see a ton of very, very successful consumer apps besides, say, ChatGPT or Claude.
MongoDB’s management is building enterprise-grade Atlas Vector Search functionality into the company’s platform so that MongoDB will be in an even better position to win AI opportunities; management is bringing vector search into MongoDB’s community and EA (Enterprise Advance, which is the company’s non-Atlas business) offerings
We continue investing in our product capabilities, including enterprise-grade Atlas Vector Search functionality, to build on this momentum and even better position MongoDB to capture the AI opportunity. In addition, as previously announced, we are bringing search and vector search to our community and EA offerings, leveraging our run-anywhere competitive advantage in the world of AI…
…We are investing in what we call our EA business. First, we’re starting by investing in Search and Vector Search in our community product. That does a couple of things for us. One, whenever anyone starts with MongoDB with the open-source product, they get all the benefits of that complete and highly integrated platform. Two, those capabilities will then migrate to EA. So EA for us is an investment strategy.
MongoDB’s management is expanding the MongoDB AI Applications Program (MAAP); the MAAP has signed on new partners, including with Meta; management expects more of the MAAP workloads to happen on Atlas initially
We are expanding our MongoDB AI Applications program, or MAAP, which helps enterprise customers build and bring AI applications into production by providing them with reference architectures, integrations with leading tech providers and coordinated services and support. Last week, we announced a new cohort of partners, including McKinsey, Confluent, Capgemini and Unstructured, as well as a collaboration with Meta to enable developers to build AI-enriched applications on MongoDB using Llama…
…[Question] On the MAAP program, are most of those workloads going to wind up in Atlas? Or will that be a healthy combination of EA and Atlas?
[Answer] I think it’s, again, early days. I would say — I would probably say more on the side of Atlas than EA in the early days. I think once we introduce Search and Vector Search into the EA product, you’ll see more of that on-prem. Obviously, people can use MongoDB for AI workloads using other technologies as well in conjunction with MongoDB for on-prem AI use cases. But I would say you’re probably going to see that happen first in Atlas.
Tealbook consolidated from Postgres, pgvector, and Elasticsearch to MongoDB; Tealbook has seen cost efficiencies and increased scalability with Atlas Vector Search for its application that uses generative AI to collect, verify and enrich supplier data across various sources
Tealbook, a supplier intelligence platform, migrated from Postgres, pgvector and Elasticsearch to MongoDB to eliminate technical debt and consolidate their tech stack. The company experienced workload isolation and scalability issues with pgvector and were concerned with search index inconsistencies, which were all resolved with the migration to MongoDB. With Atlas Vector Search and dedicated Search Nodes, Tealbook has realized improved cost efficiency and increased scalability for its supplier data platform, an application that uses GenAI to collect, verify and enrich supplier data across various sources.
MongoDB’s partnerships with all 3 major cloud providers – AWS, Azure, and GCP – for AI workloads are going well; management expects the cloud providers to bundle their own AI-focused database offerings with their other AI offerings, but management also thinks the cloud providers realise that MongoDB has a better offering and it’s better to partner with the company
With AWS, as you said, they just had their re:Invent last week. It remains very, very strong. We closed a ton of deals this past quarter, some of them very, very large deals. We’re doing integrations to some of the new products like Q and Bedrock, and the engagement in the field has been really strong.
On Azure, as I’ve shared in the past, we started off with a little bit of a slower start. But in the words of the person who runs their partner leadership, the Azure-MongoDB relationship has never been stronger. We closed a large number of deals, we’re part of what’s called the Azure Native ISV Service program, and we have a bunch of deep integrations with Azure, including Fabric, Power BI, Visual Studio, Semantic Kernel and Azure OpenAI Studio. And we’re also one of Azure’s largest marketplace partners.
And on GCP, we’ve actually seen some uptick in terms of co-sales this past quarter. GCP made some comp changes that were favorable to working with MongoDB, and we saw some results in the field. We’re focused on closing a handful of large deals with GCP in Q4. So in general, I would say things are going quite well.
And then in terms of your question about the hyperscalers and whether they are potentially bundling things along with their AI offerings: candidly, since day 1, the hyperscalers have been bundling their database offerings with every offering that they have. That’s been their predominant strategy. And I think we’ve executed well against that strategy because databases are not a by-the-way decision. It’s an important decision. And I think the hyperscalers are seeing our performance and realize it’s better to partner with us. And as I said, customers understand the importance of the data layer, especially for AI applications. And so the partnership across all 3 hyperscalers is strong.
A new MongoDB AI-related capability called Atlas Search Nodes is seeing very high demand; Atlas Search is being used by one of the world’s largest banks to provide a Google-like Search experience on payments data for customers; an AI-powered accounting software provider is using Atlas Search to allow end-users to perform ad-hoc analysis
On search, we introduced a new capability called Atlas Search Nodes, where you can asymmetrically scale your search nodes: if you have a search-intensive use case, you don’t have to scale all your nodes, which can become quite expensive. And we’ve seen that this kind of groundbreaking capability has been really well received. The demand is quite high, because customers like that they can tune the configuration to the unique needs of their search requirements.
One of the world’s largest banks is using Atlas Search to provide a Google-like search experience on payments data for massive corporate customers. It’s a customer-facing application, so performance and scalability are critical. A leading provider of AI-powered accounting software uses Atlas Search to power its invoice analytics feature, which allows end users on finance teams to perform ad hoc analysis and easily find past-due invoices and invoices that contain errors.
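The invoice-analytics use case above maps naturally onto Atlas Search’s `$search` aggregation stage. Below is a hedged Python sketch of how such a pipeline could be assembled; the index name (`invoice_search`), field paths, and status values are invented for illustration and do not come from MongoDB’s call.

```python
# Hypothetical sketch of an Atlas Search query in the spirit of the
# invoice-analytics use case. The $search stage shape follows Atlas Search
# aggregation syntax; all names here are illustrative assumptions.
def build_invoice_search(query_text: str, only_past_due: bool = False) -> list:
    """Build an aggregation pipeline that full-text searches invoices."""
    search_stage = {
        "$search": {
            "index": "invoice_search",        # assumed index name
            "text": {
                "query": query_text,
                "path": ["vendor", "line_items.description"],
                "fuzzy": {"maxEdits": 1},     # tolerate typos in ad hoc queries
            },
        }
    }
    pipeline = [search_stage]
    if only_past_due:
        # Narrow results to past-due invoices after the search stage
        pipeline.append({"$match": {"status": "past_due"}})
    # Surface the relevance score so finance users can rank results
    pipeline.append({"$addFields": {"score": {"$meta": "searchScore"}}})
    return pipeline
```

In a real deployment the pipeline would be passed to `collection.aggregate(...)`, and the search work itself would run on the dedicated Search Nodes described above, scaled independently of the rest of the cluster.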
Vector Search is only in its first full year of being generally available; uptake of Vector Search has been very high; MongoDB released a feature on Atlas Vector Search in 2024 Q3 that reduces memory requirements by up to 96% and this helps Atlas Vector Search support larger vector workloads at a better price-performance ratio; a multinational news organisation used Vector Search to create a generative AI tool to help producers and journalists sift through vast quantities of information; a security firm is using Vector Search for AI fraud detection; a global media company replaced Elasticsearch with Vector Search for a user-recommendation engine
On Vector Search, it’s been our first full year since going generally available, and product uptake has been very, very high. In Q3, we released quantization for Atlas Vector Search, which reduces memory requirements by up to 96%, allowing us to support larger vector workloads with vastly improved price performance.
For example, a multinational news organization created a GenAI-powered tool designed to help producers and journalists efficiently search, summarize, and verify information from vast and varied data sources. A leading security firm is using Atlas Vector Search for AI fraud detection, and a leading global media company replaced Elasticsearch with a hybrid search and vector search use case for a user recommendation engine built to suggest articles to end users.
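The quantization feature mentioned in the quote is configured at the index level in Atlas Vector Search. Here is a minimal sketch, assuming illustrative field names, index names, and dimensions, of what an index definition with scalar quantization and a matching `$vectorSearch` recommendation query can look like; none of these specifics come from the transcript.

```python
# Hedged sketch of Atlas Vector Search configuration. Enabling quantization
# compresses stored vectors, which is the mechanism behind the memory
# reduction discussed above. All names and numbers are assumptions.
def vector_index_definition(dims: int = 1536) -> dict:
    """Vector index definition with scalar quantization enabled."""
    return {
        "fields": [{
            "type": "vector",
            "path": "embedding",          # assumed document field
            "numDimensions": dims,
            "similarity": "cosine",
            "quantization": "scalar",     # compress vectors to cut memory use
        }]
    }

def article_recommendation_query(query_vector: list, limit: int = 10) -> dict:
    """$vectorSearch stage for a recommendation-style nearest-neighbour lookup."""
    return {
        "$vectorSearch": {
            "index": "article_vectors",   # assumed index name
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": limit * 15,  # oversample candidates for recall
            "limit": limit,
        }
    }
```

A media-recommendation workload like the one quoted would embed each article once, store the vectors in the indexed field, and run the `$vectorSearch` stage per user request.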
MongoDB’s management thinks the industry is still in the very early days of shifting towards AI applications
I do think we’re in the very, very early days. Customers are still learning and experimenting… I think as people get more sophisticated with AI, and as the AI technology matures and becomes more and more useful, you’ll start seeing these applications take off. I kind of chuckle that today, I see more senior leaders bragging about the chips they are using than the apps they are building. So it just tells you that we’re still in the very, very early days of this big platform shift.
Nvidia (NASDAQ: NVDA)
Nvidia’s Data Center revenue again had incredibly strong growth in 2024 Q3, driven by demand for the Hopper GPU computing platform; Nvidia’s H200 sales achieved the fastest ramp in the company’s history
Another record was achieved in Data Center. Revenue of $30.8 billion, up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company’s history.
Nvidia’s H200 product has 2x faster inference speed, and 50% lower total cost of ownership (TCO)
The H200 delivers up to 2x faster inference performance and up to 50% improved TCO.
Cloud service providers (CSPs) were half of Nvidia’s Data Centre revenue in 2024 Q3, and up more than 2x year-on-year; CSPs are installing tens of thousands of GPUs to meet rising demand for AI training and inference; Nvidia Cloud Instances with H200s are now available, or soon-to-be-available, in the major CSPs
Cloud service providers were approximately half of our Data Center sales with revenue increasing more than 2x year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon.
North America, India, and Asia Pacific regions are ramping up Nvidia Cloud Instances and sovereign clouds; management is seeing an increase in momentum of sovereign AI initiatives; India’s CSPs are building data centers containing tens of thousands of GPUs and increasing GPU deployments by 10x in 2024 compared to a year ago; Softbank is building Japan’s most powerful AI supercomputer with Nvidia’s hardware
Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs…
…Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India’s leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x…
…In Japan, SoftBank is building the nation’s most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform its telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platforms that can process both 5G RAN and AI on CUDA.
Nvidia’s revenue from consumer internet companies more than doubled year-on-year in 2024 Q3
Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads.
Nvidia’s management sees Nvidia as the largest inference platform in the world; Nvidia’s management is seeing inference really starting to scale up for the company; models that are trained on previous generations of Nvidia chips inference really well on those chips; management thinks that as Blackwell proliferates in the AI industry, it will leave behind a large installed base of infrastructure for inference; management’s dream is that plenty of AI inference happens across the world; management thinks that inference is hard because it needs high accuracy, high throughput, and low latency
NVIDIA’s Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements…
…We’re seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind it a large installed base of extraordinary infrastructure for inference. And so we’re seeing inference demand go up…
… Our hopes and dreams are that someday, the world does a ton of inference. That’s when AI will have really succeeded: when every single company is doing inference inside their companies, for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding of course. And so we hope that every company is doing inference 24/7…
…Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost can be as low as possible, but you also need the latency to be low. And computers that are high-throughput as well as low-latency are incredibly hard to build.
Nvidia’s management has driven a 5x improvement in Hopper inference throughput in 1 year via advancements in the company’s software; Hopper’s inference performance is set to increase by a further 2.4x shortly because of NIM (Nvidia Inference Microservices)
Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in 1 year and cut time to first token by 5x. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x.
Nvidia’s Blackwell family of chips is now in full production; Nvidia shipped 13,000 Blackwell samples to customers in 2024 Q3; the Blackwell family comes with a wide variety of customisable configurations; management sees all Nvidia customers wanting to be first to market with the Blackwell family; management sees staggering demand for Blackwell, with Oracle announcing the world’s first zetta-scale cluster with more than 131,000 Blackwell GPUs, and Microsoft being the first CSP to offer private-preview Blackwell instances; Blackwell is dominating GPU benchmarks; Blackwell performs 2.2x better than Hopper and is also 4x cheaper; Blackwell with NVLink Switch delivered up to a 30x improvement in inference speed; Nvidia’s management expects the company’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; Blackwell’s production is in full-steam ahead and Nvidia will deliver more Blackwells in 2024 Q4 than expected; demand for Blackwell exceeds supply
Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full stack, full infrastructure, AI data center scale system with customizable configurations needed to address a diverse and growing AI market from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches, and NVLink and from liquid cooled to air cooled.
Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world’s first zetta-scale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.
Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, a 4x reduction in cost. NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling, throughput and response time that is excellent for running new reasoning inference applications like OpenAI’s o1 model…
…As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s…
… Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated…
…It is the case that demand exceeds our supply. And that’s expected as we’re in the beginnings of this generative AI revolution as we all know…
…In terms of how much Blackwell total systems will ship this quarter, which is measured in billions, the ramp is incredible…
…[Question] Do you think it’s a fair assumption to think NVIDIA could recover to kind of mid-70s gross margin in the back half of calendar ’25?
[Answer] Yes, I think it is a reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.
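The MLPerf claim quoted above is worth unpacking: the "4x reduction in cost" refers to GPU count as a cost proxy, since the same GPT-3 benchmark run needs 64 Blackwells where it previously needed 256 H100s. A quick back-of-envelope check:

```python
# Back-of-envelope check of the benchmark claim quoted above.
# GPU count is used as a rough proxy for cost, as in the call.
h100_gpus = 256       # H100s needed for the GPT-3 benchmark (per the call)
blackwell_gpus = 64   # Blackwells needed for the same benchmark
gpu_reduction = h100_gpus / blackwell_gpus  # the "4x reduction" cited
```

Note that this 4x figure already bakes in the 2.2x per-GPU performance leap plus system-level gains (NVLink Switch, networking), which is why it exceeds the per-GPU number alone.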
Nvidia’s management is seeing that hundreds of AI-native companies are already delivering AI services and thousands of AI-native startups are building new services
Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor and Abridge are seeing great success while thousands of AI-native start-ups are building new services.
Nvidia’s management is seeing large enterprises build copilots and AI agents with Nvidia AI; management sees the potential for billions of AI agents being deployed in the years ahead; Accenture has an internal AI agent use case that reduces steps in marketing campaigns by 25%-35%
Industry leaders are using NVIDIA AI to build Copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years…
… Accenture, with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including one case that cuts manual steps in marketing campaigns by 25% to 35%.
Nearly 1,000 companies are using NIM (Nvidia Inference Microservices); management expects the Nvidia AI Enterprise platform’s revenue in 2024 to be double that from 2023; Nvidia’s software, service, and support revenue now has an annualised revenue run rate of $1.5 billion and management expects the run rate to end 2024 at more than $2 billion
Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI enterprise monetization. We expect NVIDIA AI enterprise full year revenue to increase over 2x from last year and our pipeline continues to build. Overall, our software, service and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.
Nvidia’s management is seeing an acceleration in industrial AI and robotics; Foxconn is using Nvidia Omniverse to improve the performance of its factories, and Foxconn’s management expects a reduction of over 30% in annual kilowatt hour usage in Foxconn’s Mexico facility
Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world, like NVIDIA NeMo for enterprise AI agents. We built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics…
…Foxconn, the world’s largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.
Nvidia saw sequential growth in Data Center revenue in China because of export of compliant Hopper products; management expects the Chinese market to be very competitive
Our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries…
…We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.
Nvidia’s networking revenue declined sequentially, but there was sequential growth in Infiniband and Ethernet switches, Smart NICs (network interface controllers), and BlueField DPUs; management expects sequential growth in networking revenue in 2024 Q4; management is seeing CSPs adopting Infiniband for Hopper clusters; Nvidia’s Spectrum-X Ethernet for AI revenue was up 3x year-on-year in 2024 Q3; xAI used Spectrum-X for its 100,000 Hopper GPU cluster and achieved zero application latency degradation and maintained 95% data throughput, compared to 60% for Ethernet
Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters.
NVIDIA Spectrum-X Ethernet for AI revenue increased over 3x year-on-year. And our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI’s Colossus 100,000 Hopper supercomputer experienced 0 application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet…
…Our ability to sell our networking with many of the systems that we are delivering in data center continues to grow and do quite well. So this quarter is just a slight dip down, and we’re going to be right back up in terms of growing. We’re getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems we are providing to them.
Nvidia has begun shipping new GeForce RTX AI PCs
We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI with Microsoft’s Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo, and video editing, image generation and coding.
Nvidia’s Automotive revenue had strong growth year-on-year and sequentially in 2024 Q3, driven by self-driving brands of Nvidia Orin; Volvo’s electric SUV will be powered by Nvidia Orin
Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving brands of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.
Nvidia’s management thinks pre-training scaling of foundation AI models is intact, but it’s not enough; another way of scaling has emerged, which is inference-time scaling; management thinks that the new ways of scaling has resulted in great demand for Nvidia’s chips, but for now, most of Nvidia’s chips are used in pre-training
Foundation model pretraining scaling is intact and it’s continuing. As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough, that we’ve now discovered 2 other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning human feedback, but now we have reinforcement learning AI feedback and all forms of synthetic data generated data that assists in post-training scaling. And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI’s o1, which does inference time scaling, what’s called test time scaling. The longer it thinks, the better and higher-quality answer it produces. And it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect and so on and so forth…
… we now have 3 ways of scaling and we’re seeing all 3 ways of scaling. And as a result of that, the demand for our infrastructure is really great. You see now that at the tail end of the last generation of foundation models were at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pretraining scaling, post-training scaling, and then now very importantly, inference time scaling…
…[Question] Today, how much of the compute goes into each of these buckets? How much for the pretraining? How much for the reinforcement? And how much into inference today?
[Answer] Today, it’s vastly in pretraining a foundation model because, as you know, with post-training, the new technologies are just coming online. And whatever you could do in pretraining and post-training, you would try to do, so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori. And so you’ll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all 3 are scaling is actually very sensible based on where we are. And in the area of foundation models, we now have multimodal foundation models, and the petabytes of video that these foundation models are going to be trained on, it’s incredible. And so my expectation is that for the foreseeable future, we’re going to be scaling pretraining, post-training as well as inference time scaling, which is the reason why I think we’re going to need more and more compute.
Nvidia’s management thinks the company generates the greatest possible revenue for its customers because its products has much better performance per watt
Most data centers are now 100 megawatts to several hundred megawatts, and we’re planning on gigawatt data centers, it doesn’t really matter how large the data centers are. The power is limited. And when you’re in the power-limited data center, the best — the highest performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual road map reduces cost. But on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues.
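The power-limited argument above is simple arithmetic: if a data center’s power budget is fixed, total throughput (and hence revenue potential) scales directly with performance per watt. A tiny illustrative sketch, with all numbers invented for illustration rather than taken from Nvidia:

```python
# Illustrative sketch of the power-limited data center argument above.
# Revenue potential is proportional to throughput, and throughput under a
# fixed power budget is proportional to performance per watt.
def annual_tokens(power_budget_mw: float, tokens_per_sec_per_watt: float) -> float:
    """Total annual output for a data center capped at a fixed power budget."""
    watts = power_budget_mw * 1_000_000
    seconds_per_year = 365 * 24 * 3600
    return watts * tokens_per_sec_per_watt * seconds_per_year

# Under the same 100 MW budget, doubling perf/watt doubles output,
# regardless of how many or how few chips are used to get there.
base = annual_tokens(100, 1.0)
better = annual_tokens(100, 2.0)
```

This is why the call frames perf/watt, rather than perf per chip or per dollar, as the metric that "translates directly into the highest revenues" once power becomes the binding constraint.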
Nvidia’s management sees Hopper demand continuing through 2025
Hopper demand will continue through next year, surely the first several quarters of the next year.
Nvidia’s management sees 2 fundamental shifts in computing happening today: (1) the movement from code that runs on CPUs to neural networks that run on GPUs and (2) the production of AI from data centres; the fundamental shifts will drive a $1 trillion modernisation of data centres globally
We are really at the beginnings of 2 fundamental shifts in computing that are really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that’s happening is $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.
On the other hand, secondarily, on top of these systems we’re going to be creating a new type of capability called AI. And when we say generative AI, we’re essentially saying that these data centers are really AI factories. They’re generating something. Just like we generate electricity, we’re now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we’re going to see this new type of system come online, and I call it an AI factory because that’s really as close a description as there is. It’s unlike a data center of the past.
Nvidia’s management does not see any digestion happening for GPUs until the world’s data centre infrastructure is modernised
[Question] My main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase? Or is it just too premature to discuss that because you’re just at the start of Blackwell?
[Answer] I believe that there will be no digestion until we modernize $1 trillion with the data centers.
Okta (NASDAQ: OKTA)
Okta AI is really starting to help newer Okta products
Second thing is that we have Okta AI, which we talked a lot about a couple of years ago, and we continue to work on that. And it’s really starting to help these new products like Identity Threat Protection with Okta AI. The model inside of Identity Threat Protection, and how that works, AI is a big part of that product’s functionality.
Okta’s management sees the need for authentication for AI agents and has a product called Auth for Gen AI; management thinks authentication of AI agents could be a new area of growth for Okta; management sees the pricing for Auth for Gen AI as driven by a fee per monthly active machine
Some really interesting new areas are we have something we talked about at Oktane called Auth for Gen AI, which is basically an authentication platform for agents. Everyone is very excited about agents, as they should be. I mean, we used to call them bots, right? 4, 5 years ago, they were called bots. Now they’re called agents, like what’s the big deal? How different is it? Well, you can interact with them in natural language and they can do a lot more with these models. So now it’s like bots are real in real time. But the problem is all of these bots and all of these platforms to build bots, they have the equivalent of sticky notes on the monitor with passwords on them, inside the bot. There’s no protocol for single sign-on for bots. They have stored passwords in the bot. And if that bot gets hacked, guess what? You signed up for that bot and it has access to your calendar and has access to your travel booking and it has access to your company e-mail and your company data. That’s gone, because the hacker is going to get all those passwords out of there. So Auth for Gen AI automates that and makes sure you can have a secure protocol to build a bot around. And so that’s a really interesting area. It’s very new. We just announced it, and all these agent frameworks and so forth are new…
… Auth for GenAI, it’s basically, think about it as a form of machine authentication. We have a feature today called machine-to-machine, which does a similar thing, and you pay basically by the monthly active machine.
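The "stored passwords inside the bot" problem Okta’s management describes is conventionally solved with the OAuth 2.0 client-credentials grant: the agent holds its own scoped credentials and exchanges them for short-lived access tokens rather than keeping a user’s password. Below is a hedged sketch of that standard pattern; the endpoint shape, scopes, and client IDs are invented placeholders, not Okta’s actual Auth for Gen AI API.

```python
# Hedged sketch of machine-to-machine authentication for an agent, using the
# standard OAuth 2.0 client-credentials grant (RFC 6749). All identifiers
# below are hypothetical; this is not Okta's Auth for Gen AI implementation.
def client_credentials_request(client_id: str, client_secret: str,
                               scope: str) -> dict:
    """Build the form body for a client-credentials token request."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,          # the agent's own identity
        "client_secret": client_secret,  # ideally pulled from a secrets vault
        "scope": scope,  # limit the agent to the narrowest scope it needs
    }

# An agent requesting read-only calendar access, instead of storing the
# user's password: the resulting token is short-lived and revocable.
body = client_credentials_request("agent-123", "s3cret", "calendar.read")
```

The per-monthly-active-machine pricing management mentions would then meter distinct `client_id` values seen in a month, analogous to monthly active users on the human side.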
Salesforce (NYSE: CRM)
Salesforce’s management thinks Salesforce is at the edge of the rise of digital labour, which are autonomous AI agents; management thinks the TAM (total addressable market) for digital labour is much larger than the data management market that Salesforce was previously in; management thinks Salesforce is the largest supplier of digital labour right from the get-go; Salesforce’s AgentForce service went into production in 2024 Q3 and Salesforce has already delivered 200 AgentForce deals with more to come; management has never seen anything like AgentForce; management sees AgentForce as the next evolution of Salesforce; management thinks AgentForce will help companies scale productivity independent of workforce growth; management sees AgentForce AI agents manifesting as robots that will supplement human labour; management sees AgentForce, together with robots, as a driving force for future global economic growth even with a stagnant labour force; AgentForce is already delivering tangible value to customers; Salesforce’s customers recently built 10,000 AI agents with AgentForce in 3 days, and thousands more AI agents have been built since then; large enterprises across various industries are building AI agents with AgentForce; management sees AgentForce unlocking a whole new level of operational efficiency; management will be delivering AgentForce 2.0 in December this year
We’re really at the edge of a revolutionary transformation. This is really the rise of digital labor. Now, for the last 25 years at Salesforce, we’ve been helping companies to manage and share their information…
…But now we’ve really created a whole new market, a new TAM, a TAM that is so much bigger and so much more exciting than the data management market that it’s hard to get our head completely around. This is the market for digital labor. And Salesforce has become, right out of the gate here, the largest supplier of digital labor and this is just the beginning. And it’s all powered by these autonomous AI agents…
With Salesforce Agentforce, we’re not just imagining this future. We’re already delivering it. And you know that in the last week of the quarter, Agentforce went into production. We delivered 200 deals, and our pipeline is incredible for future transactions. We can talk about that with you on the call, but we’ve never seen anything like it. We don’t know how to characterize it. This is really a moment where productivity is no longer tied to workforce growth, but to this intelligent technology that can be scaled without limits. And Agentforce represents this next evolution of Salesforce: Salesforce as a platform where AI agents work alongside humans in a digital workforce that amplifies and augments human capabilities and delivers with unrivaled speed…
…On top of the agentic layer, we’ll soon see a robotic layer as well, where these agents will manifest into robots…
…These agents are not tools. They are becoming collaborators. They’re working 24/7 to analyze data, make decisions, take action, and we can all start to picture this enterprise managing millions of customer interactions daily with Agentforce seamlessly resolving issues, processing transactions, anticipating customer needs, freeing up humans to focus on strategic initiatives and building meaningful relationships. And this is going to evolve into customers that we have, whether it’s a large hospital or a large hotel, where not only are the agents working 24/7, but robots are also working side-by-side with humans, robots as manifestations of agents. This is all happening before our eyes, and it isn’t some far-off future. It’s happening right now…
…For decades, economic growth depended on expanding the human workforce. It was all about getting more labor. But with the labor force stagnating globally, Agentforce is unlocking a new path forward. It’s a new level of growth for the world and for our GDP, and businesses no longer need to choose between scale and efficiency; with agents, they can achieve both…
…Our customers are already experiencing this transformation. Agentforce is deflecting service cases and resolving issues, processing and qualifying leads, helping close more deals, and creating and optimizing marketing campaigns, all at an unprecedented scale, 24/7…
…What was remarkable was the huge thirst that our customers had for this and how they built more than 10,000 agents in 3 days. And I think you know that we then unleashed a world tour of that program, and we have now built thousands and thousands of more agents in these world tours all over the world…
…So companies like FedEx, [indiscernible], Accenture, Ace Hardware, IBM, RBC Wealth Management and many more are now building their digital labor forces on the Salesforce platform with Agentforce. The largest and most important companies in the world, across all geographies and all industries, are now building and delivering agents…
…While these legacy chatbots have handled these basic tasks like password resets and other basic mundane things, Agentforce is really unlocking an entirely new level of digital intelligence and operational efficiency at this incredible scale…
…I want to invite all of you to join us for the launch of Agentforce 2.0. It is incredible what you are going to see; the advancements in the technology, in accuracy, and in the ability to deliver additional value are already amazing. And we hope that you’re going to join us in San Francisco. This is going to happen on December 17, when you’ll see Agentforce 2.0 for the first time.
Salesforce is customer-zero for Agentforce and the service is live on Salesforce’s help website; Agentforce is handling 60 million sessions and 2 million support cases annually on the help website; the introduction of Agentforce on the help website has allowed management to rebalance headcount into growth areas; users of Salesforce’s help website will experience very high levels of accuracy because Agentforce is grounded in the huge repository of internal and customer data that Salesforce has; management sees Salesforce’s data as a huge competitive advantage for Agentforce; Agentforce can today quickly deliver personalised insights to users of the help website and hand off users to support engineers for further help; management thinks Agentforce will deflect between a quarter and half of annual case volume; Salesforce is also using Agentforce internally to engage prospects and hand off prospects to its SDR (sales development representative) team
We pride ourselves on being customer zero for all of our products, and Agentforce is no exception. We’re excited to share that Agentforce is now live on help.salesforce.com…
… Our help portal, help.salesforce.com, is now live. This portal is our primary support mechanism for our customers. It lets them authenticate in, and it then becomes grounded with the agent. That Help portal is already handling 60 million sessions and more than 2 million support cases every year. Now that is 100% on Agentforce…
…From a human resource point of view, we can really start to look at how we are going to rebalance our headcount out of areas that are now fully automated and into areas that are critical for us to grow, like distribution…
…Now when you use help.salesforce.com, especially as authenticated users, as I mentioned, you’re going to see this incredible level of accuracy and responsiveness, and you’re going to see remarkably low hallucinogenic performance, whether for solving simple queries or navigating complex service issues, because Agentforce is not just grounded in our Salesforce data and metadata, including the repository of 740,000 documents in 17 languages; it’s also grounded in each customer’s data, their purchases, their returns. It’s the 200 to 300 petabytes of Salesforce data that we have that gives us this kind of, I would say, almost unfair advantage with Agentforce, because our agents are going to be more accurate and the least hallucinogenic of any, because they have access to this incredible capability. And Agentforce can instantly reason over these vast amounts of data, deliver precise, personalized [indiscernible] with citations in seconds, and Agentforce can seamlessly hand off to support engineers, delivering them a complete summary and recommendation as well. And you can all try this today. This isn’t some fantasy-land future idea; this is today’s reality…
…We expect that our own transformation with Agentforce on help.salesforce.com, and in many other areas of our company, is going to deflect between a quarter and half of annual case volume, and in optimistic cases, probably much, much more than that…
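Putting the two figures from the call together (the help portal handles more than 2 million support cases a year, and management expects a quarter to half of annual case volume to be deflected) gives a rough sense of scale. The back-of-envelope arithmetic below is my own illustration, not a figure Salesforce provided:

```python
# Implied case deflection on help.salesforce.com, using figures cited on the call:
# ~2 million support cases per year, with 25%-50% expected to be deflected.
annual_cases = 2_000_000
deflection_low, deflection_high = 0.25, 0.50

deflected_low = int(annual_cases * deflection_low)    # cases handled without a human
deflected_high = int(annual_cases * deflection_high)

print(f"{deflected_low:,} to {deflected_high:,} cases deflected per year")
# → 500,000 to 1,000,000 cases deflected per year
```

At that scale, even the low end of the range represents half a million support cases a year resolved without a support engineer, which is the basis for management's comments about rebalancing headcount.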
…We’re also deploying Agentforce to engage our prospects on salesforce.com, answering their questions 24/7 as well as handing them off to our SDR team. You can see it for yourself and test it out on our home page. We’ll use our new Agentforce SDR agent to further automate top-of-funnel activities: gathering leads and lead data, providing education, qualifying prospects, and booking meetings.
Salesforce’s management thinks Agentforce is much better than Microsoft’s AI Copilots
I just want to compare and contrast that against other companies who say they are doing enterprise AI. You can look at even Microsoft. We all know about Copilot; it’s been out, it’s been touted now for a couple of years. We’ve heard about Copilot. We’ve seen the demo. In many ways, it’s just repackaged ChatGPT. You can really see the difference where Salesforce now can operate its company on our platform. And I don’t think you’re going to find that on Microsoft’s website, are you?
Vivint is using Agentforce for customer support and for technician scheduling, payment requests, and more; Adecco is using Agentforce to improve the handling of job applicants (Adecco receives 300 million job applications annually); Wiley is resolving cases 40% faster with Agentforce; Heathrow Airport is using Agentforce to respond to thousands of travelers instantly, accurately, and simultaneously; SharkNinja is using Agentforce for personalised 24/7 customer support in 28 geographies; Accenture is using Agentforce to improve deal quality and boost bid coverage by 75%
One of them is the smart home security provider, Vivint. They’ve struggled with a high volume of support calls and a high churn rate for service reps. It’s a common story. But now, using Agentforce, Vivint has created a digital support staff to autonomously provide support through their app and their website, troubleshooting a broad variety of issues across all their customer touch points. In addition, Vivint is planning to utilize Agentforce to further automate technician scheduling, payment requests, proactive issue resolution, and the use of device telemetry, because Agentforce works across the entire Salesforce product line, including Slack…
…Another great customer example is Adecco, the world’s leading provider of talent solutions, handling 300 million job applications annually. The work they’ve already done to get this running in their company is incredible. Historically, they have just not been able to go through or respond in a timely way to the vast majority of applications they’re getting. But now Agentforce is going to operate at incredible scale, sorting through millions of resumes 24/7, matching candidates to opportunities, and proactively prequalifying them for recruiters. And in addition, Agentforce can also assess candidates, helping them to refine their resumes and giving them a better chance of qualifying for a role…
…Wiley, an early adopter, is resolving cases over 40% faster with Agentforce than with their previous chatbot. Heathrow Airport, one of the busiest airports in the world, will be able to respond to thousands of travelers’ inquiries instantly, accurately and simultaneously. SharkNinja, a new logo in the quarter, chose Agentforce and Commerce Cloud to deliver 24/7 personalized support for customers across 28 international markets while unifying its service operations…
…Accenture chose Agentforce to streamline sales operations and enhance bid management for its 52,000 global sellers. By integrating sales coach and custom AI agents, Agentforce is improving deal quality and targeting a 75% boost in bid coverage.
College Possible is using Agentforce to build virtual college counsellors as there’s a shortage of labour (for example, California has just 1 counsellor for every 500 students); College Possible built its virtual counsellors with Agentforce in under a week – basically like flipping a switch – because it has been accumulating all its data in Salesforce for years
Another powerful example is a nonprofit, College Possible. College Possible matches eligible students with counselors to help them navigate and become ready for college. And in California, for example, the statewide average stands at slightly over 1 counselor for every 500 students. It just isn’t enough. Where are we going to get all that labor…
…We’re going to get it from Agentforce. This means the vast majority of students are not getting the help they need, and now they are going to get the help they need.
College Possible created a virtual counselor built on Agentforce in under a week. They already had all the data. They had the metadata; they already knew the students. They already had all of the capabilities built into their whole Salesforce application. It was just a flip of a switch…
… But why? It’s because of all the work and the data and the capability that College Possible has put into Salesforce over the years. It’s not just the week that it took to turn it on. They have done a lot of work.
Salesforce’s management’s initiative to have all of the company’s apps rewritten onto a single core platform is called More Core; the More Core initiative also involves Salesforce’s Data Cloud, which is important for AI to work; Salesforce is now layering the AI agent layer on top of More Core, and management sees this combination as a complete AI system for enterprises that also differentiates Salesforce’s Agentforce product
Over the last few years, we’ve really aggressively invested in integrating all of our apps on a single core platform with shared services for security, workflow, user interfaces and more. We’ve been rewriting all of our acquisitions onto that common core. We’re really looking at how we take all of our applications and all of our acquisitions, everything, and deliver it as one consistent platform; we call that More Core internally inside Salesforce. And when you look at that More Core initiative, I don’t think there’s anyone who delivers this comprehensive a platform: sales, service, marketing, commerce, analytics, Slack, all of it as one piece of code. And now deeply integrated in that one piece of code is also our Data Cloud. That is a key part of our strategy, which continues to have phenomenal momentum as well, helping customers unify and federate, with zero-copy data access across all their data and metadata, which is crucial for AI to work.
And now that third layer is really opening up for us, which is this agentic layer. We have built this agentic layer that takes advantage of all the investments our customers have made in Salesforce and in our platform. It’s really these three layers, and it’s these three layers that form a complete AI system for enterprises and really uniquely differentiate Salesforce, uniquely differentiate Agentforce, from every other AI platform: this is one piece of code. This isn’t like three systems. It’s not a bunch of different apps all running independently. This is all one piece of code. That’s why it works so well, by the way, because it is one platform.
Salesforce’s management thinks jobs and roles within Salesforce will change because of AI, especially AI agents
The transformation is not without challenges. Jobs are going to evolve, roles are going to shift and businesses will need to adapt. And listen, at Salesforce, jobs are going to evolve, roles will shift and our business will need to adapt as well. We’re all going to need to rebalance our workforce as agents take on more of the work.
Salesforce’s management is hearing that a large customer of Salesforce is targeting 25% more efficiency with AI
This morning, I was on the phone with one of our large customers, and they were telling me how they’re targeting inside their company, 25% more efficiency with artificial intelligence.
Salesforce signed more than 2,000 AI deals in 2024 Q3 (FY2025 Q3), and the number of AI deals over $1 million more than tripled year-on-year; 75% of Salesforce’s Agentforce deals, and 9 of Salesforce’s top 10 deals, in 2024 Q3 involved Salesforce’s global partners; more than 80,000 system integrators have completed Agentforce training; hundreds of ISVs (independent software vendors) and partners are building and selling AI agents; Salesforce has a new Agentforce partner network that allows customers to deploy customised AI agents using trusted 3rd-party extensions from Salesforce App Exchange; Salesforce’s partnership with AWS Marketplace is progressing well as transactions doubled sequentially in 2024 Q3, with 10 deals exceeding $1 million
In Q3, the number of wins greater than $1 million with AI more than tripled year-over-year, and we signed more than 2,000 AI deals, including the more than 200 Agentforce wins that Marc shared…
…We’re also seeing amazing Agentforce energy across the ecosystem with our global partners involved in 75% of our Q3 Agentforce deals and 9 of our top 10 wins in the quarter. Over 80,000 system integrators have completed Agentforce training and hundreds of ISVs and technology partners are building and selling agents…
… We continue to unlock customer spend through new channels, including the Agentforce partner network that launched at Dreamforce, which allows customers to customize and deploy specialized agents using trusted third-party extensions from Salesforce App Exchange. And AWS Marketplace continues to be a growth driver. Our Q3 transactions doubled quarter-over-quarter with 10 deals exceeding $1 million.
Veeva Systems (NYSE: VEEV)
Veeva Vault CRM has a number of new innovations coming, including two AI capabilities that will be available in late-2025 at no additional charge; one of the AI capabilities leverages Apple Intelligence; Vault CRM’s CRM Bot AI application will see Vault CRM hooked into customers’ own large language models, and Veeva will not incur compute costs
We just had our European Commercial Summit in Madrid where we announced a number of new innovations coming in Vault CRM, including two new AI capabilities – CRM Bot and Voice Control. CRM Bot is a GenAI assistant in Vault CRM. Voice Control is a voice interface for Vault CRM, leveraging Apple Intelligence. Both are included in Vault CRM for no additional charge and are planned for availability in late 2025…
…For the CRM Bot, that’s where we will hook our CRM system into the customers’ own large language model that they’re running. And that’s where we will not charge for, and we will not incur compute cost…
Veeva has a new AI application, MLR Bot, for Vault PromoMats within Commercial Cloud; MLR Bot helps perform checks on content with a Veeva-hosted large language model (LLM); MLR Bot will be available in late-2025 and will be charged separately; management has been thinking about MLR Bot for some time; management is seeing a lot of excitement over MLR Bot; management is still working through the details of the monetisation of MLR Bot; MLR Bot’s LLM will be from one of the big tech providers but it will be Veeva who’s the one paying for the compute
We also announced MLR Bot, an AI application in Vault PromoMats to perform content quality and compliance checks with a Veeva-hosted large language model. Planned for availability in late 2025, MLR Bot will require a separate license…
… So I was at our Europe Summit event where we announced MLR Bot, something we’ve been thinking about and evaluating for some time…
…So there’s a lot of excitement. This is a really core process for life sciences companies. So a lot of excitement there…
…In terms of sizing and the monetization, we’re still working through the details on that, but there’s a ton of excitement from our existing customers. We look forward to getting some early customers started on that as we go into next year…
…MLR Bot, we will charge for, and that’s where we will host and run a large language model. Not our own large language model, right? We’ll use one from the big tech providers, but we will be paying for the compute power for that, and so we’ll be charging for that.
CRM Bot, Voice Control, and MLR Bot are part of Veeva’s management’s overall AI strategy to provide AI applications with tangible value; another part of the AI strategy involves opening up data for customers to power all forms of AI; management’s current thinking is to charge for AI applications if Veeva is responsible for paying compute costs
These innovations are part of our overall AI strategy to deliver specific AI applications that provide tangible value and enable customers and partners with the AI Partner Program, as well as the Vault Direct Data API, for the data needed to power all forms of AI…
… So where we have to use significant compute power, we will most likely charge. And where we don’t, we most likely won’t.
Wix (NASDAQ: WIX)
More than 50% of new Wix users are using the company’s AI-powered onboarding process which was launched nearly a year ago; users who onboard using Wix’s AI process are 50% more likely to start selling on Wix and are more likely to become paid subscribers; the AI-powered onboarding process is leading to a 13% uplift in conversion rate for the most recent Self-Creator cohort; the AI website builder is free but it helps with conversions to paid subscribers
Almost one year ago, we launched our AI website builder, which is now available in 20 languages and has been a game changer in our user onboarding strategy. Today, more than 50% of new users are choosing to create their online presence through our AI-powered onboarding process. The tool is resonating particularly well with small businesses and entrepreneurs, as paid subscriptions originating from this AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix, by streamlining the website building process while offering a powerful and tailored commerce-enablement solution…
…Our most recent Self Creator cohort showed a 13% uplift in conversion rate from our AI onboarding tool…
…[Question] A lot of the commentary seems that today, AI Website Builder is helping on conversion. I wanted to ask about specifically, is there an opportunity to directly monetize the AI products within the kind of core website design funnel?
[Answer] So I think that the way we monetize, of course, during the buildup phase of the website, is by making it easier. When our customers are happy with their websites, of course, we convert better. So I don’t think there is any better way to monetize than that, right? The more users finish their websites, the better the websites, the higher the conversion and the higher the monetization.
Wix now has 29 AI assistants to support users
Earlier this year, we spoke about our plan to embed AI assistants across our platform and we’re continuing to push that initiative forward. We now have a total of 29 assistants, spanning a wide range of use cases, to support users and to service customers throughout their online journeys.
Wix has a number of AI products that are launching in the next few months that are unlike anything in the market and they will be the first AI products that Wix will be monetising directly
We have a number of AI products coming in the next few months that are unlike anything in the market today. These products will transform the way merchants manage their businesses, redefine how users interact with their customers and enhance the content creation experience. Importantly, these will also be the first AI products we plan to monetize directly. We are on the edge of unforeseen innovation, and I’m looking forward to the positive impact it will have on our users.
Zoom Communications (NASDAQ: ZM)
Zoom’s management has a new vision for Zoom, the AI-first Work Platform for Human Connection
In early October, we hosted Zoomtopia, our annual customer and innovation event, and it was an amazing opportunity to showcase all that we have been working on for our customers. We had record-breaking virtual attendance, and unveiled our new vision: AI-first Work Platform for Human Connection. This update marks an exciting milestone as we extend our strength as a unified communication and collaboration platform into becoming an AI-first work platform. Our goal is to empower customers to navigate today’s work challenges, streamline information, prioritize tasks and make smarter use of time.
Management has released AI Companion 2.0, which is an agentic AI technology; AI Companion 2.0 is able to see a broader window of context and gather information from internal and external sources; Zoom AI Companion monthly active users grew 59% sequentially in 2024 Q3; Zoom has over 4 million accounts that have enabled AI Companion; management thinks customers really like Zoom AI Companion; customer feedback for AI Companion has been extremely positive; management does not intend to charge customers for AI Companion
At Zoomtopia, we took meaningful steps towards that vision with the release of AI Companion 2.0…
…This release builds upon the awesome quality of Zoom AI Companion 1.0 across features like Meeting Summary, Meeting Query and Smart Compose, and brings it together in a way that evolves beyond task-specific AI towards agentic AI. This major update allows the AI Companion to see a broader window of context, synthesize the information from internal and external sources, and orchestrate action across the platform. AI Companion 2.0 raises the bar for AI and demonstrates to customers that we understand their needs…
…We saw progress towards our AI-first vision with Zoom AI Companion monthly active users growing 59% quarter-over-quarter…
…At Zoomtopia, we mentioned that there are over 4 million accounts that have already enabled AI Companion. Given the quality, ease of use and no additional cost, customers really like Zoom AI Companion…
…Feedback from our customers at Zoomtopia on Zoom AI Companion 2.0 was extremely positive because, first of all, they look at our innovation and the speed, right? And a lot of features are built into AI Companion 2.0, again, at no additional cost. At the same time, Enterprise customers also want to have some flexibility. That’s why we also introduced the Custom AI Companion and AI Companion Studio. Those will be available in the first half of next year, and we can also monetize them…
…We are not going to charge customers for AI Companion; it comes at no additional cost
Zscaler is using Zoom AI Companion to improve productivity across the whole company; large enterprises such as HSBC and Exxon Mobil are also using Zoom AI Companion
Praniti Lakhwara, CIO of Zscaler, provided a great example of how Zoom AI Companion helped democratize AI and enhance productivity across the organization, without sacrificing security and privacy. And it wasn’t just Zscaler: The RealReal, HSBC, ExxonMobil and Lake Flato Architects shared similar stories about Zoom’s secure, easy-to-use solutions helping them thrive in the age of AI and flexible work.
Zoom’s management recently introduced a road map of AI products that expands Zoom’s market opportunity; Custom AI Companion add-on, including paid add-ons for healthcare and education, will be released in 2025 H1; management built the monetisable parts of AI Companion after gathering customer feedback
Building on our vision for democratizing AI, we introduced a road map of TAM-expanding AI products that create additional business value through customization, personalization and alignment to specific industries or use cases.
Custom AI Companion add-on, which will be released in the first half of next year, aims to meet our customers where they are in their AI journey by plugging into knowledge bases, integrating with third-party apps and personalizing experiences like custom AI avatars and AI coaching. Additionally, we announced that we’ll also have Custom AI Companion paid add-ons for health care and education available as early as the first quarter of next year…
…The reason why we introduced the Custom AI Companion and AI Companion Studio is because, a few quarters ago, we talked to many Enterprise customers, and they shared their feedback with us. They like AI Companion, but some customers have already built their own large language models and want to federate those into our federated AI approach. Some customers have very large content, like a knowledge base, and want to connect to that. Some customers have other systems, like ServiceNow, Atlassian, Workday, Box and HubSpot, and want to connect those data sources. And even from an employee perspective, they want a customized avatar, an AI personal coach, as well. So those customers have customized requirements, and to support those requirements we need to make sure we have the AI infrastructure and technology ready. That’s the reason why we introduced the Custom AI Companion. The goal is really to work together with customers to tailor it for each Enterprise customer. That’s the reason why it’s not free.
I think the feedback from Zoomtopia is very positive because, again, those features were not built by just a few product managers and engineers thinking, “let’s build that.” We had already solicited feedback from our Enterprise customers, so those features can truly satisfy their needs.
Zoom’s management thinks that Zoom is very well-positioned because it is providing AI-powered tools to customers at no additional cost, unlike competitors
Given our strength on quality plus no additional cost, Zoom is much better positioned. In particular, customers look at all the vendors when they evaluate, and again, the AI cost is not small, right? You look at some of the competitors: $30 per user per month, right? And look at Zoom: better quality at no additional cost. So when it comes to total cost of ownership, customers look at Zoom as, I think, much better positioned…
…Again, almost every business subscribes to multiple software services. If each software service vendor is going to charge the customer for AI, guess what: every business will have to spend more. That’s the reason why they trust Zoom, and I think we are much better positioned.
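To make the total-cost-of-ownership argument concrete, here is a quick illustration using the $30 per-user-per-month competitor price cited on the call. The seat count and the list of AI-charging vendors are hypothetical numbers of my own, not figures from Zoom:

```python
# Hypothetical illustration of per-seat AI add-on costs across a software stack.
# The $30/user/month figure is the competitor price Zoom's CEO cited on the call;
# the 2,500-seat organization and the vendor count are assumptions for illustration.
seats = 2_500
per_user_per_month = 30  # USD, competitor AI add-on price cited by Zoom

annual_cost_one_vendor = seats * per_user_per_month * 12
print(f"One vendor charging for AI: ${annual_cost_one_vendor:,} per year")
# → One vendor charging for AI: $900,000 per year

# Zoom's point: businesses subscribe to multiple software services, so the
# cost compounds if each vendor charges separately for AI.
ai_charging_vendors = 4  # assumed, not from the call
total = annual_cost_one_vendor * ai_charging_vendors
print(f"{ai_charging_vendors} such vendors: ${total:,} per year")
```

Even at the single-vendor level, a mid-sized organization would face a meaningful new line item, which is the cost-compounding dynamic management is pointing to.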
Zoom’s management is seeing some customers find new budgets to invest in AI, whereas some customers are reallocating budgets from other areas towards AI
Every company, I think, is now thinking about where they should allocate the budget, right? Where should they get more money or funds, right, to support AI? Every company is different. Some customers have a new budget. Some customers have consolidated into a few vendors, and some customers just want to save money from other areas and shift the budget towards embracing AI.
Zoom’s management thinks Zoom will need to continue investing in AI, but they are not worried about the costs because the AI features will be monetised
Look at AI, right? We have to invest more, right, in a few areas. One is our Zoom Workplace platform: we have to invest in more talent, deploy more GPUs and also use more of the cloud (basically GPUs), as we keep improving the AI quality and innovating on AI features. That’s for Workplace. At the same time, we are going to introduce the Custom AI Companion and AI Companion Studio next year. Not only do we offer the free service for AI Companion, but those Enterprise customizations certainly can help us in terms of monetization. At the same time, we leverage the technology we build for Workplace and apply it to the Contact Center, like Zoom Virtual Agent, right, and also some other Contact Center features. The same AI infrastructure and a lot of technology components can be shared with Zoom Contact Center.
While AI Companion is free, the Contact Center is different, right? There we can also monetize. Essentially, we build the same common AI infrastructure and architecture: in Workplace, the Custom AI Companion we can monetize, and the Contact Center we can also monetize. Today, you see we keep investing more and more, and soon we can also monetize more as well. That’s why I think we do not worry about the cost of the AI investment in the long run at all, because with the monetization coming in, it certainly can help us more. So, so far, we feel very comfortable.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google and GCP), Amazon (parent of AWS), Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Veeva Systems, Wix, and Zoom Video Communications. Holdings are subject to change at any time.