More Of The Latest Thoughts From American Technology Companies On AI (2024 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

Earlier this month, I published the two-part article, The Latest Thoughts From American Technology Companies On AI (2024 Q4) (see here and here). In them, I shared commentary from the earnings conference calls for the fourth quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2024’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series; see my earlier articles for the older commentary.

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management will be offering new Firefly web app subscriptions that will support both Adobe’s Firefly AI models and 3rd-party models; management envisions the Firefly app as the umbrella destination for ideation; management recently introduced Adobe’s new Firefly video model into the Firefly app offering; management will be introducing Creative Cloud offerings with Firefly tiering; the Firefly video model has been very well-received by brands and creative professionals; users of the Firefly video model can generate video clips from a text prompt or image; the Firefly web app allows users to generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages; the Firefly web app subscription plans include Firefly Standard, Firefly Pro, and Firefly Premium; more than 90% of paid users of the Firefly web app have been generating videos; Firefly has powered 20 billion generations (up from 16 billion as of 2024 Q3) since its launch in March 2023, and is now doing more than 1 billion generations a month; management thinks the commercially-safe aspect of Firefly models is very important to users; management thinks the high level of creative control users get with Firefly models is very important to them; the adoption rates of the Firefly paid plan signal to management that Firefly is adding value to creative professionals

In addition to Creative Cloud, we will offer new Firefly web app subscriptions that integrate and are an on-ramp for our web and mobile products. While Adobe’s commercially safe Firefly models will be integral to this offering, we will support additional third-party models to be part of this creative process. The Firefly app will be the umbrella destination for new creative categories like ideation. We recently introduced and incorporated our new Firefly video model into this offering, adding to the already supported image, vector and design models. In addition to monetizing stand-alone subscriptions for Firefly, we will introduce multiple Creative Cloud offerings that include Firefly tiering…

…The release of the Adobe Firefly Video model in February, a commercially-safe generative AI video model, has been very positively received by brands and creative professionals who have already started using it to create production-ready content. Users can generate video clips from a text prompt or image, use camera angles to control shots, create distinct scenes with 3D sketches, craft atmospheric elements and develop custom motion design elements. We’re thrilled to see creative professionals, enterprises and agencies, including Dentsu, PepsiCo and Stagwell, finding success with the video model…

…In addition to generating images, videos and designs from text, the app lets you generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages. We also launched 2 new plans as part of this release, Firefly Standard and Firefly Pro and began the rollout of our third plan, Firefly Premium, yesterday. User engagement has been strong with over 90% of paid users generating videos…

…Users have generated over 20 billion assets with Firefly…

…We’re doing more than 1 billion generations a month now, and 90% of people using the Firefly app are also generating video as part of that…

…For Firefly, we have imaging, vector, design, video and voice, with video and voice coming out just a couple of weeks ago, off to a good start. I know there have been some questions about how important the commercial safety of the models is. It’s very important. A lot of enterprises are turning to them for the quality and the breadth, but also the commercial safety and the creative control that we give them around being able to really match structure, style, set key frames for precise video generation, 3D to image, image to video…

…If we look at the early adoption rates of the Firefly paid plan, it really tells us both of these stories. We have a high degree of conviction that it’s adding value and being used by Creative Professionals…
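To make the workflow above concrete, here is a minimal sketch of what generating a clip from a text prompt could look like against a Firefly Services-style REST endpoint. The endpoint path, request fields, and response shape are assumptions for illustration only, not Adobe’s documented API contract.

```python
# Hypothetical sketch of a Firefly Services-style text-to-video call.
# Endpoint path, payload fields, and response shape are assumptions;
# consult Adobe's Firefly Services documentation for the real contract.
import requests

FIREFLY_BASE = "https://firefly-api.adobe.io"  # assumed base URL

def generate_video_clip(access_token: str, api_key: str, prompt: str) -> str:
    """Submit a text prompt; return a URL to the generated clip (assumed shape)."""
    resp = requests.post(
        f"{FIREFLY_BASE}/v3/videos/generate",  # hypothetical path
        headers={
            "Authorization": f"Bearer {access_token}",
            "x-api-key": api_key,
            "Content-Type": "application/json",
        },
        json={
            "prompt": prompt,                  # text-to-video prompt
            "videoSettings": {"duration": 5},  # assumed field: clip length (s)
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["outputs"][0]["video"]["url"]  # assumed response shape
```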

Adobe’s management thinks that marketing professionals will need to create and deliver an unprecedented volume of personalised content and that marketing professionals will need custom, commercially safe AI models and AI agents to achieve this, and this is where Adobe GenStudio and Firefly Services can play important roles; management is seeing customers turn to Firefly Services and Custom Models for scaling on-brand marketing content production; there are over 1,400 custom models created since launch of Firefly Services and Custom Models; Adobe GenStudio for Performance Marketing has won leading brands recently as customers; Adobe GenStudio for Performance Marketing has partnerships with leading digital advertising companies

Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile applications, e-mail, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom, commercially safe models and agents tailored to address the inefficiencies of the content supply chain. With Adobe GenStudio and Firefly Services, Adobe is transforming how brands and their agency partners collaborate on marketing campaigns, unlocking new levels of creativity, personalization and efficiency. The combination of the Adobe Experience Platform and apps and Adobe GenStudio is the most comprehensive marketing platform to deliver on this vision…

…We had another great quarter in the enterprise with more customers turning to Firefly Services and Custom Models to scale on-brand content production for marketing use cases, including leading brands such as Deloitte Digital, IBM, IPG Health, Mattel and Tapestry. Tapestry, for example, has implemented a new and highly productive digital twin workflow using Custom Models and Firefly…

…Strong demand for Firefly Services and Custom Models as part of the GenStudio solution, with over 1,400 custom models created since launch.

GenStudio for Performance Marketing wins at leading brands including AT&T, Lennar, Lenovo, Lumen, Nebraska Furniture Mart, Red Hat, Thai Airways, and University of Phoenix.

Strong partnership momentum with GenStudio for Performance Marketing supporting ad creation and activation for Google, Meta, Microsoft Ads, Snap, and TikTok, and several partners including Accenture, EY, IPG, Merkle and PwC offering vertical extension apps.

Adobe’s generative AI solutions are infused across the company’s products and management sees the generative AI solutions as a factor driving billions in annualised recurring revenue (ARR) for the company from customer acquisition to customer retention and upselling; Adobe has AI-first stand-alone and add-on products such as Acrobat AI Assistant, the Firefly App and Services, and GenStudio for Performance Marketing; the AI-first stand-alone and add-on products already accounted for more than $125 million in book of business for Adobe in 2024 Q4 (FY2025 Q1), and management expects this book of business to double by the end of FY2025; management thinks that the monetisation of Adobe’s AI services goes beyond the $125 million in book of business and also incorporates customers who subscribe to Adobe’s services and use the AI features

Our generative AI innovation is infused across the breadth of our products, and its impact is influencing billions of ARR across acquisition, retention and value expansion as customers benefit from these new capabilities. This strength is also reflected in our AI-first stand-alone and add-on products such as Acrobat AI Assistant, Firefly App and Services and GenStudio for Performance Marketing, which have already contributed greater than $125 million book of business exiting Q1 fiscal ’25. And we expect this AI book of business to double by the end of fiscal ’25…

…A significant amount of the AI monetization is also happening in terms of attracting people to our subscription, making sure they are retained and having them drive higher-value price SKUs. So when somebody buys Creative Cloud or when somebody buys Document Cloud, in effect, they are actually monetizing AI. But in addition to that, Brent, what we wanted to do was give you a flavor for the new stand-alone products that we have when we’ve talked about introducing Acrobat AI Assistant and rolling that out in different languages, Firefly, and making sure that we have a new subscription model associated with that on the web, Firefly Services for the enterprise and GenStudio. So the $125 million book of business that we talked about exiting Q1 only relates to that new book of business.

Adobe’s management is seeing every CMO (Chief Marketing Officer) being very interested in using generative AI in their content supply chain

Every CMO that we talk to, every agency that we work with, they’re all very interested in how generative AI can be used to transform how the content supply chain works.

Adobe’s management sees AI as bringing an even larger opportunity for Adobe

I am more excited about the larger opportunity without a doubt as a result of AI. And we’ve talked about this, Kash. If you don’t take advantage of AI, it’s a disruption. In our particular case, the intent is clearly to show how it’s a tailwind.

Adobe’s management is happy to support 3rd-party models within the Firefly web app or within other Adobe products so long as the models deliver value to users

We’ll support all of the creative third-party models that people want to support, whether it’s a custom model we create for them or whether it’s any other third-party model within Firefly as an app and within Photoshop, you’re going to see support for that as well. And so think of it as we are the way in which those models actually deliver value to a user. And so it’s actually just like we did with Photoshop plug-ins in the past, you’re going to see those models supported within our flagship applications.

Adobe’s management is seeing very strong attach rate and adoption of generative AI features in Adobe’s products with creative professionals

This cohort of Creative Professionals, we see very strong attach and adoption of the generative AI features we put in the product partially because they’re well integrated and very discoverable and because they just work and people get a lot of value out of that. So what you will see is you’ll start to see us integrating these new capabilities, these premium capabilities that are in the Firefly Standard, Pro and Premium plans more deeply into the creative workflow so more people have the opportunity to discover them.

Meituan (OTC: MPNGY)

Meituan’s autonomous vehicles and drones had fulfilled a cumulative 4.9 million and 1.45 million orders, respectively, by end-2024; Meituan’s drones started operating in Dubai recently

By the end of 2024, the accumulated number of commercial orders fulfilled by our autonomous vehicles and drones had reached 4.9 million and 1.45 million, respectively. Our drone business also started commercial operation in Dubai recently.

Meituan’s management wants to expand Meituan’s investments in AI, and is fully committed to integrating AI into Meituan’s platform; management’s AI strategy for Meituan has 3 layers, which are (1) integrating AI into employees’ work, (2) infusing AI into Meituan’s products, and (3) building Meituan’s own large language model

We will actively embrace and expand investment in cutting-edge technologies, such as AI, unmanned aerial delivery and autonomous delivery vehicles, and accelerate the application of these technologies. And we are committed to fully integrating AI into consumers’ daily lives, helping people eat better, live better…

…Our AI strategy builds upon 3 layers. The first one is AI at work. We are integrating AI into our employees’ day-to-day work and our daily business operations to significantly enhance productivity and the work experience for our over 400,000 employees. The second layer is AI in products. We will use AI to upgrade our existing products and services, both 2B and 2C, and we will also launch brand-new AI-native products to better serve our consumers, merchants, couriers and business partners…

…The third layer is building our own in-house large language model, and we plan to continue to invest and enhance our in-house large language model with increased CapEx.

Meituan’s management has developed Meituan’s in-house large language model named Longcat; management has rolled out Longcat alongside 3rd-party models to improve employees’ productivity; Longcat has been useful for AI coding, conducting smart meetings, short-form video generation, for AI sales assistance, and more; Longcat has been used to develop an in-house AI customer service agent, which has driven a 20% improvement in efficiency and a 7.5 percentage points improvement in customer satisfaction; the AI sales assistant reduced the workload of Meituan’s business development (BD) team by 44% during the Spring Festival holidays; 27% of new code in Meituan is currently generated by its AI coding tools

On the first layer, AI at work, on the employee productivity front, we have developed our in-house large language model. It’s called Longcat. By putting Longcat side by side with external models, we have rolled out highly efficient tools for our employees, including AI coding, smart meeting and document assistants; it’s also quite useful in graphic design, short-form video generation and AI sales assistance. These tools have substantially boosted employee productivity and working experience…

…We have developed an intelligent AI customer service agent using our in-house large language model. After the pilot operation, the results showed a more than 20% enhancement in efficiency. Moreover, the customer satisfaction rate has improved by over 7.5 percentage points…

…During this year’s Spring Festival holidays, we gathered updated business information on the 1.2 million merchants on our platform with our AI sales assistant. It very effectively reduced the workload of our BD team by 44% and further enhanced the accuracy of the listed merchant information on our platform…

…Right now, in our company, about 27% of new code is generated by AI coding tools.

Meituan’s management is using AI to help merchants with online store design, information enhancement, and display and operation management; management is testing an AI assistant to improve the consumer experience in their search and transactions; management will launch a brand-new advanced AI assistant later this year that will give everyone a free personal assistant; the upcoming advanced AI assistant will be able to satisfy a lot of consumer-needs in the physical world because in order to bring AI to the physical world, physical infrastructure is needed and Meituan has that

We use AI across multiple categories by providing various tools such as smart online store design and smart merchant information enhancement and display and operation management…

…On the consumer side, we have already started testing AI assistants in some categories to enhance the consumer experience for search and transactions on our platform. For example, we have rolled out a restaurant reservation assistant and a travel assistant. They can chat with users, either by text or voice, making things more convenient and easier to use. And right now, we are already working on a brand-new AI-native product. We expect to launch this more advanced AI assistant later this year and to cover all Meituan services, so that everyone can have a free personal assistant. Based on our rich offline service offerings and efficient on-demand delivery network, I think we will be able to handle many personalized needs in local services. Whether it’s ordering food delivery, making a restaurant reservation, purchasing group deals, ordering groceries, planning trips or booking hotels, I think we have got it covered with a one-stop service, and we are going to deliver it to you on time…

…Our AI assistant will not only offer consumer services in the digital world, as just a chatbot would; it’s going to be able to satisfy a lot of needs in the physical world, because in order to bring AI to the physical world, you need more than just very smart algorithms or models. You need infrastructure in the physical world, and that’s our advantage…

…We have built a big infrastructure in the physical world with digital connections. We believe that kind of infrastructure is going to be very valuable as we move into the era of physical AI.

Meituan’s management expects to incur a lot of capex to improve Meituan’s in-house large language model, Longcat; to develop Longcat, management made the procurement of GPUs in 2024 a top priority, and expects to further scale GPU-related capital expenditure in 2025; Longcat has achieved evaluation results comparable to top-tier models in China; Longcat’s API call volume has increased from 10% at the beginning of 2024 to 68% currently

On the algorithm, model and compute side, it’s going to need a lot of CapEx and a very good foundation model. So in the past year, ensuring an adequate supply of GPU resources has been a top priority for us. And even as we allocate meaningful resources to shareholder returns and new initiatives, we keep investing billions in GPU resources. So our CapEx this year has been substantial. And this year, we plan to further scale our investment in this very critical area. And thanks to our infrastructure and large language model team, we have made significant optimizations, both in efficiency and effectiveness. And as a result, our in-house large language model, Longcat, has achieved quite good evaluation results comparable to the top-tier models in China…

…The API call volume for Longcat has increased from 10% at the beginning of last year to 68% currently, which further validates the effectiveness of our in-house foundation model.

Meituan’s management believes that AI is going to give a massive push to the robotics industry; Meituan has been researching autonomous vehicles since 2016 and drones since 2017; management has made several investments in leading robotics and autonomous driving start-ups; management expects Meituan’s efforts in robotics and AI to be even more tightly integrated in the future

I think AI is going to give a massive push to the development of robotics. So we have been a very early mover when it comes to autonomous delivery vehicles and drones. Actually, we started our R&D in autonomous vehicles in late 2016, and we started our R&D in drones in 2017. So we have been working on this for many years, and we are making very good progress. Right now, we are looking at ways to apply AI in the on-demand delivery field. Apart from our in-house R&D, we have also made quite a few investments in leading start-ups in the robotics and autonomous driving sector to support their growth…

…In the future, our robotics and AI efforts will be even more tightly integrated, and we will keep improving in areas such as autonomous delivery, logistics and automation, because besides the last-mile leg of on-demand delivery, we also operate a lot of rather big warehouses, and those will be very good use cases for automation technologies.

MongoDB (NASDAQ: MDB)

MongoDB’s management expects customers to start building AI prototypes and AI apps in production in 2025 (FY2026), but management expects the progress to be gradual, and so MongoDB’s business will only benefit modestly from AI in 2025 (FY2026); there are high-profile AI companies building on top of MongoDB Atlas, but in general, customers’ journeys with building AI applications will be gradual; management thinks that customers are slow in building AI applications because they lack AI skills and because there are still questions on the trustworthiness of AI applications; management sees the AI applications of today as being fairly simplistic, but thinks that AI applications will become more sophisticated as people become more comfortable with the technology

In fiscal ’26, we expect our customers will continue on their AI journey from experimenting with new technology stacks, to building prototypes, to deploying apps in production. We expect the progress to remain gradual as most enterprise customers are still developing in-house skills to leverage AI effectively. Consequently, we expect the benefits of AI to be only modestly incremental to revenue growth in fiscal ’26…

…We have some high-profile AI companies who are building on top of Atlas. I’m not at liberty to name who they are, but in general, I would say that the journey for customers is going to be gradual. One reason is a lack of AI skills in their organizations. They really don’t have a lot of experience, and it’s compounded by the rapid evolution of AI technology, so it’s very hard for them to think about which stack to use, and so on and so forth. The second, as I mentioned earlier on the Voyage question, is that there’s also a real worry about the trustworthiness of a lot of these applications. So I would say the use cases you’re seeing are fairly simplistic — customer chat bots, maybe document summarization, maybe some very simple [indiscernible] workflows. But I do think that we are in the early innings, and I expect the sophistication to increase as people get more and more comfortable…

In 2024 (FY2025), MongoDB started demonstrating that the cycle time for modernising applications’ technology stacks (i.e. with MongoDB’s Relational Migrator service) can be reduced with the help of AI tools; management will expand customer engagements so that modernisation can contribute meaningfully to MongoDB’s business in 2026 (FY2027) and beyond; management will start with Java apps that run on Oracle; management sees a significant revenue opportunity in the modernisation of apps; MongoDB has successfully modernised a financial application for one of Europe’s largest ISVs (independent software vendors); management is even more confident in Relational Migrator now than in the past; Relational Migrator is tackling a very tough problem because it involves massive legacy codebases, and the use of AI in deciphering the code is very helpful; management is seeing a lot of interest from customers in Relational Migrator because the customers are in pain from their technical debt, and their legacy technology stacks cannot handle AI applications

In fiscal ’25, our pilots demonstrated that AI tooling combined with services can reduce the cycle time of modernization. This year, we’ll expand our customer engagements so that app modernization can meaningfully contribute to our new business growth in fiscal ’27 and beyond. To start with, and based on customer demand, we are specifically targeting Java apps running on Oracle, which often have thousands of complex stored procedures that need to be understood, converted and tested to successfully modernize the application. We addressed this through a combination of AI tools and agents, along with inspection and verification by delivery teams. Though the complexity of this work is high, the revenue opportunity for modernizing those applications is significant. For example, we successfully modernized a financial application for one of the largest ISVs in Europe, and we’re now in talks to modernize the majority of their legacy estate…

…[Question] What sort of momentum have you seen with relational migrator. And maybe how should we be thinking about that as a growth driver going forward?

[Answer] Our confidence and bullishness on the space is even higher today than it was before…

…When you’re looking at a legacy app that’s got hundreds, if not thousands, of stored procedures, being able to reason about that code, being able to decipher that code and then ultimately to convert that code is a lot of effort. But the good news is that we are seeing a lot of progress in that area. We see a lot of interest from our customers in this area because they are in so much pain with all the technical debt they’ve assumed. Second, when they think about the future and how they enable AI in these applications, there’s no way they can do this on their legacy platforms. And so they’re motivated to try and modernize as quickly as possible.
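MongoDB has not disclosed how its AI tooling works internally, but the general shape of the technique the passage describes, an LLM reasoning about legacy stored-procedure code before a human verifies the conversion, can be sketched as follows. The client library, model choice, and prompt are illustrative assumptions, not MongoDB’s actual pipeline.

```python
# Rough sketch of LLM-assisted stored-procedure conversion in the spirit
# of the passage. Client, model, and prompt are illustrative assumptions,
# not MongoDB's actual Relational Migrator tooling.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONVERSION_PROMPT = (
    "You are migrating an Oracle PL/SQL stored procedure to application "
    "code backed by MongoDB. First explain what the procedure does, then "
    "produce an equivalent Python function using pymongo, and flag any "
    "behavior (transactions, triggers, locking) that needs human review."
)

def convert_stored_procedure(plsql_source: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable code model; the choice is an assumption
        messages=[
            {"role": "system", "content": CONVERSION_PROMPT},
            {"role": "user", "content": plsql_source},
        ],
    )
    # As the passage stresses, the output still goes through inspection
    # and verification by a delivery team before it ships.
    return response.choices[0].message.content
```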

MongoDB’s management sees AI transforming software from a static tool into a decision-making partner, but the rate of change is governed by the quality of the software’s data infrastructure; legacy databases cannot keep up with the requirements of AI and this is where MongoDB’s document-model database is advantageous; MongoDB’s database simplifies AI development by providing an all-in-one solution incorporating all the necessary pieces, including an operational data store, a vector database, and embedding and reranking models; MongoDB’s database provides developers with a structured approach when they are building AI applications; management sees AI applications being much better than traditional software in scenarios that require nuanced understanding, sophisticated reasoning, and interaction in natural language

AI is transforming software from a static tool into a dynamic decision-making partner. No longer limited to predefined tasks, AI-powered applications will continuously learn from real-time data, but this software can only adapt as fast as the data infrastructure it is built on, and legacy systems simply cannot keep up. Legacy technology stacks were not designed for continuous adaptation. Complex architectures, batch processing and rigid data models create friction at every step, slowing development, limiting organizations’ ability to act quickly and making even small updates time-consuming and risky. AI will only magnify these challenges. MongoDB was built for change. MongoDB was designed from the outset to remove the constraints of legacy databases, enabling businesses to scale, adapt and innovate at AI speed. Our flexible document model handles all types of data while seamless scalability ensures high performance for unpredictable workloads…

…We also simplify AI development by natively including vector and text search directly in the database, providing a seamless developer experience that reduces cognitive load, system complexity, risk and operational overhead, all with the transactional, operational and security benefits intrinsic to MongoDB. But technology alone isn’t enough. MongoDB provides a structured, solution-oriented approach that addresses the challenges customers have with the rapid evolution of AI technology, high complexity and a lack of in-house skills. We are focused on helping customers move from AI experimentation to production faster, with best practices that reduce risk and maximize impact…

…AI-powered applications excel where traditional software often falls short, particularly in scenarios that require nuanced understanding, sophisticated reasoning, and interaction in natural language…

…MongoDB democratizes the process of building trustworthy AI applications right out of the box. Instead of cobbling together all the necessary piece parts (an operational data store, a vector database, and embedding and reranking models), MongoDB delivers all of it with a compelling developer experience…

…We think architecturally, we have a huge advantage over the competition. One, the document model really supports different types of data: structured, semi-structured and unstructured. We embed search and Vector Search into the platform. No one else does that. And now, with Voyage AI, we have the most accurate embedding and reranking models to really address the quality and trust issue. And all this is going to be put together in a very elegant developer experience that reduces friction and enables developers to move fast.
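As a concrete illustration of the all-in-one point: in Atlas, vector retrieval is just another aggregation stage alongside ordinary document filters, so no separate vector database is needed. A minimal sketch with pymongo, assuming a collection whose documents already carry an embedding field and an Atlas Vector Search index named vector_index:

```python
# Minimal sketch: semantic retrieval inside the operational database via
# Atlas Vector Search. Assumes documents with an "embedding" field and an
# Atlas Vector Search index named "vector_index" already exist.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
products = client["shop"]["products"]

def semantic_search(query_vector: list[float], category: str, k: int = 5):
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,  # wider candidate pool improves recall
                "limit": k,
                "filter": {"category": category},  # ordinary metadata filter
            }
        },
        {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(products.aggregate(pipeline))
```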

MongoDB acquired Voyage AI for $220 million, $200 million of which was paid in MongoDB shares; Voyage AI helps MongoDB’s database solve the hallucination issue – a big problem with AI applications – and make AI applications more trustworthy; management thinks the best way to ensure accurate results with AI applications is through high-quality data retrieval, and high-quality data retrieval is enabled by vector embedding and reranking models; Voyage AI’s vector embedding and reranking models have excellent ratings in the Hugging Face community and are used by important AI companies; Voyage AI has an excellent AI team; through Voyage AI, MongoDB can offer best-in-class embedding and reranking models; ISVs (independent software vendors) have gotten better performance when they switched from other embedding models to Voyage AI’s models; Voyage AI’s models increase the trustworthiness of the most demanding and mission-critical AI applications; Voyage AI’s models will only be available on Atlas

With the Voyage AI acquisition, MongoDB makes AI applications more trustworthy by pairing real-time data with sophisticated embedding and reranking models that ensure accurate and relevant results…

…Our decision to acquire Voyage AI addresses one of the biggest problems customers have when building and deploying AI applications, the risk of hallucinations…

…The best way to ensure accurate results is through high-quality data retrieval, which ensures that only the most relevant information is extracted from an organization’s data with precision. High-quality retrieval is enabled by vector embedding and reranking models. Voyage AI’s embedding and reranking models are among the highest rated in the Hugging Face community for retrieval, classification, clustering and reranking, and are used by AI leaders like Anthropic, LangChain, Harvey and Replit. Voyage AI is led by Stanford professor Tengyu Ma, who has assembled a world-class AI research team from AI labs at Stanford, MIT, Berkeley and Princeton. With this acquisition, MongoDB will offer best-in-class embedding and reranking models to power native AI retrieval…

…Let me address how the acquisition of Voyage AI will impact our financials. We disclosed last week that the total consideration was $220 million. Most Voyage shareholders received their consideration in MongoDB stock, with only $20 million being paid out in cash…

…We know a lot of ISVs have already reached out to us since the acquisition saying they switched to Voyage from other model providers and they got far better performance. So the value of Voyage is being able to increase the quality and hence the trustworthiness of these AI applications that people are building in order to serve the most demanding and mission-critical use cases…

…Some of these new capabilities, like Voyage, will now be available only on Atlas.
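Voyage AI already ships a Python client, so the embed-then-rerank pattern described above can be sketched roughly as below. The model names were current at the time of writing and should be treated as assumptions.

```python
# Sketch of the embed-then-rerank retrieval pattern using Voyage AI's
# Python client (pip install voyageai). Model names are assumptions.
import voyageai

vo = voyageai.Client()  # assumes VOYAGE_API_KEY is set in the environment

docs = [
    "MongoDB Atlas bundles vector search with the operational database.",
    "Reranking models reorder retrieved candidates by query relevance.",
    "Firefly is Adobe's family of generative AI models.",
]

# 1) Embed documents once at ingestion time and store the vectors.
doc_embeddings = vo.embed(docs, model="voyage-3", input_type="document").embeddings

# 2) At query time: embed the query, fetch candidates (e.g. via Atlas
#    $vectorSearch), then rerank the candidates for the final ordering.
query = "How does MongoDB reduce the complexity of an AI stack?"
reranked = vo.rerank(query, docs, model="rerank-2", top_k=2)
for result in reranked.results:
    print(round(result.relevance_score, 3), result.document)
```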

Swisscom was able to deploy a generative AI application in just 12 weeks using MongoDB Atlas

Swisscom, Switzerland’s leading provider of mobile, Internet and TV services, deployed a new GenAI app in just 12 weeks using Atlas. Swisscom implemented Atlas to power a RAG application for its East Foresight library, transforming unstructured data such as reports, recordings and graphics into vector embeddings that large language models can interpret. This enables Vector Search to find any relevant content, resulting in more accurate and tailored responses for users.
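The ingestion side of a RAG app like the one described follows a standard chunk-embed-store pattern. Here is a compact sketch under stated assumptions; the embedding client, model name, and collection layout are illustrative, not Swisscom’s implementation:

```python
# Sketch of RAG ingestion: chunk unstructured text, embed it, and store
# the vectors in Atlas so Vector Search can retrieve them later.
# Embedding client, model name, and collection layout are assumptions.
import voyageai
from pymongo import MongoClient

vo = voyageai.Client()  # assumes VOYAGE_API_KEY is set
chunks_coll = MongoClient("mongodb+srv://<cluster-uri>")["library"]["chunks"]

def ingest(document_text: str, source: str, chunk_size: int = 800) -> None:
    chunks = [document_text[i:i + chunk_size]
              for i in range(0, len(document_text), chunk_size)]
    embeddings = vo.embed(
        chunks, model="voyage-3", input_type="document"
    ).embeddings
    chunks_coll.insert_many([
        {"text": chunk, "source": source, "embedding": vector}
        for chunk, vector in zip(chunks, embeddings)
    ])
```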

If an LLM (large language model) is a brain, a database is memory, and embedding models are a way to find the right information for the right question; embedding models provide significant performance gains when used with LLMs

So think about the LLM as the brain. Think about the database as your memory and the state of how things are. And then think about embeddings as the ability to find the right information for the right question. So imagine you have a very smart person, say Albert Einstein, on your staff, and you’re asking him, in this case the LLM, a particular question. Einstein still needs to go do some homework, finding some information based on what the question is about, before he can formulate an answer. Rather than him reading every book in a library, what the embedding models do is essentially act like a librarian, pointing Einstein to the right section, the right aisle, the right shelf, the right book and the right chapter on the right page to get the exact information to formulate an accurate and high-quality response. So the performance gains you get from leveraging embedding models are significant.
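Numerically, the library analogy boils down to nearest-neighbor search: questions and documents become vectors, and pointing Einstein to the right shelf means picking the vector closest to the question. A self-contained toy illustration:

```python
# Toy illustration of the "library" analogy: retrieval is nearest-neighbor
# search over embedding vectors. The 3-D vectors are stand-ins for real
# embedding-model output, which has hundreds or thousands of dimensions.
import numpy as np

shelves = {
    "physics": np.array([0.9, 0.1, 0.0]),
    "cooking": np.array([0.0, 0.8, 0.2]),
    "history": np.array([0.1, 0.1, 0.9]),
}
question = np.array([0.85, 0.15, 0.05])  # an "embedded" physics question

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best_shelf = max(shelves, key=lambda name: cosine(question, shelves[name]))
print(best_shelf)  # -> "physics": the right shelf for Einstein to consult
```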

Okta (NASDAQ: OKTA)

The emergence of AI agents has contributed to the growing importance of securing identity; management will provide early access to Auth For GenAI on the Auth0 platform in March 2025; 200-plus startups and large enterprises are on the waitlist for Auth For GenAI; Auth For GenAI allows AI agents to securely call APIs; management is seeing that companies are trying to build agentic systems, only to run into problems with giving these agents access to systems securely; within AI, management sees agentic AI as the most applicable to Okta’s business in the medium term

With the steady rise of cloud adoption, machine identities and now AI agents, there has never been a more critical time to secure identity…

…On the Auth0 platform, we announced Auth For GenAI. We’ll begin early access this month. We already have a waitlist of eager customers ranging from early startups to Fortune 100 organizations. Auth for GenAI was developed to help customers securely build and scale their GenAI applications. This suite of features allows AI agents to securely call APIs on behalf of users while enforcing the right level of access to sensitive information…

…People are trying to stitch together agentic platforms and write their own agentic systems, and what they run smack into is: wait a minute, how am I going to get these agents access to all these systems if I don’t even know what’s in these systems, and I don’t even know the access permissions that are there and how to securely authenticate them? So that’s driving the business…

…I’ll focus on the agentic part of AI. In the medium term, that’s probably the most applicable to our business…

…On the agent side, the equivalent in a lot of these deployments is passwords hardcoded in the agent. So if that agent gets compromised, it’s the equivalent of your monitor having a bunch of sticky notes on it with your passwords, before single sign-on. So Auth for GenAI gives you a protocol and a way to do that securely. So you can store these tokens and have these tokens secured. And then if that agent needs to pop out and get some approval from the user, Auth for GenAI supports that. So you can get a step-up biometric authentication from the user and say, “Hey, I want to check Jonathan’s fingerprint to make sure before I book this trip or I spend this money, it’s really Jonathan.” So those 3 parts are what Auth for GenAI is, and we’re super, super excited about it. We have a waitlist. Over 200-plus Fortune 100s and startups are on that thing.
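Okta has only described Auth for GenAI at this high level, so the following is a conceptual sketch of the flow in the quote: an agent drawing scoped tokens from a secure vault instead of hardcoded passwords, and pausing for step-up approval before a sensitive action. Every class and function name here is an assumption for illustration, not Okta’s or Auth0’s actual SDK.

```python
# Conceptual sketch of the pattern described above. All names are
# illustrative assumptions, not the Okta/Auth0 Auth for GenAI SDK.
class TokenVault:
    """Holds per-user, per-API tokens so the agent never sees raw secrets
    (the opposite of passwords hardcoded into the agent)."""
    def __init__(self) -> None:
        self._tokens: dict[tuple[str, str], str] = {}

    def store(self, user_id: str, api: str, token: str) -> None:
        self._tokens[(user_id, api)] = token

    def get(self, user_id: str, api: str) -> str:
        return self._tokens[(user_id, api)]

def step_up_approval(user_id: str, action: str) -> bool:
    """Placeholder for an asynchronous approval, e.g. a biometric prompt
    pushed to the user's device before the agent may proceed."""
    print(f"Waiting for {user_id} to approve: {action}")
    return True  # a real flow blocks here on the user's response

def book_trip(vault: TokenVault, user_id: str) -> None:
    # Sensitive action: require the user's step-up approval first.
    if not step_up_approval(user_id, "book trip and charge card"):
        raise PermissionError("User declined step-up approval")
    token = vault.get(user_id, "travel-api")
    print(f"Calling travel API on {user_id}'s behalf with scoped token")
```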

Okta’s management thinks agentic AI is a real phenomenon that will turbocharge machine identity for Okta to 2 orders of magnitude higher; already today, a good part of Okta’s business is providing machine identity; management is most excited about the customer identity part of Okta’s business when it comes to agentic AI because companies will start having agentic AIs as customers too; management thinks Okta will be able to monetise agentic AI from both people building agents, and people using agents

The agentic revolution is real, and the power of AI and the power of these language models, the interaction modalities that you can have with these systems, these machines doing things on your behalf and what they can do and how they can infer next actions, et cetera, et cetera. You all know it’s really real. But the way to think about it from an Okta perspective is that it is like machine identity on steroids, turbocharged to like 2 orders of magnitude higher. So that’s like really exciting for us because what do we do? A good part of our business is actually logging in machines right now. Auth0 has the machine-to-machine tokens where people, if they build some kind of web app that services other machines, can use Auth0 for the login for that. Okta has similar capabilities. And now you have not only that basic authentication challenge, but as you get 2 orders of magnitude more things logging in, you have to really worry about the fine-grained authorization into your services…

…[Question] Which side of the business are you more excited about from an agentic AI perspective?

[Answer] I think the customer identity side is more exciting. My answer is a little bit of having it both ways, because when you talk about developers building agentic AI, they’re doing it inside of enterprises. So like the pattern I was talking about earlier, there are these teams in these companies that have been tasked with: we hear about this [ agent ], make it work. I’ve had many conversations with customers where they’ve been in these discussions, and the first thing they say is, we did a POC and now we want to do it broadly, but the task was basically to hook these agents up to all of our existing systems. And before we could do that inside the enterprise, we had to get a good identity foundation in front of all these things. It’s similar to when you’re building something as a developer: you’re exposing APIs, you’re doing fine-grained authorization, you’re using another platform or building your own agentic AI platform, and you’re having to talk to those systems and those APIs to do things on users’ behalf. So you’re a developer, but it’s kind of like a workforce use case. But I think people building these systems and getting the benefit from that is really exciting…

…We can monetize it on “both sides”, meaning people building the agents and people using the agents. The agents have to log in, and they have to log into something. So I think there’s potential to monetize it on both sides.

Okta’s management thinks the software industry does not yet know how to account for AI agents in software deals; management thinks that companies will eventually be buying software licenses for both people and AI agents

One of the things that we don’t have today is that the industry doesn’t have a way to identify an agent. I don’t mean in the sense of authenticating or validating an agent. I mean actually a universal vernacular for how to record an agent, how to track it and how to account for it. And so I think that’s something you’ll see coming. You’ll see there will actually be a type of account in Okta that’s an agent account. You’ll see companies starting to, when they buy software, say, hey, I buy these many people licenses and these many agentic licenses. And that’s not quite there yet. Of course, platforms that are coming out with agent versions have this to some degree, but there isn’t a common cross-company, cross-enterprise definition of an agent, which is an interesting opportunity for us actually.
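One way to picture the missing vernacular: today’s directories have no standard fields that mark an account as an agent, link it to the human it acts for, or count it against an agentic license. A hypothetical record might carry exactly those fields; everything below is an assumption, since no such cross-vendor standard exists yet.

```python
# Hypothetical sketch of a cross-vendor "agent account" record, per the
# gap described above. Every field here is an assumption; no standard
# definition of an agent account exists today.
from dataclasses import dataclass
from typing import Literal

@dataclass
class Account:
    account_id: str
    account_type: Literal["human", "agent"]  # the missing vernacular
    acts_on_behalf_of: str | None = None     # owning user, if an agent
    license_class: str | None = None         # e.g. "agentic-seat"

alice = Account("u-1", "human")
helper = Account("a-7", "agent",
                 acts_on_behalf_of="u-1", license_class="agentic-seat")
```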

Sea Ltd (NYSE: SE)

Sea’s management is using AI in Shopee to understand shoppers’ queries better and to help sellers enhance product listings, and these AI initiatives have improved purchase conversion rates and sellers’ willingness to spend on advertising; management has upgraded Shopee’s chatbots with AI and this led to meaningful improvement in customer service satisfaction score and customer service cost-per-contact; management is using AI to improve the shopper return-refund process and has seen a 40% year-on-year decrease in resolution times in Asia markets; management thinks Shopee is still early in the AI adoption curve

We continue to adopt AI to improve service quality in a practical and effective manner. By using large language models to understand queries, we have made search and discovery more accurate, helping users find relevant products faster. We provide our sellers with AI tools to enhance product listings by improving descriptions, images, and videos. These initiatives have improved purchase conversion rates while also making sellers more willing to spend on ads, boosting our ad revenue…

… After upgrading our chatbots with AI, we saw a meaningful increase in our customer service satisfaction score over the past year, and a reduction in our customer service cost-per-contact by nearly 30% year-on-year. We also used large language model capabilities to enhance our buyer return-refund process, addressing a key e-commerce pain-point. In the fourth quarter, we improved resolution times in our Asia markets by more than 40% year-on-year, with nearly six in ten cases resolved within one day. We believe we are still early in the AI adoption curve and remain committed to exploring AI-driven innovations to improve efficiency and deliver better experiences for our users.

Sea’s management thinks the use of AI is helping Sea both monetise its services better, and save costs

[Question] I just wanted to get some color with regard to the benefit from AI. Are we actually seeing cost efficiency, i.e., the use of AI actually save a lot of the manual labor cost? So that helps to achieve a lot of cost savings? Or are we actually seeing the monetization is getting better coming from AI?

[Answer] We are seeing both. For example, in our search and recommendations, we use large language models to better understand user queries, making search and discovery a lot more accurate and helping users find relevant products faster… We are also using AI to understand the products a lot better. Historically, it was [text] matching, but now we can use the existing pictures, descriptions and reviews to generate a much richer understanding of a product. And all of this helps us match products to users’ intentions a lot better.

We also have a lot of AIGC, AI-generated content, on our platform. We provide that as a tool for our sellers to produce images, product descriptions and videos that are a lot better compared to what they had before.

And both of these increased our conversions meaningfully on our platform.

On the other side, on the cost savings side, I think in Forrest’s opening remarks we talked about the chatbot. If you look at our queries, about 80% of the queries are answered by the chatbot already, which is a meaningful cost saving for our operations. I think that’s also why you can see that our cost management for e-commerce is doing quite well. Even for the 20% answered by agents, we have an AI tool that helps the agents understand the context a lot better, so they can respond a lot faster to customers…
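The query-understanding piece Sea describes can be pictured as an LLM mapping a colloquial shopper query to structured search filters before product matching runs. A rough sketch; the model, prompt, and output schema are illustrative assumptions, not Shopee’s implementation:

```python
# Illustrative sketch of LLM-based query understanding for e-commerce
# search. Model choice, prompt, and output schema are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def understand_query(raw_query: str) -> dict:
    """Map a colloquial shopper query to structured search filters."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content":
                "Extract JSON with keys category, attributes (list), "
                "and max_price (number or null) from the shopper query."},
            {"role": "user", "content": raw_query},
        ],
    )
    return json.loads(response.choices[0].message.content)

# understand_query("cheap waterproof running shoes under $40") might yield:
# {"category": "running shoes", "attributes": ["waterproof"], "max_price": 40}
```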

Tencent (OTC: TCEHY)

Tencent’s AI initiatives can be traced to 2016; management has been investing in Tencent’s proprietary foundation model, HunYuan, since early 2023; management sees HunYuan as the foundation for Tencent’s consumer and enterprise businesses

Our AI initiatives really trace back to 2016, when we first established our AI lab. Since early 2023, we have been investing heavily in our proprietary HunYuan foundation model, which forms an important technology foundation for our consumer and enterprise-facing businesses and will serve as a growth driver for us in the long run. Our investments in HunYuan enable us to develop end-to-end foundation model capabilities in terms of infrastructure, algorithm, training, alignment and data management, and also to tailor solutions for the different needs of internal and external use cases.

Tencent’s management has released multimodal HunYuan foundation models across image, video, and 3D generation; the multimodal HunYuan foundation models have received excellent scores in AI benchmarking

In addition to LLMs, we have released multimodal HunYuan foundation models with capabilities that span across image, video and 3D generation. HunYuan’s image generation models achieved the highest score from FlagEval in December of last year. In video generation, our model excels in video output quality and ranked first on Hugging Face in December of last year. 

Tencent’s management has been actively releasing Tencent’s AI models to the open source community

Our 3D generation model was the industry’s first open source model supporting text and image to 3D generation. In addition to that, we also contribute to the open source community actively and have open sourced a series of advanced models in the HunYuan family for 3D generation, video generation, large language and image generation. Several of these models have gained great popularity among developers worldwide.

For Tencent’s consumer-facing AI products, management has been utilising different AI models because they believe that a combination of models can handle complex tasks better than a single model; Tencent’s native AI application, Yuanbao, provides access to multiple models; Yuanbao’s DAU (daily active users) increased 20-fold from February 2025 to March 2025; management has been testing AI features in Weixin to improve the user experience and will be adding more AI features over time; management will be introducing a lot more consumer-facing AI applications in the future; management thinks consumer AI is in a very early stage, but they can see Yuanbao becoming a strong AI native assistant helping with deep research, and the Ema Copilot being a personal and collaborative library; management is looking to infuse AI into each of Tencent’s existing consumer products

Going to our consumer-facing AI products. We adopt a multimodal strategy to provide the best AI experience to our users, so we can leverage all available models to serve different user needs. We need this because different AI models are optimized for different capabilities, performance metrics and use cases and a combination of various models can handle complex tasks better than a single model…

…On the product front, our AI native application, Yuanbao, provides access to multiple models, including Chain of Thought reasoning models such as HunYuan T1 and DeepSeek R1 and fast-thinking model HunYuan Turbo S with the option of integrating web search results. Yuanbao search results can directly access high-quality proprietary content from Tencent ecosystem, such as official accounts and video accounts. By leveraging HunYuan’s multimodal capabilities, Yuanbao can process prompts in images, voice and documents in addition to text. Our cloud infrastructure supports stable and uncapped access to leading models. From February to March, Yuanbao’s DAU increased 20-fold to become the third highest AI native mobile application in China by DAU…

…We have also started testing AI features in Weixin to enhance user experience, such as for search, language input and content generation and we will be adding more AI features in Weixin going forward…

…We actually have a whole host of different consumer-facing applications and you should expect more to come. I think AI is actually in a very early stage. So it’s really hard to talk about what the eventual state would look like. But I would say, one, each product will continue to evolve into very useful and even more powerful products for users. So Yuanbao can be sort of a very strong AI native assistant and the Ema copilot could be your personal library and also a collaborative library for team collaborations. And Weixin can have many, many different features to come, right? And in addition to these products, I think our other products would have AI experiences, including QQ, including browser and other products. So I think we would see more and more AI — consumer AI-facing products. And at the same time, each one of the products will continue to evolve…

…Each one of our products would actually try to look for unique use cases in which they can leverage AI to provide a great user experience to their users…

…Yuanbao, well, right now, it is a chatbot and search. But over time, I think it will proliferate into an all-capable AI assistant with many different functionalities serving different types of people. It would range from students who want to learn, to knowledge workers who want to complete their work, and it would cover deep research, which allows people to do very deep research into different topics.
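The multi-model strategy described above is, at its core, a routing decision: send simple prompts to a cheap fast-thinking model and hard ones to a slower chain-of-thought model. A toy sketch of such a router; the heuristic and the model labels are illustrative assumptions, not Yuanbao’s actual logic:

```python
# Toy sketch of multi-model routing: cheap fast-thinking model for simple
# prompts, chain-of-thought reasoning model for complex ones. The routing
# heuristic and model labels are illustrative assumptions.
REASONING_HINTS = ("prove", "step by step", "plan", "compare", "why")

def route(prompt: str) -> str:
    """Pick a model tier (stand-ins for e.g. HunYuan Turbo S for fast
    thinking vs. HunYuan T1 / DeepSeek R1 for reasoning)."""
    needs_reasoning = (
        len(prompt) > 400
        or any(hint in prompt.lower() for hint in REASONING_HINTS)
    )
    return "reasoning-model" if needs_reasoning else "fast-model"

print(route("What's the weather like today?"))                 # fast-model
print(route("Compare these two pension plans step by step."))  # reasoning-model
```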

Tencent’s management thinks that there are advantages to both developing Tencent’s own foundation models and using 3rd-party models

By investing in our own foundation models, we are able to fully leverage our proprietary data to tailor solutions to meet customized internal and customer needs, while at the same time, making use of external models allowed us to benefit from innovations across the industry.

Tencent’s management has been accelerating AI integration into Tencent’s cloud businesses, including its infrastructure as a service business, its platform as a service business, and its software as a service business; the AI-powered transcription and meeting summarisation functions in Tencent Meeting saw a year-on-year doubling in monthly active users to 15 million

We have been accelerating AI integration into our cloud business across our infrastructure, platform and Software as a Service solutions.

Through our Infrastructure as a Service solutions, enterprise customers can achieve high-performance AI training and inference capabilities at scale and developers can access and deploy mainstream foundation models.

For Platform as a Service, PaaS, our TI platform supports model fine-tuning and inference demands with flexibility, and provides powerful solutions supporting enterprise customers in customizing AI assistants using their own proprietary data, and developers in generating mini programs and mobile applications through natural language prompts.

Our SaaS products increasingly benefit from AI-powered tools. Real-time transcription and meeting summarization functions in Tencent Meeting gained significant popularity, resulting in monthly active users for these AI functions doubling year-on-year to 15 million. Tencent Docs also enhances user productivity in content generation and processing.

Tencent’s AI cloud revenue doubled in 2024, despite management having limited the availability of GPUs for cloud services in preference for internal use-cases for ad tech, foundation model training, and inference for Yuanbao and Weixin; management has stepped up the purchase of GPUs in 2024 Q4 and expects the revenue growth of cloud services to accelerate as the new GPUs are deployed for external use cases; Tencent’s annual capital expenditures increased more than 3x to US$10.7 billion in 2024, with a notable uplift in 2024 Q4 because of higher purchases of GPUs; management believes the step-up in capex in 2024 Q4 is to a new higher steady state

In 2024, our AI cloud revenue approximately doubled year-on-year. Increased allocation of GPUs to internal use cases, initially for ad tech and foundation model training and more recently for AI inference for Yuanbao and Weixin, has limited our provision of GPUs to external clients and thus constrained our cloud services revenue growth. For external workloads, we have prioritized available GPUs towards high-value use cases and clients. Since the fourth quarter of 2024, we have stepped up our purchase of GPUs. And as we deploy these GPUs, we expect to accelerate the revenue growth of our overall cloud services…

…As the capabilities and benefits of AI become clearer, we have stepped up our AI investments to meet our internal business needs, train foundation models and support surging demand for inference from our users. To consolidate our resources around this all-important AI effort, we have reorganized our AI teams to sharpen focus on both fast product innovation and deep model research. Matching our stepped-up execution momentum and decision-making velocity, we increased annual CapEx more than threefold to USD 10.7 billion in 2024, equivalent to approximately 12% of our revenue, with a notable uplift in the fourth quarter of the year as we bought more GPUs for both inference needs as well as for our cloud services…

…We did step up CapEx to a new sort of higher steady state in the fourth quarter of last year…

…Part of the reason why you see such a big step up in terms of the CapEx in the fourth quarter is because we have a bunch of rush orders for GPUs for both inference as well as for our cloud service. And we would only be able to capture the large increase in terms of IaaS service demand when we actually install these GPUs into the data center, which would take some time. So I would say we probably have not really captured a lot of that during the first quarter. But over time, we will capture quite a bit of it with the arrival and installation of the GPUs.

Tencent’s management already sees positive returns for Tencent from their investment in AI; the positive returns come in 3 areas, namely, in advertising, in games, and in video and music services; in advertising, Tencent has been using AI to approve ad content more efficiently, improve ad targeting, streamline the ad creative process for advertisers, and deliver higher return on investment for advertisers; Tencent’s marketing services experienced revenue growth of 20% in 2024 because of AI integration, despite a challenging macro environment; in games, Tencent is using AI to improve content production efficiency and build in-game chat bots, among other uses; in video and music services, Tencent is using AI to improve productivity in content creation and effectively boost content discovery

We believe our investment in AI has already been generating positive returns for us…

…For advertising, we have enhanced our advertising system with neural network AI capabilities since 2015. We rebuilt our ad tech platform using large model capabilities since 2020, enabling long-sequence user behavior analysis across multiple properties, which resulted in increased user engagement and higher click-through rates. Since 2023, we have been adding large language model capabilities to facilitate more efficient approvals of ad content, to better understand merchandise categories and users’ commercial intent for more precise ad targeting, and to provide generative AI tools for advertisers to streamline the ad creative process. Leveraging AI-powered ad targeting capabilities and generative AI ad creative solutions, our marketing services business is already a clear beneficiary of AI integration, with revenue growth of 20% in 2024 amid a challenging macro environment.

In games, we have adopted machine learning technology in our PvP games since 2017. We leveraged AI in games to optimize the matching experience, improve game balance and facilitate AI coaching for new players, empowering our evergreen games strategy. Our games business is now integrating large language model capabilities to enhance 3D content production efficiency and to empower in-game chatbots.

For our video and music services, we’re leveraging AI to improve productivity in animation, live action video and music content creation. Our content recommendation algorithms are powered by AI and are proven effective in boosting content discovery. These initiatives enable us to better unlock the potential of our great content platforms…

…Across pretty much every industry we monitor, the AI enhancements we’re deploying are delivering superior return on investment for advertisers versus what they previously enjoyed and versus what’s available elsewhere.
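
To make Tencent’s “long sequence user behavior analysis” remark concrete, here is a minimal, illustrative sketch of the general technique: attention-pooling a user’s sequence of behavior embeddings against a candidate ad’s embedding to score click likelihood. Every name, shape, and number below is a hypothetical placeholder; Tencent’s production ad models are proprietary and far more sophisticated.

```python
# A toy "long sequence user behavior" scorer for ad targeting.
# All shapes and names are hypothetical, for illustration only.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def ctr_score(behavior_seq: np.ndarray, ad_emb: np.ndarray) -> float:
    """behavior_seq: (seq_len, dim) embeddings of past user actions;
    ad_emb: (dim,) embedding of the candidate ad."""
    # Attention weights: how relevant each past behavior is to this ad.
    weights = softmax(behavior_seq @ ad_emb / np.sqrt(ad_emb.size))
    user_interest = weights @ behavior_seq   # (dim,) pooled interest vector
    logit = float(user_interest @ ad_emb)    # affinity between interest and ad
    return 1.0 / (1.0 + np.exp(-logit))      # squash to a click probability

rng = np.random.default_rng(0)
seq = rng.normal(size=(500, 64))   # a "long" sequence of 500 behaviors
ad = rng.normal(size=64)
print(f"CTR estimate: {ctr_score(seq, ad):.4f}")
```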

Tencent’s management expects to further increase capital expenditure in 2025 and for capital expenditure to be a low-teens percentage of revenue for the year; while capital expenditure in 2025 is expected to increase, the rate of growth will slow down significantly

We intend to further increase our capital expenditures in 2025 and expect our CapEx to account for a low-teens percentage of our revenue…

…[Question] You guided to a CapEx-to-revenue ratio in the low teens for 2025, which is a similar ratio as for ’24. So basically, this guidance implies a significant slowdown of CapEx growth.
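
For readers who want the implied arithmetic behind that question, here is a back-of-envelope sketch using only the figures quoted above; the 2025 revenue-growth rate and the exact “low teens” ratio are my assumptions for illustration, not company guidance.

```python
# Back-of-envelope math behind the "significant slowdown" point.
capex_2024 = 10.7                   # USD billions, quoted above
capex_ratio_2024 = 0.12             # ~12% of revenue, quoted above
implied_rev_2024 = capex_2024 / capex_ratio_2024   # ~USD 89B

assumed_rev_growth = 0.08           # ASSUMPTION, for illustration only
capex_ratio_2025 = 0.13             # "low teens" placeholder, an assumption
implied_capex_2025 = implied_rev_2024 * (1 + assumed_rev_growth) * capex_ratio_2025

print(f"Implied 2024 revenue: ~${implied_rev_2024:.0f}B")
print(f"Illustrative 2025 CapEx: ~${implied_capex_2025:.1f}B "
      f"({implied_capex_2025 / capex_2024 - 1:.0%} growth vs 2024's ~3x step-up)")
```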

Tencent’s management sees several nuances on the impact to Tencent’s profit margins from the higher AI capital expenditures expected, but they are optimistic that Tencent will be able to protect its margins; the AI capital expenditures go into 4 main buckets, namely, (1) ad tech and games, (2) large language model training, (3) renting out GPUs in the cloud business, and (4) consumer-facing inference; management sees good margins in the 1st bucket, decent margins in the 3rd bucket, and potentially some margin-pressure in the 4th bucket; but in the 4th bucket, management sees (1) the potential to monetise consumer-facing inference through a combination of advertising revenue and value-added services, and (2) avenues to reduce unit costs through software and better algorithms

[Question] As we step up the CapEx on AI, our margin will inevitably be dragged by additional depreciation and R&D expenses. Over the past few years, we have seen a meaningful increase in margin as we focus on high-quality growth. So going forward, how should we balance between growth and profitability improvement?

[Answer] It’s worth digging into exactly where that CapEx is going to understand whether the depreciation becomes a margin pressure or not. So the most immediate use of the CapEx is GPUs to support our ad tech and, to a lesser extent, our games businesses. And you can see from our results, and hear from what Martin talked about, that that CapEx actually generates good margins, high returns.

A second use of CapEx was GPUs for large language model training…

…Third, there’s CapEx related to our cloud business, where we buy these GPU servers, we rent them out to customers, and we generate a return. It may not be the highest-return business in our portfolio but, nonetheless, it’s a positive return. It covers the cost of the GPUs and, therefore, the attendant depreciation.

And then finally, where I think there is potentially short-term pressure is the CapEx for 2C [to-consumer] inference. That is an additional cost pressure, but we believe it’s a manageable cost pressure because that CapEx is a subset of the total CapEx. And we’re also optimistic that, over time, the 2C inference activity that we’re generating, just like previous activity within different Tencent platforms, will be monetized through a combination of advertising revenue and value-added services. So overall, while we understand that you have questions around the step-up in CapEx and how that translates into profitability over time, we’re actually quite optimistic that we can continue to grow the business while protecting margins…

…In inference for consumer-facing products, there are actually a lot of avenues through which we can reduce the unit cost by technical means, by software and by better algorithms. So I think that’s also a factor to keep in mind.
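
A toy illustration of the “reduce the unit cost by software and better algorithms” point: serving cost per token is roughly GPU cost divided by throughput, so throughput gains from better batching, caching, or quantization cut unit costs without buying more GPUs. All figures below are hypothetical.

```python
# Unit inference cost falls linearly as serving throughput rises,
# with no new GPUs. Numbers are hypothetical placeholders.
def cost_per_million_tokens(gpu_hour_cost: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_cost / tokens_per_hour * 1_000_000

baseline = cost_per_million_tokens(gpu_hour_cost=2.50, tokens_per_second=1_000)
# Suppose better batching / KV-caching / quantization triples throughput:
optimized = cost_per_million_tokens(gpu_hour_cost=2.50, tokens_per_second=3_000)
print(f"baseline:  ${baseline:.3f} per 1M tokens")
print(f"optimized: ${optimized:.3f} per 1M tokens")  # one-third the unit cost
```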

Tencent’s management believes that the AI industry is now getting much higher productivity on large language model training from existing GPUs without needing to add additional GPUs at the pace previously expected, as a result of DeepSeek’s breakthroughs; previously, the belief was that each new generation of large language models would require an order of magnitude more GPUs; Tencent’s AI-related capital expenditure is the largest amongst Chinese technology companies; management thinks that Chinese technology companies are spending less on capital expenditure as a percentage of revenue than Western peers because Chinese companies have been prioritizing efficient utilization of GPUs without impairing the ultimate effectiveness of the AI technology developed

There was a period of time last year when there was a belief that every new generation of large language model required an order of magnitude more GPUs. That period ended with the breakthroughs that DeepSeek demonstrated. And now the industry, and we within the industry, are getting much higher productivity on large language model training from existing GPUs without needing to add additional GPUs at the pace previously expected…

…There was a period last year when people asked us if our CapEx was big enough relative to our China peers and relative to our global peers. And now, out of the listed companies, I think we had the largest CapEx of any China tech company in the fourth quarter. So we’re at the forefront among our China peers. In general, the China tech companies are spending less on CapEx as a percentage of revenue than some of their Western peers. But we believe that’s because, for some time, the Chinese companies have generally been prioritizing efficient utilization of their GPU servers. And that doesn’t necessarily impair the ultimate effectiveness of the technology that’s being developed. I think DeepSeek’s success really symbolized, solidified, and demonstrated that reality.

Tencent’s management thinks AI can benefit Tencent’s games business in 3 ways, namely, (1) a direct, more short-term benefit in helping game developers be more productive, (2) an indirect, more long-term benefit in terms of games becoming an important element of human expression in an AI-dominated world, and (3) a benefit in allowing evergreen games to be more evergreen

We do believe that games benefit in a direct and potentially a less direct way from AI technology enhancements. The direct way is game developers using AI to assist them in creating more content more quickly and serving more users more effectively. And then the indirect way, which may be more of a multi-decade story rather than a second-half-of-this-year story, is that as humanity uses AI more broadly, we think there’ll be more time and also more desire for high-agency activities among people who are now empowered by AI. And so one of the best ways for them to express themselves in a high-agency way rather than a passive way is through interactive entertainment, which is games…

…We actually felt AI would allow evergreen games to be more evergreen. And we are already seeing how AI can help us to execute and magnify our evergreen strategy. Part of it is within production: you can actually produce great content now within a shorter period of time, so that you can keep updating the games with a higher frequency of high-quality content. In the PvE experience, when you have smarter bots, you actually make the game more exciting and more like PvP. And within PvP, a lot of the matching and balancing and coaching of new users can actually be done in a much better way when you apply AI.
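
As a concrete, deliberately simple stand-in for the smarter PvP matching Tencent alludes to, here is a classical Elo-style matchmaking sketch: sort the queue by rating and pair adjacent players, which keeps matches balanced. A production system would use richer learned skill models; this baseline is only illustrative.

```python
# A minimal rating-based matchmaker (classical baseline, not Tencent's system).
from typing import List, Tuple

def make_matches(queue: List[Tuple[str, float]]) -> List[Tuple[str, str]]:
    """queue: (player_id, rating) pairs; returns balanced pairings."""
    ordered = sorted(queue, key=lambda p: p[1])  # sort by rating
    # Adjacent players in rating order are the most evenly matched.
    return [(ordered[i][0], ordered[i + 1][0]) for i in range(0, len(ordered) - 1, 2)]

queue = [("alice", 1510.0), ("bob", 1200.0), ("carol", 1485.0), ("dave", 1230.0)]
print(make_matches(queue))  # [('bob', 'dave'), ('carol', 'alice')]
```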

Tencent’s management sees strong competitive advantages that Tencent has when it comes to AI agents because of the large user base of Tencent’s products and the huge variety of activities that happen within Tencent’s products

We would be able to build stand-alone AI agents by leveraging models that are of great quality, and at the same time by leveraging the fact that we have a lot of consumers on our different software platforms like our browser and, over time, Yuanbao. But at the same time, even within Weixin and within QQ, we can have AI agents. And the AI agents can leverage the ecosystem within the apps and provide really great service to our users by completing complex tasks. If you look at Weixin, for example, Weixin has a lot of users, very long user time per day, as well as a high frequency of users opening up the app; that’s one advantage. The second advantage is that the activities within Weixin are actually very diversified. It’s not just entertainment, it’s not just transactions; it’s social communication and content, and a lot of people conduct their work within Weixin, a lot of people conduct their learning within Weixin, and a lot of transactions go through Weixin. And there’s a multitude of Mini Programs, which allow all sorts of different activities to be carried out. So if you look at the Mini Program ecosystem, we can easily build an agent, based on a model, that can connect to a lot of the different Mini Programs and have activities and complex tasks completed for our users. So I think those are all very distinctive advantages that we have.
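
A minimal sketch of the in-app agent idea described above: route a user’s request to the right Mini-Program-style capability and return the completed task. The registry, capability names, and keyword routing are hypothetical placeholders; a real agent would use an LLM for intent routing and the actual Mini Program interfaces.

```python
# Toy in-app agent: route an intent to a Mini-Program-like capability.
# All names are hypothetical; this is not Tencent's implementation.
from typing import Callable, Dict

def book_restaurant(query: str) -> str:
    return f"Booked a table via a dining Mini Program for: {query}"

def hail_ride(query: str) -> str:
    return f"Requested a car via a ride-hailing Mini Program for: {query}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "dining": book_restaurant,
    "ride": hail_ride,
}

def route_intent(user_request: str) -> str:
    """Keyword routing as a stand-in; a production agent would use an LLM here."""
    if "table" in user_request or "dinner" in user_request:
        return "dining"
    return "ride"

def agent(user_request: str) -> str:
    tool = TOOLS[route_intent(user_request)]
    return tool(user_request)

print(agent("book a dinner table for two at 7pm"))
```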

Tencent’s management believes that AI search will eventually replace traditional search

At a high level, if we look at the history of web search subsuming web directories, and if we look at our own behavior with AI prompts vis-a-vis traditional search, I think it’s possible that AI search will subsume traditional search because, ultimately, web directories, traditional search, and AI prompts all represent mechanisms for accessing the Internet’s knowledge graph.

Tencent’s management believes that in China, AI chatbots will be monetised first through performance advertising followed by value-added services, as opposed to in the West, where AI chatbots have been monetised first through subscription models followed by performance advertising

In terms of how the AI prompt will be monetized, time will tell, but I think we can already see that in the Western world, the first monetization is through subscription models and then, over time, performance advertising will follow. I think in China, it will start with performance advertising, and then value-added services will follow.

Veeva Systems (NYSE: VEEV)

Veeva’s management’s AI strategy for Veeva is to have its Commercial Cloud, Development Cloud, and Quality Cloud be the life sciences industry’s standard core systems of record; management is making Veeva’s data readily available for the development of AI applications by Veeva and 3rd parties through the Direct Data API released in 2024; management is seeing good uptake on the Direct Data API; the Direct Data API will be free to all of Veeva’s customers because management wants people to be building on the API; management found a way to offer the Direct Data API with fewer compute resources than originally planned for; Veeva is already using the Direct Data API internally, and more than 10 customers are already using it; it takes time for developers to get used to the Direct Data API, because it’s a fundamentally new type of API, but it’s a great API; management believes that Direct Data API will enable the life sciences industry to leverage their core data through AI faster than any other industry

We also executed well on our AI strategy. Commercial Cloud, Development Cloud, and Quality Cloud are becoming the industry’s standard core systems of record. With significant technology innovation including the Direct Data API released this year, we are making the data from our applications readily available for the development of relevant, timely AI solutions built by Veeva, our customers, and partners…

…We are seeing good uptake of the Direct Data API. And yes, as you mentioned, we recently announced that it’s going to be free to all of our customers. The reason there is we want everybody building on that API. It’s just a much better, faster API for many use cases, and we found a way to offer it where it was not going to consume as many compute resources as we thought it would…

…We are using it internally, for example, for connecting different parts of our clinical suite and different parts of our safety suite together, and our partners are starting to do it. We have more than 10 customers that are already doing it. Some of them are large customers. It takes some time because it’s a different paradigm for integration. People have been using a hammer for a long time, and now you’re giving them a jackhammer and they’ve got to learn how to use it. But we are super enthused. It’s a fundamentally new type of API where you can get all of the data out of your Vault super quickly…

…I’m really pleased about what we’re doing for the life sciences industry because many of the industry’s core systems are Veeva systems, and now those core systems are going to be enabled with this fundamentally new API that’s going to allow the industry to leverage its core data faster than any other industry.
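
To show the integration paradigm shift management describes, bulk data extraction rather than many record-level calls, here is a hedged sketch of what consuming a Direct-Data-style API can look like: pull a full snapshot once, then replay incremental files into a local store that AI applications query. The host, endpoint path, and parameters below are illustrative placeholders, not Veeva’s documented contract; consult Veeva’s developer documentation for the real API.

```python
# Illustrative bulk-extract consumption pattern. Endpoint paths, parameter
# names, and the auth header are ASSUMPTIONS for this sketch, not Veeva's
# documented Direct Data API contract.
import requests

BASE = "https://myvault.veevavault.com/api/v1"  # hypothetical Vault host/version
HEADERS = {"Authorization": "<session-id>"}      # placeholder session token

def list_direct_data_files(extract_type: str) -> list:
    """List available extract files of a given type (hypothetical parameter)."""
    resp = requests.get(
        f"{BASE}/services/directdata/files",
        params={"extract_type": extract_type},
        headers=HEADERS,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Typical loop: load the latest full extract once, then replay incrementals
# into a local warehouse where AI applications can query the data directly.
for f in list_direct_data_files("incremental"):
    print(f.get("name"), f.get("size"))
```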

Reminder from management that Veeva recently announced 3 new AI solutions, namely, Vault CRM Bot, CRM Voice Control, and MLR Bot; management has more AI solutions in the pipeline, but the timing for release is still unclear; management wants to invest more in AI solutions and they think the company has strong leadership in that area

We announced new Veeva AI Solutions including Vault CRM Bot, CRM Voice Control, and MLR Bot…

…Now with CRM Voice Control, which we’ll be bringing out this year, and also CRM Bot and MLR Bot (medical legal regulatory review), we have quite a few others in the plan, too. We don’t know exactly which ones we’ll bring out when, but we’re putting more investment into AI solutions. We centralized the group around that so we can develop more core competency around AI, and I have a strong leader there.

Veeva’s management was initially a little skeptical of AI because of the amount of money flowing in, and the amount of hype surrounding it

[Question] I want to start with the AI offerings that you’ve built out, Peter. If we rewind the tape back a year, there was a little bit of a perception from the investment community that you were coming off as maybe a little bit skeptical on AI, but now you’ve come out with a lot of these products. Can you walk us through what’s driven the desire or the momentum to push out these products so quickly?

[Answer] AI is certainly captivating technology, right? So much money going into it, so much progress, and so much hype.

Veeva’s management thinks AI is shaking out the way they expected it to, which is the existence of a handful of large language models that specialize in different areas; management also thinks the development of AI has become more stable

If we just stay at that level, I’m really pleased that things are starting to shake out roughly how we thought they were going to shake out. There’s not going to be one large language model; there are going to be multiple. There’s not going to be 50, but there’s going to be a good handful, and they’re going to specialize in different areas. And it’s not so unstable anymore, where you wake up and everything changes. DeepSeek came out. Yes, well, guess what? The world keeps turning. NVIDIA is going to have their own model? That’s okay, and the world keeps turning. So I think it’s starting to settle out.

Veeva’s management sees the infrastructure layer of AI as being really valuable, but they also see a lot of value in building specific use cases on top of the infrastructure layer, and that is where they want Veeva to play

So it’s settling out that these core large language models are going to be at the platform level, and that’s super valuable, right? That core infrastructure level is not where companies like Veeva play; it’s very valuable, but there’s a lot of great value in specific use cases on top that can be used in the workflow. So that’s what we’re doing now, focusing on our AI solutions.
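
As one way to picture “specific use cases on top” of a general model, here is a hedged sketch of a thin workflow layer that grounds a generic chat model in CRM-sourced context for pre-call planning (a use case management mentions later in this section). The client library, model name, and prompt are illustrative assumptions, not Veeva’s implementation.

```python
# A thin, domain-aware layer over a generic chat model. The OpenAI client,
# model name, and prompt wording are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def precall_brief(hcp_name: str, recent_interactions: list[str]) -> str:
    """Draft a concise pre-call planning brief from CRM-sourced context."""
    context = "\n".join(f"- {note}" for note in recent_interactions)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You draft concise, compliant pre-call briefs for pharma reps."},
            {"role": "user",
             "content": f"HCP: {hcp_name}\nRecent interactions:\n{context}\n"
                        "Summarize and suggest three discussion topics."},
        ],
    )
    return response.choices[0].message.content

print(precall_brief("Dr. Chen", ["Asked about dosing data", "Prefers email follow-ups"]))
```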

Veeva’s management is using AI internally but it’s still early days and it has yet to contribute improvements to Veeva’s margin; Veeva’s expected margin improvements for 2025 (FY2026) are not related to AI usage

[Question] Going back to the topic of AI… how you’re leaning into kind of internal utilization too, if we think about kind of some of the margin strength you’re delivering throughout the business?

[Answer] Around the internal use of AI and the extent to which that was contributing to margins, I think the short answer there is it’s an area that we’re really excited about internally as well. We’re building strategies around it, but it’s not a major contributor to the margin expansion that we saw in Q4 or in the coming year. So it’s something we’re looking into and building strategies around, but it’s not something we’re counting on to deliver on this year’s guidance.

In 2023 and 2024, Veeva’s management was seeing customers get distracted from core technology spending because the customers were chasing AI; management is no longer seeing the AI distraction at play

I believe we called it AI disruption before; maybe that was 18 months or a year ago. I think that’s largely behind us. Our customers have settled into what AI is and what it does. They’re still doing some innovation projects, but it’s not consuming them or distracting from the core work. So I think we’re largely through that period of AI distraction now.

Veeva’s management thinks that Veeva is the fastest path to AI for a life sciences industry CRM because any AI features will have to be embedded in the workflow of a life sciences company

It turns out Veeva is the fastest path to AI that you can use in CRM because it has to be done in the workflow of what you’re doing. This is not some generic AI. This is AI for pre-call planning and for compliance, for the things that a pharmaceutical rep does in a compliant way, based on the data sources that are needed in CRM. So Veeva is the fastest path to AI.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Meituan, MongoDB, Okta, Tencent, and Veeva Systems. Holdings are subject to change at any time.
