More Of The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q3). In it, I shared commentary from the leaders of technology companies that I follow or have a vested interest in, given in earnings conference calls for the third quarter of 2024, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2024’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management introduced multiple generative AI models in the Firefly family in 2024 and now has a generative video model; Adobe’s generative AI models are designed to be safe for commercial usage; the Firefly models are integrated across Adobe’s software products, which brings value to creative professionals across the world; Firefly has powered 16 billion generations (12 billion in 2024 Q2) since its launch in March 2023 and each month in 2024 Q3 has set a new record in generations; the new Firefly video model is in limited beta, but has already gathered massive customer interest (the model has driven a 70% increase in Premiere Pro beta users since its introduction) and will be generally available in early-2025; recent improvements to the Firefly models include 4x faster image generation; enterprises such as Tapestry and Pepsi are using Firefly Services to scale content production; Firefly is the foundation of Adobe’s AI-related innovation; management is using Firefly to drive top-of-funnel user-acquisition for Adobe

2024 was also a transformative year of product innovation, where we delivered foundational technology platforms. We introduced multiple generative AI models in the Adobe Firefly family, including imaging, vector design and, most recently, video. Adobe now has a comprehensive set of generative AI models designed to be commercially safe for creative content, offering unprecedented levels of output quality and user control in our applications…

…The deep integration of Firefly across our flagship applications in Creative Cloud, Document Cloud, and Experience Cloud is driving record customer adoption and usage. Firefly-powered generations across our tools surpassed 16 billion, with every month this past quarter setting a new record…

…We have made major strides with our generative AI models with the introduction of Firefly Image Model 3, enhancements to our vector models, richer design models, and the all-new Firefly Video Model. These models are incredibly powerful on their own, and their deep integration into our tools like Lightroom, Photoshop, Premiere, InDesign and Express has brought incredible value to millions of creative professionals around the world…

…The launch of the Firefly Video Model and its unique integration in Premiere Pro, in limited public beta, garnered massive customer interest, and we look forward to making it more broadly available in early 2025. This feature drove a 70% increase in the number of Premiere Pro beta users since it was introduced at MAX. Enhancements to Firefly image, vector, and design models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Premiere Pro and Adobe Express…

…Firefly Services adoption continued to ramp as enterprises such as Pepsi and Tapestry use it to scale content production, given the robust APIs and ease of creating custom models that are designed to be commercially safe…

…This year, we introduced Firefly Services. That’s off to a great start. We have a lot of customers that are using that. A couple we talked about on the call include Tapestry. They’re using it for scaled content production. Pepsi, for their Gatorade brand, is enabling their customers to personalize any merchandise that they’re buying, in particular starting with Gatorade bottles. And these have been very, very productive for them, and we are seeing this leveraged by a host of other companies for everything from localization at scale to personalization at scale to user engagement or just raw content production at scale as well…

…You’re exactly right in terms of Firefly is a platform and a foundation that we’re leveraging across many different products. As we talked about, everything from Express and Lightroom and even Acrobat on mobile for a broad base of users, but then also in our core Creative products, Photoshop, Illustrator, Premiere. And as we’ve alluded to a number of times on this call, with the introduction of video, even a stand-alone offer for Firefly that we think will be more valuable from a tiering perspective there. And then into Firefly Services through APIs in connection to GenStudio. So we are looking at leveraging the power of this AI foundation in all the activities…

…We see that when we invest in mobile and web, we are getting some very positive signals in terms of user adoption and user conversion rate. So we’re using Firefly very actively to do that.
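Since Firefly Services is an API product, a short sketch may help picture what "content production at scale" looks like in practice. This is a minimal sketch only: the endpoint, header names and body fields below are assumptions to be checked against Adobe's Firefly Services documentation, not a confirmed description of Adobe's API.

```python
# Hypothetical sketch: batch-generating localized image variants via
# Firefly Services. Endpoint and field names are assumed, not confirmed.
import requests

FIREFLY_ENDPOINT = "https://firefly-api.adobe.io/v3/images/generate"  # assumed

def generate_image(access_token: str, client_id: str, prompt: str) -> dict:
    """Request one on-brand image variant from Firefly Services."""
    resp = requests.post(
        FIREFLY_ENDPOINT,
        headers={
            "Authorization": f"Bearer {access_token}",  # server-to-server OAuth token
            "x-api-key": client_id,
            "Content-Type": "application/json",
        },
        json={"prompt": prompt, "numVariations": 1},  # body fields assumed
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

# Localization at scale: one generation per target market.
for locale in ["en-US", "de-DE", "ja-JP"]:
    generate_image("<token>", "<client-id>", f"On-brand hero image, locale {locale}")
```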

Adobe’s management has combined content and data in Adobe GenStudio to integrate content creation with marketing, leading to an end-to-end content supply chain solution; the Adobe GenStudio portfolio has a new addition in Adobe GenStudio for Performance Marketing, which has seen strong customer demand since becoming generally available recently; management is expanding the go-to-market teams to sell GenStudio solutions that cut across the Digital Media and Digital Experience segments and early success has been found, with management expecting acceleration in this pipeline throughout FY2025 and beyond

We set the stage to drive an AI content revolution by bringing content and data together in Adobe GenStudio, integrating high-velocity creative expression with enterprise activation. The release of Adobe GenStudio for Performance Marketing integrates Creative Cloud, Express, and Experience Cloud and extends our end-to-end content supply chain solution, empowering freelancers, agencies, and enterprises to accelerate the delivery of content, advertising and marketing campaigns…

…We have brought our Creative and Experience Clouds together through the introduction of Firefly Services and GenStudio, addressing the growing need for scaled content production in enterprises…

… GenStudio enables agencies and enterprises to unlock new levels of creativity and efficiency across content creation and production, workflow and planning, asset management, delivery and activation and reporting and insights. 

Adobe GenStudio for Performance Marketing is a great addition to the GenStudio portfolio, offering an integrated application to create paid social ads, display ads, banners, and marketing e-mails by leveraging preapproved on-brand content. It brings together creative teams that define the foundational requirements of a brand, including guidelines around brand voice, channels, and images with marketing teams that need to deliver numerous content variations with speed and agility. We are seeing strong customer demand for Adobe GenStudio for Performance Marketing since its general availability at MAX…

… We’re expanding our enterprise go-to-market teams to sell these integrated solutions that cut across Digital Media and Digital Experience globally under the new GenStudio umbrella. We have seen early success for this strategy that included Express and Firefly Services in Q4. As we enable our worldwide field organization in Q1, we anticipate acceleration of this pipeline throughout the rest of the year and beyond.

Adobe’s management introduced AI Assistant in Acrobat and Reader in FY2024; users of AI Assistant completed their document-tasks 4x faster on average; AI Assistant is now available across desktop, web, and mobile; management introduced specialised AI for specific document-types and tasks in 2024 Q3 (FY2024 Q4); management saw AI Assistant conversations double sequentially in 2024 Q3; AI Assistant is off to an incredibly strong start and management sees it continuing to accelerate; AI Assistant allows users to have conversations with multiple documents, some of which are not even PDFs, and it turns Acrobat into a general-purpose productivity platform; the rollout of AI Assistant in more languages and documents gives Acrobat’s growth more durability

We took a major step forward in FY ’24 with the introduction of AI Assistant in Acrobat and Reader. AI Assistant and other AI features like Liquid Mode and Firefly are accelerating productivity through faster insights, smarter document editing and integrated image generation. A recent productivity study found that users leveraging AI Assistant completed their document-related tasks 4x faster on average. AI Assistant is now available in Acrobat across desktop, web, and mobile and integrated into our Edge, Chrome, and Microsoft Teams extensions. In Q4, we continued to extend its value with specialized AI for contracts and scanned documents, support for additional languages, and the ability to analyze larger documents…

… We saw AI Assistant conversations double quarter-over-quarter, driving deeper customer value…

… AI Assistant for Acrobat is off to an incredibly strong start and we see it continuing to accelerate…

…One of the big things that I think has been unlocked this year is moving beyond just looking at a PDF that you happen to be viewing to being able to look at and have a conversation with multiple documents, some of which don’t even have to be PDFs. So that transition gives us the ability to really take Acrobat and make it more of a general-purpose productivity platform…

…The thing I’ll add to that is the durability of that, to your point: as we roll that out in languages, as we roll it out across multiple documents, and as we roll it out in enterprises and B2B specifically. So again, significant headroom in terms of the innovation agenda of how Acrobat can be made even more meaningful as a knowledge tool within the enterprise.

Adobe’s management will soon introduce a new higher-priced Firefly offering that includes the video models; management thinks the higher-priced Firefly offering will help to increase ARPU (average revenue per user); management sees video generation as a high-value activity, which gives Adobe the ability to introduce higher subscription tiers that come with video generation; management sees consumption of AI services adding to Adobe’s ARR (annual recurring revenue) in 2 ways in FY2025, namely, (1) pure consumption-based pricing, and (2) consumption leading to a higher pricing-tier; management has learnt from pricing experiments for AI services and found that the right model for Adobe is a combination of access to features and usage-limits

We will soon introduce a new higher-priced Firefly offering that includes our video models as a comprehensive AI solution for creative professionals. This will allow us to monetize new users, provide additional value to existing customers, and increase ARPU…

…Video generation is a much higher-value activity than image generation. And as a result, it gives us the ability to start to tier Creative Cloud more actively there…

…You’re going to see “consumption” add to ARR in 2 or maybe 3 ways, more so in ’25 than in ’24. The first, and David alluded to this, is if you have a video offering, there will be pure consumption pricing associated with it. I think the second is in GenStudio and for enterprises, with respect to Firefly Services, which, again, I think David touched on how much momentum we are seeing in that business. So that is, in effect, a consumption business as it relates to the enterprise, so I think that will also continue to increase. And then I think you’ll see us with perhaps more premium price offerings. So the intention is that consumption is what’s driving the increased ARR, but it may be as a result of a tier in the pricing rather than a consumption model where people actually have to monitor it. So it’s just another way, much like AI Assistant is, of monetizing it, but it’s not like we’re going to be tracking every single generation for the user; it will just be at a different tier…

… What we’ve done over the last year, there’s been a bit of experimentation, obviously, in the core Creative applications. We’ve done the generative credits model. What we saw with Acrobat was this idea of a separate package and a separate SKU that created a tier that people were able to access the feature through. And as we learn from all of these, we think, as Shantanu had mentioned earlier, that the right tiering model for us is going to be a combination of access to certain features and usage limits on them. So the higher the tier, the more features you get and the more usage you get.

The Adobe Experience Platform (AEP) AI Assistant helps marketers automate tasks and generate new audiences and journeys

Adobe Experience Platform AI Assistant empowers marketers to automate tasks and generate new audiences and journeys. Adobe Experience Manager Generate Variations provides dynamic and personalized content creation natively through AEM, enabling customers to deliver more compelling and engaging experiences on their websites.

Adobe’s management thinks there are 3 foundational differences in the company’s AI models and what the rest are doing, namely, (1) commercially safe models, (2) incredible control of the models, and (3) the integration of the models into products

The foundational difference between what we do and what everyone else does in the market really comes down to 3 things: one is commercially safe, the way we train the models; two is the incredible control we bake into the model; and three is the integration that we make with these models into our products, increasingly, of course, in our CC flagship applications but also in Express and Lightroom and these kinds of applications, but also in Anil’s DX products as well. So that set of things is a critical part of the foundation and a durable differentiator for us as we go forward.

Adobe’s management is seeing that users are onboarded to products faster when using generative AI capabilities; management is seeing that users who use generative AI features have higher retention rates

We are seeing in the core Creative business, when people try something like Photoshop, the onboarding experience is faster to success because of the use of generative AI and generative capabilities. So you’ll start to see us continuing to drive more proliferation of those capabilities earlier in the user journeys, and that has proven very productive. We also noticed that, while we’ve always had good retention rates, the more people use generative AI, the longer they retain as well.

MongoDB (NASDAQ: MDB)

MongoDB’s management is seeing a lot of large customers want to run workloads, even AI workloads, in on-premise format

We definitely see lots of large customers who are very, very committed to running workloads on-prem. We even see some customers who want to run AI workloads on-prem…

… I think you have some customers who are very committed to running a big part of the estate on-prem. So by definition, if they’re going to build an AI workload, it has to be run on-prem, which means that they also need access to GPUs, and they’re doing that. And then other customers are basically renting GPUs from the cloud providers and building their own AI workloads.

MongoDB’s initiative to accelerate legacy app modernisation with AI (Relational Migrator) has seen a 50% reduction in the cost to modernise in its early days; customer interest in this initiative is exceeding management’s expectations; management expects modernisation projects to include large services engagements and MongoDB is increasing its professional services delivery capabilities; management is building new tools, using learnings from early service engagements, to accelerate future modernisation efforts; management has growing confidence that the modernisation motion will be a significant growth driver for MongoDB in the long term; there is a confluence of events, including the emergence of generative AI to significantly reduce the time needed for migration of databases, that makes the modernisation opportunity attractive for MongoDB; the buildout of MongoDB’s professional services capabilities will impact the company’s gross margin

We are optimistic about the opportunity to accelerate legacy app modernization using AI and are investing more in this area. As you recall, we ran a few successful pilots earlier this year, demonstrating that AI tooling, combined with professional services and our Relational Migrator product, can significantly reduce the time, cost and risk of migrating legacy applications onto MongoDB. While it’s early days, we have observed a more than 50% reduction in the cost to modernize. On the back of these strong early results, additional customer interest is exceeding our expectations.

Large enterprises in every industry and geography are experiencing acute pain from their legacy infrastructure and are eager for more agile, performant and cost-effective solutions. Not only are our customers excited to engage with us, they also want to focus on some of the most important applications in their enterprise, further demonstrating the level of interest and size of the long-term opportunity.

As relational applications encompass a wide variety of database types, programming languages, versions and other customer-specific variables, we expect modernization projects to continue to include meaningful services engagements in the short and medium term. Consequently, we are increasing our professional services delivery capabilities, both directly and through partners. In the long run, we expect to automate and simplify large parts of the modernization process. To that end, we are leveraging the learnings from early service engagements to develop new tools to accelerate future modernization efforts. Although it’s early days and scaling our legacy app modernization capabilities will take time, we have increased conviction that this motion will significantly add to our growth in the long term…

…The reason we’re so excited about the opportunity to go after legacy applications is that there’s a confluence of events happening. One is that the increasing cost and tax of supporting and managing these legacy apps just keep going up. Second, for many customers who are in regulated industries, the regulators are calling the fact that they’re running on these legacy apps a systemic risk, so they can no longer kick the can down the road. Third, also because they can no longer kick the can down the road, some vendors are going end-of-life, so they have to make a decision to migrate those applications to a more modern tech stack. Fourth, because GenAI is so predicated on data, and to build a competitive advantage you need to leverage your proprietary data, people want to access that data and be able to do so easily. And so that’s another reason for them to want to modernize…

…we always could help them very easily move the data and map the schema from a relational schema to a document schema. The hardest part was essentially rewriting the application. Now with the advent of GenAI, you can significantly reduce the time. One, you can use GenAI to analyze the existing code. Two, you can use GenAI to reverse engineer tests to test what the code does. And then three, you can use GenAI to build new code and then use those tests to ensure that the new code produces the same results as the old code. And so all that time and effort is suddenly cut in a meaningful way…

…We’re really building out that capacity in order to meet the demand that we’re seeing relative to the opportunity. We’re calling it out in particular because it has a gross margin impact, as that’s where it will typically show up.
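To make the three GenAI steps described above concrete, here is a minimal sketch of that analyze-test-rewrite loop. The llm() helper is hypothetical and stands in for whatever code-generation model a services team would actually use; nothing here is MongoDB's actual tooling.

```python
# Hypothetical sketch of the GenAI-assisted modernization loop described
# above: analyze legacy code, reverse-engineer tests, generate new code,
# and verify the new code against the tests.
import subprocess

def llm(prompt: str) -> str:
    """Hypothetical LLM call; stands in for any code-generation model API."""
    raise NotImplementedError

def modernize(legacy_source: str) -> str:
    # 1. Use GenAI to analyze what the existing code does.
    analysis = llm(f"Explain the behavior of this legacy module:\n{legacy_source}")
    # 2. Use GenAI to reverse-engineer tests that pin down current behavior.
    with open("test_behavior.py", "w") as f:
        f.write(llm(f"Write pytest tests capturing this behavior:\n{analysis}"))
    # 3. Use GenAI to build new code, then run the tests to confirm the new
    #    code produces the same results as the old code.
    new_code = llm(f"Rewrite this logic against MongoDB's document model:\n{analysis}")
    with open("new_module.py", "w") as f:
        f.write(new_code)
    result = subprocess.run(["pytest", "test_behavior.py"])
    assert result.returncode == 0, "new code must match legacy behavior"
    return new_code
```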

MongoDB’s management thinks that the company’s database is uniquely suited for the query-rich and complex data structures commonly found in AI applications; AI-powered recommendation systems have to consider complex data structures, beyond a customer’s purchase history; MongoDB’s database unifies source data, metadata, operational data and vector data all in 1 platform, providing a better developer experience; management thinks MongoDB is well-positioned for AI agents because AI agents that perform tasks need to interact with complex data structures, and MongoDB’s database is well-suited for this

MongoDB is uniquely equipped to query rich and complex data structures typical of AI applications. The ability of a database to query rich and complex data structures is crucial because AI applications often rely on highly detailed, interrelated and nuanced data to make accurate predictions and decisions. For example, a recommendation system doesn’t just analyze a single customer’s purchase but also considers their browsing history, peer group behavior and product categories, requiring a database that can query and analyze these complex data structures. In addition, MongoDB’s architecture unifies source data, metadata, operational data and vector data all in 1 platform, eliminating the need for multiple database systems and complex back-end architectures. This enables a more compelling developer experience than any other alternative…

…When you think about agents, there’s a job, then a project and then a task. Right now, the agents that are being rolled out are really focused on tasks, like, say, something from Sierra or some other companies that are rolling out agents. But you’re right, what they need to do is deal with being able to create a rich and complex data structure.

Now why is this important in AI? AI models don’t just look at isolated data points; they need to understand relationships, hierarchies and patterns within the data. They need to be able to essentially get real-time insights. For example, if you have a chatbot where a customer is trying to get an update on the order they placed 5 minutes ago because they may not have gotten any confirmation, your chatbot needs to be able to deal with real-time information. You need to be able to handle very advanced use cases: to do things like fraud detection or to understand behaviors in a supply chain, you need to understand intricate data relationships. All these things are consistent with what MongoDB offers. And so we believe that, at the end of the day, we are well positioned to handle this.

And the other thing that I would say is that we’ve embedded, in a very natural way, search and vector search. So we’re not just an OLTP [online transaction processing] database. We do text search and vector search, and that’s all one experience, and no other platform offers that, and we think we have a real advantage.
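That "all one experience" point is visible in code: the same collection can serve operational queries and vector retrieval through MongoDB's aggregation pipeline. Below is a minimal sketch with pymongo using the Atlas $vectorSearch stage; the index name, embedding field and vector values are illustrative assumptions.

```python
# Sketch: OLTP reads and Atlas Vector Search against the same collection.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")
orders = client["shop"]["orders"]

# Ordinary operational (OLTP) query.
latest = orders.find_one({"customer_id": 42}, sort=[("placed_at", -1)])

# Semantic retrieval over the same documents via the $vectorSearch stage.
question_embedding = [0.12, -0.07, 0.33]  # toy values; real embeddings are larger
similar = orders.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",   # assumed Atlas Vector Search index name
        "path": "embedding",       # assumed field holding document embeddings
        "queryVector": question_embedding,
        "numCandidates": 100,
        "limit": 5,
    }}
])
```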

In the AI market, MongoDB’s management is seeing most customers still being in the experimental stage, but the number of AI apps in production is increasing; MongoDB has thousands of AI apps on its platform, but only a small number have achieved enterprise-scale; there’s one AI app on MongoDB’s platform that has grown 10x since the start of 2024 and is a 7-figure workload today; management believes that as AI technology matures, there will be more AI apps that attain product-market fit but it’s difficult to predict when this will happen; management remains confident that MongoDB will capture its share of successful AI applications, as MongoDB is popular with developers building sophisticated AI apps; there are no compelling AI models for smartphones at the moment because phones do not have sufficient computing power

From what we see in the AI market today, most customers are still in the experimental stage as they work to understand the effectiveness of the underlying tech stack and build early proof-of-concept applications. However, we are seeing an increasing number of AI apps in production. Today, we have thousands of AI apps on our platform.  What we don’t yet see is many of these apps actually achieving meaningful product-market fit and therefore, significant traction. In fact, as you take a step back and look at the entire universe of AI apps, a very small percentage of them have achieved the type of scale that we commonly see with enterprise-specific applications. We do have some AI apps that are growing quickly, including one that is already a 7-figure workload that has grown 10x since the beginning of the year.

Similar to prior platform shifts, as the usefulness of AI tech improves and becomes more cost-effective, we will see the emergence of many more AI apps that do nail product-market fit, but it’s difficult to predict when that will happen more broadly. We remain confident that we will capture our fair share of these successful AI applications, as we see that our platform is popular with developers building more sophisticated AI use cases…

…Today, we don’t have a very compelling model designed for our phones, right? Because today, the phones don’t have the computing horsepower to run complex models. So you don’t see a ton of very, very successful consumer apps besides, say, ChatGPT or Claude.

MongoDB’s management is building enterprise-grade Atlas Vector Search functionality into the company’s platform so that MongoDB will be in an even better position to win AI opportunities; management is bringing vector search into MongoDB’s community and EA (Enterprise Advance, which is the company’s non-Atlas business) offerings

We continue investing in our product capabilities, including enterprise-grade Atlas Vector Search functionality, to build on this momentum and even better position MongoDB to capture the AI opportunity. In addition, as previously announced, we are bringing search and vector search to our community and EA offerings, leveraging our run-anywhere competitive advantage in the world of AI…

…We are investing in what we call our EA business. First, we’re starting by investing in Search and Vector Search in the community product. That does a couple of things for us. One, whenever anyone starts with MongoDB with the open source product, they get all the benefits of that complete and highly integrated platform. Two, those capabilities will then migrate to EA. So EA for us is an investment strategy.

MongoDB’s management is expanding the MongoDB AI Applications Program (MAAP); the MAAP has signed on new partners, including with Meta; management expects more of the MAAP workloads to happen on Atlas initially

We are expanding our MongoDB AI Applications Program, or MAAP, which helps enterprise customers build and bring AI applications into production by providing them with reference architectures, integrations with leading tech providers and coordinated services and support. Last week, we announced a new cohort of partners, including McKinsey, Confluent, Capgemini and Instructure, as well as a collaboration with Meta to enable developers to build AI-enriched applications on MongoDB using Llama…

…[Question] On the MAAP program, are most of those workloads going to wind up in Atlas? Or will that be a healthy combination of EA and Atlas?

[Answer] I think it’s, again, early days. I would probably say more on the side of Atlas than EA in the early days. I think once we introduce Search and Vector Search into the EA product, you’ll see more of that on-prem. Obviously, people can use MongoDB for AI workloads using other technologies as well in conjunction with MongoDB for on-prem AI use cases. But I would say you’re probably going to see that happen first in Atlas.

Tealbook consolidated from Postgres, pgvector, and Elasticsearch to MongoDB; Tealbook has seen cost efficiencies and increased scalability with Atlas Vector Search for its application that uses generative AI to collect, verify and enrich supplier data across various sources

Tealbook, a supplier intelligence platform, migrated from Postgres, pgvector and Elasticsearch to MongoDB to eliminate technical debt and consolidate its tech stack. The company experienced workload isolation and scalability issues in pgvector and was concerned with search index inconsistencies, which were all resolved with the migration to MongoDB. With Atlas Vector Search and dedicated Search Nodes, Tealbook has realized improved cost efficiency and increased scalability for its supplier data platform, an application that uses GenAI to collect, verify and enrich supplier data across various sources.

MongoDB’s partnerships with all 3 major cloud providers – AWS, Azure, and GCP – for AI workloads are going well; management expects the cloud providers to bundle their own AI-focused database offerings with their other AI offerings, but management also thinks the cloud providers realise that MongoDB has a better offering and it’s better to partner with the company

With AWS, as you said, they just had their re:Invent last week. It remains very, very strong. We closed a ton of deals this past quarter, some of them very, very large deals. We’re doing integrations with some of the new products like Q and Bedrock, and the engagement in the field has been really strong.

On Azure, as I’ve shared in the past, we started off with a little bit of a slower start. But in the words of the person who runs their partner leadership, the Azure-MongoDB relationship has never been stronger. We closed a large number of deals, we’re part of what’s called the Azure Native ISV Service program and have a bunch of deep integrations with Azure, including Fabric, Power BI, Visual Studio, Semantic Kernel and Azure OpenAI Studio. And we’re also one of Azure’s largest marketplace partners.

And on GCP, we’ve actually seen some uptick in terms of co-sales this past quarter. GCP made some comp changes that were favorable to working with MongoDB, and we saw some results in the field, and we’re focused on closing a handful of large deals with GCP in Q4. So in general, I would say things are going quite well.

And then in terms of the implication of your question, the hyperscalers and whether they are potentially bundling things along with their AI offerings: candidly, since day 1, the hyperscalers have been bundling their database offerings with every offering that they have. And that’s been their predominant strategy. And I think we’ve executed well against that strategy because databases are not a by-the-way decision. It’s an important decision. And I think the hyperscalers are seeing our performance and realize it’s better to partner with us. And as I said, customers understand the importance of the data layer, especially for AI applications. And so the partnership across all 3 hyperscalers is strong.

A new MongoDB AI-related capability called Atlas Search Nodes is seeing very high demand; Atlas Search is being used by one of the world’s largest banks to provide a Google-like Search experience on payments data for customers; an AI-powered accounting software provider is using Atlas Search to allow end-users to perform ad-hoc analysis

On search, we introduced a new capability called Atlas Search Nodes, where you can asymmetrically scale your search nodes; if you have a search-intensive use case, you don’t have to scale all your nodes, which can become quite expensive. And we’ve seen this kind of groundbreaking capability be really well received. The demand is quite high, because customers like that they can tune the configuration to the unique needs of their search requirements.

One of the world’s largest banks is using Atlas Search to provide a Google-like search experience on payments data for massive corporate customers. It’s a customer-facing application, and so performance and scalability are critical. A leading provider of AI-powered accounting software uses Atlas Search to power its invoice analytics feature, which allows end users on finance teams to perform ad hoc analysis and easily find past-due invoices and invoices that contain errors.
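For a sense of what that Google-like experience looks like to a developer, here is a minimal Atlas Search sketch using the $search aggregation stage; the database, collection, index name and field names are illustrative assumptions, not details from the bank's actual system.

```python
# Sketch: full-text search over payments data with the Atlas Search $search stage.
from pymongo import MongoClient

payments = MongoClient("mongodb+srv://<cluster-uri>")["bank"]["payments"]

results = payments.aggregate([
    {"$search": {
        "index": "default",                       # assumed Atlas Search index
        "text": {
            "query": "overdue invoice ACME",
            "path": ["payee", "memo", "status"],  # assumed document fields
        },
    }},
    {"$limit": 10},
])
```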

Vector Search is only in its first full year of being generally available; uptake of Vector Search has been very high; MongoDB released a feature on Atlas Vector Search in 2024 Q3 that reduces memory requirements by up to 96% and this helps Atlas Vector Search support larger vector workloads at a better price-performance ratio; a multinational news organisation used Vector Search to create a generative AI tool to help producers and journalists sift through vast quantities of information; a security firm is using Vector Search for AI fraud detection; a global media company replaced Elasticsearch with Vector Search for a user-recommendation engine

On Vector Search, it’s been our first full year since going generally available, and product uptake has been very, very high. In Q3, we released quantization for Atlas Vector Search, which reduces the memory requirements by up to 96%, allowing us to support larger vector workloads with vastly improved price performance.

For example, a multinational news organization created a GenAI-powered tool designed to help producers and journalists efficiently search, summarize and verify information from vast and varied data sources. A leading security firm is using Atlas Vector Search for AI fraud detection, and a leading global media company replaced Elasticsearch with a hybrid search and vector search use case for a user-recommendation engine that’s built to suggest articles to end users.
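The quantization feature mentioned above is configured when the vector index is defined. Below is a minimal sketch of creating such an index with pymongo; the names and dimension count are illustrative, and the exact quantization options should be checked against the Atlas Vector Search documentation.

```python
# Sketch: defining an Atlas Vector Search index with scalar quantization,
# the memory-reduction feature referenced above. Names are illustrative.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

articles = MongoClient("mongodb+srv://<cluster-uri>")["news"]["articles"]

index = SearchIndexModel(
    definition={
        "fields": [{
            "type": "vector",
            "path": "embedding",
            "numDimensions": 1536,     # must match the embedding model used
            "similarity": "cosine",
            "quantization": "scalar",  # compresses vectors to cut memory use
        }]
    },
    name="quantized_vector_index",
    type="vectorSearch",
)
articles.create_search_index(model=index)
```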

MongoDB’s management thinks the industry is still in the very early days of shifting towards AI applications

I do think we’re in the very, very early days. They’re still learning and experimenting… I think as people get more sophisticated with AI, as the AI technology matures and becomes more and more useful, you’ll start seeing these applications take off. I kind of chuckle that today, I see more senior leaders bragging about the chips they’re using versus the apps they’re building. So it just tells you that we’re still in the very, very early days of this big platform shift.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue again had incredibly strong growth in 2024 Q3, driven by demand for the Hopper GPU computing platform; Nvidia’s H200 sales achieved the fastest ramp in the company’s history

Another record was achieved in Data Center. Revenue of $30.8 billion, up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company’s history.

Nvidia’s H200 product has 2x faster inference speed, and 50% lower total cost of ownership (TCO)

The H200 delivers up to 2x faster inference performance and up to 50% improved TCO. 

Cloud service providers (CSPs) were half of Nvidia’s Data Centre revenue in 2024 Q3, and up more than 2x year-on-year; CSPs are installing tens of thousands of GPUs to meet rising demand for AI training and inference; Nvidia Cloud Instances with H200s are now available, or soon-to-be-available, in the major CSPs

Cloud service providers were approximately half of our Data Center sales with revenue increasing more than 2x year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon.

North America, India, and Asia Pacific regions are ramping up Nvidia Cloud Instances and sovereign clouds; management is seeing an increase in momentum of sovereign AI initiatives; India’s CSPs are building data centers containing tens of thousands of GPUs and increasing GPU deployments by 10x in 2024 compared to a year ago; Softbank is building Japan’s most powerful AI supercomputer with Nvidia’s hardware 

Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs…

…Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India’s leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x…

…In Japan, SoftBank is building the nation’s most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA.

Nvidia’s revenue from consumer internet companies more than doubled year-on-year in 2024 Q3

Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. 

Nvidia’s management sees Nvidia as the largest inference platform in the world; Nvidia’s management is seeing inference really starting to scale up for the company; models that are trained on previous generations of Nvidia chips inference really well on those chips; management thinks that as Blackwell proliferates in the AI industry, it will leave behind a large installed base of infrastructure for inference; management’s dream is that plenty of AI inference happens across the world; management thinks that inference is hard because it needs high accuracy, high throughput, and low latency

NVIDIA’s Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements…

…We’re seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind a large installed base of extraordinary infrastructure for inference. And so we’re seeing inference demand go up…

… Our hope and dream is that someday, the world does a ton of inference. That’s when AI has really succeeded: when every single company is doing inference inside their companies, for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding of course. And so we hope that every company is doing inference 24/7…

…Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost can be as low as possible, but you also need the latency to be low. And computers that are high-throughput as well as low-latency are incredibly hard to build.

Nvidia’s management has driven a 5x improvement in Hopper inference throughput in 1 year via advancements in the company’s software; Hopper’s inference performance is set to increase by a further 2.4x shortly because of NIM (Nvidia Inference Microservices)

Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in 1 year and cut time to first token by 5x. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x. 

Nvidia’s Blackwell family of chips is now in full production; Nvidia shipped 13,000 Blackwell samples to customers in 2024 Q3; the Blackwell family comes with a wide variety of customisable configurations; management sees all Nvidia customers wanting to be first to market with the Blackwell family; management sees staggering demand for Blackwell, with Oracle announcing the world’s first zetta-scale cluster with more than 131,000 Blackwell GPUs, and Microsoft being the first CSP to offer private-preview Blackwell instances; Blackwell is dominating GPU benchmarks; Blackwell performs 2.2x better than Hopper and is also 4x cheaper; Blackwell with NVLink Switch delivered up to a 30x improvement in inference speed; Nvidia’s management expects the company’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; Blackwell’s production is in full-steam ahead and Nvidia will deliver more Blackwells in 2024 Q4 than expected; demand for Blackwell exceeds supply

Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full stack, full infrastructure, AI data center scale system with customizable configurations needed to address a diverse and growing AI market: from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches and NVLink, and from liquid-cooled to air-cooled.

Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world’s first zetta-scale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.

Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, a 4x reduction in cost. The NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling, throughput and response time that is excellent for running new reasoning inference applications like OpenAI’s o1 model…

…As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s…

… Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated…

…It is the case that demand exceeds our supply. And that’s expected as we’re in the beginnings of this generative AI revolution as we all know…

…In terms of how much Blackwell total systems will ship this quarter, which is measured in billions, the ramp is incredible…

…[Question] Do you think it’s a fair assumption to think NVIDIA could recover to kind of mid-70s gross margin in the back half of calendar ’25?

[Answer] Yes, I think it is a reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.  

Nvidia’s management is seeing that hundreds of AI-native companies are already delivering AI services and thousands of AI-native startups are building new services

Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor and the Bridge are seeing great success while thousands of AI-native start-ups are building new services. 

Nvidia’s management is seeing large enterprises build copilots and AI agents with Nvidia AI; management sees the potential for billions of AI agents being deployed in the years ahead; Accenture has an internal AI agent use case that reduces steps in marketing campaigns by 25%-35%

Industry leaders are using NVIDIA AI to build Copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years…

… Accenture with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including 1 case that cuts manual steps in marketing campaigns by 25% to 35%.

Nearly 1,000 companies are using NIM (Nvidia Inference Microservices); management expects the Nvidia AI Enterprise platform’s revenue in 2024 to be double that from 2023; Nvidia’s software, service, and support revenue now has an annualised revenue run rate of $1.5 billion and management expects the run rate to end 2024 at more than $2 billion

Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full-year revenue to increase over 2x from last year, and our pipeline continues to build. Overall, our software, service and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.

Nvidia’s management is seeing an acceleration in industrial AI and robotics; Foxconn is using Nvidia Omniverse to improve the performance of its factories, and Foxconn’s management expects a reduction of over 30% in annual kilowatt hour usage in Foxconn’s Mexico facility

Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world, like NVIDIA NeMo for enterprise AI agents. We built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics…

…Foxconn, the world’s largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

Nvidia saw sequential growth in Data Center revenue in China because of export of compliant Hopper products; management expects the Chinese market to be very competitive

Our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries…

…We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.

Nvidia’s networking revenue declined sequentially, but there was sequential growth in Infiniband and Ethernet switches, Smart NICs (network interface controllers), and BlueField DPUs; management expects sequential growth in networking revenue in 2024 Q4; management is seeing CSPs adopting Infiniband for Hopper clusters; Nvidia’s Spectrum-X Ethernet for AI revenue was up 3x year-on-year in 2024 Q3; xAI used Spectrum-X for its 100,000 Hopper GPU cluster and achieved zero application latency degradation and maintained 95% data throughput, compared to 60% for Ethernet

Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters.

NVIDIA Spectrum-X Ethernet for AI revenue increased over 3x year-on-year. And our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI’s Colossus 100,000 Hopper supercomputer experienced 0 application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet…

…Our ability to sell our networking with many of the systems that we are doing in data center is continuing to grow and do quite well. So this quarter is just a slight dip down, and we’re going to be right back up in terms of growing. We’re getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems we are providing to them.

Nvidia has begun shipping new GeForce RTX AI PCs

We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft’s Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation and coding.

Nvidia’s Automotive revenue had strong growth year-on-year and sequentially in 2024 Q3, driven by self-driving brands of Nvidia Orin; Volvo’s electric SUV will be powered by Nvidia Orin

Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving brands of NVIDIA Orin and robust end-market demand for NEVs [new energy vehicles]. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.

Nvidia’s management thinks pre-training scaling of foundation AI models is intact, but it’s not enough; new ways of scaling have emerged, namely post-training scaling and inference-time scaling; management thinks these new ways of scaling have resulted in great demand for Nvidia’s chips, but for now, most of Nvidia’s chips are used in pre-training

Foundation model pretraining scaling is intact and it’s continuing. As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough; we’ve now discovered 2 other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning with human feedback, but now we have reinforcement learning with AI feedback and all forms of synthetically generated data that assist in post-training scaling. And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI’s o1, which does inference-time scaling, what’s called test-time scaling. The longer it thinks, the better and higher-quality answer it produces. And it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect and so on and so forth…

… we now have 3 ways of scaling and we’re seeing all 3 ways of scaling. And as a result of that, the demand for our infrastructure is really great. You see now that the tail end of the last generation of foundation models was at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pretraining scaling, post-training scaling, and then now, very importantly, inference-time scaling…

…[Question] Today, how much of the compute goes into each of these buckets? How much for the pretraining? How much for the reinforcement? And how much into inference today?

[Answer] Today, it’s vastly in pretraining a foundation model because, as you know, post-training, the new technologies, are just coming online. And whatever you could do in pretraining and post-training, you would try to do, so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori. And so you’ll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all 3 are scaling is actually very sensible based on where we are. And in the area of foundation models, we now have multimodal foundation models, and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we’re going to be scaling pretraining, post-training as well as inference-time scaling, which is the reason why I think we’re going to need more and more compute.
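One simple way to picture inference-time (test-time) scaling is self-consistency sampling: spend more compute per query by generating several reasoning chains and keeping the majority answer. The toy sketch below is illustrative only; generate() is a hypothetical model call, not anything from Nvidia or OpenAI.

```python
# Toy sketch of test-time scaling via self-consistency: more samples means
# more inference compute, which typically yields better answers.
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical LLM call returning a final answer after a reasoning chain."""
    raise NotImplementedError

def answer_with_test_time_scaling(question: str, n_samples: int = 16) -> str:
    answers = [generate(f"Think step by step.\n{question}") for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # majority vote over samples
```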

Nvidia’s management thinks the company generates the greatest possible revenue for its customers because its products have much better performance per watt

Most data centers are now 100 megawatts to several hundred megawatts, and we’re planning on gigawatt data centers, it doesn’t really matter how large the data centers are. The power is limited. And when you’re in the power-limited data center, the best — the highest performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual road map reduces cost. But on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues. 

Nvidia’s management sees Hopper demand continuing through 2025

Hopper demand will continue through next year, surely the first several quarters of the next year. 

Nvidia’s management sees 2 fundamental shifts in computing happening today: (1) the movement from code that runs on CPUs to neural networks that run on GPUs and (2) the production of AI from data centres; the fundamental shifts will drive a $1 trillion modernisation of data centres globally

We are really at the beginnings of 2 fundamental shifts in computing that are really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that’s happening is $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.

On the other hand, secondarily, on top of these systems we’re going to be creating a new type of capability called AI. And when we say generative AI, we’re essentially saying that these data centers are really AI factories. They’re generating something. Just like we generate electricity, we’re now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we’re going to see this new type of system come online, and I call it an AI factory because that’s really as close to what it is. It’s unlike a data center of the past.

Nvidia’s management does not see any digestion happening for GPUs until the world’s data centre infrastructure is modernised

[Question] My main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase? Or is it just too premature to discuss that because you’re just at the start of Blackwell?

[Answer] I believe that there will be no digestion until we modernize $1 trillion with the data centers.

Okta (NASDAQ: OKTA)

Okta AI is really starting to help newer Okta products

Second thing is that we have Okta AI, which we talked a lot about a couple of years ago, and we continue to work on that. And it’s really starting to help these new products like Identity Threat Protection with Okta AI. The model inside of Identity Threat Protection and how it works: AI is a big part of the product functionality.

Okta’s management sees the need for authentication for AI agents and has a product called Auth for Gen AI; management thinks authentication of AI agents could be a new area of growth for Okta; management sees the pricing for Auth for Gen AI as driven by a fee per monthly active machine

Some really interesting new areas are we have something we talked about at Oktane called Auth for Gen AI, which is basically an authentication platform for agents. Everyone is very excited about agents, as they should be. I mean, we used to call them bots, right? 4, 5 years ago, they were called bots. Now they’re called agents, like what’s the big deal? How different is it? Well, you can interact with them in natural language and they can do a lot more with these models. So now it’s like bots are real in real time. But the problem is all of these bots and all of these platforms to build bots, they have the equivalent of sticky notes on the monitor with passwords on them; they have the equivalent of that inside the bot. So there’s no protocol for single sign-on for bots. They have like stored passwords in the bot. And if that bot gets hacked, guess what? You signed up for that bot and it has access to your calendar and has access to your travel booking and it has access to your company e-mail and your company data, that’s gone because the hacker is going to get all those passwords out there. So Auth for Gen AI automates that and makes sure you can have a secure protocol to build a bot around. And so that’s a really interesting area. It’s very new. We just announced it and all these agent frameworks and so forth are new…

… Auth for GenAI, it’s basically like — think about it as a form of machine authentication. So every time — we have this feature called machine-to-machine, which does a similar thing today, and you pay basically by the monthly active machine.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Salesforce is at the edge of the rise of digital labour, which are autonomous AI agents; management thinks the TAM (total addressable market) for digital labour is much larger than the data management market that Salesforce was previously in; management thinks Salesforce is the largest supplier of digital labour right from the get-go; Salesforce’s AgentForce service went into production in 2024 Q3 and Salesforce has already delivered 200 AgentForce deals with more to come; management has never seen anything like AgentForce; management sees AgentForce as the next evolution of Salesforce; management thinks AgentForce will help companies scale productivity independent of workforce growth; management sees AgentForce AI agents manifesting as robots that will supplement human labour; management sees AgentForce, together with robots, as a driving force for future global economic growth even with a stagnant labour force; AgentForce is already delivering tangible value to customers; Salesforce’s customers recently built 10,000 AI agents with AgentForce in 3 days, and thousands more AI agents have been built since then; large enterprises across various industries are building AI agents with AgentForce; management sees AgentForce unlocking a whole new level of operational efficiency; management will be delivering AgentForce 2.0 in December this year

We’re really at the edge of a revolutionary transformation. This is really the rise of digital labor. Now for the last — I would say for the last 25 years at Salesforce, and we’ve been helping companies to manage and share their information…

…But now we’ve really created a whole new market, a new TAM, a TAM that is so much bigger and so much more exciting than the data management market that it’s hard to get our head completely around. This is the market for digital labor. And Salesforce has become, right out of the gate here, the largest supplier of digital labor and this is just the beginning. And it’s all powered by these autonomous AI agents…

…With Salesforce Agentforce, we’re not just imagining this future. We’re already delivering it. And you so know that in the last week of the quarter, Agentforce went into production. We delivered 200 deals, and our pipeline is incredible for future transactions. We can talk about that with you on the call, but we’ve never seen anything like it. We don’t know how to characterize it. This is really a moment where productivity is no longer tied to workforce growth, but through this intelligent technology that can be scaled without limits. And Agentforce represents this next evolution of Salesforce. This is a platform now, Salesforce as a platform where AI agents work alongside humans in a digital workforce that amplifies and augments human capabilities and delivers with unrivaled speed…

…On top of the agentic layer, we’ll soon see a robotic layer as well where these agents will manifest into robots…

…These agents are not tools. They are becoming collaborators. They’re working 24/7 to analyze data, make decisions, take action, and we can all start to picture this enterprise managing millions of customer interactions daily with Agentforce seamlessly resolving issues, processing transactions, anticipating customer needs, freeing up humans to focus on the strategic initiatives and building meaningful relationships. And this is going to evolve into customers that we have, whether it could be a large hospital or a large hotel where not only are the agents working 24/7, but robots are also working side-by-side with humans, robots as manifestations of agents. This is all happening before our eyes, and it isn’t just some far-off future. It’s happening right now…

…For decades, economic growth depended on expanding the human workforce. It was all about getting more labor. But with the labor force stagnating globally, Agentforce is unlocking a new path forward. It’s a new level of growth for the world and for our GDP, and businesses no longer need to choose between scale and efficiency; with agents, they can achieve both…

…Our customers are already experiencing this transformation. Agentforce is deflecting service cases and resolving issues, processing and qualifying leads, helping close more deals, and creating and optimizing marketing campaigns, all at an unprecedented scale, 24/7…

…What was remarkable was the huge thirst that our customers had for this and how they built more than 10,000 agents in 3 days. And I think you know that we then unleashed a world tour of that program, and we have now built thousands and thousands of more agents in these world tours all over the world…

…So companies like FedEx, [indiscernible], Accenture, Ace Hardware, IBM, RBC Wealth Management and many more are now building their digital labor forces on the Salesforce platform with Agentforce. The largest and most important companies in the world, across all geographies and all industries, are now building and delivering agents…

…While these legacy chatbots have handled these basic tasks like password resets and other basic mundane things, Agentforce is really unlocking an entirely new level of digital intelligence and operational efficiency at this incredible scale…

…I want to invite all of you to join us for the launch of Agentforce 2.0. It is incredible what you are going to see; the advancements in the technology already are amazing in accuracy and the ability to deliver additional value. And we hope that you’re going to join us in San Francisco. This is going to happen on December 17. You’ll see Agentforce 2.0 for the first time.

Salesforce is customer-zero for AgentForce and the service is live on Salesforce’s help website; AgentForce is handling 60 million sessions and 2 million support cases annually on the help website; the introduction of AgentForce on Salesforce’s help website has allowed management to rebalance headcount into growth areas; users of Salesforce’s help website will experience very high levels of accuracy because AgentForce is grounded with the huge repository of internal and customer data that Salesforce has; management sees Salesforce’s data as a huge competitive advantage for AgentForce; AgentForce can today quickly deliver personalised insights to users of Salesforce’s help website and hand off users to support engineers for further help; management thinks AgentForce will deflect between a quarter and half of annual case volume; Salesforce is also using AgentForce internally to engage prospects and hand off prospects to its SDR (sales development representative) team

We pride ourselves on being customer zero for all of our products, and Agentforce is no exception. We’re excited to share that Agentforce is now live on help.salesforce.com…

… Our help portal, help.salesforce.com, which is now live. This portal, this is our primary support mechanism for our customers. It lets them authenticate in, it then becomes grounded with the agent, and that Help portal already is handling 60 million sessions and more than 2 million support cases every year. Now that is 100% on Agentforce…

…From a human resource point of view, we can really start to look at how we are going to rebalance our headcount into areas that now are fully automated and into areas that are critical for us to grow, like distribution…

…Now when you use help.salesforce.com, especially as authenticated users, as I mentioned, you’re going to see this incredible level of accuracy and responsiveness, and you’re going to see remarkably low levels of hallucination, whether for solving simple queries or navigating complex service issues, because Agentforce is not just grounded in our Salesforce data and metadata, including the repository of 740,000 documents in 17 languages. It’s also grounded in each customer’s data, their purchases, returns. It’s that 200 to 300 petabytes of Salesforce data that we have that gives us this kind of, I would say, almost unfair advantage with Agentforce, because our agents are going to be the most accurate and the least hallucinatory of any, because they have access to this incredible capability. And Agentforce can instantly reason over these vast amounts of data, deliver precise personalized [indiscernible] with citations in seconds, and Agentforce can seamlessly hand off to support engineers, delivering them a complete summary and recommendation as well. And you can all try this today. This isn’t some fantasyland future idea; this is today’s reality…

…We expect that our own transformation with Agentforce on help.salesforce.com and in many other areas of our company, it is going to deflect between a quarter and half of annual case volume and in optimistic cases, probably much, much more of that…

…We’re also deploying Agentforce to engage our prospects on salesforce.com, answering their questions 24/7 as well as handing them off to our SDR team. You can see it for yourself and test it out on our home page. We’ll use our new Agentforce SDR agent to further automate top-of-funnel activities, gathering lead data, providing education, qualifying prospects and booking meetings.

Salesforce’s management thinks AgentForce is much better than Microsoft’s AI Copilots

I just want to compare and contrast that against other companies who say they are doing enterprise AI. You can look at even Microsoft. We all know about Copilot, it’s been out, it’s been touted now for a couple of years. We’ve heard about Copilot. We’ve seen the demo. In many ways, it’s just repackaged ChatGPT. You can really see the difference where Salesforce now can operate its company on our platform. And I don’t think you’re going to find that on Microsoft’s website, are you?

Vivint is using AgentForce for customer support and for technician scheduling, payment requests, and more; Adecco is using AgentForce to improve the handling of job applicants (Adecco receives 300 million job applications annually); Wiley is resolving cases 40% faster with AgentForce; Heathrow Airport is using AgentForce to respond to thousands of travelers instantly, accurately, and simultaneously; SharkNinja is using AgentForce for personalised 24/7 customer support in 28 geographies; Accenture is using AgentForce to improve deal quality and boost bid coverage by 75%

One of them is the smart home security provider, Vivint. They’ve struggled with this high volume of support calls, a high churn rate for service reps. It’s a common story. But now, using Agentforce, Vivint has created a digital support staff to autonomously provide support through their app, their website, troubleshooting a broad variety of issues across all their customer touch points. And in addition, Vivint is planning to utilize Agentforce to further automate technician scheduling, payment requests, proactive issue resolution, and the use of device telemetry, because Agentforce is across the entire Salesforce product line, including Slack…

…Another great customer example, given the incredible work they’ve already done to get this running and going in their company, is Adecco, the world’s leading provider of talent solutions, handling 300 million job applications annually. Historically, they have just not been able to go through or respond in a timely way, of course, to the vast majority of applications that they’re getting, but now Agentforce is going to operate at an incredible scale, sorting through the millions of resumes 24/7, matching candidates to opportunities, proactively prequalifying them for recruiters. And in addition, Agentforce can also assess candidates, helping them to refine their resumes, giving them a better chance of qualifying for a role…

…Wiley, an early adopter, is resolving cases over 40% faster with Agentforce than their previous chatbot. Heathrow Airport, one of the busiest airports in the world, will be able to respond to thousands of travelers’ inquiries instantly, accurately and simultaneously. SharkNinja, a new logo in the quarter, chose Agentforce and Commerce Cloud to deliver 24/7 personalized support for customers across 28 international markets and unify its service operations…

…Accenture chose Agentforce to streamline sales operations and enhance bid management for its 52,000 global sellers. By integrating sales coach and custom AI agents, Agentforce is improving deal quality and targeting a 75% boost in bid coverage. 

College Possible is using AgentForce to build virtual college counsellors as there’s a shortage of labour (for example, California has just 1 counsellor for every 500 students); College Possible built its virtual counsellors with AgentForce in under a week – basically like flipping a switch – because it has been accumulating all its data in Salesforce for years

Another powerful example is a nonprofit, College Possible. College Possible matches eligible students with counselors to help them navigate and become ready for college. And in California, for example, the statewide average stands at slightly over 1 counselor for every 500 students. It just isn’t enough. Where are we going to get all that labor…

…We’re going to get it from Agentforce. This means the vast majority of students are not getting the help they need, and now they are going to get the help they need.

College Possible created a virtual counselor built on Agentforce in under a week. They already had all the data. They had the metadata. They already knew the students. They already had all of the capabilities built into their whole Salesforce application. It was just a flip of a switch…

…  But why? It’s because all of the work and the data and the capability that College Possible has put into Salesforce over the years and years that they had it. It’s not the week that it took to get them to turn it on. They have done a lot of work.

Salesforce’s management’s initiative to have all of the company’s apps be rewritten into a single core platform is called More Core; the More Core initiative also involves Salesforce’s Data Cloud, which is important for AI to work; Salesforce is now layering the AI agent layer on top of More Core, and management sees this combination as a complete AI system for enterprises that also differentiates Salesforce’s AgentForce product

Over the last few years, we’ve really aggressively invested in integrating all of our apps on a single core platform with shared services for security, workflow, user interfaces and more. We’ve been rewriting all of our acquisitions into that common area. We’re really looking at how we take all of our applications and all of our acquisitions, everything, and deliver it into one consistent platform; we call that More Core internally inside Salesforce. And when you look at that More Core initiative, I don’t think there’s anyone who delivers this comprehensive platform, sales, service, marketing, commerce, analytics, Slack, all of it as one piece of code. And now deeply integrated in that one piece of code is also our data cloud. That is a key part of our strategy, which continues to have this phenomenal momentum as well to help customers unify and federate with zero-copy data access across all their data and metadata, which is crucial for AI to work.

And now that third layer is really opening up for us, which is this agentic layer. We have built this agentic layer that takes advantage of all the investments our customers have made in Salesforce and in our platform. It’s really these 3 layers, and it’s these 3 layers that form a complete AI system for enterprises and uniquely differentiate Salesforce and Agentforce from every other AI platform: this is one piece of code. This isn’t like 3 systems. It’s not a bunch of different apps all running independently. This is all one piece of code. That’s why it works so well, by the way, because it is one platform.

Salesforce’s management thinks jobs and roles within Salesforce will change because of AI, especially AI agents

The transformation is not without challenges. Jobs are going to evolve, roles are going to shift and businesses will need to adapt. And listen, at Salesforce, jobs are going to evolve and roles will shift and the business will need to adapt as well. We’re all going to need to rebalance our workforce as agents take on more of the work.

Salesforce’s management is hearing that a large customer of Salesforce is targeting 25% more efficiency with AI

This morning, I was on the phone with one of our large customers, and they were telling me how they’re targeting inside their company, 25% more efficiency with artificial intelligence.

Salesforce signed more than 2,000 AI deals in 2024 Q3 (FY2025 Q3), and the number of AI deals that are over $1 million more than tripled year-on-year; 75% of Salesforce’s AgentForce deals, and 9 of Salesforce’s top 10 deals, in 2024 Q3 involved Salesforce’s global partners; more than 80,000 system integrators have completed AgentForce training; hundreds of ISVs (independent software vendors) and partners are building and selling AI agents; Salesforce has a new AgentForce partner network that allows customers to deploy customised AI agents using trusted 3rd-party extensions from Salesforce App Exchange; Salesforce’s partnership with AWS Marketplace is progressing well as transactions doubled sequentially in 2024 Q3, with 10 deals exceeding $1 million

In Q3, the number of wins greater than $1 million with AI more than tripled year-over-year, and we signed more than 2,000 AI deals, including the more than 200 Agentforce wins that Marc shared…

…We’re also seeing amazing Agentforce energy across the ecosystem with our global partners involved in 75% of our Q3 Agentforce deals and 9 of our top 10 wins in the quarter. Over 80,000 system integrators have completed Agentforce training and hundreds of ISVs and technology partners are building and selling agents…

… We continue to unlock customer spend through new channels, including the Agentforce partner network that launched at Dreamforce, which allows customers to customize and deploy specialized agents using trusted third-party extensions from Salesforce App Exchange. And AWS Marketplace continues to be a growth driver. Our Q3 transactions doubled quarter-over-quarter with 10 deals exceeding $1 million. 

Veeva Systems (NYSE: VEEV)

Veeva Vault CRM has a number of new innovations coming, including two AI capabilities that will be available in late-2025 at no additional charge; one of the AI capabilities leverages Apple Intelligence; Vault CRM’s CRM Bot AI application will see Vault CRM be hooked onto customers’ own large language models, and Veeva will not be incurring compute costs

We just had our European Commercial Summit in Madrid where we announced a number of new innovations coming in Vault CRM, including two new AI capabilities – CRM Bot and Voice Control. CRM Bot is a GenAI assistant in Vault CRM. Voice Control is a voice interface for Vault CRM, leveraging Apple Intelligence. Both are included in Vault CRM for no additional charge and are planned for availability in late 2025…

…For the CRM Bot, that’s where we will hook our CRM system into the customers’ own large language model that they’re running. And that’s where we will not charge for, and we will not incur compute cost…

Veeva has a new AI application, MLR Bot, for Vault PromoMats within Commercial Cloud; MLR Bot helps perform checks on content with a Veeva-hosted large language model (LLM); MLR Bot will be available in late-2025 and will be charged separately; management has been thinking about MLR Bot for some time; management is seeing a lot of excitement over MLR Bot; management is still working through the details of the monetisation of MLR Bot; MLR Bot’s LLM will be from one of the big tech providers but it will be Veeva who’s the one paying for the compute 

We also announced MLR Bot, an AI application in Vault PromoMats to perform content quality and compliance checks with a Veeva-hosted large language model. Planned for availability in late 2025, MLR Bot will require a separate license…

… So I was at our Europe Summit event where we announced MLR Bot, something we’ve been thinking about and evaluating for some time…

…So there’s a lot of excitement. This is a really core process for life sciences companies. So a lot of excitement there…

…In terms of sizing and the monetization, we’re still working through the details on that, but there’s a ton of excitement from our existing customers. We look forward to getting some early customers started on that as we go into next year…

…MLR Bot, we will charge for, and that’s where we will host and run a large language model. Not our own large language model, right? We’ll use one from the big tech providers, but we will be paying for the compute power for that, and so we’ll be charging for that.

CRM Bot, Voice Control, and MLR Bot are part of Veeva’s management’s overall AI strategy to provide AI applications with tangible value; another part of the AI strategy involves opening up data for customers to power all forms of AI; management’s current thinking is to charge for AI applications if Veeva is responsible for paying compute costs

These innovations are part of our overall AI strategy to deliver specific AI applications that provide tangible value and enable customers and partners with the AI Partner Program, as well as the Vault Direct Data API, for the data needed to power all forms of AI…

… So where we have to use significant compute power, we will most likely charge. And where we don’t, we most likely won’t.

Wix (NASDAQ: WIX)

More than 50% of new Wix users are using the company’s AI-powered onboarding process which was launched nearly a year ago; users who onboard using Wix’s AI process are 50% more likely to start selling on Wix and are more likely to become paid subscribers; the AI-powered onboarding process is leading to a 13% uplift in conversion rate for the most recent Self-Creator cohort; the AI website builder is free but it helps with conversions to paid subscribers

Almost one year ago, we launched our AI website builder, which is now available in 20 languages and has been a game changer in our user onboarding strategy. Today, more than 50% of new users are choosing to create their online presence through our AI-powered onboarding process. The tool is resonating particularly well with small businesses and entrepreneurs, as paid subscriptions originating from this AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix, by streamlining the website building process while offering a powerful and tailored commerce-enablement solution…

…Our most recent Self Creator cohort showed a 13% uplift in conversion rate from our AI onboarding tool…

…[Question] A lot of the commentary seems that today, AI Website Builder is helping on conversion. I wanted to ask about specifically, is there an opportunity to directly monetize the AI products within the kind of core website design funnel?

[Answer] So I think that the way we monetize, of course, during the buildup phase of the website, is by making it easier. And when our customers are happy with their websites, of course, we convert better. So I don’t think there is any better way to monetize than that, right? The more users finish their websites, the better the websites, the higher the conversion and the higher the monetization.

Wix now has 29 AI assistants to support users

Earlier this year, we spoke about our plan to embed AI assistants across our platform and we’re continuing to push that initiative forward. We now have a total of 29 assistants, spanning a wide range of use cases to support users and to service customers throughout their online journeys.

Wix has a number of AI products that are launching in the next few months that are unlike anything in the market and they will be the first AI products that Wix will be monetising directly

We have a number of AI products coming in the next few months that are unlike anything in the market today. These products will transform the way merchants manage their businesses, redefine how users interact with their customers and enhance the content creation experience. Importantly, these will also be the first AI products we plan to monetize directly. We are on the edge of unforeseen innovation, and I’m looking forward to the positive impact it will have on our users.

Zoom Communications (NASDAQ: ZM)

Zoom’s management has a new vision for Zoom, the AI-first Work Platform for Human Connection

In early October, we hosted Zoomtopia, our annual customer and innovation event, and it was an amazing opportunity to showcase all that we have been working on for our customers. We had a record-breaking virtual attendance, and unveiled our new vision, AI-first Work Platform for Human Connection. This update marks an exciting milestone as we extend our strength as a unified communication and collaboration platform into becoming an AI-first work platform. Our goal is to empower customers to navigate today’s work challenges, streamlining information, prioritizing tasks and making smarter use of time.

Management has released AI Companion 2.0, which is an agentic AI technology; AI Companion 2.0 is able to see a broader window of context and gather information from internal and external sources; Zoom AI Companion monthly active users grew 59% sequentially in 2024 Q3; Zoom has over 4 million accounts that have enabled AI Companion; management thinks customers really like Zoom AI Companion; customer feedback for AI Companion has been extremely positive; management does not intend to charge customers for AI Companion

At Zoomtopia, we took meaningful steps towards that vision with the release of AI Companion 2.0…

…This release builds upon the awesome quality of Zoom AI Companion 1.0 across features like Meeting Summary, Meeting Query and Smart Compose, and brings it together in a way that evolves beyond task-specific AI towards agentic AI. This major update allows the AI Companion to see a broader window of context, synthesize the information from internal and external sources, and orchestrate action across the platform. AI Companion 2.0 raises the bar for AI and demonstrates to customers that we understand their needs…

…We saw progress towards our AI-first vision with Zoom AI Companion monthly active users growing 59% quarter-over-quarter…

…At Zoomtopia, we mentioned that there are over 4 million accounts that have already enabled AI Companion. Given the quality, ease of use and no additional cost, customers really like Zoom AI Companion…

…Feedback from our customers at Zoomtopia on Zoom AI Companion 2.0 was extremely positive because, first of all, they look at our innovation, the speed, right? And there are a lot of features built into AI Companion 2.0, again, at no additional cost, right? At the same time, Enterprise customers also want to have some flexibility. That’s why we also introduced the customized AI Companion and also AI Companion Studio. Those will be available in the first half of next year, and also we can monetize…

…We are not going to charge the customer for AI Companion; it comes at no additional cost.

Zscaler is using Zoom AI Companion to improve productivity across the whole company; large enterprises such as HSBC and Exxon Mobil are also using Zoom AI Companion

Praniti Lakhwara, CIO of Zscaler, provided a great example of how Zoom AI Companion helped democratize AI and enhance productivity across the organization, without sacrificing security and privacy. And it wasn’t just Zscaler: The RealReal, HSBC, ExxonMobil and Lake Flato Architects shared similar stories about Zoom’s secure, easy-to-use solutions helping them thrive in the age of AI and flexible work.

Zoom’s management recently introduced a road map of AI products that expands Zoom’s market opportunity; Custom AI Companion add-on, including paid add-ons for healthcare and education, will be released in 2025 H1; management built the monetisable parts of AI Companion after gathering customer feedback 

Building on our vision for democratizing AI, we introduced a road map of TAM-expanding AI products that create additional business value through customization, personalization and alignment to specific industries or use cases. 

 Custom AI Companion add-on, which will be released in the first half of next year, aims to meet our customers where they are in their AI journey by plugging into knowledge bases, integrating with third-party apps and personalizing experiences like custom AI avatars and AI coaching. Additionally, we announced that we’ll also have Custom AI Companion paid add-ons for health care and education available as early as the first quarter of next year…

…The reason why we introduced the Customized AI Companion and AI Companion Studio is because, a few quarters ago, we talked to many Enterprise customers. They shared their feedback with us, right? So they like AI Companion. But they also want to make sure: hey, some customers, they already built their own large language model; how to [ federate ] that into our federated AI approach. And some customers, they have very large content, like a knowledge base; how to connect with that. Some customers, they have other business systems, right, like ServiceNow, Atlassian and Workday, Box and HubSpot; how to connect those data sources, right? And also, even from an employee perspective, they want to have a customized avatar, like AI as a personal coach as well. So those customers, they have customized requirements. To support those customer requirements, we need to make sure we have the AI infrastructure and technology ready, right? That’s the reason why we introduced the Customized AI Companion. The goal is really working together with our customers to tailor it for each Enterprise customer. That’s the reason why it’s not free.

I think the feedback from Zoomtopia is very positive because, again, those features are not built by just several product managers and engineers thinking, let’s build that. We already solicited feedback from our Enterprise customers before, so those features, I think, can truly satisfy their needs.

Zoom’s management thinks that Zoom is very well-positioned because it is providing AI-powered tools to customers at no additional cost, unlike other competitors

Given our strength on quality plus no additional cost, Zoom is much better positioned. In particular, customers look at all the vendors when they try to consolidate and look at — again, the AI cost is not small, right? You look at some of the competitors, per user per month, $30, right? And look at Zoom, better quality at no additional cost. That’s the reason why, when it comes to total cost of ownership, customers look at Zoom as, I think, much better positioned…

…Again, almost every business, they subscribe to multiple software services. If each software service vendor is going to charge the customer for AI, guess what, every business is going to have to spend more. That’s the reason why they trust Zoom, and I think we are much better positioned.

Zoom’s management is seeing some customers find new budgets to invest in AI, whereas some customers are reallocating budgets from other areas towards AI

Every company, I think now they are all thinking about where they should allocate the budget, right? Where should they get more money or funds, right, to support AI? I think every company is different. Some customers, they have a new budget. Some customers, they consolidated into a few vendors. And some customers, they just want to say, hey, maybe actually save the money from other areas and shift the budget towards embracing AI.

Zoom’s management thinks Zoom will need to continue investing in AI, but they are not worried about the costs because the AI features will be monetised

Look at AI, right? So we have to invest more, right? And I think a few areas, right? One is look at our Zoom Workplace platform, right? We have to [ invent ] more talent, deploy more GPUs and also use more of the cloud, basically GPUs, as well as keep improving the AI quality and innovating on AI features. That’s for Workplace. And at the same time, we are going to introduce the Customized AI Companion and also AI Companion Studio next year. Not only do we offer the free service for AI Companion, but those Enterprise customizations certainly can help us in terms of monetization. At the same time, we leverage the technology we built for Workplace and apply that to the Contact Center, like Zoom Virtual Agent, right, and also some other Contact Center features. We can share the same AI infrastructure, and a lot of technology components also can be shared with Zoom Contact Center.

While AI Companion is free, the Contact Center is different, right? We also can monetize. Essentially, we build the same common AI infrastructure and architecture. Workplace — Customized AI Companion, we can monetize. Contact Center, also, we can monetize. I think more and more — like today, you see we keep investing more and more, and soon, we can also monetize more as well. That’s why I think we do not worry about the cost in the long run at all, I mean, the AI investment, because the monetization coming in certainly can help us more. So, so far, we feel very comfortable.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google and GCP), Amazon (parent of AWS), Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Veeva Systems, Wix, and Zoom Video Communications. Holdings are subject to change at any time.

The Pitfalls of Using IRR 

IRR is a useful calculation but it has its limitations.

The internal rate of return (IRR) is a commonly used metric to estimate the profitability of an investment. It can be used to assess whether an investment is worth making or not. It is also used to assess the performance of investment funds, such as venture capital and private equity funds.

However, an IRR can be somewhat misleading and actual returns can differ significantly from what the IRR shows you. This is because the IRR only calculates the return on investment starting at the point when cash is deployed. In many funds, cash may not be deployed immediately, which results in a cash drag that is not accounted for in the IRR calculation.

The IRR also makes an assumption that the cash generated can be redeployed at the calculated IRR rate. This is often not the case.
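For reference, the IRR of a series of cash flows $CF_t$ over $T$ years is the discount rate $r$ that sets the net present value (NPV) of those cash flows to zero:

$$\text{NPV}(r) = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t} = 0$$

Because every cash flow is discounted at the same rate $r$, the formula implicitly assumes that any cash received along the way can compound at $r$ as well.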

Here are some examples to illustrate these points.

Cash drag

Venture capital and private equity funds are unique in that investors do not give the committed capital to a fund immediately. Instead, investors make a commitment to a fund. The fund only asks for the money when it has found a startup or company to invest in; this is called paid-in capital, which differs from committed capital.

To calculate returns, venture capital and private equity funds use the IRR based only on paid-in capital. This means that while the IRR of two venture funds can look the same, the actual returns can be very different. Let’s look at two IRR scenarios below:

|        | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR |
| Fund A | -$1000 | $0     | $0     | $2000  | $0     | $0     | 26% |
| Fund B | $0     | $0     | -$1000 | $0     | $0     | $2000  | 26% |

Both Fund A and Fund B have an IRR of 26%. The difference is that Fund A deployed the capital straight away while Fund B only found an investment in Year 2. Investors in Fund A are actually much better off as they can then redeploy the $2000 received in Year 3 into another investment vehicle to compound returns. Fund B’s investors, meanwhile, suffered a cash drag: their committed capital sat idle in Years 0 and 1, and this drag is not captured in the IRR calculation.
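To make the numbers concrete, here is a minimal Python sketch (using the hypothetical fund cash flows from the table above) that solves for the IRR by bisection. Both funds report the same 26%, even though Fund B’s capital sat idle for two years:

```python
def npv(rate, cashflows):
    """Net present value, where cashflows[t] is the cash flow at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Find the rate where NPV crosses zero, via bisection.

    Assumes a conventional pattern (outflow first, inflows later), so NPV
    is positive at low rates and negative at high rates.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive: the root lies at a higher rate
        else:
            hi = mid
    return (lo + hi) / 2

fund_a = [-1000, 0, 0, 2000, 0, 0]  # capital deployed immediately, returned in Year 3
fund_b = [0, 0, -1000, 0, 0, 2000]  # capital idle until Year 2, returned in Year 5

print(f"Fund A IRR: {irr(fund_a):.1%}")  # ~26.0%
print(f"Fund B IRR: {irr(fund_b):.1%}")  # ~26.0%
```

The identical IRRs mask the fact that Fund A’s investors get their money back two years earlier; the two idle years simply never enter the calculation.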

Wrong assumptions

The IRR formula also assumes that the cash returned to investors can be redeployed at the IRR rate. As mentioned above, this is not always the case. Take the example below:

|              | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR   |
| Investment A | -$1000 | $300   | $300   | $300   | $300   | $300   | 15.2% |
| Investment B | -$1000 | $0     | $0     | $0     | $0     | $2025  | 15.2% |

In the above scenario, both Investment A and Investment B provide a 15.2% IRR. However, there is a difference in the timing of cash flows. Investment A provides cash flow of $300 per year while Investment B provides a one-time $2025 cash flow at the end of Year 5. While the IRR is the same, investors should opt for Investment B.

This is because the IRR calculation assumes that the cash flow generated can be deployed at similar rates as the IRR. But the reality is that oftentimes, the cash flow can neither be redeployed immediately, nor at similar rates to the investment.

For instance, suppose the cash flow generated can only provide a 10% return. Here are the adjusted returns at the end of Year 5 for Investment A:

|                         | Year 0 | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | IRR   |
| Investment A            | -$1000 | $300   | $300   | $300   | $300   | $300   | 15.2% |
| Investment A (adjusted) | -$1000 | $0     | $0     | $0     | $0     | $1832  | 12.9% |
| Investment B            | -$1000 | $0     | $0     | $0     | $0     | $2025  | 15.2% |

I calculated the $1832 by compounding each $300 cash flow forward at the 10% reinvestment rate to the end of Year 5. As you can see, after doing this, the return generated from Investment A falls to just 12.9%, versus the 15.2% calculated previously.
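The same adjustment takes only a few lines of Python (a minimal sketch; the 10% reinvestment rate is the assumption from the example above):

```python
REINVEST_RATE = 0.10  # assumed rate at which the $300 interim cash flows can be redeployed

# Compound each $300 cash flow (received at the end of Years 1-5) forward to Year 5.
terminal_value = sum(300 * (1 + REINVEST_RATE) ** (5 - t) for t in range(1, 6))

# With one outflow at Year 0 and one terminal inflow, the realised annual
# return has a closed form: (terminal value / cost)^(1 / years) - 1.
adjusted_return = (terminal_value / 1000) ** (1 / 5) - 1

print(f"Terminal value:  ${terminal_value:,.0f}")   # ~$1,832
print(f"Adjusted return: {adjusted_return:.1%}")    # ~12.9%, versus the 15.2% IRR
```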

The bottom line

Using the IRR to calculate investment returns is a good starting point to assess an investment opportunity. This can be used for investments such as real estate or private equity funds.

But it is important to note the limitations of the IRR calculation. It can overstate or understate actual returns, depending on the timing of the cash flows as well as the actual returns on the cash generated.

A key rule of thumb is that the IRR is best used when cash can be deployed quickly so that there is minimal cash drag, and when the cash generated can be redeployed at close to the IRR of the investment. If this assumption does not hold true, then a manual calculation of the investment’s returns needs to be made by inputting the actual returns on the cash generated.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

The Best Investment Theme For The New Trump Presidency

There is no shortage of investing ideas being thrown around that could potentially do well under the new Trump administration – but what would actually work?

Last week, Donald Trump won the latest US Presidential Elections, which will see him sworn in as the USA’s new President on 20 January 2025. Often, there’s a huge rush of investment themes that accompanies the inauguration of a new political leader in a country. It’s no exception this time.

For my own investment activities, the only theme I’m in favour of with the new Trump presidency – in fact, with any new presidency – is to look at a stock as a piece of a business, and assess the value of that business. Why? Because there’s a long history of investment themes accompanying shifts in political leadership that have soured. In a November 2014 article for The Motley Fool, Morgan Housel shared some examples:

“During the 1992 election, a popular argument was that Bill Clinton’s proposed remake of the U.S. healthcare system would be disastrous for pharmaceutical stocks… by the end of Clinton’s presidency pharmaceutical companies were some of the most valuable companies in the world. Pfizer increased 791% during Clinton’s presidency. Amgen surged 611%. Johnson & Johnson popped 385%. Merck jumped 299%. Those crushed the market, with the S&P 500 rising 251% from January 1993 to January 2001…

…During the 2000 election, Newsweek wrote that if George W. Bush wins, the ensuing tax changes could “help banks, brokers and other investment firms.” By the end of Bush’s second term, the KBW Bank Index had dropped almost 80%. The article also recommended pharmaceutical stocks thanks to Bush’s light touch on regulation. The NYSE Pharmaceutical Index lost nearly half its value during Bush’s presidency…

…During the 2008 election, many predicted that an Obama victory would be a win for green energy like solar and wind and a loss for big oil… The opposite happened: The iShares Clean Energy ETF is down 51% since then, while Chevron (CVX) is up 110%.

During the 2012 election, Fox Business wrote that if Obama wins, “home builders such as Pulte and Toll Brothers could see increased demand for new homes due to a continuation of the Obama Administration’s efforts to limit foreclosures, keeping homeowners in their existing properties.” Their shares have underperformed the S&P 500 by 26 percentage points and 40 percentage points since then, respectively.”

It was more of the same in the presidential elections that came after Housel’s article.

When Trump won the 2016 US elections for his first term as President, CNBC proclaimed the banking sector as a strong beneficiary because of his promises to ease banking regulations. But from the day Trump was sworn into office (President-elects are typically sworn in on 20 January in the following year after the elections) till the time he stepped down four years later, the KBW Nasdaq Bank Index was up by less than 20%, whereas the S&P 500 was up by nearly 70%. The KBW Nasdaq Bank Index tracks the stock market performance of 24 of America’s largest banks.

CNBC surveyed more than 100 investment professionals shortly after Joe Biden won the 2020 elections. They thought that “consumer discretionary, industrials and financials will perform the best under a Biden administration.” From Biden’s first day as President till today, the S&P 500 is up by slightly under 60%. Meanwhile, the S&P 500 Consumer Discretionary Index, which comprises consumer discretionary companies within the S&P 500 index, has gained just around 30%. The Dow Jones Industrials Index (a collection of American industrial companies) and the KBW Nasdaq Bank Index are both also trailing the S&P 500 with their respective gains of around 40% and 20%.

I have no idea if the hot themes for Trump’s second term as President would end up performing well. But given the weight of the historical evidence, I have no interest in participating in them. Politics and investing seldom mix well.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Problems With China’s Economy And How To Fix Them

An analysis of China’s balance sheet recession, and what can be done about it.

Economist Richard Koo (Gu Chao Ming) is the author of the book The Other Half of Macroeconomics and the Fate of Globalization. Investor Li Lu published a Mandarin review of the book in November 2019, which I translated into English in March 2020. When I translated Li’s review, I found myself nodding in agreement to Koo’s unique concept of a balance sheet recession as well as his analyses of Japan’s economic collapse in the late 1980s and early 1990s, and the Japanese government’s responses to the crash. 

When I realised that Koo was interviewed last week in an episode of the Bloomberg Odd Lots podcast to discuss the Chinese government’s recent flurry of stimulus measures, I knew I had to tune in – and I was not disappointed. In this article, I want to share my favourite takeaways (the paragraphs in italics are transcripts from the podcast).

Takeaway #1: China is currently facing a balance sheet recession, and in a balance sheet recession, the economy can shrink very rapidly and be stuck for a long time

I think China is facing a balance sheet recession, and a balance sheet recession happens when a debt-financed bubble bursts, asset prices collapse, liabilities remain, people realise that their balance sheets are under water or nearly so, and they all try to repair their balance sheets all at the same time…

…Suppose I have $1000 of income and I spend $900 myself. The $900 is already someone else’s income so that’s not a problem. But the $100 that I saved will go through people like us, our financial institutions, and will be lent to someone who can use it. That person borrows and spends it, then total expenditure in economy will be $900 that I spent, plus $100 that this guy spent, to get $1000 against original income of $1000. That’s how economy moves forward, right? If there are too many borrowers and economy is doing well, central banks will raise rates. Too few, central bank will lower rates to make sure that this cycle is maintained. That’s the usual economy.

But what happens in the balance sheet recession is that when I have $1000 in income and I spend $900 myself, that $900 is not a problem. But the $100 I decide to save ends up stuck in the financial system because no one’s borrowing money. And China, so many people are refusing to borrow money these days because of that issue. Then economy shrinks from $1000 to $900, so 10% decline. The next round, the $900 is someone else’s income, when that person decides to save 10% and spends $810 and decides to save $90, that $90 gets stuck in the financial system again, because repairing financial balance sheets could take a very long time. I mean, Japanese took nearly 20 years to repair their balance sheets.

But in the meantime, economy can go from $1000, $900, $810, $730, very, very quickly. That actually happened in the United States during the Great Depression. From 1929 to 1933, the United States lost 46% of its nominal GDP. Something quite similar actually happened in Spain after 2008, when unemployment rates skyrocketed to 26% in just three and a half years or so. That’s the kind of danger we face in the balance sheet recession.
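Koo’s $1000-to-$900-to-$810 sequence is just geometric decay. Here is a minimal Python sketch of the mechanism he describes, assuming a constant 10% saving rate and, as in his scenario, no private-sector borrowing to recycle the savings:

```python
income = 1000.0
SAVING_RATE = 0.10  # fraction of each round's income that is saved

# In a balance sheet recession, the saved 10% returns to the banks but is
# never re-borrowed, so only the spent 90% becomes the next round's income.
for round_number in range(1, 6):
    income *= 1 - SAVING_RATE
    print(f"Round {round_number}: income = ${income:,.0f}")
# Round 1: $900, Round 2: $810, Round 3: $729, ... the spiral Koo describes
```

In a normal economy, the missing step is that a borrower spends the saved $100, which keeps each round’s income at $1000.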

Takeaway #2: Monetary policy (changing the level of interest rates) is not useful in dealing with a balance sheet recession – what’s needed is fiscal policy (government spending), but it has yet to arrive for China

I’m no great fan of using monetary policy, meaning policies from the central bank to fight what I call a balance sheet recession…

…Repairing balance sheets of course is the right thing to do. But when everybody does it all at the same time, we enter the problem of fallacy of composition, in that even though everybody’s doing the right things, collectively we get the wrong results. And we get that problem in this case because in the national economy, if someone is repairing balance sheets, meaning paying down debt or increasing savings, someone has to borrow those funds to keep the economy going. But in usual economies, you bring interest rates down, there’ll be people out there willing to borrow the money and spend it. That’s how you keep the economy going.

But in the balance sheet recession, you bring interest rates down to very low levels – and Chinese interest rates are already pretty low. But even if you bring it down to zero, people will be still repairing balance sheets because if you are in negative equity territory, you have to come out of that as quickly as possible. So when you’re in that situation, you cannot expect private sector to respond to lowering of interest rates or quantitative easing, forward guidance, and all of those monetary policy, to get this private sector to borrow money again because they are all doing the right things, paying down debt. So when you’re in that situation, the economy could weaken very, very quickly because all the saved funds that are returned to the banking system cannot come out again. That’s how you end up with economy shrinking very, very rapidly.

The only way to stop this is for the government, which is outside of the fallacy of composition, to borrow money. And that’s the fiscal policy of course, but that hasn’t come out yet. And so yes, they did the quick and easy part with big numbers on the monetary side. But if you are in balance sheet recession, monetary policy, I’m afraid, is not going to be very effective. You really need a fiscal policy to get the economy moving and that hasn’t arrived yet.

Takeaway #3: China’s fiscal policy for dealing with the balance sheet recession needs to be targeted, and a good place to start would be to complete all unfinished housing projects in the country, followed by developing public works projects with a social rate of return that’s higher than Chinese government bond yields

If people are all concerned about repairing their balance sheets, you give them money to spend and too often they just use it to pay down debt. So even within fiscal stimulus, you have to be very careful here because tax cuts, I’m afraid, are not very effective during balance sheet recessions because people use that money to repair their balance sheets. Repairing balance sheets is of course the right thing to do, but it will not add to GDP when they’re using those tax cuts to pay down debt or rebuild their savings. So that will not add to consumption as much as you would expect under ordinary circumstances. So I would really like to see the government just borrow and spend the money because that will be the most effective way to stop the deflationary spiral…

… I would use money first to complete all the apartments that were started but are not yet complete. In that case you might have to take some heavy handed actions, but basically the government should take over these companies and the projects, and start putting money so that they’ll complete the projects. That way, you don’t have to decide what to make, because the things that are already in the process of being built – or the construction drawings are there, workers are there, where to get the materials. And in many cases, potential buyers already know. So in that case, you don’t waste time thinking about what to build, who’s to design, and who the order should go to.

Remember President Obama, when he took over in 2009, the US was in a balance sheet recession after the collapse of the housing bubble. But he was so careful not to make the Japanese mistake of building bridges to nowhere and roads to nowhere. He took a long time to decide which projects should be funded. But in that year-and-a-half or so, I think the US lost quite a bit of time because during that time, the economy continued to weaken. There were no shovel-ready projects.

But in the Chinese case, I would argue that these uncompleted apartments are the shovel-ready projects. You already know who wants them, who paid their down payments and all of that. So I will spend the money first on those projects, complete those projects, and use the time while the money is used to complete these apartments.

I would use the magic wand to get the brightest people in China to come into one room and ask them to come up with public works projects with a social rate of return higher than 2.0%. The reason is that the Chinese government bond yield is about 2.00-something. If these people can come up with public works projects with a social rate of return higher than, let’s say, 2.1%, then those projects will be basically self-financing. It won’t be a burden on future taxpayers. Then, once the apartments are complete, if the economy is still struggling from the balance sheet recession, I would like to spend the money on those projects that these bright people might come up with.

Takeaway #4: The central government in China actually has a budget deficit that is a big part of the country’s GDP, unlike what official statistics say

But in China, even though same rules should have applied, local governments were able to sell lots of land, make a lot of money in the process, and then they were able to do quite a bit of fiscal stimulus, which also of course added to their GDP. That model will have to be completely revised now because no one wants to buy land anymore. So the big source of revenue of local governments is gone and as a result, many of them are very close to bankrupt. Under the circumstances, I’m afraid central government will have to take over a lot of these problems from the local governments. So this myth that the Chinese central government’s budget deficit is not a very big part of GDP, that myth will have to be thrown out. Central government will have to take on, not all of it perhaps, but some of the liabilities of the local governments so that local governments can move forward.

Takeaway #5: There’s plenty of available-capital for the Chinese central government to borrow from, and the low yields of Chinese government bonds are a sign of this

So even though budget deficit of China might be very large, the money is there for government to borrow. If the money is not there for the government to borrow, Chinese government bond yields should have gone up higher and higher. But as you know, Chinese government 10-year government bond yields almost down to 2.001% or 2%. It went that low because there are not enough borrowers out there. Financial institutions have to place this money somewhere, all these deleveraged funds coming back into the financial institutions, newly generated savings, all the money that central bank put in, all comes to basically people like us in the financial institutions, the fund managers. But if the private sector is not borrowing money, the only borrower left is the government.

So even if the required budget deficit might be very large to stabilize the economy, the funds are available in the financial market. The government just has to borrow that and spend it. So financing should not be a big issue for governments in balance sheet recession. Japan was running huge budget deficits and a lot of conventional-minded economists who never understood the dynamics of balance sheet recession were warning about Japan’s budget deficit growing sky high, and then interest rates going sky high. Well, interest rates kept on coming down because of the mechanism that I just described to you, that all those funds coming into the financial sector cannot go to the private sector, end up going to the government bond market. And I see the same pattern developing in China today.

Takeaway #6: Depending on exports is a great way for a country to escape from a balance sheet recession, but this route is not available for China because its economy is already running the largest trade surplus in the world

Exporting is definitely one of the best ways, if you can use it, to come out of a balance sheet recession. But China, just like Japan 30 years ago, is the largest trade surplus country in the world. And if the largest trade surplus country in the world tries to export its way out, very many trading partners will complain: you are already such a large destabilizing factor in world trade, and now you're going to destabilize it even more.

I remember 30 years ago that the United States, Europe, and others were very much against Japan trying to export its way out. Because of their displeasure, particularly US displeasure, the Japanese yen, which was at 160 to the dollar when the bubble burst in 1990, ended up at 80 to the dollar five years later, in 1995. What that indicated to me was that if you're running a trade deficit, you can probably export your way out and no one can really complain, because you are a deficit country to begin with. But if you are the surplus country, and especially the largest trade surplus country in the world, there will be huge pushback against that kind of move by the Chinese. We are already seeing that, with very many countries complaining that China should not export its problems.

Takeaway #7: Regulatory uncertainties for businesses that are caused by the Chinese central government may have played a role in the corporate sector’s unwillingness to borrow

Aside from the balance sheet recession, which is a very, very serious disease to begin with, there are other factors that started hurting the Chinese economy, I would say, as early as 2016.

When you look at the flow of funds data for the Chinese economy, you notice that the Chinese corporate sector started reducing its borrowings around 2016. Until 2016, Chinese companies were borrowing all the savings the household sector generated, which is of course the ideal world: the household sector saving money, the corporate sector borrowing money. But starting around 2016, you see the corporate sector borrowing less and less, and by around the time of Covid, the corporate sector was actually a net saver, not a net borrower. I think that trend has to do with what you just described: regulatory uncertainties got bigger and bigger under the current leadership, and people began to realize that even after making big investments in new projects, they may not be able to expect the revenue stream they once could, because of this regulatory uncertainty.

Takeaway #8: China’s government was already running a significant budget deficit before the bubble burst, and this may have made the central government reluctant to step in as borrower of last resort now to fix the balance sheet recession

If the household sector is saving money but the corporate sector is not borrowing money, you need someone else to fill that gap. And that gap was actually filled by the Chinese government, mostly decentralized local governments. If a temporary jolt of fiscal stimulus had then turned the economy around, those local government interventions would have been justified. But because the problem was much more deeply rooted (here I would point to structural problems, the regulatory uncertainties, the middle-income trap and so forth), local governments just had to keep on borrowing and spending money to keep the economy going. That was happening long before the bubble burst. So if you look at what I call general government spending (not just the central government, but the general government), it was running a financial deficit to the tune of almost 7% of GDP by 2022. This was before the bubble burst.

So if you are already running a budget deficit of 7% of GDP before the onset of a balance sheet recession, then whatever you have to do to stop the balance sheet recession has to be on top of that 7%. Suppose you need the equivalent of 5% of GDP to keep the economy going; then you're talking about a budget deficit of 12% of GDP. I think that's one of the reasons why Chinese policymakers, even though many of them are fully aware that in a balance sheet recession you need the government to come in, haven't been able to come to a full consensus yet: even before the bubble burst, the Chinese government was already running a large budget deficit.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

How Recessions and Interest Rate Changes Affect Stocks

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future.

After years of investing in stocks, I’ve noticed that stock market participants place a lot of emphasis on how recessions and changes in interest rates affect stocks. This topic is even more important right now for investors in US stocks, given fears that a recession could happen soon in the country, and the interest rate cut last month by the Federal Reserve, the country’s central bank. I have no crystal ball, so I have no idea how the US stock market would react if a recession were to arrive in the near future and/or the Federal Reserve continues to lower interest rates.   

What I have is historical context. History is of course not a perfect indicator of the future, but it can give us context for possible future outcomes. I’ve written a few articles over the years in this blog discussing the historical relationships between stocks, recessions, and movements in interest rates, some of which are given below (from oldest to the most recent):

I thought it would be useful to collect the information from these separate pieces into a single place, so here goes!

The history of recessions and stocks

These are the important historical relationships between recessions and stocks:

  • It’s not a given that stocks will definitely fall during a recession. According to a June 2022 article by Ben Carlson, Director of Institutional Asset Management at Ritholtz Wealth Management, there have been 12 recessions in the USA since World War II (WWII). The average return for the S&P 500 (a broad US stock market benchmark) when all these recessions took place was 1.4%. There were some horrible returns within the average. For example, the recession that stretched from December 2007 to June 2009 saw the S&P 500 fall by 35.5%. But there were also decent returns. For the recession between July 1981 and November 1982, the S&P 500 gained 14.7%.
  • Holding onto stocks in the lead-up to, through, and in the years after a recession has mostly produced good returns. Carlson also showed in his aforementioned article that if you had invested in the S&P 500 six months prior to each of the 12 recessions since WWII and held on for 10 years after each of them, you would have earned a positive return on every occasion. Furthermore, the returns were largely rewarding. The worst was a total gain of 9.4% for the recession that lasted from March 2001 to November 2001. The best was the first post-WWII recession, from November 1948 to October 1949, with a staggering return of 555.7%. After taking away the best and worst returns, the average was 257.2%.
  • Avoiding recessions flawlessly would have caused your return to drop significantly. Data from Michael Batnick, Carlson’s colleague at Ritholtz Wealth Management, showed that a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and did nothing. But if you invested the same dollar in US stocks at the start of 1980 and expertly side-stepped the ensuing recessions to perfection, you would have less than $32 at the same endpoint.
  • Stocks tend to bottom before the economy does. The three most recent recessions in the USA prior to COVID-19 were the recessions that lasted from July 1990 to March 1991, from March 2001 to November 2001, and from December 2007 to June 2009. During the first recession in this sample, data on the S&P 500 from Yale economist Robert Shiller, who won a Nobel Prize in 2013, showed that the S&P 500 bottomed in October 1990, five months before the recession ended. In the second episode, the S&P 500 found its low 15 months after the end of the recession, in February 2003; this was caused by the aftermath of the dotcom bubble's bursting. For the third recession, the S&P 500 reached a trough in March 2009, three months before the recession ended. Moreover, after the December 2007 – June 2009 recession ended, the US economy continued to worsen in at least one important way over the next few months. In March 2009, the unemployment rate was 8.7%; by June it had risen to 9.5%, and it crested at 10% in October. But by the time the unemployment rate peaked at 10%, the S&P 500 was 52% higher than its low of March 2009. So even if we are right today that the economy will be in worse shape in the months ahead, stocks may already have bottomed or be near a bottom – only time can tell.
  • The occurrence of multiple recessions has not stopped the upward march of stocks. The logarithmic chart below shows the performance of the S&P 500 (including dividends) from January 1871 to February 2020. It turns out that US stocks have done exceedingly well over these 149 years (up 46,459,412% in total including dividends, or 9.2% per year) despite the US economy having encountered numerous recessions. If you're investing for the long run, recessions are nothing to fear. (A quick sketch verifying the annualised figure follows Figure 1 below.)
Figure 1; Source: Robert Shiller data; National Bureau of Economic Research
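As a quick check on the annualised figure in the last bullet, here's the arithmetic as a sketch in Python, using only the rounded numbers quoted above:

```python
# A 46,459,412% total return over the 149 years from January 1871 to February 2020.
total_return_multiple = 46_459_412 / 100  # convert the percentage gain to a multiple

annualised = (1 + total_return_multiple) ** (1 / 149) - 1
print(f"{annualised:.1%}")  # ~9.2% per year, matching the figure in the text
```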

The history of interest rates and stocks

These are the important historical relationships between interest rates and stocks:

  • Rising interest rates have been met with rising valuations. According to Robert Shiller’s data, the US 10-year Treasury yield was 2.3% at the start of 1950. By September 1981, it had risen to 15.3%, the highest rate recorded in Shiller’s dataset. In that same period, the S&P 500’s price-to-earnings (P/E) ratio moved from 7 to 8. In other words, the P/E ratio for the S&P 500 increased slightly despite the huge jump in interest rates. It’s worth noting too that the S&P 500’s P/E ratio of 7 at the start of 1950 was not a result of earnings that were temporarily inflated. Yes, there’s cherry picking with the dates. For example, if I had chosen January 1946 as the starting point, when the US 10-year Treasury yield was 2.2% and the P/E ratio for the S&P 500 was 19, then it would be a case of valuations falling alongside rising interest rates. But this goes to show that while interest rates have a role to play in the movement of stocks, it is far from the only thing that matters.
  • Stocks have climbed in rising interest rate environments. In a September 2022 piece, Carlson showed that the S&P 500 climbed by 21% annually from 1954 to 1964 even when the yield on 3-month Treasury bills (a good proxy for the Fed Funds rate, which is the key interest rate set by the Federal Reserve) surged from around 1.2% to 4.4% in the same period. In the 1960s, the yield on the 3-month Treasury bill doubled from just over 4% to 8%, but US stocks still rose by 7.7% per year. And then in the 1970s, rates climbed from 8% to 12% and the S&P 500 still produced an annual return of nearly 6%.
  • Stocks have done poorly in both high and low interest rate environments, and have also done well in both high and low interest rate environments. Carlson published an article in February 2023 that looked at how the US stock market performed in different interest rate regimes. It turns out there’s no clear link between the two. In the 1950s, the 3-month Treasury bill (which is effectively a risk-free investment, since it’s a US government bond with one of the shortest maturities around) had a low average yield of 2.0%; US stocks returned 19.5% annually back then, a phenomenal gain. In the 2000s, US stocks fell by 1.0% per year when the average yield on the 3-month Treasury bill was 2.7%. Meanwhile, a blockbuster 17.3% annualised return in US stocks in the 1980s was accompanied by a high average yield of 8.8% for the 3-month Treasury bill. In the 1970s, the 3-month Treasury bill yielded a high average of 6.3% while US stocks returned just 5.9% per year. 
  • A cut in interest rates by the Federal Reserve is not guaranteed to be a good or bad event for stocks. Josh Brown, CEO of Ritholtz Wealth Management, shared fantastic data in an August 2024 article on how US stocks have performed in the past when the Federal Reserve lowered interest rates. His data, in the form of a chart, goes back to 1957; I reproduced it in tabular format in Table 1, which shows how US stocks did in the 12 months following a rate cut, as well as whether a recession occurred in the same window. I also split the data according to whether a recession occurred shortly after a rate cut, since eight of the 21 rate-cut cycles from the Federal Reserve since 1957 took place without an impending recession: Table 2 shows the same data as Table 1 but only for rate cuts with a recession, while Table 3 is for rate cuts without a recession. What the data show is that US stocks have historically done well, on average, in the 12 months following a rate cut. The overall record, seen in Table 1, is an average 12-month forward return of 9%. When a recession happened shortly after a rate cut, the average 12-month forward return was 8%; when a recession did not happen shortly after a rate cut, the average 12-month forward return was 12%. A recession is not necessarily bad for stocks: as Table 2 shows, US stocks have historically delivered an average return of 8% over the next 12 months after rate cuts that came with impending recessions. Nor is it guaranteed that stocks will produce good returns in the 12 months after a rate cut even if a recession does not occur, as can be seen from the August 1976 episode in Table 3. (A sketch of this averaging follows Table 3 below.)
Table 1; Source: Josh Brown
Table 2; Source: Josh Brown
Table 3; Source: Josh Brown
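To make the averaging in the last bullet concrete, here's a minimal sketch in Python with pandas. The rows are placeholders for illustration, not the actual data in Josh Brown's chart or Tables 1 to 3:

```python
import pandas as pd

# Each row: a rate-cut episode, the S&P 500's return over the following
# 12 months, and whether a recession occurred in that window.
cuts = pd.DataFrame({
    "cut_date": ["1957-11", "1976-08", "1995-07"],    # illustrative only
    "fwd_12m_return": [0.05, -0.06, 0.19],            # illustrative only
    "recession_within_12m": [True, False, False],     # illustrative only
})

print(cuts["fwd_12m_return"].mean())  # overall average (Table 1's 9% figure)
print(cuts.groupby("recession_within_12m")["fwd_12m_return"].mean())  # the Table 2 / Table 3 split
```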

Conclusion

Knowing how stocks have performed in the past in the context of recessions and changes in interest rates provides us with possible paths that stocks could take in the future. But it’s also worth bearing in mind that anything can happen in the financial markets. Things that have never happened before do happen, so there are limits to learning from history. Nonetheless, there’s a really important lesson from all the data seen above that I think is broadly applicable even far into the future, and it is that one-factor analysis in finance – “if A happens, then B will occur” – should be largely avoided because clear-cut relationships are rarely seen.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time. 

Company Notes Series (#2): BayCurrent Consulting

Editor’s note: We’re testing out a new series for the blog, the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first edition in the series can be found here. Please give us your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!


Start of notes for BayCurrent Consulting

Data as of 31 May 2023

Background

  • Founded in March 1998 as PC Works Co. Ltd for the purpose of consulting, system integration and outsourcing related to management, operations and IT. In December 2006, PC Works Co. Ltd changed its name to BayCurrent Consulting Co. Ltd. In April 2014, Byron Holdings Co. Ltd was set up. In June 2014, Byron Holdings Co. Ltd acquired BayCurrent Consulting Co. Ltd and then the combined entity changed its name to BayCurrent Consulting Co. Ltd.
  • HQ: Tokyo, Japan
  • Listed in September 2016 on the Tokyo Stock Exchange Mothers section; moved to the First Section of the Tokyo Stock Exchange in December 2018; moved to the Prime Market of the Tokyo Stock Exchange in April 2022
  • Ticker: TSE: 6532
  • Total number of consultants as of FY2023 (financial year ended February 2023) is 2,961; total number of employees as of FY2023 is 3,310

Business

  • BayCurrent is a consulting firm that supports a wide range of themes such as strategy, digital, and operations for Japan’s leading companies in various industries. BayCurrent provides planning and execution support to clients, such as company-wide strategy planning and business strategy planning to support decision-making by management, and support for examining business operations using digital technology.
  • Examples of the projects that BayCurrent is currently working on under Digital Consulting:
    • Finance, Cashless payment, and Design: Building a UX improvement process to continue achieving high customer satisfaction over the long term
    • Pharmaceutical manufacturing, Digital technologies, and Market research: Formulating plans to enter the Japanese market of advanced digital medical equipment business for foreign companies 
    • Telecommunication, Metaverse, and Business planning: Developing plans to use metaverse and examine the use of AI toward the smart city concept
    • Automobiles, AI, and Business creation: Building a model for business using AI and supporting its implementation, aiming to reduce the risk of traffic accidents
  • Examples of the projects that BayCurrent is currently working on under Sustainability Consulting:
    •  Energy, ESG, and Support for practice: Forming a scheme and supporting negotiations for realizing offshore wind power business
    • Finance and Carbon neutrality: Considering policies in response to TCFD (Task Force on Climate-Related Financial Disclosures) in anticipation of sales of solutions in the future
    • High-tech, EV, and Business planning: Considering business domains and creating a road map for popularizing EVs (electric vehicles) to reduce CO2 emissions
    • Manufacturing, ESG, and Supply chain management: Considering the possibility of commercializing supplier ESG assessments and risk management
  • See also Figures 1, 2, 3, and 4 for examples of BayCurrent’s projects
Figure 1
Figure 2
Figure 3
Figure 4
  • BayCurrent groups its customers into three industry categories: Finance (banking, securities, insurance etc); Telecommunications/Media/High-Tech; and Others (energy, entertainment, government offices, food etc). In FY2023, 25% of revenue was from Finance, 35% was from Telecommunications/Media/High-Tech, and 40% from Others. In FY2019, the split was 40% from Finance, 30% from Telecommunications/Media/High-Tech, and 30% from Others. BayCurrent’s largest customer in FY2023 was Pfizer Japan, accounting for 12.0% of revenue; it seems like there’s no other company that accounted for more than 10% of revenue during the year.
  • In FY2023, revenue from customers based in Japan was at least 90% of BayCurrent’s total revenue.

Market opportunity

  • According to IDC Japan’s “Forecast for domestic business consulting market: 2021-2025” (announced on 1 July 2021), the Japanese consulting market is expected to have a CAGR of 7.8% from around ¥900 billion in 2020 to more than ¥1.2 trillion in 2025; within the consulting market is the digital consulting sub-segment which is expected to have a CAGR of 30.1% from more than ¥100 billion in 2020 to around ¥500 billion in 2025. BayCurrent ended FY2023 with revenue of just ¥76.1 billion.
  • According to BayCurrent: “In today’s business environment, the challenges faced by corporate managers are becoming more diverse and complex due to intensifying market competition and changes in market structure. There is a growing need for consultants with a high level of expertise. Furthermore, with the further development of digital technology in the future, the need for the utilization of new technologies in business is expected to increase year by year, and the consulting market is expected to continue to grow at a high rate.”
  • In Japan, there’s an initiative called DX (Digital Transformation) that the Japanese government began promoting heavily in 2018 with the publication that year of the “DX [Digital Transformation]” report by the Ministry of Economy, Trade, and Industry (METI). METI warned that Japan would face an economic loss of ¥12 trillion per year by 2025 if traditional mainframes and backbone core systems were not updated and the shortage of ICT engineers was not addressed. Moreover, as of 2016, 20% of companies had been operating their core systems for 21 years or more, and 40% for 11 to 20 years; if this situation were to continue for 10 years, then by 2025, 60% of companies would have been operating their core systems for 21 years or more. Japanese companies appear to have heeded the government’s DX call: surveys conducted by METI and FUJITSU in 2020 indicated that almost half of SMEs were actively promoting DX companywide, while large companies with more than 5,000 employees indicated an adoption rate close to 80%. These are tailwinds for BayCurrent.

Growth strategy

  • BayCurrent is focused on further increasing the added value of its consulting services; the recruitment and training of human resources; and providing an attractive work environment. 
  • BayCurrent’s support services for corporate managers in all industries are knowledge-intensive, so management believes that improvements in its consultants’ ability to make proposals and solve problems will affect its growth. For this reason, management strives to recruit excellent human resources with various backgrounds and focuses on creating an environment and terms of employment that make it easy for each consultant to work with peace of mind. Management has established a wide variety of training programs and study sessions to improve its consultants’ skills in strategic planning and solving management issues. Management believes that BayCurrent is able to formulate viable strategies that meet the needs of clients precisely because the company’s consultants are professionals who have worked on numerous projects across industries and service areas; for this reason, management strives not to limit its consultants to specific fields. Figure 5 shows the establishment of the BayCurrent Institute, a business management research institute.
  • Management also distributes knowledge obtained through dialogue with university professors working on research subjects and members of the management teams of leading companies, in order to gain visibility from the public. Most recent examples of such work:
    • Participated in FIN/SUM, one of Japan’s largest fintech conferences, co-hosted by the Financial Services Agency and Nikkei Inc. BayCurrent did the following: Joji Noritake, Managing Executive Officer and CDO, delivered a standalone lecture on “Sustainable customer experience connects emotional memories”, and took part in a panel discussion on “Possibility of future individual investment through digital technology”
    • Participation in Green CPS Consortium, an organization aimed at building eco-friendly industry and society by controlling material loss, energy loss, and other aspects in all economic activities while driving economic growth
    • Made a donation to support VR/metaverse research in the corporate-sponsored practical research program of the University of Tokyo Virtual Reality Educational Research Center. The research program conducts basic research on the creation and operation of metaverse space and conducts demonstration experiments to develop practical applications of the metaverse in society.
  • Growth of number of consultants vs growth of revenue (note the higher revenue growth vs consultant growth):
Table 1
Figure 5

Financials

  • Financials from FY2016 to FY2023 (financials in ¥; earliest data we could find was for FY2016): 
Table 2
  • Solid CAGRs in revenue (the CAGR arithmetic is sketched after this list):
    • FY2016-FY2023: 25.1%
    • FY2018-FY2023: 30.1%
    • FY2023: 32.4% growth
  • Profitable since at least FY2016. Net income CAGRs and average net income margins:
    • FY2016-FY2023: 52.3% CAGR, 15.3% average margin
    • FY2018-FY2023: 60.3% CAGR, 18.1% average margin
    • FY2023: 43.3% growth, 27.6% margin
  • Positive operating cash flow since at least FY2016. Operating cash flow CAGRs and average operating cash flow margins:
    • FY2016-FY2023: 34.0% CAGR, 19.4% average margin
    • FY2018-FY2023: 45.0% CAGR, 21.6% average margin
    • FY2023: 35.5% growth, 27.2% margin
  • Free cash flow positive since at least FY2016. Free cash flow CAGRs and average free cash flow margins:
    • FY2016-FY2023: 34.0% CAGR, 19.1% average margin
    • FY2018-FY2023: 45.8% CAGR, 21.3% average margin
    • FY2023: 33.6% growth, 26.7% margin
  • Balance sheet was initially in a net-debt position and has been in a net-cash position from FY2020 onwards; high net-cash position of ¥33 billion in FY2023
  • Minimal dilution, as the weighted average diluted share count increased by only 0.8% per year for FY2016-FY2023, and fell 0.3% in FY2023
  • Management aims for a total shareholder return ratio (dividends and share buybacks) of around 40% of earnings; the dividend payout ratio is typically 20%-30% under IFRS. In FY2023, there was an interim dividend of ¥14 per share (adjusted for the 1-for-10 stock split in November 2022) and a final dividend of ¥23 per share, for a total dividend of ¥37 per share, representing a payout ratio of 27%.
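The multi-year growth figures above use the standard CAGR formula, and the stated payout ratio can be checked against the per-share figures in these notes. A minimal sketch in Python; the FY2016 revenue figure is a back-of-envelope placeholder inferred from the stated CAGR (not a number from Table 2), and the EPS figure is the trailing diluted EPS from the valuation section below:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Back-of-envelope: FY2023 revenue was ¥76.1 billion; a starting value of
# roughly ¥15.9 billion over the 7 years from FY2016 reproduces the ~25.1% CAGR.
print(f"{cagr(15.9, 76.1, 7):.1%}")

# FY2023 payout ratio: interim ¥14 + final ¥23 dividend per share,
# against trailing diluted EPS of ¥137.19.
print(f"{(14 + 23) / 137.19:.0%}")  # ~27%, matching the stated payout ratio
```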

Management

  • Yoshiyuki Abe, 57, is President and CEO. Joined the original BayCurrent Consulting Co. Ltd in September 2008 and became an executive director in November of the same year. Became President in December 2016 after some major turmoil at BayCurrent in H2 2016:
    • BayCurrent failed to win enough deals to keep its consultants deployed, and suffered a much lower utilisation rate as a result
    • It additionally faced the defection of employees amid talk of a withdrawal, which damaged its credibility with clients it had supported for many years
    • It revised earnings forecasts downward on 9 December 2016; the former President left office the same day
  • Kentaro Ikehira, 46, is Executive Vice President. Became Vice President in May 2021. Joined the original BayCurrent Consulting Co. Ltd in September 2007.
  • Kosuke Nakamura, 41, is CFO. Became CFO in May 2021. Joined the original BayCurrent Consulting Co. Ltd in January 2007.
  • Management has a long history of significantly beating their own mid-term growth projections. Examples:
    • In the FY2018 earnings presentation, a projection for FY2019-FY2021 was given where revenue was expected to have a CAGR of 15%-20% and end at ¥32-35 billion. Actual FY2021 revenue was ¥42.8 billion.
    • In the FY2022 earnings presentation, a projection for FY2022-FY2026 was given where revenue was expected to have a CAGR of 20% and end at ¥100 billion, and EBITDA was expected to end at ¥30 billion. The projection given for FY2024 was for revenue of ¥94.6 billion and EBITDA of ¥36 billion – so the FY2026 medium-term projection could be achieved or beaten as early as FY2024
  • Management has set a target of FY2029 revenue of ¥250 billion, which represents a 20% CAGR from FY2024’s projected revenue of ¥94.6 billion.

Compensation of Management

  • Yoshiyuki Abe’s total FY2023 compensation was ¥333 million, consisting of ¥40 million of fixed pay, ¥192 million of performance-linked remuneration, and ¥101 million of restricted stock compensation. Total compensation in FY2023 was just 1.6% of both FY2023 net income and FY2023 free cash flow
  • Yoshiyuki Abe’s total FY2022 compensation was ¥297 million, his FY2021 compensation was ¥206 million, and his FY2020 compensation was ¥137 million.
  • Comparison of Yoshiyuki Abe’s compensation growth vs BayCurrent’s revenue/net income/FCF growth over past few years:
Table 3

Valuation (as of 31 May 2023)

  • 31 May 2023 share price of ¥5,110
  • Trailing revenue per share is ¥496.47, hence PS is 10.3
  • Trailing diluted EPS is ¥137.19, hence PE is 37.2
  • Trailing FCF per share is ¥132.71, hence PFCF is 38.5
  • Reminder that revenue growth projection for FY2029 is for CAGR of 20% from FY2024 – the valuation does not look too rich if BayCurrent is able to grow as projected 
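As a sanity check, the multiples above fall straight out of the share price and the trailing per-share figures (a minimal sketch):

```python
# Multiples from the 31 May 2023 share price and trailing per-share figures.
price = 5110

print(f"P/S:   {price / 496.47:.1f}")   # ~10.3, from revenue per share
print(f"P/E:   {price / 137.19:.1f}")   # ~37.2, from diluted EPS
print(f"P/FCF: {price / 132.71:.1f}")   # ~38.5, from FCF per share
```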

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

The Federal Reserve Has Much Less Power Over Financial Markets Than You Think 

It makes sense to mostly ignore the Federal Reserve’s actions when assessing opportunities in the stock market.

Last week, the Federal Reserve, the USA’s central bank, opted to lower the federal funds rate (the key interest rate controlled by it) by 50 basis points, or 0.5%. The move, both before and after it was announced, was heavily scrutinised by market participants. There’s a widely held belief that the Federal Reserve wields tremendous influence over nearly all aspects of financial market activity in the USA.

But Aswath Damodaran, the famed finance professor from New York University, made an interesting observation in a recent blog post: the Federal Reserve has nowhere near the level of influence over America’s financial markets that many market participants think it has.

In his post, Damodaran looked at the 249 calendar quarters from 1962 to 2024, classified them according to how the federal funds rate changed, and compared the changes to how various metrics in the US financial markets moved. There were 96 quarters in the period where the federal funds rate was raised, 132 quarters where it was cut, and 21 quarters where it was unchanged. Some examples of what he found:

  • A median change of -0.01% in the 10-year Treasury rate was seen in the following quarter after the 96 quarters where the federal funds rate increased, whereas a median change of 0.07% was seen in the following quarter after the 132 quarters where the federal funds rate was lowered. Put another way, the 10-year Treasury rate has historically tended to (1) decrease when the federal funds rate increased, and (2) increase when the federal funds rate decreased. This means that the Federal Reserve has very little control over longer-term interest rates. 
  • A median change of -0.13% in the 15-year mortgage rate was seen in the following quarter after the quarters where the federal funds rate increased, whereas a median change of -0.06% was seen in the following quarter after the quarters where the federal funds rate was lowered. It turns out that the Federal Reserve also exerts little control over the types of interest rates that consumers directly interact with on a frequent basis.
  • A median change of 2.85% in US stocks was seen in the following quarter after the quarters where the federal funds rate increased, a median change of 3.07% was seen in the following quarter after the quarters where the federal funds rate was lowered, and a median change of 5.52% was seen in the following quarter after the quarters where the federal funds rate was unchanged. When discussing the stock-market related data, Damodaran provided a provocative question and answer: 

“At the risk of disagreeing with much of conventional wisdom, is it possible that the less activity there is on the part of the Fed, the better stocks do? I think so, and stock markets will be better served with fewer interviews and speeches from members of the FOMC and less political grandstanding (from senators, congresspeople and presidential candidates) on what the Federal Reserve should or should not do.”
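To make Damodaran's classify-then-compare exercise concrete, here's a minimal sketch in Python with pandas. The numbers are placeholders for illustration, not his actual dataset:

```python
import pandas as pd

quarters = pd.DataFrame({
    "fed_funds_change": [0.50, -0.25, 0.00, -0.50, 0.25],         # illustrative only
    "ten_year_change_next_q": [-0.01, 0.07, 0.02, 0.10, -0.03],   # illustrative only
})

def direction(change: float) -> str:
    """Bucket a quarter by the direction of the federal funds rate change."""
    if change > 0:
        return "raised"
    if change < 0:
        return "cut"
    return "unchanged"

quarters["regime"] = quarters["fed_funds_change"].apply(direction)
# Median move in the 10-year Treasury rate the following quarter, per regime
print(quarters.groupby("regime")["ten_year_change_next_q"].median())
```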

I have always paid scant attention to what the Federal Reserve is doing when making my investing decisions. My view, born from observations of financial market history* and a desire to build a lasting investment strategy, is that business fundamentals trump macroeconomics. Damodaran’s data lends further support to my stance of mostly ignoring the Federal Reserve’s actions when I assess opportunities in the stock market.

*A great example can be found in Berkshire Hathaway, Warren Buffett’s investment conglomerate. Berkshire produced an 18.7% annual growth rate in its book value per share from 1965 to 2018, which drove a 20.5% annual increase in its stock price. Throughout those 53 years, Berkshire endured numerous macro worries, such as the Vietnam War, the Black Monday stock market crash, the “breaking” of the Bank of England, the Asian Financial Crisis, the bursting of the Dotcom Bubble, the Great Financial Crisis, Brexit, and the US-China trade war. Damodaran’s aforementioned blog post also showed that the federal funds rate moved from around 5% in the mid-1960s to more than 20% in the early-1980s and then to around 2.5% in 2018. And yet, an 18.7% input (Berkshire’s book value per share growth) still resulted in a 20.5% output (Berkshire’s stock price growth).


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

The Buyback Endemic

Buying back stock at unreasonably high valuations is not a good use of capital and can destroy shareholder value.

Buybacks can be a good way for companies to enhance shareholder value. Share buybacks reduce the number of shares outstanding, allowing companies to pay a higher dividend per share in the future.
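To see the mechanics, here's a toy illustration with made-up numbers: retiring shares lets the same dividend pool pay out more per share.

```python
dividend_pool = 100_000      # total cash earmarked for dividends each year
shares_before = 100_000
shares_bought_back = 10_000  # shares retired via the buyback

shares_after = shares_before - shares_bought_back
print(dividend_pool / shares_before)           # $1.00 per share before the buyback
print(round(dividend_pool / shares_after, 2))  # ~$1.11 per share after the buyback
```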

But not all buybacks are good. Done at the wrong price, buybacks can actually be a bad use of capital. In fact, I have seen many companies conduct buybacks recklessly, without consideration of the share price.

The problem probably arises for a few reasons.

Wrong mindset

First, some executives do not have a good grasp of what buybacks are. Take this statement from Tractor Supply’s management in its 2024 second-quarter earnings report for example:

“The Company repurchased approximately 0.5 million shares of its common stock for $139.2 million and paid quarterly cash dividends totaling $118.5 million, returning a total of $257.7 million of capital to shareholders in the second quarter of 2024.”

The issue with this statement is that it lumps dividends and share repurchases into the same bracket. It also implies that share repurchases are a form of returning capital to shareholders. The truth is that share repurchases do not return cash to long-term shareholders, only to exiting shareholders. If management mistakes repurchases for capital return, it may lead them to do buybacks regularly, instead of opportunistically.

Although I am singling out Tractor Supply’s management, they are just one out of many management teams that seem to have the wrong mindset when it comes to buybacks.

Incentives

Additionally, executive compensation schemes may encourage management to buy back stock even if it is not the best use of capital. 

For instance, Adobe’s executives have an annual cash remuneration plan that is determined in part by them achieving certain earnings per share goals. This may lead management to buy back stock simply to boost the company’s earnings per share. But doing so when prices are high is not a good use of capital. When Adobe’s stock price is high, it would be better for management to simply return dividends to shareholders – but management may not want to pay dividends as it does not increase the company’s earnings per share.

Again, while I am singling out Adobe’s management, there are numerous other companies that have the same incentive problem.

Tax avoidance

I have noticed that the buyback phenomenon is more prevalent in countries where dividends are taxed.

The US, for instance, seems to have a buyback endemic where companies buy back stock regardless of the price. This may be because US investors have to pay a tax on dividends, which makes buybacks a more tax-efficient way of returning capital to shareholders. By contrast, Singapore investors do not pay taxes on dividends, so Singapore companies do not do buybacks as often.

However, simply doing buybacks for tax efficiency reasons without considering the share price can still harm shareholders. Again, management teams need to weigh both the pros and cons of buybacks before conducting them.

Final thoughts

There is no quick fix to this problem, but there are some steps that I believe companies can take to address the issue.

First, fix the incentives problem. A company’s board of directors needs to recognise that incentives that are not structured thoughtfully can encourage reckless buybacks regardless of the share price.

Second, management teams need to educate themselves on how to increase long-term value for shareholders and to understand the difference between buybacks and dividends.

Third, management teams need to understand the implications of taxes properly. Although it is true that taxes can affect shareholders’ total returns when a company pays a dividend, it is only one factor when it comes to shareholder returns. Executive teams need to be coached on these aspects of capital allocation.

Only through proper education and incentives will the buyback endemic be solved.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe and Tractor Supply. Holdings are subject to change at any time.

Assessing Different Share Buyback Strategies

Buying back stock is a great way to drive shareholder value but only if it is done at the right price.

Over the past few years, I have observed the different ways that companies conduct their share buybacks. This made me realise that the way a company conducts its share buybacks can have a profound impact on the long term returns of its stock.

Here are some share buyback strategies and what they mean for shareholders.

Opportunistic

The best way to conduct share buybacks is what I term opportunistic buybacks. This means buying back shares aggressively when they are undervalued, and holding back when they are not.

An example of a company that does this very well is the US-listed company Medpace, which helps drugmakers run drug trials. 

In 2022, when markets and its own stock price were down, Medpace took the opportunity to buy back its shares aggressively. The company tapped the debt markets to procure more capital to buy back shares, to the extent that its net-cash position of US$314 million at the end of 2021 flipped to a net-debt position of US$361 million as of 30 June 2022.

But as its stock price went up, Medpace became less enthusiastic about buying back shares and instead started to pay off the debt it had incurred; the company ended 2022 with a lower net-debt position of US$180 million.

This type of opportunistic buyback strategy is the most efficient buyback strategy in my opinion.

The plot below shows the amount Medpace spent on buybacks over the last three years.

Source: TIKR.com

With its stock price now at a much higher level, Medpace has not conducted buybacks for the last four quarters. Medpace’s management team is likely waiting for its shares to fall to a lower valuation before they conduct buybacks again.

Regular buybacks

Another way to conduct buybacks is to do them on a regular basis. Alphabet, the parent of Google, is one company that has conducted very regular buybacks. In each of the past 10 quarters, Alphabet has consistently spent close to US$15 billion a quarter on buybacks, including quarters when the company’s free cash flow was less than US$15 billion.

Although I prefer opportunistic buybacks, regular buybacks may be best suited for a company such as Alphabet which has to deploy large amounts of capital. Alphabet’s shares have also consistently traded at a reasonable valuation over the last few years, making regular buybacks a decent strategy.

The chart below shows the amount that Alphabet spent on buybacks in each quarter for the last 10 quarters. 

Source: TIKR.com

Poor timing

At the other end of the spectrum, some companies try to time their buybacks but end up being aggressive with buybacks at the wrong time.

Take Adobe, the owner of Photoshop, for example.

Source: TIKR.com

Adobe seems to change the level of aggressiveness in its share buybacks from quarter to quarter.

In the first quarter of 2022, Adobe’s stock price was close to all-time highs, but the company was very aggressive with buybacks and spent more than US$2 billion – or 143% of its free cash flow in the quarter – to repurchase its shares.

When its stock price started falling later that year, instead of taking advantage of the lower price, Adobe surprisingly cut down on its buybacks to slightly over US$1 billion a quarter, less than what it generated in free cash flow during those periods. So far in 2024, Adobe has again increased its buybacks after its stock price increased.

The optimal strategy would have been to do more buybacks when its stock price was low and fewer when its stock price was high.

Bottom line

Buybacks can be a great way to add value to shareholders. However, it is vital that companies conduct buybacks at low valuations to maximise the use of their capital to generate long term returns for shareholders. 

Medpace is an excellent example of great capital allocation, even going so far as to tap the debt markets to be even more aggressive with buybacks when its stock price was low. In the middle, we have companies such as Alphabet that consistently buy back shares. At the other end of the spectrum is Adobe, which seems to become more aggressive with buybacks at the wrong times.

Hopefully, more companies can follow in the footsteps of Medpace and make sure they put their capital to use only when the time is right.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, and Medpace. Holdings are subject to change at any time.

The Expensive Weighing Machine

Stocks and business fundamentals can diverge wildly in the short run, only to then converge in the long run.

In Pain Before Gain, I shared Walmart’s past business growth and corresponding stock price movement (emphases are new):

From 1971 to 1980, Walmart produced breath-taking business growth. The table below shows the near 30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not help with Walmart’s short-term return… Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the S&P 500 was down by ‘only’ 40%. But by the end of 1979 (when inflation in the USA peaked during the 1970s), Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972 (when inflation was at a low in the 1970s)…

…At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year…

It turns out that in late-August 1972, when its stock price was less than US$0.04, Walmart’s price-to-earnings (P/E) ratio was between 42 and 68… This is a high valuation… at Walmart’s stock price in December 1974, after it had sunk by 75% to a low of around US$0.01 to carry a P/E ratio of between 6 and 7, the easy conclusion is that it was a mistake to invest in Walmart in August 1972 because of its high valuation. But as can be seen above, Walmart’s business continued to grow and its stock price eventually soared to around US$3.70 near the end of 1989. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times where it was in late-August 1972.”
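The annualised figure in the passage above can be roughly reproduced from the rounded prices quoted. A quick sketch in Python; the output is approximate precisely because the inputs are rounded (the passage cites “in the region of 32%”, and the actual late-August 1972 price was below US$0.04, which would push the computed return higher):

```python
from datetime import date

start_price, end_price = 0.04, 3.70  # late-August 1972 and end of 1989, rounded
years = (date(1989, 12, 31) - date(1972, 8, 31)).days / 365.25

annualised = (end_price / start_price) ** (1 / years) - 1
print(f"{annualised:.0%}")  # ~30% with these rounded inputs
```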

In When Genius Failed (temporarily)*, I explored a little-discussed aspect of Teledyne’s history (emphasis is from the original passage):

Warren Buffett once said that Singleton “has the best operating and capital deployment record in American business… if one took the 100 top business school graduates and made a composite of their triumphs, their record would not be as good.”

Singleton co-founded Teledyne in 1960 and stepped down as chairman in 1990… According to The Outsiders, a book on eight idiosyncratic CEOs who generated tremendous long-term returns for their shareholders, Teledyne produced a 20.4% annual return from 1963 to 1990, far ahead of the S&P 500’s 8.0% return. Distant Force, a hard-to-obtain memoir on Singleton, mentioned that a Teledyne shareholder who invested in 1966 “was rewarded with an annual return of 17.9 percent over 25 years, or a return of 53 times his invested capital.” In contrast, the S&P 500’s return was just 6.7 times in the same time frame… 

based on what I could gather from Distant Force, Teledyne’s stock price sunk by more than 80% from 1967 to 1974. That’s a huge and demoralising decline for shareholders after holding on for seven years, and was significantly worse than the 11% fall in the S&P 500 in that period. But even an investor who bought Teledyne shares in 1967 would still have earned an annualised return of 12% by 1990, outstripping the S&P 500’s comparable annualised gain of 10%. And of course, an investor who bought Teledyne in 1963 or 1966 would have earned an even better return… 

But for the 1963-1989 time frame, based on data from Distant Force, it appears that the compound annual growth rates (CAGRs) for the conglomerate’s revenue, net income, and earnings per share were 19.8%, 25.3%, and 20.5%, respectively; the self-same CAGRs for the 1966-1989 time frame were 12.1%, 14.3%, and 16.0%. These numbers roughly match Teledyne’s returns cited by The Outsiders and Distant Force…

My article The Need For Patience contained one of my favourite investing stories and it involves Warren Buffett and his investment in The Washington Post Company (emphasis is from the original passage):

Through Berkshire Hathaway, he invested US$11 million in WPC [The Washington Post Company] in 1973. By the end of 2007, Berkshire’s stake in WPC had swelled to nearly US$1.4 billion, which is a gain of over 10,000%. But the percentage gain is not the most interesting part of the story. What’s interesting is that, first, WPC’s share price fell by more than 20% shortly after Buffett invested, and then stayed in the red for three years

Buffett first invested in WPC in mid-1973, after which he never bought more after promising Katherine Graham (the then-leader of the company and whose family was a major shareholder) that he would not do so without her permission. The paragraph above showed that Berkshire’s investment in WPC had gains of over 10,000% by 2007. But by 1983, Berkshire’s WPC stake had already increased in value by nearly 1,200%, or 28% annually. From 1973 to 1983, WPC delivered CAGRs in revenue, net income, and EPS of 10%, 15%, and 20%, respectively (EPS grew faster than net income because of buybacks). 

The experiences of Walmart, Teledyne, and WPC are all cases of an important phenomenon in the stock market: their stock price movements were initially detached from their underlying business fundamentals in the short run, before the two eventually converged with the passage of time, even when some of the stocks began with very high valuations. Nor are these idiosyncratic instances.

Renowned Wharton finance professor Jeremy Siegel – of Stocks for the Long Run fame – penned an article in late-1998 titled Valuing Growth Stocks: Revisiting The Nifty-Fifty. In his piece, Siegel explored the business and stock price performances from December 1972 to August 1998 for a group of US-listed stocks called the Nifty-Fifty. The group was perceived to have bright business-growth prospects in the early 1970s and thus carried high valuations. As Siegel explained, these stocks “had proven growth records” and “many investors did not seem to find 50, 80 or even 100 times earnings at all an unreasonable price to pay for the world’s preeminent growth companies [in the early 1970s].” But in the brutal 1973-1974 bear market for US stocks, when the S&P 500 fell by 45%, the Nifty-Fifty did even worse. For perspective, here’s Howard Marks’ description of the episode in his book The Most Important Thing (emphasis is mine):

In the early 1970s, the stock market cooled off, exogenous factors like the oil embargo and rising inflation clouded the picture and the Nifty Fifty stocks collapsed. Within a few years, those price/earnings ratios of 80 or 90 had fallen to 8 or 9, meaning investors in America’s best companies had lost 90 percent of their money.”

Not every member of the Nifty-Fifty saw its business prosper in the decades that followed the 1970s. But of those that did, Siegel showed in Valuing Growth Stocks that their stock prices eventually tracked their business growth and also beat the performance of the S&P 500. These are displayed in the table below. There are a few important things to note about the table’s information:

  • It shows the stock price returns from December 1972 to August 1998 for the S&P 500 and five of the Nifty-Fifty identified by Siegel as having the highest annualised stock price returns; December 1972 was the peak for US stocks before the 1973-1974 bear market
  • It shows the annualised earnings per share (EPS) growth for the S&P 500 and the five aforementioned members of the Nifty-Fifty
  • Despite suffering a major decline in their stock prices in the 1973-1974 bear market, members of the Nifty-Fifty whose businesses continued to thrive saw their stock prices beat the S&P 500 and effectively match their underlying business growth in the long run even when using the market-peak in December 1972 as the starting point.
Source: Jeremy Siegel

You may have noticed that all of the examples of stock prices first collapsing then eventually reflecting their underlying business growth that were shared above – Walmart, Teledyne, WPC, and members of the Nifty-Fifty – were from the 1970s. What if this relationship between stock prices and business fundamentals no longer holds now? It’s a legitimate concern. Economies change over time. Financial markets do too.

But I believe the underlying driver for the initial divergence and eventual convergence in the paths that the companies’ businesses and stock prices had taken in the past are alive and well today. This is because the driver was, in my opinion, the simple but important nature of the stock market: It is a place to buy and sell pieces of a business. This understanding leads to a logical conclusion that a stock’s price movement over the long run depends on the performance of its underlying business. The stock market, today, is still a place to buy and sell pieces of a business, which means the market is still a weighing machine in the long run. This also means that if you had invested a few years ago in a stock with an expensive valuation and have seen its stock price fall, it will likely still be appropriately appraised by the weighing machine in the fullness of time, if its fundamentals do remain strong in the years ahead. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.