Beware of This Valuation Misconception

Don’t value your shares based on cash flow to the firm; value them based on cash flow to the shareholder.

How should we value a stock? That’s one of the basic questions when investing. Warren Buffett answers this question extremely well. He says:

“Intrinsic value can be defined simply: It is the discounted value of the cash that can be taken out of a business during its remaining life.”

While this definition seems straightforward, many investors (myself included) have confused the cash flow that a company generates with the cash that is actually taken out of the business.

While the two may sound similar, they are in fact very different.

Key difference

The surplus cash that a firm generates is termed free cash flow. This is the cash flow that the company generates from operations minus any capital expenditure paid.

But not all free cash flow to the firm is distributed to shareholders. Some of it may be used for acquisitions, some may be left in the bank, and some may be reinvested in other assets. This is not cash that a shareholder will receive. The only cash flow that is taken out of the business and paid to shareholders is the dividend.

When valuing a stock, it is important that we only take cash that will be returned to the shareholder as the basis of the valuation.

Extra free cash flow that is not returned to shareholders should not be considered when valuing a stock.

Common mistake

It is a big mistake to value a stock based on the cash flow that the company generates, as doing so can severely overstate the value of a business.

When using a discounted cash flow model, we should not take free cash flow to the firm as the basis of valuation; instead, we should use future dividends to value the business.

But what if the company is not paying a dividend?

Well, the same principle applies. If there is no dividend yet, we need to account for that in our valuation by modelling dividend payments that begin only later in the future.
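The gap between the two approaches can be made concrete with a small sketch. The figures below are entirely hypothetical, chosen only to show how discounting free cash flow the shareholder never receives inflates the answer compared with discounting the dividends actually paid out:

```python
# Illustrative sketch: valuing a stock on dividends vs. on free cash flow
# to the firm (FCFF). All numbers are hypothetical.

def present_value(cash_flows, discount_rate):
    """Discount a list of yearly cash flows (year 1, 2, ...) back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

discount_rate = 0.10

# Suppose the firm generates $100 of free cash flow per year for 10 years...
fcff = [100] * 10

# ...but only starts paying a $60 dividend from year 6 onwards, because
# the earlier cash is retained in the bank or spent on acquisitions.
dividends = [0] * 5 + [60] * 5

value_on_fcff = present_value(fcff, discount_rate)
value_on_dividends = present_value(dividends, discount_rate)

print(f"Value based on FCFF:      ${value_on_fcff:.0f}")       # ~$614
print(f"Value based on dividends: ${value_on_dividends:.0f}")  # ~$141
```

On these made-up numbers, the FCFF-based valuation is more than four times the dividend-based one; the difference is exactly the cash that never reaches the shareholder.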

Bottom line

Using discounted cash flow to the firm to value a business can severely overstate its value. This is dangerous because it can be used to justify unwarranted valuations, leading to the purchase of overvalued stocks.

To be accurate, a company should be valued based only on how much it can return to shareholders.

That said, free cash flow to the firm is not a useless metric in valuation. It is actually the basis of what makes a good company.

A company that can generate strong and growing free cash flows should be able to return an increasing stream of dividends to shareholders in the future. Free cash flow to the firm can be called the “lifeblood” of sustainable dividends.

Of course, all of this also depends on whether management is able to make good investment decisions on the cash it generates.

Therefore, when investing in a company, two key things matter: one, how much free cash flow the firm generates; and two, how good management is at allocating that capital.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q4 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the fourth quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management believes that AI will allow the company to develop the most innovative and personalised AI interfaces in the world, and the company recently acquired GamePlanner AI to do so; Airbnb’s management thinks that popular AI services today, such as ChatGPT, are underutilising the foundational models that power the services; GamePlanner AI was founded by the creator of Apple’s Siri smart assistant

There is a new platform shift with AI, and it will allow us to do things we never could have imagined. While we’ve been using AI across our service for years, we believe we can become a leader in developing some of the most innovative and personalized AI interfaces in the world. In November, we accelerated our efforts with the acquisition of GamePlanner AI, a stealth AI company led by the co-founder and original developer of Siri. With these critical pieces in place, we’re now ready to expand beyond our core business. Now this will be a multiyear journey, and we will share more with you towards the end of this year…

…If you were to open, say, ChatGPT or Google, though the models are very powerful, the interface is really not an AI interface. It’s the same interface as the 2000s, in a sense, the 2010s. It’s a typical classical web interface. So we feel like the models, in a sense, are probably underutilized…

Airbnb’s management does not want to build foundational large language models – instead, they want to focus on the application layer

One way to think about AI is, let’s use a real-world metaphor. I mentioned we’re building a city. And in that city, we have infrastructure, like roads and bridges. And then on top of those roads and bridges, we have applications like cars. So Airbnb is not an infrastructure company. Infrastructure would be a large language model or, obviously, GPUs. So we’re not going to be investing in infrastructure. So we’re not going to be building a large language model. We’ll be relying on, obviously, OpenAI. Google makes — or create a model, Meta creates models. So those are really infrastructure. They’re really developing infrastructure. But where we can excel is on the application layer. And I believe that we can build one of the leading and most innovative AI interfaces ever created. 

Airbnb’s management believes that the advent of generative AI represents a platform shift and it opens the probability of Airbnb becoming a cross-vertical company

Here’s another way of saying it. Take your phone and look at all the icons on your phone. Most of those apps have not fundamentally changed since the advent of Generative AI. So what I think AI represents is the ultimate platform shift. We had the internet. We had mobile. Airbnb really rose during the rise of mobile. And the thing about a platform shift, as you know, there is also a shift in power. There’s a shift of behavior. And so I think this is a 0-0 ball game, where Airbnb, we have a platform that was built for 1 vertical short-term space. And I think with AI — Generative AI and developing a leading AI interface to provide an experience that’s so much more personalized than anything you’ve ever seen before.

Imagine an app that you feel like it knows you, it’s like the ultimate Concierge, an interface that is adaptive and evolving and changing in real-time, unlike no interface you’ve ever seen before. That would allow us to go from a single vertical company to a cross-vertical company. Because one of the things that we’ve noticed is the largest tech companies aren’t a single vertical. And we studied Amazon in the late ’90s, early 2000s, when they went from books to everything, or Apple when they launched the App Store. And these really large technology companies are horizontal platforms. And I think with AI and the work we’re doing around AI interfaces, I think that’s what you should expect of us.

Alphabet (NASDAQ: GOOG)

Alphabet’s Google Cloud segment saw accelerated growth in 2023 Q4 from generative AI

Cloud, which crossed $9 billion in revenues this quarter and saw accelerated growth driven by our GenAI and product leadership.

Alphabet closed 2023 by launching Gemini, a foundational AI model, which has state-of-the-art capabilities; Gemini Ultra is coming soon

We closed the year by launching the Gemini era, a new industry-leading series of models that will fuel the next generation of advances. Gemini is the first realization of the vision we had when we formed Google DeepMind, bringing together our 2 world-class research teams. It’s engineered to understand and combine text, images, audio, video and code in a natively multimodal way, and it can run on everything from mobile devices to data centers. Gemini gives us a great foundation. It’s already demonstrating state-of-the-art capabilities, and it’s only going to get better. Gemini Ultra is coming soon. The team is already working on the next versions and bringing it to our products.

Alphabet is already experimenting with Gemini in Google Search; Search Generative Experience (SGE) saw its latency drop by 40% with Gemini

We are already experimenting with Gemini in Search, where it’s making our Search Generative Experience, or SGE, faster for users. We have seen a 40% reduction in latency in English in the U.S. 

Alphabet’s management thinks that SGE helps Google Search (1) answer new types of questions, (2) answer complex questions, and (3) surface more links; management believes that digital advertising will continue to play an important role in SGE; management has found that users find the ads placed above or below an AI overview of searches to be helpful; management knows what needs to be done to incorporate AI into the future experience of Google Search and they see AI assistants or agents as being an important component of Search in the future

By applying generative AI to Search, we are able to serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. People are finding it particularly useful for more complex questions like comparisons or longer queries. It’s also helpful in areas where people are looking for deeper understanding, such as education or even gift ideas. We are improving satisfaction, including answers for more conversational and intricate queries. As I mentioned earlier, we are surfacing more links with SGE and linking to a wider range of sources on the results page, and we’ll continue to prioritize approaches that add value for our users and send valuable traffic to publishers…

…As we shared last quarter, Ads will continue to play an important role in the new search experience, and we’ll continue to experiment with new formats native to SGE. SGE is creating new opportunities for us to improve commercial journeys for people by showing relevant ads alongside search results. We’ve also found that people are finding ads either above or below the AI-powered overview helpful as they provide useful options for people to take action and connect with businesses…

…Overall, one of the things I think people underestimate about Search is the breadth of Search, the amount of queries we see constantly on a new day, which we haven’t seen before. And so the trick here is to deliver that high-quality experience across the breadth of what we see in Search. And over time, we think Assistant will be very complementary. And we will again use generative AI there, particularly with our most advanced models in Bard and allows us to act more like an agent over time, if I were to think about the future and maybe go beyond answers and follow through for users even more. So that is the — directionally, what the opportunity set is. Obviously, a lot of execution ahead. But it’s an area where I think we have a deep sense of what to do.

Alphabet’s latest Pixel 8 phones have an AI-powered feature that lets users search what they see on their phones without switching apps; the Pixel 8s use Gemini Nano for AI features

Circle to Search lets you search what you see on Android phones with a simple gesture without switching apps. It’s available starting this week on Pixel 8 and Pixel 8 Pro and the new Samsung Galaxy S24 Series…

…Pixel 8, our AI-first phone, was awarded Phone of the Year by numerous outlets. It now uses Gemini Nano with features like Magic Compose for Google Messages and more to come.

Alphabet’s management is seeing that advertisers have a lot of interest in Alphabet’s AI advertising solutions; the solutions include (1) the Automatically Created Assets (ACA) feature for businesses to build better ads and (2) conversational experiences – currently in beta testing – that have helped SMBs be 42% more likely to publish ads with good ad strength

We are also seeing a lot of interest in our AI-powered solutions for advertisers. That includes our new conversational experience that uses Gemini to accelerate the creation of Search campaigns…

…As we look ahead, we’re also starting to put generative AI in the hands of more and more businesses to help them build better campaigns and even better performing ads. Automatically created assets help advertisers show more relevant search ads by creating tailored headlines and descriptions based on each ad’s context. Adoption was up with strong feedback in Q4. In addition to now being available in 8 languages, more advanced GenAI-powered capabilities are coming to ACA…

…And then last week’s big news was that Gemini will power new conversational experience in Google Ads. This is open and beta to U.S. and U.K. advertisers. Early tests show advertisers are building higher-quality search campaigns with less effort, especially SMBs who are 42% more likely to publish a campaign with good or excellent ad strength. 

Alphabet’s Google Cloud offers AI Hypercomputer (a supercomputing architecture for AI), which is used by high-profile AI startups such as Anthropic and Mistral AI

Google Cloud offers our AI Hypercomputer, a groundbreaking supercomputing architecture that combines our powerful TPUs and GPUs, AI software and multi-slice and multi-host technology to provide performance and cost advantages for training and serving models. Customers like Anthropic, Character.AI, Essential AI and Mistral AI are building and serving models on it.

Vertex AI, which is part of Google Cloud, enables users to customise and deploy more than 130 generative AI models; Vertex AI’s API (application programming interface) requests jumped nearly six times from the first half of 2023 to the second half; Samsung is using Vertex AI to provide GenAI models in its Galaxy S24 smartphones while companies such as Shutterstock and Victoria’s Secret are also using Vertex AI

For developers building GenAI applications, we offer Vertex AI, a comprehensive enterprise AI platform. It helps customers like Deutsche Telekom and Moody’s discover, customize, augment and deploy over 130 GenAI models, including PaLM, MedPaLM, Sec-PaLM and Gemini as well as popular open source and partner models. Vertex AI has seen strong adoption with the API request increasing nearly 6x from H1 to H2 last year. Using Vertex AI, Samsung recently announced its Galaxy S24 Series smartphone with Gemini and Imagen 2, our advanced text-to-image model. Shutterstock has added Imagen 2 to their AI image generator, enabling users to turn simple text prompts into unique visuals. And Victoria’s Secret & Co. will look to personalize and improve the customer experience with Gemini, Vertex AI, Search and Conversations.

Duet AI, Alphabet’s AI agents for its Google Workspace and Google Cloud Platform (GCP) services, now has more than 1 million testers, and will incorporate Gemini soon; Duet AI for Developers is the only generative AI offering that supports the entire development and operations lifecycle for software development; large companies such as Wayfair, GE Appliances, and Commerzbank are already using Duet AI for Developers

Customers are increasingly choosing Duet AI, our packaged AI agents for Google Workspace and Google Cloud Platform, to boost productivity and improve their operations. Since its launch, thousands of companies and more than 1 million trusted testers have used Duet AI. It will incorporate Gemini soon. In Workspace, Duet AI is helping employees benefit from improved productivity and creativity at thousands of paying customers around the world, including Singapore Post, Uber and Woolworths. In Google Cloud Platform, Duet AI assists software developers and cybersecurity analysts. Duet AI for Developers is the only GenAI offering to support the complete development and operations life cycle, fine-tuned with the customer’s own core purpose and policies. It’s helping Wayfair, GE Appliances and Commerzbank write better software, faster with AI code completion, code generation and chat support. With Duet AI and Security Operations, we are helping cybersecurity teams at Fiserv, Spotify and Pfizer.

Alphabet’s management believes that the company has state-of-the-art compute infrastructure and that it will be a major differentiator in the company’s AI-related work; management wants Alphabet to continue investing in its infrastructure

Search, YouTube and Cloud are supported by our state-of-the-art compute infrastructure. This infrastructure is also key to realizing our big AI ambitions. It’s a major differentiator for us. We continue to invest responsibly in our data centers and compute to support this new wave of growth in AI-powered services for us and for our customers.

Alphabet’s AI-powered ad solutions are helping retailers with their omnichannel growth; a large big-box retailer saw a 60%+ increase in omnichannel ROAS (return on advertising spend) and a 22%+ increase in store traffic

Our proven AI-powered ad solutions were also a win for retailers looking to accelerate omni growth and capture holiday demand. Quick examples include a large U.S. big-box retailer who drove a 60%-plus increase in omni ROAS and a 22%-plus increase in store traffic using Performance Max during Cyber Five; and a well-known global fashion brand, who drove a 15%-plus higher omnichannel conversion rate versus regular shopping traffic by showcasing its store pickup offering across top markets through pickup later on shopping ads.

Alphabet’s management is using AI to make it easier for content creators to create content for YouTube (for example, creators can easily create backgrounds or translate their videos); management also believes the AI tools built for creators can be ported over to the advertising business to help advertisers

First, creation, which increasingly takes place on mobile devices. We’ve invested in a full suite of tools, including our new YouTube Create app for Shorts, to help people make everything from 15-second Shorts to 15-minute videos to 15-hour live streams with a production studio in the palm of their hands. GenAI is supercharging these capabilities. Anyone with a phone can swap in a new backdrop, remove background extras, translate their video into dozens of languages, all without a big studio budget. We’re excited about our first products in this area from Dream Screen for AI-generated backgrounds to Aloud for AI-powered dubbing…

…You are obviously aware of the made YouTube announcement where we introduced a whole lot of new complementary creativity features on YouTube, including Dream Screen, for example, and a lot of other really interesting tools and thoughts. You can obviously imagine that we can take this more actively to the advertising world already. As you know, it continues already to power AI, a lot of our video ad solutions and measurement capabilities. It’s part of video-rich campaigns. Multi-format ads are — actually, there is a generative creator music that actually makes it easier for creators to design the perfect soundtrack already. And as I said earlier, AI will unlock a new world of creativity. And you can see how this will — if you just look at where models are heading, where multimodal models are heading, where the generation capabilities of those models are heading, you can absolutely see how this will impact and positively impact and simplify the flow for creators, similar to what you see already emerging in some of our core products like ACA on the Search side.

Alphabet’s management expects the company’s capital expenditure in 2024 to be notably higher than in 2023 (it was US$20 billion in 2023), driven by investments in AI infrastructure

With respect to CapEx, our reported CapEx in the fourth quarter was $11 billion, driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. The step-up in CapEx in Q4 reflects our outlook for the extraordinary applications of AI to deliver for users, advertisers, developers, cloud enterprise customers and governments globally and the long-term growth opportunities that offers. In 2024, we expect investment in CapEx will be notably larger than in 2023.

Alphabet’s management is restructuring the company’s workforce not because AI is taking away jobs, but because management believes that AI solutions can deliver significant ROI (return on investments) and it’s important for Alphabet to have an organisational structure that can better build these solutions

But I also want to be clear, when we restructure, there’s always an opportunity to be more efficient and smarter in how we service and grow our customers. We’re not restructuring because AI is taking away roles that’s important here. But we see significant opportunities here with our AI-powered solution to actually deliver incredible ROI at scale, and that’s why we’re doing some of those adjustments.

Alphabet’s management thinks that Search is not just about generative AI

Obviously, generative AI is a new tool in the arsenal. But there’s a lot more that goes into Search: the breadth, the depth, the diversity across verticals, stability to follow through, getting actually access to rich, diverse sources of content on the web and putting it all together in a compelling way.

Alphabet’s management believes that AI features can help level the playing field for SMBs in the creation of effective advertising (when competing with large companies) and they will continue to invest in that area

Our focus has always been here on investing in solutions that really help level the playing field, and you mentioned several of those. So actually, SMBs can compete with bigger brands and more sophisticated advertisers. And so the feedback we’re always getting is they need easy solutions that could drive value quickly, and several of the AI-powered solutions that you’re mentioning are actually making the workflow and the whole on-ramp and the bidded targeting creative and so on, you mentioned that is so much easier for SMBs. So we’re very satisfied with what we’re seeing here. We will continue to invest. 

Amazon (NASDAQ: AMZN)

Amazon’s cloud computing service, AWS, saw an acceleration in revenue growth in 2023 Q4 and management believes this was driven partly by AI

If you look back at the revenue growth, it accelerated to 13.2% in Q4, as we just mentioned. That was an acceleration. We expect accelerating trends to continue into 2024. We’re excited about the resumption, I guess, of migrations that companies may have put on hold during 2023 in some cases and interest in our generative AI products, like Bedrock and Q, as Andy was describing

Amazon’s management reminded the audience that their framework for thinking about generative AI consists of three layers – the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon is investing heavily in all three

You may remember that we’ve explained our vision of three distinct layers in the gen AI stack, each of which is gigantic and each of which we’re deeply investing.

At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute…

…In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service…

…At the top layer of the stack is the application layer.

Amazon’s management is seeing revenues accelerate rapidly for AWS across all three layers of the generative AI stack and AWS is receiving significant interest from customers wanting to run AI workloads

Still relatively early days, but the revenues are accelerating rapidly across all three layers, and our approach to democratizing AI is resonating well with our customers. We have seen significant interest from our customers wanting to run generative AI applications and build large language models and foundation models, all with the privacy, reliability and security they have grown accustomed to with AWS

Amazon’s management is seeing that enterprises are still figuring out which layers of the generative AI stack they want to operate in; management thinks that most enterprises will operate in at least two layers, with the technically capable ones operating in all three

When we talk to customers, particularly at enterprises as they’re thinking about generative AI, many are still thinking through at which layers of those three layers of the stack I laid out that they want to operate in. And we predict that most companies will operate in at least two of them. But I also think, even though it may not be the case early on, I think many of the technically capable companies will operate at all three. They will build their own models, they will leverage existing models from us, and then they’re going to build the apps. 

At the first layer of the generative AI stack, AWS is offering the most expansive collection of compute instances with NVIDIA chips; AWS has built its own Trainium chips for training and Inferentia chips for inference; a new version of Trainium – Trainium 2 – was recently announced and it is 4x faster, and has 3x more memory, than the first generation of Trainium; large companies and prominent AI startups are using AWS’s AI chips

At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute, we offer the most expansive collection of compute instances with NVIDIA chips. We also have customers who like us to push the price performance envelope on AI chips just as we have with Graviton for generalized CPU chips, which are 40% more price-performant than other x86 alternatives. And as a result, we’ve built custom AI training chips named Trainium and inference chips named Inferentia. In re:Invent, we announced Trainium2, which offers 4x faster training performance and 3x more memory capacity versus the first generation of Trainium, enabling advantageous price performance versus alternatives. We already have several customers using our AI chips, including Anthropic, AirBnB, Hugging Face, Qualtrics, Rico and Snap.

At the middle layer of the generative AI stack, AWS has launched Bedrock, which offers LLMs-as-a-service; Bedrock is off to a very strong start with thousands of customers already using it just a few months after launch; Bedrock has added new models, including those from prominent AI startups, Meta’s Llama2, and Amazon’s own Titan family; customers are excited over Bedrock because building production-quality generative AI applications requires multiple iterations of models, and the use of many different models, and this is where Bedrock excels

In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service, we’ve launched Bedrock, which is off to a very strong start with many thousands of customers using the service after just a few months… We also added new models from Anthropic, Cohere, Meta with Llama 2, Stability AI and our own Amazon Titan family of LLMs. What customers have learned at this early stage of gen AI is that there’s meaningful iteration required in building a production gen AI application with the requisite enterprise quality at the cost and latency needed. Customers don’t want only one model. They want different models for different types of applications and different-sized models for different applications. Customers want a service that makes this experimenting and iterating simple. And this is what Bedrock does, which is why so many customers are excited about it.

At the top layer of the generative AI stack, AWS recently launched Amazon Q, a coding companion; management believes that a coding companion is one of the very best early generative AI applications; Amazon Q is linked with more than 40 popular data-connectors so that customers can easily query their data repositories; Amazon Q has generated strong interest from developers

At the top layer of the stack is the application layer. One of the very best early gen AI applications is a coding companion. At re:Invent, we launched Amazon Q, which is an expert on AWS, writes code, debugs code, tests code, does translations like moving from an old version of Java to a new one and can also query customers various data repositories like Internet, Wikis or from over 40 different popular connectors to data in Salesforce, Amazon S3, ServiceNow, Slack, Atlassian or Zendesk, among others. And it answers questions, summarizes data, carries on a coherent conversation and takes action. It was designed with security and privacy in mind from the start, making it easier for organizations to use generative AI safely. Q is the most capable work assistant and another service that customers are very excited about…

…When enterprises are looking at how they might best make their developers more productive, they’re looking at what’s the array of capabilities in these different coding companion options they have. And so we’re spending a lot of time. Our enterprises are quite excited about it. It created a meaningful stir in re:Invent. And what you see typically is that these companies experiment with different options they have and they make decisions for their employee base, and we’re seeing very good momentum there.

Amazon’s management is seeing that security over data is very important to customers when they are using AI and this is an important differentiator for AWS because its AI services inherit the same security features as AWS – and AWS’s capabilities and track record in security are good

By the way, don’t underestimate the point about Bedrock and Q inheriting the same security and access control as customers get with AWS. Security is a big deal, an important differentiator between cloud providers. The data in these models is some of the company’s most sensitive and critical assets. With AWS’ advantaged security capabilities and track record relative to other providers, we continue to see momentum around customers wanting to do their long-term gen AI work with AWS.

Amazon has launched some generative AI applications across its businesses and is building more; one of the applications launched is Rufus, a shopping assistant, which allows consumers to receive thoughtful responses to detailed shopping questions; other generative AI applications being built and launched by Amazon include a customer-review-summary app, an app for customers to predict how apparel will fit them, an app for inventory forecasts for each fulfilment centre, and an app to generate copy for ads based on a picture, or generate pictures based on copy; Rufus is seamlessly integrated into Amazon and management thinks Rufus could meaningfully change what discovery looks like for shoppers using Amazon

We’re building dozens of gen AI apps across Amazon’s businesses, several of which have launched and others of which are in development. This morning, we launched Rufus, an expert shopping assistant trained on our product and customer data that represents a significant customer experience improvement for discovery. Rufus lets customers ask shopping journey questions, like what is the best golf ball to use for better spin control or which are the best cold weather rain jackets, and get thoughtful explanations for what matters and recommendations on products. You can carry on a conversation with Rufus on other related or unrelated questions, and it retains context coherently. You can sift through our rich product pages by asking Rufus questions on any product features and it will return answers quickly…

…. So if you just look at some of our consumer businesses, on the retail side, we built a generative AI application that allowed customers to look at a summary of customer reviews, so that they didn’t have to read hundreds and sometimes thousands of reviews to get a sense for what people like or dislike about a product. We launched a generative AI application that allows customers to quickly be able to predict what kind of fit they’d have for different apparel items. We built a generative AI application in our fulfillment centers that forecasts how much inventory we need in each particular fulfillment center…Our advertising business is building capabilities where people can submit a picture and an ad copy is written and the other way around. 

…  All those questions you can plug in and get really good answers. And then it’s seamlessly integrated in the Amazon experience that customers are used to and love to be able to take action. So I think that that’s just the next iteration. I think it’s going to meaningfully change what discovery looks like for our shopping experience and for our customers.

Amazon’s management believes generative AI will drive tens of billions in revenue for the company over the next few years

Gen AI is and will continue to be an area of pervasive focus and investment across Amazon primarily because there are a few initiatives, if any, that give us the chance to reinvent so many of our customer experiences and processes, and we believe it will ultimately drive tens of billions of dollars of revenue for Amazon over the next several years.

Amazon’s management expects the company’s full-year capital expenditure for 2024 to be higher than in 2023, driven by increased investments in infrastructure for AWS and AI

We define our capital investments as a combination of CapEx plus equipment finance leases. In 2023, full year CapEx was $48.4 billion, which was down $10.2 billion year-over-year, primarily driven by lower spend on fulfillment and transportation. As we look forward to 2024, we anticipate CapEx to increase year-over-year primarily driven by increased infrastructure CapEx to support growth of our AWS business, including additional investments in generative AI and large language models.

AWS’s generative AI revenue is pretty big in absolute numbers, but small in the context of AWS already being a $100 billion annual-revenue-run-rate business

If you look at the gen AI revenue we have, in absolute numbers, it’s a pretty big number. But in the scheme of a $100 billion annual revenue run rate business, it’s still relatively small, much smaller than what it will be in the future, where we really believe we’re going to drive tens of billions of dollars of revenue over the next several years. 

Apple (NASDAQ: AAPL)

Many of the features in Apple’s latest product, the Vision Pro virtual reality headset, are powered by AI

There’s an incredible amount of technology that’s packed into the product. There’s 5,000 patents in the product. And it’s, of course, built on many innovations that Apple has spent multiple years on, from silicon to displays and significant AI and machine learning, all the hand tracking, the room mapping, all of this stuff is driven by AI.

Apple has been spending a lot of time and effort on AI and management will share details later in 2024

As we look ahead, we will continue to invest in these and other technologies that will shape the future. That includes artificial intelligence where we continue to spend a tremendous amount of time and effort, and we’re excited to share the details of our ongoing work in that space later this year…

…In terms of generative AI, which I would guess is your focus, we have a lot of work going on internally as I’ve alluded to before. Our MO, if you will, has always been to do work and then talk about work and not to get out in front of ourselves. And so we’re going to hold that to this as well. But we’ve got some things that we’re incredibly excited about that we’ll be talking about later this year.

Apple’s management thinks there is a huge opportunity for Apple with generative AI but will only share more details in the future

Let me just say that I think there is a huge opportunity for Apple with gen AI and AI and without getting into more details and getting out in front of myself.

Arista Networks (NYSE: ANET)

Arista Networks’ management believes that AI at scale needs Ethernet at scale because AI workloads cannot tolerate delays; management thinks that 400 and 800-gigabit Ethernet will become important for AI back-end GPU clusters

AI workloads are placing greater demands on Ethernet as they have both data and compute-intensive across thousands of processes today. Basically, AI at scale needs Ethernet at scale. AI workloads cannot tolerate the delays in the network because the job can only be completed after all flows are successfully delivered to the GPU clusters. All it takes is one culprit or worst-case link to throttle an entire AI workload…

…. We expect both 400 and 800-gigabit Ethernet will emerge as important pilots for AI back-end GPU clusters. 
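The job-completion-time point is worth making concrete: a training step's collective transfer finishes only when its slowest flow lands, so job completion time tracks the maximum, not the average, of flow completion times. A minimal Python sketch with illustrative numbers (not from the call):

```python
# An AI training step's collective (e.g. an all-reduce) completes only
# when every flow has been delivered, so job completion time (JCT) is
# the max, not the mean, of the per-flow completion times.
flow_times_ms = [10.2, 10.4, 10.1, 10.3, 48.7]  # one congested "culprit" link

jct_ms = max(flow_times_ms)                      # 48.7: the worst flow gates the step
mean_ms = sum(flow_times_ms) / len(flow_times_ms)

print(f"mean flow time: {mean_ms:.1f} ms, job completion time: {jct_ms:.1f} ms")
```

Even though four of the five flows finish in about 10 ms, the one slow link sets the step time, which is the "one culprit or worst-case link" effect described above.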

Arista Networks’ management is pushing the company and the Ultra Ethernet Consortium to improve Ethernet technology for AI workloads in three key ways; management believes that Ethernet is superior to Infiniband for AI-related data networking because Ethernet provides flexible ordering of data transfer whereas Infiniband is rigid

Three improvements are being pioneered by Arista and the founding members of the Ultra Ethernet Consortium to improve job completion time. Number one, packet spraying. AI network topology meets packet spraying to allow every flow to simultaneously access all paths to the destination. Arista is developing multiple forms of load balancing dynamically with our customers. Two is flexible ordering. Key to an AI job completion is the rapid and reliable bulk transfer with flexible ordering using Ethernet links to optimally balance AI-intensive operations, unlike the rigid ordering of InfiniBand. Arista is working closely with its leading vendors to achieve this. Finally, network congestion. In AI networks, there’s a common incast congestion problem whereby multiple uncoordinated senders can send traffic to the receiver simultaneously. Arista’s platforms are purpose-built and designed to avoid these kinds of hotspots, evenly spreading the load across multiple paths across a virtual output queuing (VoQ) lossless fabric.
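The contrast between classic per-flow ECMP hashing and the per-packet spraying Arista describes can be sketched as follows. The four-path fabric and the flow sizes are hypothetical; this is a toy model, not Arista's actual load-balancing algorithm:

```python
NUM_PATHS = 4
# flow_id -> packet count; flow 0 is an "elephant" that dominates traffic
FLOWS = {0: 5000, 1: 100, 2: 100, 3: 100}

def per_flow_load() -> list[int]:
    # Classic ECMP: every packet of a flow hashes to one fixed path, so
    # the elephant flow pins one link while the others sit nearly idle.
    counts = [0] * NUM_PATHS
    for flow_id, packets in FLOWS.items():
        counts[flow_id % NUM_PATHS] += packets
    return counts

def sprayed_load() -> list[int]:
    # Packet spraying: successive packets of a flow round-robin across
    # every path, evening out the load. Packets of one flow now arrive
    # over different paths, which is why flexible ordering matters.
    counts = [0] * NUM_PATHS
    for flow_id, packets in FLOWS.items():
        for p in range(packets):
            counts[p % NUM_PATHS] += 1
    return counts

print("per-flow hashing:", per_flow_load())  # [5000, 100, 100, 100]
print("packet spraying: ", sprayed_load())   # [1325, 1325, 1325, 1325]
```

With per-flow hashing one path carries nearly all the bulk transfer; with spraying every path carries an equal share, which is what shortens the worst-case flow and hence the job completion time.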

Arista Networks’ management thinks the company can achieve AI revenue of at least $750 million in 2025

We are cautiously optimistic about achieving our AI revenue goal of at least $750 million in AI networking in 2025…

…. So our AI performance continues to track well for the $750 million revenue goal that we set last November at Analyst Day. 

Arista Networks’ management sees the company becoming the gold-standard for AI data-networking

We have more than doubled our enterprise revenue in the last 3 years and we are becoming the gold standard for client-to-cloud-to-AI networking with 1 EOS and 1 CloudVision Foundation. 

In the last 12 months, Arista Networks has participated in a large number of AI project bids, and in the last five projects where there was a situation of Ethernet versus Infiniband, Arista Networks has won four of them; over the last 12 months, a lot has changed in terms of how Infiniband was initially bundled into AI data centres; management believes that Ethernet will become the default standard for AI networking going forward

To give you some color on the last 3 months, I would say difficult to project anything in 3 months. But if I look at the last year, which maybe last 12 months is a better indication, we have participated in a large number of AI bids and when I say large, I should say they are large AI bids, but there are a small number of customers actually to be more clear. And in the last 4 out of 5, AI networking clusters we have participated on Ethernet versus InfiniBand, Arista has won all 4 of them for Ethernet, one of them still stays on InfiniBand. So these are very high-profile customers. We are pleased with this progress…

…The first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet become — versus InfiniBand becomes really important because as you may recall, a year ago, I told you we were outside looking in, everybody had an Ethernet — everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. And the popular product we are seeing right now and the back-end cluster for our AI is the Arista 7800 AI spine, which in a single chassis with north of 500 terabit of capacity can give you a substantial number of ports, 400 or 800. So you can connect up to 1,000 GPUs just doing that. And that kind of data parallel scale-out can improve the training time dimensions, large LLMs, massive integration of training data. And of course, as we shared with you at the Analyst Day, we can expand that to a 2-tier AI leaf and spine with a 16-way ECMP to support close to 10,000 GPUs nonblocking. This lossless architecture for Ethernet. And then the overlay we will have on that with the Ultra Ethernet Consortium in terms of congestion controls, packet spraying and working with a suite of [ UC ] mix is what I think will make Ethernet the default standard for AI networking going forward. 
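The port arithmetic behind the "up to 1,000 GPUs" claim can be checked on the back of an envelope. The capacity figure is rounded as in the quote, and one NIC port per GPU is assumed:

```python
# A single 7800-class spine chassis with "north of 500 terabit" of
# switching capacity, carved into 400G or 800G ports, one port per GPU.
CHASSIS_CAPACITY_GBPS = 500_000

def ports_per_chassis(port_speed_gbps: int) -> int:
    return CHASSIS_CAPACITY_GBPS // port_speed_gbps

print("800G ports:", ports_per_chassis(800))  # 625
print("400G ports:", ports_per_chassis(400))  # 1250
```

Either port speed lands in the region of 1,000 attachable GPUs per chassis, consistent with the quote; the 2-tier leaf-spine design then multiplies that out toward the ~10,000-GPU figure.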

Arista Networks’ management believes that owners and operators of AI data centres would not want to work with white box data switches (non-branded and commoditised data switches) because data switches are mission critical in AI data centres, so users would prefer reliable and higher-quality data switches

I think white box is here to stay for a very long time if somebody just wants a throwaway commodity product, but how many people want throwaway commodity in the data center? They’re still mission-critical, and they’re even more mission-critical for AI. If I’m going to spend multimillion dollars on a GPU cluster, and then the last thing I’m going to do is put a toy network in, right? So to put this sort of in perspective, that we will continue to coexist with a white box. There will be use cases where Arista’s blue box or a stand-alone white box can run either SONiC or FBOSS but many times, the EOS software stack is really, really something they depend on for availability, analytics, automation, and there’s — you can get your network for 0 cost, but the cost of downtime is millions and millions of dollars.

Arista Networks is connecting more and more GPUs and management believes that the picture of how a standard AI data centre Ethernet switch will look like is starting to form; AI is still a small part of Arista Networks’ business but one that should grow over time

On the AI side, we continue to track well. I think we’re moving from what I call trials, which is connecting hundreds of GPUs to pilots, which is connecting thousands of GPUs this year, and then we expect larger production clusters. I think one of the questions that we will be asking ourselves and our customers is how these production clusters evolve. Is it going to be 400, 800 or a combination thereof? The role of Ultra Ethernet Consortium and standards and the ecosystem all coming together, very similar to how we had these discussions in 400 gig will also play a large part. But we’re feeling pretty good about the activity. And I think moving from trials to pilots this year will give us considerable confidence on next year’s number…

…AI is going to come. It is yet to come — certainly in 2023, as I’ve said to you many, many times, it was a very small part of our number, but it will gradually increase.

Arista Networks’ management is in close contact with the leading GPU vendors when designing networking solutions for AI data centres

Specific to our partnership, you can be assured that we’ll be working with the leading GPU vendors. And as you know, NVIDIA has 90% or 95% of the market. So Jensen and I are going to partner closely. It is vital to get a complete AI network design going. We will also be working with our partners in AMD and Intel so we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.

Arista Networks’ management believes that the company is very well-positioned for the initial growth spurts in AI networking

Today’s models are moving very rapidly, relying on a high bandwidth, predictable latency, the focus on application performance requires you to be sole sourced initially. And over time, I’m sure it’ll move to multiple sources, but I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade.

ASML (NASDAQ: ASML)

ASML’s management believes that 2025 will be a strong year for the company because of the long-term trends in its favour (this includes AI and digitalisation, customer-inventory-levels becoming better, and the scheduled opening of many semiconductor fabrication plants)

So essentially unchanged I would say in comparison to what we said last quarter. So if we start looking at 2025. As I mentioned before, we are looking at a year of significant growth and that is for a couple of reasons. First off, we think the secular trends in our industry are still very much intact. If you look at the developments around AI, if you look at the developments around electrification, around energy transition etcetera, they will need many, many semiconductors. So we believe the secular trends in the industry are still very, very strong. Secondly I think clearly by 2025 we should see our customers go through the up cycle. I mean the upward trend in the cycle. So that should be a positive. Thirdly, as we also mentioned last time it’s clear that many fab openings are scheduled that will require the intake of quite some tools in the 2025 time frame.

ASML’s management is seeing AI-related demand drive a positive inflection in the company’s order intake

And I think AI is now particularly something which could be on top of that because that’s clearly a technology transition. But we’ve already seen a very positive effect of that in our Q4 order intake…

…After a few soft quarters, the order intake for the quarter was very, very strong. Actually a record order intake at €9.2 billion. If you look at the composition of that, it was about 50/50 for Memory versus Logic. Around €5.6 billion out of the €9.2 was related to EUV, both Low NA and High NA.

ASML’s management is confident that AI will help to drive demand for the company’s EUV (extreme ultraviolet) lithography systems from the Memory-chips market in the near future

 In ’23, our Memory shipments were lower than the 30% that you mentioned. But if you look at ’25, and we also take into account what I just said about AI and the need for EUV in the DDR5 and in the HBM era, then the 30% is a very safe path and could be on the conservative side.

ASML’s management thinks that the performance of memory chips is a bottleneck for AI-related workloads, and this is where EUV lithography is needed; management was also positively surprised at how important EUV was for the development of leading-edge memory chips for AI

I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV…

…  And were we surprised? I must be — I say, yes, to some extent, we were surprised in the meetings we’ve had with customers and especially the Memory because we’re leading-edge Memory customers. We were surprised about the technology requirements of — for litho, EUV specifically and how it impacts how important it is for the rollout and the ramp of the memory solutions for AI. This is why we received more EUV orders than we anticipated because it was obvious in the detailed discussions and the reviews with our customers, that EUV is critical in that sense. And that was a bit of a surprise, that’s a positive surprise. 

[Question] Sorry, was that a function of EUV layer count or perhaps where they’re repurposing equipment? And so now they’re realizing they need more footprint for EUV.

[Answer] No, it is layer count and imaging performance. And that’s what led to the surprise, the positive surprise, which indeed led to more orders.

ASML’s management sees the early shoots of recovery observed in the Memory chip market as being driven by both higher utilisation across the board, and by the AI-specific technology transition

I think it’s — what we’re seeing is, of course, the information coming off our tools that we see the utilization rates going up. That’s one. Clearly, there’s also an element of technology transition. That’s also clear. I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV. So it’s a bit of a mix. I mean, yes, you’ve gone through, I think, the bottom of this memory cycle with prices going up, utilizations increasing, and that combined with the technology transition driven by AI. That’s a bit what we see today. So it’s a combination of both, and I think that will continue.

ASML’s management is thinking if their planned capacity buildout for EUV lithography systems is too low, partly because of AI-driven demand for leading edge chips

We have said our capacity buildout will be 90 EUV Low-NA systems, 20 High-NA whereby internally, we are looking at that number as a kind of a base number where we’re investigating whether that number should be higher. The question is whether that 90 is going to be enough. Now we have to realize, we are selling wafer capacity, which is not only a function of the number of units, but also a function of the productivity of those tools. Now we have a pretty aggressive road map for the productivity in terms of wafers per hour. So it’s a complex question that you’re asking. But actually, we need to look at this especially against the math that we’re seeing for litho requirements in the area of AI, whether it’s HBM or whether it is Logic, whether the number of units and the road map on productivity, which gives wafers because the combination is wafer capacity, whether that is sufficient.
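The "wafer capacity = units x productivity" point can be made concrete. Only the 90-system build-out is from the quote; the throughput and uptime figures below are illustrative assumptions, not numbers from the call:

```python
# Wafer capacity is systems x throughput x uptime, which is why ASML
# weighs the 90-unit build plan against the productivity road map
# rather than treating the unit count alone as the capacity number.
def annual_wafer_capacity(systems: int, wafers_per_hour: int,
                          uptime: float = 0.75) -> int:
    # uptime of 75% is a hypothetical availability assumption
    hours = 365 * 24 * uptime
    return int(systems * wafers_per_hour * hours)

base = annual_wafer_capacity(90, 160)      # hypothetical current throughput
improved = annual_wafer_capacity(90, 220)  # same 90 units, faster tools
print(f"productivity road map adds {improved / base - 1:.0%} capacity")
```

The same 90 systems deliver substantially more wafer capacity if the wafers-per-hour road map is met, which is the trade-off management describes between unit count and productivity.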

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing growing engagement in AI with a 75% sequential jump in the use of next-gen AI integrations

In observability, we now have more than 700 integrations allowing our customers to benefit from the latest AWS, Azure and GCP abilities as well as from the newly emerging AI stack. We continued to see increasing engagement there with the use of our next-gen AI integrations growing 75% sequentially in Q4.

Datadog’s management continues to add capabilities to Bits AI, the company’s natural language incident management copilot, and is improving the company’s LLM (large language model) observability capabilities

In the generative AI and LLM space, we continued to add capability to Bits AI, our natural language incident management copilot. And we are advancing LLM observability to help customers investigate where they can safely deploy and manage their models in production.

Currently, 3% of Datadog’s annualised recurring revenue (ARR) comes from next-gen AI native customers (was 2.5% in 2023 Q3); management believes the AI opportunity will be far larger in the future as all kinds of customers start incorporating AI in production; the AI native customers are companies that Datadog’s management knows are substantially all based on AI

Today, about 3% of our ARR comes from next-gen AI native customers, but we believe the opportunity is far larger in the future as customers of every industry and every size start doing AI functionality in production…

…It’s hard for us to wrap our arms exactly around what is GenAI, what is not among our customer base and their workload. So the way we chose to do it is we looked at a smaller number of companies that we know are substantially all based on AI so these are companies like the model providers and things like that. So 3% of ARR, which is up from what we had disclosed last time.

Microsoft said that AI accounts for six percentage points of Azure’s growth, but Datadog’s management is seeing AI-native companies on Datadog’s Azure business account for substantially more than the six percentage points mentioned

I know one number that everyone has been thinking about is one cloud, in particular, Microsoft, disclosed that 6% of their growth was attributable to AI. And we definitely see the benefits of that on our end, too. If I look at our Azure business in particular, there is substantially more than 6% that is attributable to AI native as part of our Azure business. So we see completely this trend is very true for us as well. It’s harder to tell with the other cloud providers because they don’t break those numbers up.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business, and that Datadog is ideally positioned for these

We continue to believe digital transformation and cloud migration are long-term secular growth drivers of our business and critical motion for every company to deliver value and competitive advantage. We see AI adoption as an additional driver of investment and accelerator of technical innovation and cloud migration. And more than ever, we feel ideally positioned to achieve our goals and help customers of every size in every industry to transform, innovate and drive value through technology adoption.

Datadog experienced a big slowdown from its digitally native customers in the recent past, but management thinks that these customers could also be the first ones to fully leverage AI and thus reaccelerate earlier

We suddenly saw a big slowdown from the digital natives over the past year. On the other hand, they might be the first ones to fully leverage AI and deploy it in production. So you might see some reacceleration earlier from some of them at least.

Datadog’s management sees the attach rates for observability going up for AI workloads versus traditional workloads

[Question] If you think about the very long term, would you think attach rates of observability will end up being higher or lower for these AI workloads versus traditional workloads?

[Answer] We see the attach rate going up. The reason for that is our framework for that is actually in terms of complexity. AI just adds more complexity. You create more things faster without understanding what they do. Meaning you need — you shift a lot of the value from building to running, managing, understanding, securing all of the other things that need to keep happening after that. So the shape of some of the products might change a little bit because the shape of the software that runs it changes a little bit, which is no different from what happened over the past 10, 15 years. But we think it’s going to drive more need for observability, more need for security products around that.

Datadog’s management is seeing AI-native companies using largely the same kind of Datadog products as everyone else, but the AI-native companies are building the models, so the tooling for understanding the models are not applicable for them

[Question] Are the product SKUs, these kind of GenAI companies are adopting, are they similar or are they different to the kind of other customer cohorts?

[Answer] Today, this is largely the same SKUs as everybody else. These are infrastructure, APM logs, profiling these kind of things that they are — or really the monitoring, these kind of things that these customers are using. It’s worth noting that they’re in a bit of a separate world because they’re largely the builders of the models. So all the tooling required to understand the models and — that’s less applicable to them. That’s more applicable to their own customers, which is also the rest of our customer base. And we see also where we see the bulk of the opportunity in the longer term, not in the handful of model providers that [ anybody ] is going to use.

Datadog has a much larger presence in inference AI workloads as compared to training AI workloads; Datadog’s management sees that the AI companies that are scaling the most on Azure are scaling on inference

There’s 2 parts to the AI workloads today. There’s training and there’s inference. The vast majority of the players are still training. There’s only a few that are scaling with inference. The ones that are scaling with inference are the ones that are driving our ARR because we are — we don’t — we’re not really present on the training side, but we’re very present on the inference side. And I think that also lines up with what you might see from some of the cloud providers, where a lot of the players or some of the players that are scaling the most are on Azure today on the inference side, whereas a lot of the other players still largely training on some of the other clouds.

Etsy (NASDAQ: ETSY)

Etsy’s management recently launched Gift Mode, a feature where a buyer can type in details of a person and occasion, and AI technology will match the buyer with a gift; Gift Mode has more than 200 recipient personas, and has good early traction with 6 million visits in the first 2 weeks

So what’s Gift Mode? It’s a whole new shopping experience where gifters simply enter a few quick details about the person they’re shopping for, and we use the power of artificial intelligence and machine learning to match them with unique gifts from Etsy sellers. Creating a separate experience helps us know immediately if you’re shopping for yourself or someone else, hugely beneficial information to help our search engines solve for your needs. Within Gift Mode, we’ve identified more than 200 recipient personas, everything from rock climber to the crossword genius to the sandwich specialist. I’ve already told my family that when shopping for me, go straight to the music lover, the adventurer or the pet parent… 

…Early indications are that Gift Mode is off to a good start, including positive sentiment from buyers and sellers in our social channels, very strong earned media coverage and nearly 6 million visits in the first 2 weeks. As you test and shop in Gift Mode, keep in mind that this is just the beginning.

Etsy’s management is using AI to understand the return on investment of the company’s marketing spend

We’ve got pretty sophisticated algorithms that work on is this bid — is this click worth this much right now and how much should we bid. And so to the extent that CPCs rise, we naturally pull back. Or to the extent that CPC is lower, we naturally lean in. The other thing, by the way, it’s not just CPCs, it’s also conversion rates. So in times when people are really budget constrained, we see them actually — we see conversion rate across the industry go down. We see people comparison shop a lot more. And so we are looking at all of that and not humans, but machines using AI are looking in a very sophisticated way at what’s happening with conversion rate right now, what’s happening with CPCs right now. And therefore, how much is each visit worth and how much should we be bidding. 
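The bidding logic described can be sketched as expected-value bidding: a click is worth roughly conversion rate times value per conversion, so falling conversion rates automatically pull bids back. This is a hypothetical simplification, not Etsy's actual model; the function name and the numbers are invented for illustration:

```python
# Expected-value bidding: a click is worth (conversion rate x value per
# conversion); buy the click only when that exceeds the going CPC.
def max_bid(conversion_rate: float, revenue_per_conversion: float,
            target_roas: float = 1.0) -> float:
    """Highest CPC at which the click still clears the ROAS target."""
    return conversion_rate * revenue_per_conversion / target_roas

# When conversion rates fall (budget-constrained shoppers), the model
# naturally bids less; when CPCs fall, more clicks clear the bar.
bid = max_bid(conversion_rate=0.03, revenue_per_conversion=40.0)
print(f"max profitable bid: ${bid:.2f}")                   # $1.20 per click
current_cpc = 1.50
print("buy click" if bid >= current_cpc else "pull back")  # pull back
```

With a 3% conversion rate and a $40 order, a $1.50 CPC is uneconomic and the system "naturally pulls back", matching the dynamic described in the quote.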

Fiverr (NYSE: FVRR)

Fiverr’s management is seeing strong demand for the AI services vertical, with AI-related keyword searches growing sevenfold in 2023 

Early in January last year, we were the first in the market to launch a dedicated AI services vertical, creating a hub for businesses to hire AI talent. Throughout the year, we continue to see tremendous demand for those services with searches that contain AI-related keywords on our marketplace growing sevenfold in 2023 compared to 2022. 

Fiverr’s management has seen AI create a net-positive 4% impact on Fiverr’s business by driving a mix-shift for the company from simple services – such as translation and voice-over – to complex services; complex services, which now represent 1/3 of Fiverr’s marketplace, are typically larger and longer-duration; complex categories are where a human touch is needed and adds value while simple categories are where technology can do a good job without humans; Fiverr’s management thinks that simple categories will be automated away by AI while complex categories will become more important

Overall, we estimate AI created a net positive impact of 4% to our business in 2023 as we see a category mix shift from simple services such as translation and voice over to more complex services such as mobile app development, e-commerce management or financial consulting. In 2023, complex services represented nearly 1/3 of our marketplace, a significant step-up from 2022. Moreover, they are typically larger projects and longer duration with an average transaction size 30% higher than those of simple services…

…What we’ve identified is there is a difference between what we call simple categories or tasks and more complex ones. And in the complex group, it’s really those categories that require human intervention and human inputs in order to produce satisfactory results for the customer. And in these categories, we’re seeing growth that goes well beyond the overall growth that we’re seeing. And really, the simple ones are such where technology can actually do pretty much the entire work, which in those cases, they’re usually associated with lower prices and shorter-term engagements…

…So our assumption is that some of the simple tasks are going to be — continue to be automated, which, by the way, is nothing new. I mean, it happened before even before AI, automation has been a part of our lives. And definitely, the more complex services is where I think the growth potential definitely lies. This is why we called out the fact that we’re going to double down on these categories and services.

Fiverr’s management believes that the opportunities created by AI will outweigh the jobs that are displaced

We believe that the opportunities created by emerging technologies far outweigh the jobs they replace. Human talent continues to be an essential part of unlocking the potential of new technologies. 

Fiverr’s management believes that AI will be a multiyear tailwind for the company

We are also seeing a shift into more sophisticated, highly skilled and longer-duration categories with bigger addressable market. Data shows our marketplace is built to benefit from these technologies and labor market changes. Unlike single vertical solutions with higher exposure to disruptive technologies and trend changes, Fiverr has developed a proprietary horizontal platform with hundreds of verticals, quickly leaning into the ever-changing industry demand needs and trends. All in all, we believe AI will be a multiyear tailwind for us to drive growth and innovation. In 2023, we also made significant investments in AI that drove improvements in our overall platform. 

A strategic priority for Fiverr’s management in 2024 is to develop AI tools to enhance the overall customer experience of the company’s marketplace

Our recent winter product release in January culminated these efforts from the second half of 2023 and revamped almost every part of our platform with an AI-first approach, from search to personalization, from supply quality to seller engagement…

…Our third strategic priority is to continue developing proprietary AI applications unique to our marketplace to enhance the overall customer experience. The winter product release we discussed just now gives you a flavor of that, but there is so much more to do.

Mastercard (NYSE: MA)

Mastercard’s management is leveraging the company’s work on generative AI to build new services and solutions as well as to increase internal productivity

We also continue to develop new services and solutions, many of which leverage the work we are doing with generative AI. Generative AI brings more opportunity to drive better experiences for our customers and makes it easier to extract insights from our data. It can also help us increase internal productivity. We are working on many Gen AI use cases today to do just that. For example, we recently announced Shopping Muse. Shopping Muse uses generative AI to offer a conversational shopping tool that recreates the in-store human experience online and can translate consumers’ colloquial language into tailored recommendations. Another example is Mastercard Small Business AI. The tool will draw on our existing small business resources, along with content from a newly formed global media coalition, to help business owners navigate a range of business challenges. The platform, which is scheduled for pilot launch later this year, will leverage AI to provide personalized real-time assistance delivered in a conversational tone.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management launched a number of AI features – including a summary of customer reviews, a summary of product functions, push notifications about items left unpurchased in shopping carts, and capabilities for sellers to create coupons and answer buyer questions quickly – in 2023 for the ecommerce business

In 2023, we launched capabilities that enable sellers to create their own promotional coupons and answer buyer questions more quickly with the assistance of artificial intelligence…

…AI based features are already an integral part of the MELI experience, with many innovations launched in 2023, including: 

  • A summary of customer reviews on the product pages that concentrates the main feedback from buyers of that product.
  • On beauty product pages a summary of product functions and characteristics is automatically created to facilitate buyers choices.
  • Push notifications about items left unpurchased in shopping carts are now highly personalized and remind users why they may have chosen to buy a particular product.
  • We have also added an AI feature that helps sellers to respond to questions by preparing answers that sellers can send immediately, or edit quickly. 

Meta Platforms (NASDAQ: META)

A major goal of Meta’s management is for the company to have (1) a world-class AI assistant for all users, (2) an AI for each creator that their community can engage with, (3) an AI agent for every business, and (4) state-of-the-art open-source models for developers

Now moving forward, a major goal, we’ll be building the most popular and most advanced AI products and services. And if we succeed, everyone who uses our services will have a world-class AI assistant to help get things done, every creator will have an AI that their community can engage with, every business will have an AI that their customers can interact with to buy goods and get support, and every developer will have a state-of-the-art open-source model to build with.

Meta’s management thinks consumers will want a new AI-powered computing device that can see and hear what we are seeing and hearing, and this new computing device will be smart glasses, and will require full general intelligence; Meta has been conducting research on general intelligence for more than a decade, but it will now also incorporate general intelligence into product work – management thinks having product-targets when developing general intelligence helps to focus the work

I also think that everyone will want a new category of computing devices that let you frictionlessly interact with AIs that can see what you see and hear what you hear, like smart glasses. And one thing that became clear to me in the last year is that this next generation of services requires building full general intelligence. Previously, I thought that because many of the tools were social-, commerce- or maybe media-oriented that it might be possible to deliver these products by solving only a subset of AI’s challenges. But now it’s clear that we’re going to need our models to be able to reason, plan, code, remember and many other cognitive abilities in order to provide the best versions of the services that we envision. We’ve been working on general intelligence research in FAIR for more than a decade. But now general intelligence will be the theme of our product work as well…

…We’ve worked on general intelligence in our lab, FAIR, for more than a decade, as I mentioned, and we produced a lot of valuable work. But having clear product targets for delivering general intelligence really focuses this work and helps us build the leading research program.

Meta’s management believes the company has world-class compute infrastructure; Meta will end 2024 with 600,000 H100 (NVIDIA’s state-of-the-art AI chip) equivalents of compute; Meta is coming up with new data centre and chip designs customised for its own needs

The first is world-class compute infrastructure. I recently shared that, by the end of this year, we’ll have about 350,000 H100s, and including other GPUs, that will be around 600,000 H100 equivalents of compute…

…In order to build the most advanced clusters, we’re also designing novel data centers and designing our own custom silicon specialized for our workloads.

Meta’s management thinks that future AI models will be even more compute-intensive to train and run inference; management does not know exactly how much compute this will be, but recognises that the trend has been for state-of-the-art models to be trained on roughly 10x more compute each year, so management expects Meta to require growing infrastructure investments in the years ahead for its AI work

Now going forward, we think that training and operating future models will be even more compute-intensive. We don’t have a clear expectation for exactly how much this will be yet, but the trend has been that state-of-the-art large language models have been trained on roughly 10x the amount of compute each year…

…While we are not providing guidance for years beyond 2024, we expect our ambitious long-term AI research and product development efforts will require growing infrastructure investments beyond this year.
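The roughly-10x-per-year scaling trend management cites can be made concrete with a quick back-of-the-envelope projection. The figures below are a hypothetical illustration of the arithmetic only, not Meta’s actual compute numbers:

```python
def projected_compute(base_units: float, growth_factor: float, years: int) -> float:
    """Project compute needs assuming a fixed multiplicative growth rate.

    base_units: compute used by the current model generation (arbitrary units).
    growth_factor: multiplier per year (the transcript cites roughly 10x).
    years: number of years ahead to project.
    """
    return base_units * growth_factor ** years

# If a model takes 1 unit of compute today and requirements grow 10x per year,
# a model trained three years from now needs 1,000 units.
print(projected_compute(1, 10, 3))  # → 1000
```

Compounding at 10x per year is what turns today’s clusters into a multiyear infrastructure commitment, which is why management flags growing investments beyond 2024.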

Meta’s approach with AI is to open-source its foundation models while keeping product-implementations proprietary; Meta’s management thinks open-sourcing brings a few key benefits, in that open source software (1) is safer and more compute-efficient, (2) can become the industry standard, and (3) attracts talented people; management intends to continue open-sourcing Meta’s AI models 

Our long-standing strategy has been to build an open-source general infrastructure while keeping our specific product implementations proprietary. In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far, as well as industry standard tools like PyTorch that we’ve developed…

…The short version is that open sourcing improves our models. And because there’s still significant work to turn our models into products, and because there will be other open-source models available anyway, we find that there are mostly advantages to being the open-source leader, and it doesn’t remove differentiation for our products much anyway. And more specifically, there are several strategic benefits.

First, open-source software is typically safer and more secure as well as more compute-efficient to operate due to all the ongoing feedback, scrutiny and development from the community. Now this is a big deal because safety is one of the most important issues in AI. Efficiency improvements and lowering the compute costs also benefit everyone, including us. Second, open-source software often becomes an industry standard. And when companies standardize on building with our stack, it then becomes easier to integrate new innovations into our products. That’s subtle, but the ability to learn and improve quickly is a huge advantage. And being an industry standard enables that. Third, open source is hugely popular with developers and researchers. And we know that people want to work on open systems that will be widely adopted. So this helps us recruit the best people at Meta, which is a very big deal for leading in any new technology area…

…This is why our long-standing strategy has been to open source general infrastructure and why I expect it to continue to be the right approach for us going forward.

Meta is already training the next generation of its foundational Llama model, Llama 3, and progress is good; Meta is also working on research for the next generations of Llama models with an eye on developing full general intelligence; Meta’s management thinks that the company’s next few generations of foundational AI models could be in a totally different direction from other AI companies

In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far…

…While we’re working on today’s products and models, we’re also working on the research that we need to advance for Llama 5, 6 and 7 in the coming years and beyond to develop full general intelligence…

…A lot of last year and the work that we’re doing with Llama 3 is basically making sure that we can scale our efforts to really produce state-of-the-art models. But once we get past that, there’s a lot more kind of different research that I think we’re going to be doing that’s going to take our foundation models in potentially different directions than other players in the industry are going to go in because we’re focused on specific vision for what we’re building. So it’s really important as we think about what’s going to be in Llama 5 or 6 or 7 and what cognitive abilities we want in there and what modalities we want to build into future multimodal versions of the models.

Meta’s management sees unique feedback loops for the company’s AI work that involve both data and usage of its products; the feedback loops have been important in how Meta improved its AI systems for Reels and ads

When people think about data, they typically think about the corpus that you might use to train a model upfront. And on Facebook and Instagram, there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the common crawl data set. And people share large numbers of public text posts and comments across our services as well. But even more important in the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products. And this feedback is a big part of how we’ve improved our AI systems so quickly with Reels and Ads, especially over the last couple of years when we had to re-architect it around new rules.

Meta’s management plans to grow hiring in AI-related roles in 2024

AI is a growing area of investment for us in 2024 as we hire to support our road map…

…Second, we anticipate growth in payroll expenses as we work down our current hiring underrun and add incremental talent to support priority areas in 2024, which we expect will further shift our workforce composition toward higher-cost technical roles.

Meta’s management fully rolled out the Meta AI assistant and other AI chat experiences in the US at the end of 2023 and began testing generative AI features in the company’s Family of Apps; Meta’s focus in 2024 regarding generative AI is on launching Llama 3, making the Meta AI assistant more useful, and improving AI Studio

With generative AI, we fully rolled out our Meta AI assistant and other AI chat experiences in the U.S. at the end of the year and began testing more than 20 GenAI features across our Family of Apps. Our big areas of focus in 2024 will be working towards the launch of Llama 3, expanding the usefulness of our Meta AI assistant and progressing on our AI Studio road map to make it easier for anyone to create an AI. 

Meta has been using AI to improve its marketing performance; Advantage+ is helping advertisers partially or fully automate the creation of ad campaigns; Meta has rolled out generative AI features to help advertisers with changing text and images in their ad campaigns – adoption of the features is strong and test show promising performance gains, and Meta has a big focus in this area in 2024

We continue to leverage AI across our ad systems and product suite. We’re delivering continued performance gains from ranking improvements as we adopt larger and more advanced models, and this will remain an ongoing area of investment in 2024. We’re also building out our Advantage+ portfolio of solutions to help advertisers leverage AI to automate their advertising campaigns. Advertisers can choose to automate part of the campaign creation setup process, such as who to show their ad to with Advantage+ audience, or they can automate their campaign completely using Advantage+ shopping, which continues to see strong growth. We’re also now exploring ways to apply this end-to-end automation to new objectives. On the ads creative side, we completed the global rollout of 2 of our generative AI features in Q4, Text Variations and Image Expansion, and plan to broaden availability of our background generation feature later in Q1. Initial adoption of these features has been strong, and tests are showing promising early performance gains. This will remain a big area of focus for us in 2024…

…So we’re really scaling our Advantage+ suite across all of the different offerings there, which really helped to automate the ads creation process for different types of advertisers. And we’re getting very strong feedback on all of those different features, Advantage+ Shopping, obviously, being the first, but Advantage+ Catalog, Advantage+ Creative, Advantage+ Audiences, et cetera. So we feel like these are all really important parts of what has continued to drive improvements in our Ads business and will continue to going forward.

Meta’s management’s guidance for 2024 capital expenditure is a slight increase from prior guidance (for perspective, 2023’s capex was $27.27 billion), driven by increased investments in servers and data centers for AI-related work

Turning now to the CapEx outlook. We anticipate our full year 2024 capital expenditures will be in the range of $30 billion to $37 billion, a $2 billion increase of the high end of our prior range. We expect growth will be driven by investments in servers, including both AI and non-AI hardware, and data centers as we ramp up construction on sites with our previously announced new data center architecture.

Meta’s management thinks AI will make all of the company’s products and services better, but is unsure how the details will play out

I do think that AI is going to make all of the products and services that we use and make better. So it’s hard to know exactly how that will play out. 

Meta’s management does not expect the company’s generative AI products to be a meaningful revenue-driver in the short term, but they expect the products to be huge drivers in the long term

We don’t expect our GenAI products to be a meaningful 2024 driver of revenue. But we certainly expect that they will have the potential to be meaningful contributors over time.

Microsoft (NASDAQ: MSFT)

Microsoft is now applying AI at scale, across its entire tech stack, and this is helping the company win customers

We have moved from talking about AI to applying AI at scale. By infusing AI across every layer of our tech stack, we are winning new customers and helping drive new benefits and productivity gains.

Microsoft’s management thinks that Azure offers (1) the best AI training and inference performance, (2) the widest range of AI chips, including those from AMD, NVIDIA, and Microsoft, and (3) the best selection of foundational models, including LLMs and SLMs (small language models); Azure AI now has 53,000 customers and more than 33% are new to Azure; Azure allows developers to deploy LLMs without managing underlying infrastructure

Azure offers the top performance for AI training and inference and the most diverse selection of AI accelerators, including the latest from AMD and NVIDIA as well as our own first-party silicon, Azure Maia. And with Azure AI, we provide access to the best selection of foundation and open-source models, including both LLMs and SLMs, all integrated deeply with infrastructure, data and tools on Azure. We now have 53,000 Azure AI customers. Over one-third are new to Azure over the past 12 months. Our new models-as-a-service offering makes it easy for developers to use LLMs from our partners like Cohere, Meta and Mistral on Azure without having to manage underlying infrastructure.
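As a rough sketch of what consuming a model without managing infrastructure looks like in practice, the snippet below constructs the request URL for Azure OpenAI’s REST chat-completions endpoint. The resource name, deployment name, and API version here are hypothetical placeholders, and the exact URL shape and current API versions should be confirmed against Azure’s documentation:

```python
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Construct an Azure OpenAI chat-completions endpoint URL.

    Callers supply only a resource name and a deployment name; the
    underlying GPUs, scaling, and model hosting are managed by Azure.
    """
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# Hypothetical resource and deployment names, for illustration only.
url = azure_chat_url("contoso-ai", "gpt-4-turbo", "2024-02-01")
print(url)
```

A POST to such a URL with an API key and a JSON message list is, from the developer’s side, the whole deployment story, which is the point management is making about the offering.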

Azure grew revenue by 30% in 2023 Q4, with six points of growth from AI services; most of the growth from AI services was driven by Azure OpenAI

Azure and other cloud services revenue grew 30% and 28% in constant currency, including 6 points of growth from AI services. Both AI and non-AI Azure services drove our outperformance…

…Yes, Azure OpenAI and then OpenAI’s own APIs on top of Azure would be the sort of the major drivers. But there’s a lot of the small batch training that goes on, whether it’s out of [indiscernible] or fine-tuning. And then a lot of people who are starting to use models as a service with all the other new models. But it’s predominantly Azure OpenAI today.

Microsoft’s management believes the company has built the world’s most popular SLMs; the SLMs have similar performance to larger models, but can run on laptops and mobile devices; both startups and established companies are exploring the use of Microsoft’s Phi SLM for applications

We have also built the world’s most popular SLMs, which offer performance comparable to larger models but are small enough to run on a laptop or mobile device. Anker, Ashley, AT&T, EY and Thomson Reuters, for example, are all already exploring how to use our SLM, Phi, for their applications. 

Microsoft has added OpenAI’s latest models to the Azure OpenAI Service; Azure OpenAI is seeing increased usage from AI-first start-ups, and more than 50% of Fortune 500 companies are using it

And we have great momentum with Azure OpenAI Service. This quarter, we added support for OpenAI’s latest models, including GPT-4 Turbo, GPT-4 with Vision, DALL-E 3 as well as fine-tuning. We are seeing increased usage from AI-first start-ups like Moveworks, Perplexity and SymphonyAI as well as some of the world’s largest companies. Over half of the Fortune 500 use Azure OpenAI today, including Ally Financial, Coca-Cola and Rockwell Automation. For example, at CES this month, Walmart shared how it’s using Azure OpenAI Service along with its own proprietary data and models to streamline how more than 50,000 associates work and transform how its millions of customers shop. 

Microsoft’s management is integrating AI across the company’s entire data stack; Cosmos DB, which has vector search capabilities, is used by companies as a database for AI apps; KPMG, with the help of Cosmos DB, has seen a 50% increase in productivity for its consultants; Azure AI Search provides hybrid search that goes beyond vector search, and OpenAI is using it for ChatGPT 

We are integrating the power of AI across the entire data stack. Our Microsoft Intelligent Data Platform brings together operational databases, analytics, governance and AI to help organizations simplify and consolidate their data estates. Cosmos DB is the go-to database to build AI-powered apps at any scale, powering workloads for companies in every industry from AXA and Kohl’s to Mitsubishi and TomTom. KPMG, for example, has used Cosmos DB, including its built-in native vector search capabilities, along with Azure OpenAI Service to power an AI assistant, which it credits with driving an up to 50% increase in productivity for its consultants… And for those organizations who want to go beyond in-database vector search, Azure AI Search offers the best hybrid search solution. OpenAI is using it for retrieval augmented generation as part of ChatGPT. 
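The retrieval-augmented generation (RAG) pattern referenced above can be sketched in a few lines: embed documents and a query as vectors, retrieve the closest document, and prepend it to the prompt so the model answers from grounded context. This toy version uses hand-made embeddings and plain cosine similarity purely for illustration; a production system would use learned embeddings and a vector index such as Cosmos DB’s or Azure AI Search:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs):
    """Return the document whose embedding is closest to the query."""
    return max(docs, key=lambda d: cosine(query_vec, d["embedding"]))

# Toy corpus with hand-made 3-dimensional "embeddings" (illustrative only).
docs = [
    {"text": "Quarterly revenue grew 30%.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "The cafeteria menu changed.", "embedding": [0.0, 0.2, 0.9]},
]

query = {"text": "How fast did revenue grow?", "embedding": [0.8, 0.2, 0.1]}
best = retrieve(query["embedding"], docs)

# The retrieved passage is prepended to the prompt, "grounding" the model.
prompt = f"Context: {best['text']}\nQuestion: {query['text']}"
print(prompt)
```

The grounding step is what lets an assistant like KPMG’s answer from a firm’s own documents rather than from the model’s training data alone.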

There are now more than 1.3 million GitHub Copilot subscribers, up 30% sequentially; more than 50,000 organisations use GitHub Copilot Business and Accenture alone will roll out GitHub Copilot to 50,000 of its developers in 2024; Microsoft’s management thinks GitHub Copilot is a core product for anybody who is working in software development

GitHub revenue accelerated to over 40% year-over-year, driven by all our platform growth and adoption of GitHub Copilot, the world’s most widely deployed AI developer tool. We now have over 1.3 million paid GitHub Copilot subscribers, up 30% quarter-over-quarter. And more than 50,000 organizations use GitHub Copilot Business to supercharge the productivity of their developers from digital natives like Etsy and HelloFresh to leading enterprises like Autodesk, Dell Technologies and Goldman Sachs. Accenture alone will roll out GitHub Copilot to 50,000 of its developers this year…

…It is becoming standard issue for any developer. It’s like if you take away spellcheck from Word, I’ll be unemployable. And similarly, I think GitHub Copilot becomes core to anybody who is doing software development…

To increase GitHub Copilot’s ARPU (average revenue per user), and the ARPUs of other Copilots for that matter, Microsoft’s management will lean on the improvement that the Copilots bring to a company’s operating leverage and ask for a greater share of value

Our ARPUs have been great but they’re pretty low. But frankly, even though we’ve had a lot of success, it’s not like we are a high-priced ARPU company. I think what you’re going to start finding is, whether it’s Sales Copilot or Service Copilot or GitHub Copilot or Security Copilot, they are going to fundamentally capture some of the value they drive in terms of the productivity of the OpEx, right? So it’s like two points, three points of OpEx leverage would go to some software spend. I think that’s a pretty straightforward value equation. And so that’s the first time. I mean, this is not something we’ve been able to make the case for before, whereas now I think we have that case.

Then even the horizontal Copilot is what Amy was talking about, which is at the Office 365 or Microsoft 365 level. Even there, you can make the same argument. Whatever ARPU we may have with E5, now you can say incrementally as a percentage of the OpEx, how much would you pay for a Copilot to give you more time savings, for example. And so yes, I think all up, I do see this as a new vector for us in what I’ll call the next phase of knowledge work and frontline work even and their productivity and how we participate.

And I think GitHub Copilot, I never thought of the tools business as fundamentally participating in the operating expenses of a company’s spend on, let’s say, development activity. And now you’re seeing that transition. It’s just not tools. It’s about productivity of your dev team.

Microsoft’s own research and external studies show that companies can see up to a 70% increase in productivity by using generative AI for specific tasks; early users of Copilot for Microsoft 365 became 29% faster in a number of tasks

Our own research as well as external studies show as much as 70% improvement in productivity using generative AI for specific work tasks. And overall, early Copilot for Microsoft 365 users were 29% faster in a series of tasks like searching, writing and summarizing.

Microsoft’s management believes that AI will become a first-class part of every personal computer (PC) in 2024

In 2024, AI will become a first-class part of every PC. Windows PCs with built-in neural processing units were front and center at CES, unlocking new AI experiences to make what you do on your PC easier and faster, from searching for answers and summarizing e-mails to optimizing performance and battery efficiency. Copilot in Windows is already available on more than 75 million Windows 10 and Windows 11 PCs. And our new Copilot key, the first significant change to the Windows keyboard in 30 years, provides one-click access.

Microsoft’s management thinks that AI is transforming Microsoft’s search and browser experience; more than 5 billion images have been created and more than 5 billion chats conducted to-date, with both doubling sequentially; Bing and Edge both took share in 2023 Q4

And more broadly, AI is transforming our search and browser experience. We are encouraged by the momentum. Earlier this month, we achieved a new milestone with 5 billion images created and 5 billion chats conducted to date, both doubling quarter-over-quarter and both Bing and Edge took share this quarter.

Microsoft’s management expects the company’s capital expenditure to increase materially in the next quarter because of cloud and AI infrastructure investments; management’s commitment to increase infrastructure investments is guided by customer demand and what they see as a substantial market opportunity; management feels good about where Microsoft is in terms of adding infrastructure capacity to meet AI computing demand

We expect capital expenditures to increase materially on a sequential basis, driven by investments in our cloud and AI infrastructure and the slip of a delivery date from Q2 to Q3 from a third-party provider noted earlier. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-out…

…Our commitment to scaling our cloud and AI investment is guided by customer demand and a substantial market opportunity. As we scale these investments, we remain focused on driving efficiencies across every layer of our tech stack and disciplined cost management across every team…

…I think we feel really good about where we have been in terms of adding capacity. You started to see the acceleration in our capital expense starting almost a year ago, and you’ve seen it scale through that process.

Microsoft’s management is seeing that most of the AI activity taking place on Azure is for inference

[Question] On AI, where are we in the journey from training driving most of the Azure AI usage to inferencing?

[Answer] What you’ve seen for the most part is all inferencing. So none of the large model training is in any of our numbers at all. Small batch training, so somebody is doing fine-tuning or what have you, that will be there but that’s sort of a minor part. So most of what you see in the Azure number is broadly inferencing.

A new AI workload on Azure typically starts with selecting a frontier model, then fine-tuning that model, then inference

The new workload in AI obviously, in our case, it starts with one of the frontier — I mean, starts with the frontier model, Azure OpenAI. But it’s not just about one model, right? So you — first, you take that model, you do all that jazz, you may do some fine-tuning. You do retrieval, which means you’re sort of either getting some storage meter or you’re eating some compute meters. And so — and by the way, there’s still a large model to a small model and that would be a training perhaps, but that’s a small batch training that uses essentially inference infrastructure. So I think that’s what’s happening. 

Microsoft’s management believes that generative AI will change the entire tech stack, down to the core computer architecture; one such change is to separate data storage from compute, as in the case of one of Microsoft’s newer services, Fabric

[Question] Cloud computing changed the tech stack in ways that we could not imagine 10 years back. The nature of the database layer, the operating system, every layer just changed dramatically. How do you foresee generative AI changing the tech stack as we know it?

[Answer] I think it’s going to have a very, very foundational impact. In fact, you could say the core compute architecture itself changes, everything from power density to the data center design to what used to be the accelerator now being sort of the main CPU, so to speak, or the main compute unit. And so I think — and the network, the memory architecture, all of it. So the core computer architecture changes, I think every workload changes. And so yes, so there’s a full — like take our data layer.

The most exciting thing for me in the last year has been to see how our data layer has evolved to be built for AI, right? If you think about Fabric, part of the genius of Fabric is to be able to say, let’s separate out storage from the compute layer. In compute, we’ll have traditional SQL, we’ll have Spark. And by the way, you can have an Azure AI drop on top of the same data lake, so to speak, or the lake house pattern. And then the business model, you can combine all of those different computes. So that’s the type of compute architecture. So it’s sort of a — so that’s just one example…

… I do believe being in the cloud has been very helpful to build AI. But now AI is just redefining what it means to have — what the cloud looks like, both at the infrastructure level and the app model.

Microsoft’s management is seeing a few big use cases emerging within Microsoft 365 Copilot: Summarisation of meetings and documents; “chatting” with documents and texts of past communications; and creation and completion of documents

In terms of what we’re seeing, it’s actually interesting if you look at the data we have. Summarization, that’s number one: I’m doing summarizations of Teams meetings inside of Teams during the meeting, after the meeting; Word documents, summarization; I get something in e-mail, I’m summarizing. So summarization has become a big deal. Drafts, right? You’re drafting e-mails, drafting documents. So anytime you want to start something, the blank page thing goes away and you start by prompting and drafting.

Chat. To me, the most powerful feature is that the most important database in your company, which happens to be the database of your documents and communications, is now queryable by natural language in a powerful way, right? I can go and say, what are all the things Amy said I should be watching out for next quarter? And it will come out with great detail. And so chat, summarization, draft.

Also, by the way, actions, one of the most used things is, here’s a Word document. Go complete — I mean, create a PowerPoint for me. So those are the stuff that’s also beginning.

Microsoft’s management is seeing strong engagement growth with Microsoft 365 Copilot that gives them optimism

And the other thing I would add, we always talk about in enterprise software, you sell software, then you wait and then it gets deployed. And then after deployment, you want to see usage. And in particular, what we’ve seen and you would expect this in some ways with Copilot, even in the early stages, obviously, deployment happens very quickly. But really what we’re seeing is engagement growth. To Satya’s point on how you learn and your behavior changes, you see engagement grow with time. And so I think those are — just to put a pin on that because it’s an important dynamic when we think about the optimism you hear from us.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that companies are starting to build the next generation of AI data centres; this next generation of AI data centres takes in data and transforms them into tokens, which are the output of AI models

At the same time, companies have started to build the next generation of modern Data Centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI…

…A whole new industry in the sense that for the very first time, a Data Center is not just about computing data and storing data and serving the employees of the company. We now have a new type of Data Center that is about AI generation, an AI generation factory, and you’ve heard me describe it as AI factories. But basically, it takes raw material, which is data. It transforms it with these AI supercomputers that NVIDIA built, and it turns them into incredibly valuable tokens. These tokens are what people experience on the amazing ChatGPT or Midjourney; search these days is augmented by that. All of your recommender systems are now augmented by that, the hyper-personalization that goes along with it. All of these incredible start-ups in digital biology generating proteins and generating chemicals and the list goes on. And so all of these tokens are generated in a very specialized type of Data Center. And this Data Center, we call it AI supercomputers and AI generation factories.

Nvidia’s management is seeing very strong demand for the company’s Hopper AI chips and expects demand to far outstrip supply

Demand for Hopper remains very strong. We expect our next generation products to be supply constrained as demand far exceeds supply…

…However, whenever we have new products, as you know, it ramps from 0 to a very large number, and you can’t do that overnight. Everything is ramped up. It doesn’t step up. And so whenever we have a new generation of products and right now, we are ramping H200s, there’s no way we can reasonably keep up on demand in the short term as we ramp. 

Nvidia’s outstanding 2023 Q4 growth in Data Center revenue was driven by both training and inference of AI models; management estimates that 40% of Nvidia’s Data Center revenue in 2023 was for AI inference; the 40% estimate might even be understated, because recommendation systems that were driven by CPU approaches are now being driven by GPUs

In the fourth quarter, Data Center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-on-year…

…Fourth quarter Data Center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our Data Center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year, approximately 40% of Data Center revenue was for AI inference…

…The estimate is probably understated and — but we estimated it, and let me tell you why. Whenever — a year ago, a year ago, the recommender systems that people are — when you run the Internet, the news, the videos, the music, the products that are being recommended to you because, as you know, the Internet has trillions — I don’t know how many trillions, but trillions of things out there, and your phone is 3 inches squared. And so the ability for them to fit all of that information down to something such a small real estate is through a system, an amazing system called recommender systems.

These recommender systems used to be all based on CPU approaches. But the recent migration to deep learning and now generative AI has really put these recommender systems now directly into the path of GPU acceleration. It needs GPU acceleration for the embeddings. It needs GPU acceleration for the nearest neighbor search. It needs GPU accelerating for reranking. And it needs GPU acceleration to generate the augmented information for you. So GPUs are in every single step of a recommender system now. And as you know, a recommender system is the single largest software engine on the planet. Almost every major company in the world has to run these large recommender systems. 
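The pipeline Jensen Huang lists (embeddings, nearest-neighbour search, reranking) can be sketched in a few lines of NumPy. This is an illustrative toy with made-up data, not Nvidia’s implementation; the point is that each stage is a large batched matrix operation, which is exactly the shape of work GPUs accelerate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: 1,000 items embedded in a 64-dimensional space.
item_embeddings = rng.normal(size=(1000, 64))
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

def recommend(user_embedding, k_candidates=50, k_final=5):
    """Two-stage retrieval: nearest-neighbour search, then reranking.
    On a GPU each stage would run as a batched matrix operation."""
    user = user_embedding / np.linalg.norm(user_embedding)
    # Stage 1: retrieve candidates by cosine similarity.
    scores = item_embeddings @ user
    candidates = np.argpartition(scores, -k_candidates)[-k_candidates:]
    # Stage 2: rerank the candidates (here by the same score;
    # in practice a heavier model rescores this short list).
    reranked = candidates[np.argsort(scores[candidates])[::-1]]
    return reranked[:k_final]

top = recommend(rng.normal(size=64))
print(len(top))  # 5
```

The “embedding” and “nearest neighbor search” steps in the quote map to stage 1 above, and “reranking” to stage 2.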

Nvidia’s management is seeing that all industries are deploying AI solutions

Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale…

…One of the most notable trends over the past year is the significant adoption of AI by enterprises across the industry verticals such as Automotive, health care, and financial services.

Large cloud providers accounted for more than half of Nvidia’s Data Center revenue in 2023 Q4

In the fourth quarter, large cloud providers represented more than half of our Data Center revenue, supporting both internal workloads and external public cloud customers. 

Nvidia’s management is finding that consumer internet companies have been early adopters of AI and they are one of Nvidia’s largest customer categories; consumer internet companies are using AI (1) in content recommendation systems to boost user engagement and (2) to generate content for advertising and to help content creators

The consumer Internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates…

… In addition, consumer Internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.

Nvidia’s management is observing that enterprise software companies are using generative AI to help their customers with productivity and they are already seeing commercial success

Enterprise software companies are applying generative AI to help customers realize productivity gains. All the customers we’ve partnered with for both training and inference of generative AI are already seeing notable commercial success. ServiceNow’s generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP, and Snowflake.

There are both enterprises and startups that are building foundational large language models; these models are serving specific cultures, regions, and also industries

The field of foundation large language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.AI, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world’s many regions. And others are creating foundation models to address entirely different industries, like Recursion Pharmaceuticals and Generate Biomedicines for biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers.

Nvidia’s AI infrastructure are used for autonomous driving; the automotive vertical accounted for more than $1 billion of Nvidia’s Data Center revenue in 2023, and Nvidia’s management thinks the automotive vertical is a big growth opportunity for the company

We estimate that Data Center revenue contribution of the Automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions include systems and software for the development of autonomous driving, including data ingestion, curation, labeling, and AI training, plus validation through simulation. Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA’s AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. In effect, nearly every Automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA’s automotive Data Center processing demand to grow significantly…

…NVIDIA DRIVE Orin is the AI car computer of choice for software-defined AV fleet. Its successor, NVIDIA DRIVE Thor, designed for vision transformers offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring, and AI cockpit functionality and will be available next year. There were several automotive customer announcements this quarter. Li Auto, Great Wall Motor, ZEEKR, the premium EV subsidiary of Geely and Xiaomi EV all announced new vehicles built on NVIDIA.

Nvidia is developing AI solutions in the realm of healthcare

In health care, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices. We have built deep domain expertise in health care over the past decade, creating the NVIDIA Clara health care platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery. BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to the end-to-end drug discovery processes. We announced Recursion is making available for the proprietary AI model through BioNeMo for the drug discovery ecosystem.

Nvidia’s business in China is affected by the US government’s export restrictions concerning advanced AI chips; Nvidia has been building workarounds and have started shipping alternatives to China; Nvidia’s management expects China to remain a single-digit percentage of Data Center revenue in 2024 Q1; management thinks that while the US government wants to limit China’s access to leading-edge AI technology, it still wants to see Nvidia succeed in China

Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter…

…At the core, remember, the U.S. government wants to limit the latest capabilities of NVIDIA’s accelerated computing and AI to the Chinese market. And the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars, if you will, are the restrictions.

Nvidia’s management is seeing demand for AI infrastructure from countries become an additional growth-driver for the company

In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems…

…So we’re seeing sovereign AI infrastructure is being built in Japan, in Canada, in France, so many other regions. And so my expectation is that what is being experienced here in the United States, in the West will surely be replicated around the world. 

Nvidia is shipping its Hopper AI chips with InfiniBand networking; management believes that a combination of the company’s Hopper AI chips with InfiniBand is becoming a de facto standard for AI infrastructure

The vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de facto standard for accelerated computing and AI infrastructure. 

Nvidia is on track to ramp shipments of the latest generation of its most advanced AI chips – the H200 – in 2024 Q2; the H200 chips have double the inference performance of its predecessor

We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubled the inference performance of H100. 

Nvidia’s networking solutions have a revenue run-rate of more than $13 billion, and the company’s Quantum InfiniBand solutions grew by more than five times year-on-year in 2023 Q4 – but Nvidia is also working on its own Ethernet AI networking solution called Spectrum-X, which is purpose-built for AI and performs better than traditional Ethernet for AI workloads; Spectrum-X has attracted leading OEMs as partners, and Nvidia is on track to ship the solution in 2024 Q1; management still sees InfiniBand as the standard for AI-dedicated systems

Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year-on-year. NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructures. We are now entering the Ethernet networking space with the launch of our new Spectrum-X end-to-end offering designed for AI-optimized networking for the Data Center. Spectrum-X introduces new technologies over Ethernet that are purpose-built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Leading OEMs, including Dell, HPE, Lenovo and Supermicro with their global sales channels are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter…

…InfiniBand is the standard for AI-dedicated systems. Ethernet with Spectrum-X, Ethernet is just not a very good scale-out system. But with Spectrum-X, we’ve augmented, layered on top of Ethernet, fundamental new capabilities like adaptive routing, congestion control, noise isolation or traffic isolation so that we could optimize Ethernet for AI. And so InfiniBand will be our AI-dedicated infrastructure, Spectrum-X will be our AI-optimized networking

Nvidia’s software and services offerings, which include its AI-training-as-a-service platform DGX Cloud, have reached a $1 billion annualised revenue run rate; DGX Cloud will soon be available on all the major cloud service providers; Nvidia’s management believes that the company’s software business will become very significant over time, because of the importance of software when running AI-related hardware

We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. NVIDIA DGX Cloud will expand its list of partners to include Amazon’s AWS, joining Microsoft Azure, Google Cloud, and Oracle Cloud. DGX Cloud is used for NVIDIA’s own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners…

…And the way that we work with CSPs, that’s really easy. We have large teams that are working with their large teams. However, now that generative AI is enabling every enterprise and every enterprise software company to embrace accelerated computing, and when it is now essential to embrace accelerated computing because it is no longer possible, no longer likely anyhow, to sustain improved throughput through just general-purpose computing, all of these enterprise software companies and enterprise companies don’t have large engineering teams to be able to maintain and optimize their software stack to run across all of the world’s clouds and private clouds and on-prem.

So we are going to do the management, the optimization, the patching, the tuning, the installed base optimization for all of their software stacks. And we containerize them into our stack called NVIDIA AI Enterprise. And the way we go to market with it is think of that NVIDIA AI Enterprise now as a run time like an operating system. It’s an operating system for artificial intelligence. And we charge $4,500 per GPU per year. And my guess is that every enterprise in the world, every software enterprise company that are deploying software in all the clouds and private clouds and on-prem will run on NVIDIA AI Enterprise, especially obviously, for our GPUs. And so this is going to likely be a very significant business over time.
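To get a feel for the $4,500-per-GPU-per-year figure Huang quotes, here is the arithmetic for a few fleet sizes. Only the price comes from the quote; the fleet sizes are made-up examples:

```python
# Illustrative arithmetic for the NVIDIA AI Enterprise pricing Huang quotes:
# $4,500 per GPU per year. The fleet sizes below are made-up examples.
PRICE_PER_GPU_PER_YEAR = 4_500

for gpus in (1_000, 10_000, 100_000):
    annual = gpus * PRICE_PER_GPU_PER_YEAR
    print(f"{gpus:>7,} GPUs -> ${annual:,} per year")
```

Even a modest 10,000-GPU deployment would represent $45 million a year of recurring software revenue under this pricing, which is why management expects the software business to become significant.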

Nvidia’s gaming chips also have strong generative AI capabilities, leading to better gaming performance

At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start. NVIDIA AI Tensor Cores and the GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creating and everyday productivity. The rich software stack we offer with our RTX GPUs further accelerates AI. With our DLSS technologies, 7 out of 8 pixels can be AI-generated, resulting in up to 4x faster ray tracing and better image quality. And with TensorRT-LLM for Windows, our open-source library that accelerates inference performance for the latest large language models, generative AI can run up to 5x faster on RTX AI PCs.
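The “7 out of 8 pixels can be AI-generated” claim implies only one-eighth of the output pixels are shaded natively. A quick check of what that does to per-frame shading work, assuming a 4K frame for illustration:

```python
# 4K output frame, with 7 of 8 pixels generated by DLSS rather than shaded.
output_pixels = 3840 * 2160          # assumed 4K resolution, for illustration
natively_shaded = output_pixels / 8  # 1 in 8 pixels rendered natively
print(f"{natively_shaded / output_pixels:.1%} of pixels shaded natively")  # 12.5%
```

Cutting the natively shaded pixel count by 8x is where much of the claimed ray-tracing speedup comes from.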

Nvidia has announced new gaming AI laptops from every major laptop manufacturer; Nvidia has more than 100 million RTX PCs in its installed base, and management thinks the company is in a good position to lead the next wave of generative AI applications that are coming to the personal computer

At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin and light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best performing AI PC platforms…

…NVIDIA is fueling the next wave of generative AI applications coming to the PC. With over 100 million RTX PCs in the installed base and over 500 AI-enabled PC applications and games, we are on our way.

Nvidia has a service that allows software developers to build state-of-the-art generative AI avatars

At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards. NVIDIA has an end-to-end platform for building and deploying generative AI applications for RTX PCs and workstations. This includes libraries, SDKs, tools and services developers can incorporate into their generative AI workloads.

Nvidia’s management believes that generative AI cannot be done on traditional general-purpose computing – it has to be done on an accelerated computing framework

With accelerated computing, you can dramatically improve your energy efficiency. You can dramatically improve your cost in data processing by 20:1, huge numbers. And of course, the speed. That speed is so incredible that we enabled a second industry-wide transition called generative AI. In generative AI, I’m sure we’re going to talk plenty about it during the call. But remember, generative AI is a new application. It is enabling a new way of doing software, new types of software being created. It is a new way of computing. You can’t do generative AI on traditional general-purpose computing. You have to accelerate it.

The hardware supply chain of an Nvidia GPU is improving; the components that go into an Nvidia GPU are really complex

Our supply is improving. Overall, our supply chain is just doing an incredible job for us. Everything from, of course, the wafers, the packaging, the memories, all of the power regulators to transceivers and networking and cables, and you name it, the list of components that we ship. As you know, people think that NVIDIA GPUs is like a chip, but the NVIDIA Hopper GPU has 35,000 parts. It weighs 70 pounds. These things are really complicated things we’ve built. People call it an AI supercomputer for good reason. If you ever look at the back of the Data Center, the systems, the cabling system is mind-boggling. It is the most dense, complex cabling system for networking the world has ever seen. Our InfiniBand business grew 5x year-over-year. The supply chain is really doing fantastic supporting us. And so overall, the supply is improving. 

Nvidia’s management is allocating chips fairly to all of the company’s customers

CSPs have a very clear view of our product road map and transitions. And that transparency with our CSPs gives them the confidence of which products to place and where and when. And so they know the timing to the best of our ability, and they know quantities and, of course, allocation. We allocate fairly, doing the best we can to allocate fairly and to avoid allocating unnecessarily.

Nvidia’s management is seeing a lot of activity emerging from robotics companies

There’s just a giant suite of robotics companies that are emerging. There are warehouse robotics to surgical robotics to humanoid robotics, all kinds of really interesting robotics companies, agriculture robotics companies.

Nvidia’s installed base of hardware has been able to support every single innovation in AI technology because it is programmable

NVIDIA is the only architecture that has gone from the very, very beginning, literally at the very beginning when CNNs and Alex Krizhevsky and Ilya Sutskever and Geoff Hinton first revealed AlexNet, all the way through RNNs to LSTMs to RL to deep RL to transformers, to every single version and every species that has come along: vision transformers, multi-modality transformers, and now time sequence stuff. And every single variation, every single species of AI that has come along, we’ve been able to support it, optimize our stack for it and deploy it into our installed base…

… We simultaneously have this ability to bring software to the installed base and keep making it better and better and better. So our customers’ installed base is enriched over time with our new software…

…Don’t be surprised if in our future generation, all of a sudden, amazing breakthroughs in large language models were made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, and we make giant breakthroughs on the other hand.

A big difference between accelerated computing and general purpose computing is the importance of software in the former

As you know, accelerated computing is very different than general-purpose computing. You’re not starting from a program like C++ that you compile and run on all your CPUs. The stacks of software necessary for every domain, from data processing (SQL structured data versus all the images, text and PDF, which are unstructured) to classical machine learning to computer vision to speech to large language models to recommender systems. All of these things require different software stacks. That’s the reason why NVIDIA has hundreds of libraries. If you don’t have software, you can’t open new markets. If you don’t have software, you can’t open and enable new applications. Software is fundamentally necessary for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. And now people understand that software is really key.

Nvidia’s management believes that generative AI has kicked off a massive new investment cycle for AI infrastructure

Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure of AI generation factories. We believe these two trends will drive a doubling of the world data center infrastructure installed base in the next 5 years and will represent an annual market opportunity in the hundreds of billions.
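A doubling of the installed base in five years corresponds to an annual growth rate of 2^(1/5) − 1, roughly 14.9% per year. A one-line check:

```python
# How fast must the installed base grow each year to double in 5 years?
years = 5
cagr = 2 ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 14.9%
assert abs((1 + cagr) ** years - 2) < 1e-9  # compounding back gives exactly 2x
```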

PayPal (NASDAQ: PYPL)

PayPal’s management will soon launch a new PayPal app that will utilise AI to personalise the shopping experience for consumers; management hopes to drive engagement with the app

This year, we’re launching and evolving a new PayPal app to create a situation. We will also leverage our merchant relationships and the power of AI to make the entire shopping experience personalized for consumers while giving them control over their data…

…The new checkout and app experiences we are rolling out this year will also create an engagement loop that will drive higher awareness of the various products we offer and drive higher adoption of our portfolio over time.

Shopify (NASDAQ: SHOP)

Shopify’s management launched nearly a dozen AI-powered tools through the Shopify Magic product suite in 2023, including tools for AI-generated product descriptions and an AI commerce assistant; in recent weeks, management launched AI product-image creation and editing tools within Shopify Magic; management will be introducing new modalities and text-to-image capabilities later this year

In 2023, we brought nearly a dozen AI-enabled tools through our Shopify Magic product suite. We’re one of the first platforms to bring AI-generated product descriptions to market and made solid progress towards building Sidekick, a first of its kind AI-enabled commerce assistant. As part of our winter edition a few weeks ago, we introduced new features to our Shopify Magic suite of AI tools. These new generative AI tools simplify and enhance product image editing directly within the product image editor in the Shopify admin. With Shopify Magic, merchants can now leverage AI to create stunning images and professional edits with just a few clicks or keywords, saving on cost and time. And given the significant advancements in AI in 2023, we plan to seize this enormous opportunity ahead of us and are excited to introduce new modalities and text to image capabilities to Shopify in 2024.

Shopify’s marketing paybacks have improved by over 30% with the help of AI

In terms of marketing, the 2 areas, in particular, where we are leaning in this quarter are performance marketing and point-of-sale. Within performance marketing, our team has unlocked some opportunities to reach potential customers at highly attractive LTV to CAC and paybacks. In fact, tactics that we’ve implemented on some channels earlier this year including through the enhanced use of AI and automation have improved paybacks by over 30%, enabling us to invest more into these channels while still maintaining our operating discipline on the underlying unit economics. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management has increased the company’s capital expenditure materially over the last few years to capture the growth opportunities associated with AI

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years. In the past few years, we have sharply increased our CapEx spending in preparation to capture or harvest the growth opportunities from HPC, AI and 5G megatrends.

TSMC’s management expects 2024 to be a healthy growth-year for the company with revenue growth in the low-to-mid 20s percentage range, driven by its 3nm technologies, 5nm technologies, and AI

Entering 2024, we forecast fabless semiconductor inventory to have returned to a [ handsome ] level exiting 2023. However, macroeconomic weakness and geopolitical uncertainties persist, potentially further weighing on consumer sentiment and the market demand. Having said that, our business has bottomed out on a year-over-year basis, and we expect 2024 to be a healthy growth year for TSMC, supported by continued strong ramp of our industry-leading 3-nanometer technologies, strong demand for the 5-nanometer technologies and robust AI-related demand.

TSMC’s management sees 2023 as the year that generative AI became important for the semiconductor industry, with TSMC as a key enabler; management thinks that the surge in AI-related demand in 2023 will drive an acceleration in structural demand for energy-efficient computing, and that AI will need to be supported by more powerful semiconductors – these are TSMC’s strengths

2023 was a challenging year for the global semiconductor industry, but we also witnessed the rising emergence of generative AI-related applications with TSMC as a key enabler…

…Despite the near-term challenges, our technology leadership enabled TSMC to outperform the foundry industry in 2023, while positioning us to capture the future AI and high-performance computing-related growth opportunities…

…The surge in AI-related demand in 2023 supports our already strong conviction that the structural demand for energy-efficient computing will accelerate in an intelligent and connected world. TSMC is a key enabler of AI applications. No matter which approach is taken, AI technology is evolving to use more complex AI models as the amount of computation required for training and inference is increasing. As a result, AI models need to be supported by more powerful semiconductor hardware, which requires use of the most advanced semiconductor process technologies. Thus, the value of TSMC technology position is increasing, and we are all well positioned to capture the major portion of the market in terms of semiconductor component in AI. To address insatiable AI-related demand for energy-efficient computing power, customers rely on TSMC to provide the most leading edge processing technology at scale with a dependable and predictable cadence of technology offering.

Almost everyone important in AI is working with TSMC on its 2nm technologies

As process technology complexity increase, the engagement lead time with customers also started much earlier. Thus, almost all the AI innovators are working with TSMC, and we are observing a much higher level of customer interest and engagement at N2 as compared with N3 at a similar stage from both HPC and the smartphone applications.

TSMC’s management believes that the world has seen only the tip of the iceberg with AI

But on the other hand, AI is only in its nascent stage. Only last November was the first large language model chatbot, ChatGPT, announced. We only see the tip of the iceberg. 

TSMC’s management believes that the use of AI could accelerate scientific innovation in the field of semiconductor manufacturing

So I want to give the industry an optimistic note: even though 1 nanometer or sub-1 nanometer could be challenging, we have a new technology capability, using AI to accelerate innovation in science.

TSMC’s management still believes that its narrowly-defined AI business will grow at 50% annually; management also sees AI application processors making up a high-teens weightage of TSMC’s revenue by 2027, up from a low-teens weightage mentioned in the 2023 second-quarter earnings call, because of a sudden increase in demand

But for TSMC, we look at ours here, the AI’s a CAGR, that’s the growth rate every year, it’s about 50%. And we are confident that we can capture more opportunities in the future. So that’s what we said that up to 2027, we are going to have high teens of the revenue from a very narrow, we defined the AI application process, not to mention about the networking, not to mention about all others, okay?…

…[Question] You mentioned that we have a very narrow definition, we call server AI processor contribution and that you said it can be high teens in 5 years’ time because the last time, we said low teens.

[Answer] The demand suddenly being increased since last — I think, last year, the first quarter up to March or April, when ChatGPT become popular, so customers respond quickly and asked TSMC to prepare the capacity, both in front end and the back end. And that’s why we have confidence that this AI’s revenue will increase. We only narrowed down to the AI application process, by the way. So we look at ours here, that we prepare the technology and the capacities in both our front end and also back end. And so we — it’s in the early stage so far today. We already see the increase, the momentum. And we expect — if you guys continue to track this one, the number will increase. I have confidence to say that, although I don’t know how much.
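As a rough illustration of how quickly the 50% CAGR that management cites compounds (the four-year 2023-to-2027 window is inferred from the call; this is an illustrative sketch, not management's model):

```python
def compound(growth_rate: float, years: int) -> float:
    """Multiple on a starting value after compounding at growth_rate for years."""
    return (1 + growth_rate) ** years

# A 50% CAGR sustained from 2023 to 2027 (4 years) would roughly
# quintuple TSMC's narrowly-defined AI processor revenue.
ai_multiple = compound(0.50, 4)
print(round(ai_multiple, 2))  # 5.06
```

Revenue growing at that pace, against a total business growing far more slowly, is what moves the AI weightage from low-teens towards high-teens.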

TSMC’s management is seeing AI chips being placed in edge-devices such as smartphones and PCs 

And to further extend our value, actually, all the edge device, including smartphone, including the PC, they start to put the AI’s application inside. They have some kind of a neural process, for example, so the silicon content will be greatly increased. 

Tesla (NASDAQ: TSLA)

Tesla has released version 12 of its FSD (Full Self Driving) software, which is powered end-to-end by AI (artificial intelligence); Tesla will soon release it to over 400,000 vehicles in North America; FSD v.12 is the first time AI has been used for pathfinding and vehicle controls, and within it, neural nets replaced 330,000 lines of C++ code

For full self-driving, we’ve released version 12, which is a complete architectural rewrite compared to prior versions. This is end-to-end artificial intelligence. So [ nothing but ] nets basically, photons in and controls out. And it really is quite a profound difference. This is currently just with employees and a few customers, but we will be rolling out to all who — all those customers in the U.S. who request full self-driving in the weeks to come. That’s over 400,000 vehicles in North America. So this is the first time AI has been used not just for object perception but for pathfinding and vehicle controls. We replaced 330,000 lines of C++ code with neural nets. It’s really quite remarkable.

Tesla’s management believes that Tesla is the world’s most efficient company at AI inference because the company, out of necessity, has had to wring the most performance out of 3-year-old hardware

I think Tesla is probably the most efficient company in the world for AI inference. Out of necessity, we’ve actually had to be extremely good at getting the most out of hardware because hardware 3 at this point is several years old. So I don’t — I think we’re quite far ahead of any other company in the world in terms of AI inference efficiency, which is going to be a very important metric in the future in many arenas.

Tesla’s management thinks that the AI technologies the company has developed for vehicles translates well into a humanoid robot (Optimus); Tesla’s vehicles and Optimus both have the same inference computers

And the technologies that we — the AI technologies we’ve developed for the car translate quite well to a humanoid robot because the car is just a robot on 4 wheels. Tesla is arguably already the biggest robot maker in the world. It’s just a 4-wheeled robot. So Optimus is a robot with — a humanoid robot with arms and legs, just by far the most sophisticated humanoid robot that’s being developed anywhere in the world…

…As we improve the technology in the car, we improve the technology in Optimus at the same time. It runs the same AI inference computer that’s on the car, same training technology. I mean we’re really building the future. I mean the Optimus lab looks like the set of Westworld, but admittedly, that was not a super utopian situation.

Tesla’s management is hedging their bets for the company’s FSD-related chips with Nvidia’s GPUs while also pursuing Dojo (Tesla’s own AI chip design)

[Question] As a follow-up, your release does not mention Dojo, so if you could just provide us an update on where Dojo stands and at what point do you expect Dojo to be a resource in improving FSD. Or do you think that you now have sufficient supply of NVIDIA GPUs needed for the training of the system?

[Answer] I mean the AI part of your question is — that is a deep one. So we’re obviously hedging our bets here with significant orders of NVIDIA GPUs…

…And we’re pursuing the dual path of NVIDIA and Dojo.

Tesla’s management believes that Tesla’s progress in self-driving is limited by training and that in AI, the more training is done on the model, the fewer resources are required for inference

A lot of our progress in self-driving is training limited. Something that’s important with training, it’s much like a human. The more effort you put into training, the less effort you need in inference. So just like a person, if you train in a subject, sort of class, 10,000 hours, the less mental effort it takes to do something. If you remember when you first started to drive how much of your mental capacity it took to drive, it was — you had to be focused completely on driving. And after you’ve been driving for many years, it only takes a little bit of your mind to drive, and you can think about other things and still drive safely. So the more training you do, the more efficient it is at the inference level. So we do need a lot of training. And we’re pursuing the dual path of NVIDIA and Dojo…

Tesla’s management thinks that Dojo is a long shot – it has potential, but may not work out

But I would think of Dojo as a long shot. It’s a long shot worth taking because the payoff is potentially very high but it’s not something that is a high probability. It’s not like a sure thing at all. It’s a high risk, high payoff program. Dojo is working, and it is doing training jobs, so — and we are scaling it up. And we have plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot. So I think it’s got potential. I can’t emphasize enough, high risk, high payoff.

Tesla’s management thinks that Tesla’s AI-inference hardware in its vehicles can enable the company to perhaps possess the largest amount of compute resources for AI tasks in the world at some point in the future

There’s also our inference hardware in the car, so we’re now on what’s called Hardware 4, but it’s actually version 2 of the Tesla-designed AI inference chip. And we’re about to complete design of — the terminology is a bit confusing. About to complete design of Hardware 5, which is actually version 3 of the Tesla-designed chip because the version 1 was Mobileye. Version 2 was NVIDIA, and then version 3 was Tesla. So — and we’re making gigantic improvements from 1 — from Hardware 3 to 4 to 5. I mean there’s a potentially interesting play where when cars are not in use in the future, that the in-car computer can do generalized AI tasks, can run a sort of GPT4 or 3 or something like that. If you’ve got tens of millions of vehicles out there, even in a robotaxi scenario, whether in heavy use, maybe they’re used 50 out of 168 hours, that still leaves well over 100 hours of time available — of compute hours. Like it’s possible with the right architectural decisions that Tesla may, in the future, have more compute than everyone else combined.
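The back-of-envelope arithmetic in the quote above can be sketched as follows; the 10-million-vehicle fleet size is an assumed stand-in for "tens of millions":

```python
# Idle-compute arithmetic from the quote: in a heavy-use robotaxi
# scenario, a vehicle might drive 50 of the 168 hours in a week,
# leaving well over 100 hours of in-car compute available.
HOURS_PER_WEEK = 168
driving_hours = 50           # heavy robotaxi utilisation, per the quote
fleet_size = 10_000_000      # assumed; the quote says "tens of millions"

idle_hours_per_vehicle = HOURS_PER_WEEK - driving_hours   # 118 hours/week
fleet_compute_hours = idle_hours_per_vehicle * fleet_size # 1.18 billion hours/week
print(idle_hours_per_vehicle, fleet_compute_hours)
```

Over a billion spare compute-hours per week across the fleet is the scale behind the claim that Tesla could one day have more inference compute than everyone else combined.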

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management believes that in a post-cookie world, advertisers will have to depend on authentication, new approaches to identity, first-party data, and AI-driven relevance tools – Trade Desk’s tools help create the best outcome in this world

The post-cookie world is one that will combine authentication, new approaches to identity, first-party data activation and advanced AI-driven relevance tools, all to create a new identity fabric for the Internet that is so much more effective than cookies ever were. The Internet is being replumbed and our product offerings create the best outcome for all of the open Internet. 

AI optimisations are distributed across Kokai, which is Trade Desk’s new platform that recently went live; Kokai helps advertisers understand and score every ad impression, and allows advertisers to use an audience-first approach in campaigns

In particular, Kokai represents a completely new way to understand and score the relevance of every ad impression across all channels. It allows advertisers to use an audience-first approach to their campaigns, targeting their audiences wherever they are on the open Internet. Our AI optimizations, which are now distributed across the platform, help optimize every element of the ad purchase process. Kokai is now live, and similar to Next Wave and Solimar, it will scale over the next year.

Based on Trade Desk’s management’s interactions with customers, the use of AI to forecast the impacts that advertisers’ decisions will have on their ad spending is a part of Kokai that customers love

A big part of what they love, to answer your question about what are they most excited about, is we have streamlined our reporting. We’ve made it way faster. There are some reports that you just have to wait multiple minutes for it because they’re just so robust, and we found ways to accelerate that. We’ve also added AI throughout the platform, especially in forecasting. So it’s a little bit like if you were to make a hypothetical trade in a trading platform for equity and then us tell you what we think is going to happen to the price action in the next 10 minutes. So we’re showing them what the effects of their changes are going to be before they even make them so that they don’t make mistakes. Because sometimes what happens is people put out a campaign. They’ll put tight restrictions on it. They’ll hope that it spends, then they come back a day or 2 or even 3 later and then realize they made it so difficult with their combination of targeting and pricing for us to buy anything that they didn’t spend much money. Or the opposite because they spent more and it wasn’t as effective as they wanted. So helping them see all of that before they do anything helped.

Trade Desk’s management believes that the company is reinforcing itself as the adtech AI leader; Trade Desk has been using AI in its platform since 2016

We are reinforcing our position as the adtech AI leader. We’ve been embedding AI into our platform since 2016, so it’s nothing new to us. But now it’s being distributed across our platform so our clients can make even better choices among the 15 million ad impression opportunities a second and understand which of those ads are most relevant to their audience segments at any given time.

Wix (NASDAQ: WIX)

Wix’s management added new AI features in 2023 to help users create content more easily; the key AI features introduced include a chat bot, code assistant, and text and image creators

This year, we meaningfully extended an already impressive toolkit of AI capabilities to include new AI-powered features that will help Wix users create visual and written web content more easily, optimized design and content layout, right code and manage their website and businesses more efficiently. The key AI product introduced in the last year include an AI chat experience for businesses, responsive AI design, AI code assistant, AI Meta Tag Creators and AI text and image creators among several other AI design tools. 

Wix’s management recently released an AI site generator that can create a full-blown, tailored, ready-to-publish website based on user prompts; management believes that Wix is the first to launch such an AI site generator; the site generator has received fantastic feedback so far, and is a good starting point for creating a new website, but it is only at Version 1

We also recently released our AI site generator and have heard fantastic feedback so far. I believe this will be the first AI tool on the market that creates a full-blown, tailored and ready-to-publish website integrated with relevant business application based on user prompt…

… So we released what I would call version 1. It’s a great way for people to start with the website, meaning that you come in and you say, I’m a Spa in New York City and I specialize in some specific things. And we’ll — and AI will interview you on the — what makes your business unique, where are you located? How many people? Tell us about those people and the staff members. And as a result, we generate a website for you that is — has all the great content, right? And the content will be text and images. The other thing that then will actually get you to this experience where you can choose how you want to have the design look like. And the AI will generate different designs for you. So you can tell why I like this thing, I want a variation on that, I don’t like the colors, please change the colors or I want colors that are more professionals or I want color that are blue and yellow. And there I will do it for you.

On the other hand, you can also say, well, I don’t really like the design, can you generate something very different or generate a small variation of that, in many ways, a bit similar to Midjourney, what Midjourney is doing with the images, we are doing with a full-blown website. The result of that is something that is probably 70% of the website that you need to have on average, right, sometime it’s 95%, but sometimes it’s less than that. So it gives you an amazing way to start your website and shortened the amount of work that you need to do by about 70% to 80%. I think it’s fantastic and very exciting. 

Wix’s management is seeing that the majority of the company’s new users today have adopted at least one AI tool and this has been a positive for Wix’s business

In fact, the majority of new users today are using at least 1 AI tool on the web creation journey. This has resulted in reduced friction and enhanced the creation experience for our users as well as increased conversion and improve monetization. 

Wix’s management expects AI to be a driver of Wix’s growth in 2024 and beyond

We expect our AI technology to be a significant driver of growth in 2024 and beyond…

…Third, as Avishai mentioned, uptick of the milestone AI initiatives of 2023 has been incredible, and we expect to see ramping conversion and monetization benefits from our entire AI toolkit for both self-creators and partners this year…

…But then again, also 2025 will be much better than 2024. I think that the first reason is definitely the launching new products. At the end of the day, we are a technology, a product company, and this is how we drive our growth, mostly from new features, some new products. And this is what we did in the past, and we will continue also to do in the future. So definitely, it’s coming from the partners business with launching Studio. It was a great launch for us. We see the traction in the market. We see the demand. We see how our agencies use it. I think, as you know, we mentioned a few times about the number of new accounts with more than 50% are new. I think that it’s — for us, it’s a great proxy to the fact that we are going to see much more that it would be significantly the major growth driver for us in the next few years. The second one is everything that we’ve done with AI, we see a tremendous results out of it, which we believe that we will continue into the next year. And as you know, as always, the third one is about trying to optimize our pricing strategy. And this is what we’ve done in the past, we’ll continue to do in the future. [indiscernible] both mentioned like a fourth reason, which is the overall demand that we see on a macro basis.

Wix’s management has been driving the company to use AI for internal processes; the internal AI tools include an open internal AI development platform that everyone at Wix can contribute to, and a generative AI conversational assistant for product teams in Wix; the internal AI tools have also helped Wix to save costs and improve its gross margin

We also leverage AI to improve many of our internal processes at Wix, especially research and development velocity. This include an open internal AI deployment platform that allow for everyone at Wix to contribute to building AI-driven user features in tandem. We also have a Gen AI best platform dedicated to conversational assistant, which allow any product team at Wix to develop their own assistant tailored to specific user needs without having to start from scratch. With this platforms, we are able to develop and release high-quality AI-based features and tools efficiently and at scale…

…We ended 2023 with a total gross margin of 68%, an improvement of nearly 500 basis points compared to 2022. Throughout the year, we benefited from improved efficiencies in housing and infrastructure costs and optimization of support cost, partially aided by integrating AI into our workflows. Creative Subscriptions gross margin expanded to 82% in 2023. And Business Solutions gross margin grew to 29% for the full year as we continue to benefit from improving margin and new [indiscernible].

Wix’s management believes that there can be double-digit growth for the company’s self creators business in the long run partly because of AI products

And we mentioned that for self-creators in the long run, we believe that it will be a double-digit growth just because of that because it has the most effect of the macro environment which already started to see that it’s improving. But then again, also the new product and AI is 1 of the examples how we can bring increased conversion and also increase the growth of self-creators.

Zoom Video Communications (NASDAQ: ZM)

Zoom’s management launched Zoom AI Companion, a generative AI assistant, five months ago and it has been expanded to six Zoom products, all included at no extra cost to users; Zoom AI Companion now has 510,000 accounts enabled and has created 7.2 million meeting summaries

Zoom AI Companion, our generative AI assistant, empowers customers and employees with enhanced productivity, team effectiveness and skills. Since its launch only five months ago, we expanded AI Companion to six Zoom products, all included at no additional cost to licensed users…

…Zoom AI companion have grown tremendously in just 5 months with over 510,000 accounts enabled and 7.2 million meeting summaries created as of the close of FY ’24. 

Zoom’s future roadmap for AI is guided by driving customer value

Our future roadmap for AI is 100% guided by driving customer value. We are hard at work developing new AI capabilities to help customers achieve their unique business objectives and we’ll have more to share in a month at Enterprise Connect

Zoom’s Contact Center suite is an AI-first solution that includes AI Companion; Contact Center suite is winning in head-to-head competition against legacy incumbents

Our expanding Contact Center suite is a unified, AI-first solution that offers tremendous value to companies of all sizes seeking to strengthen customer relationships and deliver better outcomes. The base product includes AI Companion and our newly launched tiered pricing allows customers to add specialized CX capabilities such as AI Expert Assist, workforce management, quality management, virtual agent, and omnichannel support. Boosted by its expanding features, our contact center suite is beginning to win in head-to-head competition with the legacy incumbents.

Zoom Revenue Accelerator gained recognition from Forrester as an AI-powered tool for sales teams

Zoom Revenue Accelerator was recognized as a “Strong Performer” in The Forrester Wave™ in its first year of being covered – an amazing testament to its value as a powerful AI-enabled tool driving value for sales teams.

A financial services company, Convera, was attracted to Zoom’s products because of AI Companion

Finally, let me thank Convera, the World’s FX payments leader. Zoom Phone was the foundation of their Zoom engagement and from there they adopted the wider Zoom One platform in less than two years. Seeing the benefits of the tight integration of our products underpinned by AI Companion, they recently began to deeply leverage Zoom Team Chat in order to streamline their pre, during and post meeting communication all within the Zoom Platform.

Zoom is monetising AI on many fronts

We are monetizing AI on many fronts. You look at our Zoom AI Companion, right? So first of all, for our existing customers, because they all like the value we created, right, to generate meeting summary, meeting [indiscernible] and so on and so forth, because of that, we really do not — because customers, they’re also trying to reduce the cost. That’s why we do not charge the customers for those features. However, a lot of areas we can monetize. Take our AI Companion, for example. Enterprise customers, how to lever enterprise customer directionally, source data and also to build a tailored — the Zoom AI Companion for those customers, sort of like a customized Zoom AI Companion, we can monetize. And also look at all the services. Maybe I’ll just take Contact Center, for example. We are offering Zoom Virtual Agent, that’s one we can monetize. And recently, we announced 3 tiers of Zoom Contact Center product. The last one is per agent per month, we charge $149. The reason why, there are a few features. One of the feature is Zoom Expert Assist, right? All those features are empowered by AI features.

Zoom’s AI-powered Virtual Agent was deployed internally and has saved Zoom 400,000 agent hours per month, and handled more than 90% of inbound inquiries; Zoom’s management believes that Zoom’s AI features help improve companies’ agent-efficiency in contact centers 

Zoom, we — internally, we deployed our Virtual Agent. Guess what? Every month, we saved 400,000 agent hours. And more than 90% inbound inquiries can be done by our Virtual Agent driven by the AI technology…

…If you look at our Zoom Meeting product, right, customer discovered that Zoom AI Companion to help you with the meeting summary. And after they discovered that feature and they would like to adopt that, right? Contact Center, exact same thing. And like Virtual Agent, Zoom Expert Assist, right, leverage those AI features. Manager kind of knows what’s going on in real time and also — and the agent while can have the AI, to get a real-time in order base and any update about these customers. All those AI features can dramatically improve the agent efficiency, right? That’s the reason why it’s kind of — will not take a much longer time for those agents to realize the value of the AI features because it’s kind of very easy to use. And I think that in terms of adoption rate, I feel like Contact Center AI adoption rate even probably faster than the other — the core features, so — core services.

Zoom’s management is seeing that having AI features at no additional cost to customers helps the company to attract users to Zoom Team Chat

[Question] And for Eric, what’s causing customers to move over to the Zoom chat function and off your main competitor like Teams? Just further consolidation onto one platform? Or is it AI Companion playing a larger role here, especially as you guys are including it as opposed to $30, $35 a month?

[Answer] Customers, they see — using their chat solution, they want to use AI, right? I send you — James, I send you a message. I want to leverage AI, send a long message. However, if you use other solutions, sometimes, other solutions itself, even without AI, it’s not free, right? And in our case, not only do we have core functionalities, but also AI Companion built in also at no additional cost. I can use — for any users, customers, you already have a Meeting license, Zoom Team Chat already built in, right? All the core features, you can use the Zoom AI Companion in order to leverage AI — write a chat message and so on and so forth. It works so well at no additional cost. The total cost of ownership of the Zoom Team Chat is much better than any other team chat solutions.


 Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

Insights From Warren Buffett’s 2023 Shareholder’s Letter

There’s much to learn from Warren Buffett’s latest letter, including his thoughts on oil & gas companies and the electric utility industry.

One document I always look forward to reading around this time of the year is Warren Buffett’s annual Berkshire Hathaway shareholder’s letter. Over the weekend, Buffett published the 2023 edition. This letter is especially poignant because Buffett’s long-time right-hand man, the great Charlie Munger, passed away last November. Besides containing a touching eulogy from Buffett to Munger, the letter also had some fascinating insights from Buffett that I wish to document and share. 

Without further ado (emphases are Buffett’s)…

The actions of a wonderful partner 

Charlie never sought to take credit for his role as creator but instead let me take the bows and receive the accolades. In a way his relationship with me was part older brother, part loving father. Even when he knew he was right, he gave me the reins, and when I blundered he never – never – reminded me of my mistake. 

It’s hard to tell a good business from a bad one

Within capitalism, some businesses will flourish for a very long time while others will prove to be sinkholes. It’s harder than you would think to predict which will be the winners and losers. And those who tell you they know the answer are usually either self-delusional or snake-oil salesmen. 

Holding onto a great business – one that can deploy additional capital at high returns – for a long time is a recipe for building a great fortune

At Berkshire, we particularly favor the rare enterprise that can deploy additional capital at high returns in the future. Owning only one of these companies – and simply sitting tight – can deliver wealth almost beyond measure. Even heirs to such a holding can – ugh! – sometimes live a lifetime of leisure…

…You may be thinking that she put all of her money in Berkshire and then simply sat on it. But that’s not true. After starting a family in 1956, Bertie was active financially for 20 years: holding bonds, putting 1⁄3 of her funds in a publicly-held mutual fund and trading stocks with some frequency. Her potential remained unnoticed. 

Then, in 1980, when 46, and independent of any urgings from her brother, Bertie decided to make her move. Retaining only the mutual fund and Berkshire, she made no new trades during the next 43 years. During that period, she became very rich, even after making large philanthropic gifts (think nine figures). 

Berkshire’s size is now a heavy anchor on the company’s future growth rates

This combination of the two necessities I’ve described for acquiring businesses has for long been our goal in purchases and, for a while, we had an abundance of candidates to evaluate. If I missed one – and I missed plenty – another always came along.

Those days are long behind us; size did us in, though increased competition for purchases was also a factor.

Berkshire now has – by far – the largest GAAP net worth recorded by any American business. Record operating income and a strong stock market led to a yearend figure of $561 billion. The total GAAP net worth for the other 499 S&P companies – a who’s who of American business – was $8.9 trillion in 2022. (The 2023 number for the S&P has not yet been tallied but is unlikely to materially exceed $9.5 trillion.) 

By this measure, Berkshire now occupies nearly 6% of the universe in which it operates. Doubling our huge base is simply not possible within, say, a five-year period, particularly because we are highly averse to issuing shares (an act that immediately juices net worth)…
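Buffett's "nearly 6%" can be reproduced from the letter's own numbers; this sketch takes his upper estimate of $9.5 trillion for the other 499 S&P companies in 2023 and adds Berkshire back into the universe:

```python
# Reproducing the "nearly 6%" figure from the letter's numbers.
berkshire_net_worth = 561      # $ billions, Berkshire's year-end 2023 GAAP net worth
other_499_net_worth = 9_500    # $ billions, Buffett's upper estimate for the rest of the S&P 500

share = berkshire_net_worth / (berkshire_net_worth + other_499_net_worth)
print(f"{share:.1%}")  # 5.6%, i.e. "nearly 6%"
```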

…All in all, we have no possibility of eye-popping performance…

…Our Japanese purchases began on July 4, 2019. Given Berkshire’s present size, building positions through open-market purchases takes a lot of patience and an extended period of “friendly” prices. The process is like turning a battleship. That is an important disadvantage which we did not face in our early days at Berkshire.  

Is there a dearth of large, great businesses outside of the USA? 

There remain only a handful of companies in this country capable of truly moving the needle at Berkshire, and they have been endlessly picked over by us and by others. Some we can value; some we can’t. And, if we can, they have to be attractively priced. Outside the U.S., there are essentially no candidates that are meaningful options for capital deployment at Berkshire.

Markets can occasionally throw up massive bargains because of external shocks

Occasionally, markets and/or the economy will cause stocks and bonds of some large and fundamentally good businesses to be strikingly mispriced. Indeed, markets can – and will – unpredictably seize up or even vanish as they did for four months in 1914 and for a few days in 2001.

Stock market participants today exhibit even more gambling-like behaviour than in the past

Though the stock market is massively larger than it was in our early years, today’s active participants are neither more emotionally stable nor better taught than when I was in school. For whatever reasons, markets now exhibit far more casino-like behavior than they did when I was young. The casino now resides in many homes and daily tempts the occupants.

Stock buybacks are only sensible if they are done at a discount to business-value

All stock repurchases should be price-dependent. What is sensible at a discount to business-value becomes stupid if done at a premium.

Does Occidental Petroleum play a strategic role in the long-term economic security of the USA?

At yearend, Berkshire owned 27.8% of Occidental Petroleum’s common shares and also owned warrants that, for more than five years, give us the option to materially increase our ownership at a fixed price. Though we very much like our ownership, as well as the option, Berkshire has no interest in purchasing or managing Occidental. We particularly like its vast oil and gas holdings in the United States, as well as its leadership in carbon-capture initiatives, though the economic feasibility of this technique has yet to be proven. Both of these activities are very much in our country’s interest.

Not so long ago, the U.S. was woefully dependent on foreign oil, and carbon capture had no meaningful constituency. Indeed, in 1975, U.S. production was eight million barrels of oil-equivalent per day (“BOEPD”), a level far short of the country’s needs. From the favorable energy position that facilitated the U.S. mobilization in World War II, the country had retreated to become heavily dependent on foreign – potentially unstable – suppliers. Further declines in oil production were predicted along with future increases in usage. 

For a long time, the pessimism appeared to be correct, with production falling to five million BOEPD by 2007. Meanwhile, the U.S. government created a Strategic Petroleum Reserve (“SPR”) in 1975 to alleviate – though not come close to eliminating – this erosion of American self-sufficiency.

And then – Hallelujah! – shale economics became feasible in 2011, and our energy dependency ended. Now, U.S. production is more than 13 million BOEPD, and OPEC no longer has the upper hand. Occidental itself has annual U.S. oil production that each year comes close to matching the entire inventory of the SPR. Our country would be very – very – nervous today if domestic production had remained at five million BOEPD, and it found itself hugely dependent on non-U.S. sources. At that level, the SPR would have been emptied within months if foreign oil became unavailable.

Under Vicki Hollub’s leadership, Occidental is doing the right things for both its country and its owners. 

Nobody knows what the price of oil will do in the short term or the long term

No one knows what oil prices will do over the next month, year, or decade.

Nobody can predict the movement of major currencies

Neither Greg nor I believe we can forecast market prices of major currencies. We also don’t believe we can hire anyone with this ability. Therefore, Berkshire has financed most of its Japanese position with the proceeds from ¥1.3 trillion of bonds.

Rail is a very cost-efficient way to move products around America, and railroads should continue to be an important asset for the USA for a long time to come

Rail is essential to America’s economic future. It is clearly the most efficient way – measured by cost, fuel usage and carbon intensity – of moving heavy materials to distant destinations. Trucking wins for short hauls, but many goods that Americans need must travel to customers many hundreds or even several thousands of miles away…

…A century from now, BNSF will continue to be a major asset of the country and of Berkshire. You can count on that.

Railroad companies gobble up capital, such that their owners have to spend much more on annual maintenance capital expenditure than depreciation – but this trait allowed Berkshire to acquire BNSF for far less than its replacement value

BNSF is the largest of six major rail systems that blanket North America. Our railroad carries its 23,759 miles of main track, 99 tunnels, 13,495 bridges, 7,521 locomotives and assorted other fixed assets at $70 billion on its balance sheet. But my guess is that it would cost at least $500 billion to replicate those assets and decades to complete the job.

BNSF must annually spend more than its depreciation charge to simply maintain its present level of business. This reality is bad for owners, whatever the industry in which they have invested, but it is particularly disadvantageous in capital-intensive industries.

At BNSF, the outlays in excess of GAAP depreciation charges since our purchase 14 years ago have totaled a staggering $22 billion or more than $1.5 billion annually. Ouch! That sort of gap means BNSF dividends paid to Berkshire, its owner, will regularly fall considerably short of BNSF’s reported earnings unless we regularly increase the railroad’s debt. And that we do not intend to do.

Consequently, Berkshire is receiving an acceptable return on its purchase price, though less than it might appear, and also a pittance on the replacement value of the property. That’s no surprise to me or Berkshire’s board of directors. It explains why we could buy BNSF in 2010 at a small fraction of its replacement value.

Railroad companies are having trouble with hiring because of tough working conditions

An evolving problem is that a growing percentage of Americans are not looking for the difficult, and often lonely, employment conditions inherent in some rail operations. Engineers must deal with the fact that among an American population of 335 million, some forlorn or mentally-disturbed Americans are going to elect suicide by lying in front of a 100-car, extraordinarily heavy train that can’t be stopped in less than a mile or more. Would you like to be the helpless engineer? This trauma happens about once a day in North America; it is far more common in Europe and will always be with us.

American railroad companies are at times at the mercy of the US government when it comes to employees’ wages, and they are also required to carry products they would rather not

Wage negotiations in the rail industry can end up in the hands of the President and Congress. Additionally, American railroads are required to carry many dangerous products every day that the industry would much rather avoid. The words “common carrier” define railroad responsibilities.

Last year BNSF’s earnings declined more than I expected, as revenues fell. Though fuel costs also fell, wage increases, promulgated in Washington, were far beyond the country’s inflation goals. This differential may recur in future negotiations.

Has the electric utility industry in the USA become uninvestable because of a change in the authorities’ stance toward electric utilities?

For more than a century, electric utilities raised huge sums to finance their growth through a state-by-state promise of a fixed return on equity (sometimes with a small bonus for superior performance). With this approach, massive investments were made for capacity that would likely be required a few years down the road. That forward-looking regulation reflected the reality that utilities build generating and transmission assets that often take many years to construct. BHE’s extensive multi-state transmission project in the West was initiated in 2006 and remains some years from completion. Eventually, it will serve 10 states comprising 30% of the acreage in the continental United States. 

With this model employed by both private and public-power systems, the lights stayed on, even if population growth or industrial demand exceeded expectations. The “margin of safety” approach seemed sensible to regulators, investors and the public. Now, the fixed-but-satisfactory-return pact has been broken in a few states, and investors are becoming apprehensive that such ruptures may spread. Climate change adds to their worries. Underground transmission may be required but who, a few decades ago, wanted to pay the staggering costs for such construction?

At Berkshire, we have made a best estimate for the amount of losses that have occurred. These costs arose from forest fires, whose frequency and intensity have increased – and will likely continue to increase – if convective storms become more frequent.

It will be many years until we know the final tally from BHE’s forest-fire losses and can intelligently make decisions about the desirability of future investments in vulnerable western states. It remains to be seen whether the regulatory environment will change elsewhere.

Other electric utilities may face survival problems resembling those of Pacific Gas and Electric and Hawaiian Electric. A confiscatory resolution of our present problems would obviously be a negative for BHE, but both that company and Berkshire itself are structured to survive negative surprises. We regularly get these in our insurance business, where our basic product is risk assumption, and they will occur elsewhere. Berkshire can sustain financial surprises but we will not knowingly throw good money after bad.

Whatever the case at Berkshire, the final result for the utility industry may be ominous: Certain utilities might no longer attract the savings of American citizens and will be forced to adopt the public-power model. Nebraska made this choice in the 1930s and there are many public-power operations throughout the country. Eventually, voters, taxpayers and users will decide which model they prefer. 

When the dust settles, America’s power needs and the consequent capital expenditure will be staggering. I did not anticipate or even consider the adverse developments in regulatory returns and, along with Berkshire’s two partners at BHE, I made a costly mistake in not doing so. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

How Innovation Happens

Innovation can appear from the most unexpected places, take unpredictable paths, or occur when supporting technologies improve over time.

There are a myriad of important political, social, economic, and healthcare issues that are plaguing our globe today. But Jeremy and I are still long-term optimistic on the stock market.

This is because we still see so much potential in humanity. There are nearly 8.1 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc but we have faith that humanity can clean it up. To us, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. We will remain long-term optimistic on stocks so long as we continue to have this faith.

There may be times in the future when it seems that mankind’s collective ability to innovate is faltering (things are booming now with the AI rush). But here are three stories I learnt recently that would help me – and I hope you, too – keep the faith.

The first story is from Morgan Housel’s latest book Same As Ever. In it, he wrote: 

“Author Safi Bahcall notes that Polaroid film was discovered when sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those crystals turned out to be the best polarizers ever discovered. Who predicts that? Who sees that coming? Nobody. Absolutely nobody.”

What the quinine and polarizers story shows is that the root of innovative ideas can show up completely unexpectedly. This brings me to the second story, which is also from Same As Ever. This time, it is Housel’s recounting of how the invention of planes moved in an unpredictable path that led to the invention of nuclear power plants (nuclear power is a zero-emission, clean energy source, so it could play a really important role in society’s sustainable energy efforts), and how a 1960s invention linking computers to manage Cold War secrets unpredictably led to the photo-sharing social app Instagram:

“When the airplane came into practical use in the early 1900s, one of the first tasks was trying to foresee what benefits would come from it. A few obvious ones were mail delivery and sky racing.

No one predicted nuclear power plants. But they wouldn’t have been possible without the plane. Without the plane we wouldn’t have had the aerial bomb. Without the aerial bomb we wouldn’t have had the nuclear bomb. And without the nuclear bomb we wouldn’t have discovered the peaceful use of nuclear power. Same thing today. Google Maps, TurboTax, and Instagram wouldn’t be possible without ARPANET, a 1960s Department of Defense project linking computers to manage Cold War secrets, which became the foundation for the internet. That’s how you go from the threat of nuclear war to filing your taxes from your couch—a link that was unthinkable fifty years ago, but there it is.”

This idea of one innovation leading to another brings me to my third story. There was a breakthrough in the healthcare industry in November 2023 when the UK’s health regulator approved a drug named Casgevy – developed by CRISPR Therapeutics and Vertex Pharmaceuticals – for the treatment of the blood disorders known as sickle cell disease and beta thalassaemia. Casgevy’s green light is groundbreaking because it is the first drug in the world to be approved that is based on the CRISPR (clustered regularly interspaced short palindromic repeats) gene-editing technique. A few weeks after the UK’s decision, Casgevy became the first gene-editing treatment available in the USA for sickle cell disease (the use of Casgevy for beta thalassaemia in the USA is currently still being studied). Casgevy is a huge upgrade for sickle cell patients over the current way the condition is managed. Here’s Sarah Zhang, writing at The Atlantic in November 2023:

“When Victoria Gray was still a baby, she started howling so inconsolably during a bath that she was rushed to the emergency room. The diagnosis was sickle-cell disease, a genetic condition that causes bouts of excruciating pain—“worse than a broken leg, worse than childbirth,” one doctor told me. Like lightning crackling in her body is how Gray, now 38, has described the pain. For most of her life, she lived in fear that it could strike at any moment, forcing her to drop everything to rush, once again, to the hospital.

After a particularly long and debilitating hospitalization in college, Gray was so weak that she had to relearn how to stand, how to use a spoon. She dropped out of school. She gave up on her dream of becoming a nurse.

Four years ago, she joined a groundbreaking clinical trial that would change her life. She became the first sickle-cell patient to be treated with the gene-editing technology CRISPR—and one of the first humans to be treated with CRISPR, period. CRISPR at that point had been hugely hyped, but had largely been used only to tinker with cells in a lab. When Gray got her experimental infusion, scientists did not know whether it would cure her disease or go terribly awry inside her. The therapy worked—better than anyone dared to hope. With her gene-edited cells, Gray now lives virtually symptom-free. Twenty-nine of 30 eligible patients in the trial went from multiple pain crises every year to zero in 12 months following treatment.

The results are so astounding that this therapy, from Vertex Pharmaceuticals and CRISPR Therapeutics, became the first CRISPR medicine ever approved, with U.K. regulators giving the green light earlier this month; the FDA appears prepared to follow suit in the next two weeks.” 

The manufacturing technologies behind Casgevy include electroporation, where an electric field is used to increase the permeability of a cell’s membrane. This enables molecules, such as genetic material and proteins, to be introduced in a cell for the purposes of gene editing. According to an expert-call on electroporation that I reviewed, the technology has been around for over four decades, but only started gaining steam in recent years with the decline in genetic sequencing costs; without affordable genetic sequencing, it was expensive to know if a gene editing process done via electroporation was successful. The relentless work of Illumina has played a huge role in lowering genetic sequencing costs over time.

This chain shows how one innovation (cheaper genetic sequencing) supported another in a related field (the viability of electroporation), which in turn enabled yet another (the creation of gene-editing therapies).

The three stories I just shared highlight the different ways that innovation can happen. It can appear from the most unexpected places (quinine and polarizers); it can take unpredictable paths (from planes to nuclear power plants); and it can occur when supporting technologies improve over time (the development of Casgevy). What they signify is that we shouldn’t lose hope in mankind’s creative prowess when it appears that nothing new of significance has been built for a while. Sometimes, what’s needed is just time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Is Bitcoin a Speculative or Productive Asset?

There are productive income-generating assets, and then there are speculative assets.

The Bitcoin hype train is back! Bitcoin halving, Bitcoin ETF approval and the prospect of lower interest rates have put Bitcoin back at the center of attention.

But before jumping on the bandwagon, it’s worth asking – is Bitcoin a productive or a speculative asset?

Productive assets are able to generate income for the owner such that we don’t mind holding the asset forever. Speculative assets can’t.

To profit from speculative assets, investors need to find a buyer who will purchase the asset at a higher price, which is known as the “greater fool theory”. 

The greater fool theory suggests that we can make money as long as someone else comes along and buys the asset for a higher price despite the asset producing no income to the owner.

This may be profitable for a while, but relying on this method of making money is pure speculation and the party will end when the world runs out of “greater fools”.

With this in mind, let’s see what assets are productive and what are just speculative assets.

Bitcoin

Bitcoin does not produce income for the owner and hence the owner of the Bitcoin can only make a profit by selling it to someone else at a higher price.

By definition, this is relying on the greater fool theory and is speculation. 

I judge an asset by whether I would be willing to hold on to it forever. In the case of Bitcoin, holding on to it does you no good and you can only profit if you sell it.

Bitcoin is a clear case of a speculative asset.

Art

I’ve heard people comment that Bitcoin holds value because of its scarcity and hence is akin to rare art which can also appreciate in price. 

But the fact is, art is a speculative asset too. An artwork yields no income for its owner, so holding it forever generates no return; the owner relies on selling it at a higher price to make money. Most art is therefore a speculative asset.

However, occasionally, rare art may bring some form of cash flow to the owner if the art piece can be rented to a display centre or museum. If that’s the case, then rare art pieces can be considered an investment that generates income.

At least for art, the artwork can be considered a beautiful asset which some people appreciate and may pay to see or buy as a decorative ornament.

Real estate

Real estate generates income for the owner in the form of rental income. Rental provides real estate owners with income that eventually offsets the amount paid for the asset.

Real estate investors don’t need to sell the property to realise an investment gain. Rent out the asset long enough and they’ve made enough rental income to offset the property price.

Real estate is clearly a productive asset.
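The payback idea above can be sketched with a back-of-envelope calculation (the figures below are hypothetical):

```python
# Back-of-envelope payback period for a rental property.
# Hypothetical figures; ignores discounting, vacancies, and maintenance costs.

def payback_years(purchase_price, annual_net_rent):
    """Years of net rent needed to recoup the purchase price."""
    return purchase_price / annual_net_rent

# A $500,000 property collecting $25,000 of net rent a year (a 5% net yield):
print(payback_years(500_000, 25_000))  # 20.0 years to recoup via rent alone
```

After the payback period, any further rent and the property itself are pure gain for the owner, with no sale required.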

Stocks

Owning a company’s stock means having part ownership of the business. It entitles you to a share of the profits through dividends.

As such, stock investors do not need to rely on price performance but can earn a good return simply by collecting dividends paid from profits of the company.

However, we cannot paint all stocks with the same brush. 

There are occasionally stocks that trade at such high valuations that people who buy in at that price will never make back their money from dividends. The only way to profit is by selling it to a “greater fool” at a higher price. 

Stocks in this situation hence move into the “speculative asset” category.
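One rough way to sanity-check whether a price can ever be recouped from dividends is a simple dividend discount model (the Gordon growth formula); all numbers below are hypothetical:

```python
# Minimal dividend discount model (Gordon growth) with hypothetical inputs.

def gordon_value(next_dividend, discount_rate, growth_rate):
    """Present value of a dividend stream growing at a constant rate forever."""
    assert discount_rate > growth_rate, "model only valid when r > g"
    return next_dividend / (discount_rate - growth_rate)

# A stock expected to pay $2 next year, growing 4% a year, discounted at 9%:
fair_value = gordon_value(2.0, 0.09, 0.04)
print(fair_value)  # 40.0

# Paying, say, $200 for this stream means dividends alone can never justify
# the price; the buyer is implicitly counting on a "greater fool" to sell to.
```

The model is crude, but it makes the gap between a dividend-justified price and a speculation-justified price explicit.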

Bonds

Bonds are a “loan” that you make to a company or government body. In exchange, the “borrower” will pay you interest plus return the full loan amount at the end of the “loan period”.

Bonds provide the investor with a regular income stream and the investor can also get the principal back at the maturity date, assuming no default.

Given the predictable income stream, bonds are a productive asset that produce cash flows to the investor.
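Because a bond’s cash flows are contractual, its value can be computed directly by discounting them. A minimal sketch with hypothetical figures:

```python
# Sketch: pricing a bond from its contractual cash flows (hypothetical figures).

def bond_price(face, coupon_rate, years, yield_rate):
    """Price = present value of coupons + present value of principal at maturity."""
    coupon = face * coupon_rate
    coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

# A $1,000 bond paying 5% annually for 10 years, discounted at a 5% yield:
print(round(bond_price(1_000, 0.05, 10, 0.05), 2))  # 1000.0 — priced at par
```

When the required yield rises above the coupon rate, the same cash flows are worth less than par, which is why bond prices fall as rates rise.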

Stock derivatives

Stock derivatives are financial assets that derive their value from stock prices. These can be options, futures, warrants etc.

Derivatives such as options can provide the investor with the option to purchase a stock at a particular price before a given date.

However, as stock derivatives have a predetermined expiry date, they are highly dependent on relatively short-term stock prices and hence are speculative assets.

The difference between stocks and stock derivatives is that a stock pays you dividends whereas a derivative does not. On top of that, the derivative has an expiry date which means owners of derivatives rely on short term price movements of the stock to make a profit.

The Bottom line

Don’t get me wrong. I’m not saying investing in Bitcoin, art or derivatives cannot be profitable. In fact, investing in speculative assets has made some people very wealthy. That’s because speculative assets can keep appreciating due to the sheer number of people who believe in them.

For instance, the narrative around bitcoin and the amount of money flowing into cryptocurrencies at the moment have caused bitcoin price to rise substantially in the last decade or so, minting billionaires in the process.

But while it can be profitable, speculation is a difficult game to play and depends on the narrative surrounding the asset. In addition, since the asset is not backed by cash flows, the price can come crashing down and owners are left holding a “non-productive” asset that produces no cash flows.

Personally, this is a game I’d rather not play. I prefer to invest in productive assets that produce cash flows for the owner so that I don’t have to rely on narratives or a “greater fool” to profit.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not currently have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

Don’t Judge Your Investments By Their Stock Prices

If you find yourself celebrating (or crying) just because of short term price movements, read this…

Back in December 2020, I wrote an article on Moderna and BioNTech. The two companies were the front runners in the COVID vaccine race and their vaccines were on the brink of FDA approval.

In the article, I concluded that their stock prices had already priced in potential profits from their COVID vaccines. When the article was published, Moderna’s stock price was around US$152 and BioNTech’s was at US$120.

Subsequently, both Moderna and BioNTech’s stock prices continued to rise, reaching a peak of around US$449 and US$389, respectively, by mid-2021. At this point, my conclusion in the article seemed wildly inaccurate. But fast forward to today and Moderna and BioNTech’s stock prices have fallen to just US$107 and US$104, respectively. Both companies’ shares now trade around their respective prices back when I wrote my December 2020 article.

Stock prices fluctuate too much

The point of this article you’re reading now is not to say that I was “right”. On the contrary, the fact that both companies’ stock prices are back around their December 2020 levels does not, by itself, make my article right.

As Moderna’s and BioNTech’s stock prices have shown, stock prices fluctuate wildly and often do not accurately reflect companies’ intrinsic values. As a long-term stock investor, I don’t want to fool myself into thinking that I was right simply because a stock’s price went up or down. What really matters to a long-term investor is whether a company can return dividends over the lifetime of its business and whether that return is more than what the investor paid for the stock.

Judging an investor’s long-term performance therefore requires patience. It takes decades – not months or years – to judge investment performance. Strictly speaking, we can only judge the investment performance of a stock after the entire lifecycle of the company is complete, which may stretch for hundreds of years.

Even if you sold for a profit

I’ll go a step further and say that even if we have sold a stock for a profit, it does not mean we were right. Yes, we may have made a profit, but it could be due to the buyer on the other end of the deal overpaying for the stock – we were just lucky that they mispriced the stock. 

You don’t have to look much further than Moderna and BioNTech’s stock prices in 2021. An investor could have bought in December 2020 and sold in mid-2021 for a huge gain. This does not mean that the investor had bought at a good price. It could simply mean that the mid-2021 price was overvalued.

Ultimately, I don’t judge a stock’s investment performance based on the price at the point of sale. What matters is the profit/cash flow that the company generates and dividends paid to shareholders. 

To me, the share price is too volatile and is just short term noise that fluctuates daily.

This reminds me of a quote from the movie The Wolf of Wall Street. Matthew McConaughey’s character said something funny yet somewhat true about stock prices: “It’s a… Fugazi, Fogazi. It’s a wazi, it’s a woozy. It’s fairy dust. It doesn’t exist, it’s never landed, it is not matter, it’s not on the elemental chart. It’s not f*ing real.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Companies Need to Stop Doing These Stupid Things

Stock-based compensation, EBITDA, and buybacks are often conducted poorly by companies.

We see companies do stupid things all the time that erode shareholder value. Here are three that really irk me.

Targeting stock-based compensation as a percent of revenue

Many companies don’t seem to understand stock-based compensation. 

Twilio is one such example. In an investor presentation last year, Twilio said it was aiming to reduce stock-based compensation as a percent of revenue.

Stock-based compensation on the income statement is recorded based on the share price at the time of grant. Using a percent of revenue as a stock-based compensation measure just shows how little management understands it.

Stock-based compensation on the income statement can drop simply because share prices have fallen. So lower stock-based compensation on the income statement does not necessarily correlate with a lower number of shares issued. 

In fact, if share prices drop drastically – as was seen with tech stocks in 2022 – stock-based compensation recorded on the income statement may end up being lower, but the absolute number of shares vested could be even more than before. This can lead to even larger dilution for shareholders.
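A toy example of this effect (all numbers are hypothetical):

```python
# Sketch of why a falling dollar SBC figure can mask rising dilution.
# Hypothetical numbers; SBC expense is recorded at the grant-date share price.

def shares_granted(grant_value_dollars, share_price):
    """Shares issued for a grant of a fixed dollar value at the grant-date price."""
    return grant_value_dollars / share_price

# Year 1: $100m of SBC granted while the stock trades at $100/share.
# Year 2: the SBC expense is "cut" to $80m, but the share price has fallen to $40.
year1 = shares_granted(100e6, 100)  # 1.0 million shares
year2 = shares_granted(80e6, 40)    # 2.0 million shares

print(year1, year2)  # the dollar expense fell 20%, yet share issuance doubled
```

The dollar figure on the income statement improved, while the dilution that long-term shareholders actually bear got worse.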

Twilio is not the only company that does not understand stock-based compensation. More recently, DocuSign also suggested that it is targeting stock-based compensation based on a percent of revenue, which shows a lack of understanding of the potential dilutive effects of this form of expense.

Instead of focusing on the accounting “dollars” of stock-based compensation, companies should focus on the actual number of shares that they issue.

Focusing on EBITDA

Too many companies make financial targets based on EBITDA.

EBITDA stands for earnings before interest, taxes, depreciation and amortisation. Although I appreciate the use of EBITDA in certain cases, it is usually not the right metric for companies to focus on. 

In particular, EBITDA ignores depreciation, which is a real cost, especially for businesses that require ongoing maintenance capital expenditure. Capital expenditure is cash spent today that is not immediately recorded as an expense on the income statement; instead, it is recorded as an asset that is depreciated over time. Ignoring that depreciation is akin to completely ignoring the cash outlay made in prior years.
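A toy comparison of EBITDA against a rough owner-earnings figure for a capital-hungry business can make the gap concrete (hypothetical numbers; “owner earnings” here is a simplified approximation, not a formal accounting measure):

```python
# Hypothetical sketch: EBITDA vs. a rough owner-earnings figure.

def ebitda(operating_income, depreciation_amortisation):
    # EBITDA adds non-cash D&A back to operating income.
    return operating_income + depreciation_amortisation

def rough_owner_earnings(operating_income, taxes, maintenance_capex,
                         depreciation_amortisation):
    # Add back non-cash D&A, then subtract taxes and the real cash cost
    # of simply maintaining the current level of business.
    return operating_income - taxes + depreciation_amortisation - maintenance_capex

# A capital-hungry business: $100m operating income, $80m D&A,
# $20m of taxes, but $120m of annual maintenance capex.
print(ebitda(100, 80))                         # 180 — looks healthy
print(rough_owner_earnings(100, 20, 120, 80))  # 40 — the cash reality
```

When maintenance capex exceeds depreciation, as in capital-intensive industries, EBITDA overstates the cash available to owners by an especially wide margin.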

Management teams are either being dishonest by focusing on EBITDA or truly do not appreciate the pitfalls of focusing on maximising EBITDA instead of actual cash flow per share. In other words, they’re either incompetent or dishonest. Either way, it’s bad.

Framing stock buybacks as returning cash to shareholders

Too many companies frame buybacks as a way to return cash to shareholders. However, if we are long-term shareholders who do not plan to sell our shares, we don’t get any cash when a company buys back stock.

Don’t get me wrong.

I think buying back stock when shares are relatively cheap is a great use of capital. However, saying that buybacks return cash to shareholders is not entirely correct. Only a small group of shareholders – those who are selling – receive that cash.

Instead, companies should call buybacks what they really are: a form of investment. Buybacks reduce a company’s shares outstanding. This results in future profits and dividend payouts being split among fewer shares, which hopefully leads to a higher dividend per share for long-term shareholders.
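The per-share arithmetic can be sketched as follows (hypothetical numbers):

```python
# Sketch: buybacks as an investment in future dividends per share.
# Hypothetical numbers.

def dividend_per_share(total_dividend, shares_outstanding):
    return total_dividend / shares_outstanding

shares = 1_000_000
payout = 2_000_000  # $2m paid out to shareholders annually

before = dividend_per_share(payout, shares)            # $2.00 per share
after = dividend_per_share(payout, shares - 100_000)   # 10% fewer shares

print(before, round(after, 3))  # same total payout, higher per-share dividend
```

The continuing shareholder receives no cash from the buyback itself; the benefit shows up later, as a larger slice of every future payout.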

Framing buybacks as a form of returning cash to shareholders shortchanges the truly long-term shareholders, who in reality have not seen any cash returned to them.

If a company mistakenly thinks that buybacks are a form of returning cash to shareholders, it may be misled into buying back stock periodically without regard to the share price. Doing this can be harmful to shareholders.

On the other hand, if the company correctly realises that buybacks are instead a form of investment, then the share price will matter to them and they will be more careful about buying back shares at a good price.

Bottom line

Companies do stupid things all the time.

Although I can give them the benefit of the doubt for many of the stupid things they do, I draw the line when a company cannot grasp simple accounting concepts or makes silly statements.

It may seem trivial, but making silly statements shows a lack of understanding of key concepts that mould a company’s capital allocation decisions.

Executives are paid good money to make good decisions and I expect a basic level of understanding from the people who make key decisions on shareholders’ behalf.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Docusign. Holdings are subject to change at any time.

Ben Graham’s Q&A

Ben Graham appeared in a news clip in the 1950s, answering questions and assuaging people’s worries about the stock market.

I recently came across an old US TV news clip from the 1950s that featured Ben Graham, the mentor of Warren Buffett, and the author of the highly influential investing texts, The Intelligent Investor and Security Analysis. In the clip, Graham was leading a seminar at Columbia University together with Dean Courtney Brown. The two men gave a short speech and answered questions from the crowd. 

The news clip also featured a short interview of Senator William Fulbright, who, at the time, was commissioning a study of the US stock market after stock prices had advanced near the height of the 1929 peak, just before the Great Depression of the 1930s reared its ugly head. (The study was conducted and published in 1955.)

I was fascinated by the news clip, because Fulbright and the people asking Graham and Brown questions had worries about the stock market that are similar to today’s. For example, Fulbright was concerned that stock prices were too high and might collapse drastically yet again, similar to the great crash that happened during the Great Depression. In another example, the questioner at the 21:09 mark was concerned about inflation that was driven by “deficits spending”, “easy money policy”, “increased union wages”, “increased minimum wage”, and a “rogue [spending] programme of US$101 billion which the government has just announced” – these are worries from the 1950s that would absolutely fit in today. And importantly, the Dow Jones Industrial Average (I’m using the Dow because it is the index referenced in the news clip) is up from around 400 points in 1955 to over 37,000 currently.

I decided to create a transcript of the news clip for my own reference in the future, and thought of sharing it with the possibility that it might be useful for any of you reading this. Enjoy!

Transcript

TV presenter (10:00): There is no shortage of experts on the market. As for us, we’re barely able to tell the difference between a bull and a bear. So we sat in on part of a seminar at The Graduate School of Business at Columbia University. After all, it’s older than the stock exchange, and we thought professors familiar with the language of the street might treat the market with detachment. Dean Courtney Brown and Professor Benjamin Graham were instructing future brokers and customers’ men. Here is See It Now’s short course in the market.

Courtney Brown (10:36): First let me give a caution. I hardly need give it to a group of informed students such as you. No one knows precisely why the market behaves as it behaves, either in retrospect, or in prospect. The best we can do as you well know is express informed judgments. But it is important that those judgments be informed. We do know that there has been a substantial rise. That rise has been going on for a number of years, particularly since the middle of 1953. And we do know that the rate of that rise has been very rapid, uncomfortably like that of the 1928-29 period. It has resulted in a lot of comparisons being made in the press. Moreover the present level of stock prices, as measured by the Dow Jones Averages, is about equal to, indeed a little above the peaks of 1929.

A number of explanations have been advanced regarding the stock market’s rise that suggests it may reflect a return to inflationary conditions. This doesn’t seem to me to be very convincing. First because there is no evidence of inflation in the behaviour of commodity prices, either at the wholesale or at the retail level, and there hasn’t been over the past year and a half – extraordinary stability in the behaviour of both indexes. There is so much surplus capacity around in almost every direction that it’s hard to conceive of a strong inflationary trend reasserting itself at this time.

Still another explanation is that the stock market has gone up because there has been a return of that kind of speculative fever that has from time to time in the past gripped the country – the Florida land boom, the 1929 stock boom. They’ve occurred in history as you know, all the way back to the Tulip speculations in Holland. I suspect there’s a certain element of truth in this one. However, it doesn’t seem to me that it gives us too much concern because there has been no feeding of this fever by the injection of credit. I think it is important for us to observe that the amount of brokers’ loans – loans made to brokers for the financing of securities of their customers that have been bought on margin – are less than US$2 billion at present. In 1929, they were in excess of US$8.5 billion and there is now a larger volume of securities on the stock exchange. Now gentlemen, Professor Graham will pick up the story at that point.

Ben Graham (13:37): One of the comparisons that is interesting is one not between 1929, which is so long ago, but 1950, which is only a few years ago. It would be very proper to ask why prices are twice as much as they were then when the earnings of companies both in ‘54 and probably in 1955 are less than they were in 1950. Now that is an extraordinary difference and the explanation cannot be found in any mathematics but it has to be found in investor psychology.

Ben Graham (14:10): You can have an extraordinary difference in the price level merely because not only speculators but investors themselves are looking at the situation through rose-coloured glasses rather than dark-blue glasses. It may well be true that the underlying psychology of the American people has not changed so much and that what the American people have been waiting for for many years has been an excuse for going back to the speculative attitudes which used to characterize them from time to time. Now if that is so, then the present situation can carry a very large degree of danger to people who are now becoming interested in common stocks for the first time. It would seem, if history counts for anything, that the stock market is much more likely than not to advance to a point of real danger.

Unknown questioner (15:03): You said that stock prices now are not too high but that you fear they will go higher. Well then are you recommending the decline?

Courtney Brown (15:09): Well here I’ll defend you on that [laughs].

Ben Graham (15:10): [Laughs] Yeah, go right ahead.

Courtney Brown (15:17): Those who have watched the security market’s behaviour over the years have become more and more impressed with the fact that stocks always go too high on the upside and tend to go too low on the downside. The swings in other words are always more dramatic and more – the amplitude of change is greater than might normally be justified by an analytical appraisal of the values that are represented there. I think what Professor Graham had to say was that his analysis of a series of underlying values would indicate that the stock prices are just about in line with where they might properly be.

However, from experience that would be the least likely thing to happen that stocks would just stabilise right here. Now if it’s the least likely thing to happen, and you have to select a probability between going up further or down further because of the strong momentum that they have had, I think I would be inclined to agree with him [referring to Graham] that the more probable direction would be towards a somewhat higher level.

Unknown questioner (16:24): When stockholders believed the market was too high, they switched from stocks to cash. Now, many people feel that due to capital gains tax they are not free to act. They are, what you might say, locked in. What effect does this have on the stock market in general?

Courtney Brown (16:41): No question about the fact that it does discourage some sales that might otherwise be made, because one selling stocks and trying to replace them would have to replace them at substantially lower prices to come out even after paying the capital gains tax. However, that’s not the only reason people are reluctant to sell stocks and buy bonds. Stocks are still yielding about 4.5% on the basis of current dividend payments whereas bonds of prime quality are closer to 3%. Here again we find a contrast with the situation in 1929, when stocks were yielding about 3.5% and prime bonds closer to 5%.

Unknown questioner (17:24): In addition to raising margin requirements, should the federal government take other measures to check a speculative boom in the stock market, and which method is the better?

Ben Graham (17:34): My own opinion would be that the Federal Reserve should first exhaust the possibilities of raising the margin requirements to 100% and then consider very seriously before they imposed other sanctions if needed.

Unknown questioner (17:47): What is the significance of the broadening public participation in stock purchasing and ownership? 

Courtney Brown (17:58): There are probably two elements there that are important. One, the broadening participation of the public in stock purchases is one measure of the degree of speculative fever that we were talking about before. However, subject to that being controlled – and I believe that it can be controlled as Professor Graham has indicated. But over and above that, there is a broad social significance to that, it seems to me. What it means in essential terms is that the ownership of American industry is being more widely dispersed among more and more people. This has very favourable repercussions in terms of our political and social life.

Unknown questioner (18:45): This question concerns the so-called Wall Street professional. Are Wall Street professionals usually more accurate in their near or long-term market trends – forecasts of stock market trends? If not, why not?

Ben Graham (19:03): Did you say that they are more often wrong than right on their forecasts?

Unknown questioner (19:08): What I mean is are they more accurate in the shorter term than the long-term forecasts?

Ben Graham (19:11): Well we’ve been following that interesting question for a generation or more and I must say frankly that our studies indicate that you have your choice between tossing coins and taking the consensus of expert opinion. And the results are just about the same in each case. Your question as to why they are not more dependable – it’s a very good one and interesting one. My own explanation for that is this: That everybody in Wall Street is so smart, that their brilliance offsets each other, and that whatever they know is already reflected in the level of stock prices pretty much. And consequently what happens in the future represents what they don’t know.

Unknown questioner (19:56): Would you kindly comment on an item appearing in the newspapers to the effect that while 45% of buying today is on margin, the money borrowed is equal to only 1% of the value of listed stock?

Courtney Brown (20:12): The amount of trading on the stock exchange is a very small part of the total value of all the securities that are listed thereon. And when you say that the total amount of borrowing on margins financed by brokerage loans is only 1% of the value, it is a reconcilable figure. You can’t reconcile it unless you have the detailed data with you, but it isn’t incompatible in any way.

Ben Graham (20:34): I might add a point on that, Dean Brown, and that is the slow increase in brokers’ loans as compared with 45% margin trading would indicate that a good deal of the margin trading is between people who are taking in each other’s washing – that is, the margin buyers are buying from sellers who were previously on margin. And that’s why the rate of growth of brokers’ loans is so much smaller now than it had been in the 1920s, when I think a good deal of the selling had come from long-term owners and really smart people who were selling out to the suckers.

Unknown questioner (21:09): I want to raise a point of argument here on this question of inflation. Seems to me that you’re correct in stating that there’s been no inflation in ‘54 but there also appears to be several long-term inflationary points in the economy today. These I think are the deficits spending that’s supposed to be continued by the government, the easy money policy which is expected to continue, the question of increased union wages, the talk about increased minimum wage, and the talk about a guaranteed wage. All these and on top of this, the rogue program of US$101 billion which the government has just announced. These seem to me to be long-term inflationary things in the US economy and I wish you’d talk about these.

Courtney Brown (21:57): That’s a question that has a good many angles on it. Perhaps we both better try it. Prof Graham, why don’t you take the first crack?

Ben Graham (22:00): I think there are two answers to that in my mind. The first is that acknowledging that there are inflationary elements in governmental policy as it’s now being carried out, it may be argued that those are just necessary to keep things on an even keel because without them, we might have some inbuilt deflationary factors in the way business operates through increased productivity capacity and so forth.

Courtney Brown (22:27): I’ve been impressed with the possibility of labour costs as an inflationary factor. But a rise in wages does not necessarily mean a rise in labour costs. It depends upon the relationship of the rate of change in wages and the rate of change in output per man-hour, or productivity. Now if wages are related to productivity, as you know they were in the General Motors contract, there is no necessary inflationary consequence to be anticipated. However, apart from that, it’s entirely possible that if wages go ahead faster than changes in productivity there could be a serious inflationary factor.

Unknown questioner (23:13): On the basis of your recent answer with regard to the psychological impact of the present condition of the market on the small investor, do you discount the entire theory of dollar averaging? 

Ben Graham (23:30): I think there’s no doubt that, accepting your premise that the man will put the same amount of money in the market year after year for the next 20 years, let’s say, there is a great chance of coming out ahead regardless of when he begins, and particularly regardless of whether he begins now. You have to allow for the human nature factor that no man can really say definitely just how he’s going to behave over the next 10 to 20 years. And there is danger that people who start with the idea of being systematic investors over the next 10 to 20 years may change their attitude as the market fluctuates – in the first instance, put more money into the market because they become speculators, and secondly, get disgusted and scared and don’t buy at all later on when prices get low. It’s a psychological danger – the fault is not in the stars or in the system but in ourselves, I think.

TV presenter (24:27): That was a glimpse of a seminar examining the stock market at Columbia University. We move now to Washington, where Democratic Senator William J Fulbright has announced that his Banking and Currency committee will conduct an investigation of the market.

Unknown questioner (24:40): Senator Fulbright, why is your committee going to investigate the stock market?

William Fulbright (24:43): Well, Mr Murrow, there are two principal reasons. One is that my committee has jurisdiction over the subject matter through its control of and responsibility for the SEC. The second reason is that the unusual increase during the last 12 to 18 months in the level of prices would seem to warrant a study at this time.

Unknown questioner (25:04): Are you worried about another 1929?

William Fulbright (25:06): But of course there’s certainly a possibility of it. This situation is reminiscent of 1929. We know the Great Depression in the early ‘30s was heralded by the tremendous increase, the great rise in the stock market and then the great drop. That’s unsettling to the whole economy and it frightens people. It causes great harm to people on fixed incomes and so on. And another thing about it is that the greatest criticism of our system and our economy by our enemies – especially the Communists – is the instability of our economy and the wildness of our fluctuations, and we should endeavour to minimise those fluctuations. Now I don’t know all the reasons involved in this. That’s why we’re going to have the study. But the objective is to inform the Congress and inform the people as far as we can about the conditions that now exist, and we would then hope to be able to develop some remedy for it, some way to control these wild fluctuations.

I confess with what limited knowledge I have, it does disturb me because it has gone up for such a long time and to such a great extent – I think far beyond what the conditions in the country itself warrant. I happen to know of my own knowledge that in the agricultural areas in the southwest, we are having a very severe depressed period. There is no boom in the agricultural areas, the rural areas of the West, and the Southwest. So that most of this boom is concentrated in the market and I think it is unhealthy but I’m unwilling to take a dogmatic stand now. That’s why as I say, we’ll have the study. 

Unknown questioner (26:52): Well Senator Fulbright, I think you have referred to this as a friendly investigation. What exactly is a friendly investigation?

William Fulbright (27:00): Well what I meant to convey is that I have no knowledge nor even suspicion of wrongdoing, manipulation, or anything of that kind in this increase. And I approach it in a friendly spirit in the spirit of trying to find out for the information of the country and of our committee and the Congress, what has been taking place. I’m not approaching it with the idea that we’re going to reveal a lot of wrongdoing.

TV presenter (27:27): The stock exchange hasn’t been investigated for 20 years, but it remains the subject of curiosity and concern as to whether what is good for the exchange is good for the country and the people who live here. There have been no official charges that it has been rigged or manipulated but rather the question of whether or not the market is healthy. There is wide disagreement amongst the experts as to why the market behaves as it does. But there is considerable agreement that it behaves the way it does because people behave the way they do. 

Good night and good luck. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI (2023 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s third quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest comments, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management believes that generative AI is a generational opportunity to deliver new products and services

We believe that every massive technology shift offers generational opportunities to deliver new products and solutions to an ever-expanding set of customers. AI and generative AI is one such opportunity, and we have articulated how we intend to invest and differentiate across data, models and interfaces. 

The integration of Adobe’s generative AI Firefly models with the company’s Creative Cloud suite of products has led to more than 4.5 billion generations since their launch in March

The general availability of our generative AI Firefly models and their integrations across Creative Cloud drove tremendous customer excitement with over 4.5 billion generations since launch in March.

Adobe’s management has released three new Firefly models for different functions

The release of 3 new Firefly models, Firefly Image 2 model, Firefly Vector model and Firefly Design model, offering highly differentiated levels of control with effects, photo settings and generative match

Adobe’s Creative Cloud subscription plans now include generative credits; Adobe’s management introduced generative credits to Adobe’s paid plans to drive adoption of the plans and drive usage of the generative AI functions; management does not expect the generative credits (or packs) to have a large impact on Adobe’s financials in the short term beyond driving more customer sign-ups

We also introduced generative credits as part of our Creative Cloud subscription plans…

…Secondly, we priced the generative packs — sorry, we integrated the generative capabilities and credits directly into our paid plans with the express intent of driving adoption of the paid subscription plans and getting broad proliferation of the ability to use those…

… I don’t personally expect generative packs to have a large impact in the short term other than to drive more customers to our paid existing subscription plans.

Photoshop Generative Fill and Generative Expand are now generally available and are seeing record adoption, with them being among the most used features in the Photoshop product

The general availability of Photoshop Generative Fill and Generative Expand, which are seeing record adoption. They’re already among the most used features in the product.

Adobe’s management believes that Adobe Express’s generative AI capabilities are driving adoption of the product

 The family of generative capabilities across Express, including text to image, text effects, text to template and generative fill are driving adoption of Express and making it even faster and more fun for users of all skill levels.

Adobe’s management is seeing high level of excitement among customers for the Firefly integrations across Adobe’s product suite

Customer excitement around Firefly integrations across our applications has been great to see with community engagement, social interactions and creative marketing campaigns driving organic brand search volume, traffic and record demand. 

Adobe’s management expects generative AI features to deliver additional value and attract new customers to Adobe’s Document Cloud suite of products; generative AI capabilities for Document Cloud is now in private beta, with a public beta to come in the next few months and general availability (GA) to arrive later in 2024

Much like the Creative business, we expect generative AI to deliver additional value and attract new customers to Document Cloud. Acrobat’s generative AI capabilities, which will enable new creation, comprehension and collaboration functionality have already been rolled out in a private beta. We expect to release this in a public beta in the coming months…

…What we’re really excited about as we bring the AI assistant to market, which, by the way, as I mentioned, is now in private beta. Expect it to come out in the next few months as a public beta and then GA later in the year.

Adobe’s management is focusing Adobe’s generative AI efforts within its Experience Cloud suite of products in three areas: (1) Building an AI assistant, (2) reimagining Experience Cloud’s existing applications, and (3) creating new generative AI solutions

Generative AI accelerates our pace of innovation across the Experience Cloud portfolio, enabling us to build on our capabilities to deliver personalized digital experiences. Our efforts are focused in 3 areas: one, augmenting our applications with an AI assistant that significantly enhances productivity for current users and provides an intuitive conversational interface to enable more knowledge workers to use our products; two, reimagining existing Experience Cloud applications like we did with Adobe Experience Manager; and three, developing entirely new solutions built for the age of generative AI like Adobe GenStudio.

Adobe’s management recently released Adobe GenStudio, a solution with generative AI capabilities that combines Creative Cloud, Express, and Experience Cloud, to help brands create content; Adobe GenStudio is seeing tremendous customer interest

Release of Adobe GenStudio, an end-to-end solution that brings together best-in-class applications across Creative Cloud, Express and Experience Cloud with Firefly generative AI at the core to help brands meet the rising demand for content. GenStudio provides a comprehensive offering spanning content ideation, creation, production and activation. We are seeing tremendous interest in GenStudio from brands like Henkel, Pepsi and Verizon and agencies like Publicis, Omnicom and Havas as they look to accelerate and optimize their content supply chains.

Adobe now has a pilot program where some customers are able to bring their own assets and content to extend Adobe’s Firefly models in a custom way; Adobe is exposing Firefly through APIs to that customers can build Firefly into their workflows; Adobe is enabling users to integrate Firefly-generated-content into a holistic Adobe workflow

So with Firefly and Express, very excited about the momentum that we continue to see. You heard that we crossed 4.5 billion generations now so we continue to see really, really strong adoption and usage of it, partially as a stand-alone business but also integrated into our Photoshop and Illustrator and these existing workflows.

And we’re starting to see a lot of interest not just in the context of using it as part of the existing products but also using it as part of the ecosystem within enterprises. So we’ve been working with a number of customers to not just enable them with Firefly, which is the predominance of the growth that we’re seeing in Q4 for enterprise adoption but also have a number of pilot customers already engaged around custom model extensions so that they can bring their own assets and their own content into what Firefly generates.

Second, we’re also enabling the ability to expose it through APIs so they can build it into their existing workflows. And third, we’re, of course, connecting it and tying it all into Adobe Express, which now also has its own Firefly and additional capabilities like things so that you can not just sort of create content using Firefly but then start to assemble it, start to schedule social posts around it, start to do multi-language translations, that those are all features that are already in there and then create a stakeholder workflow from people working in Photoshop to the marketers that are trying to post externally. So that’s where things get very interesting and exciting in terms of the connection we have with GenStudio and everything that Anil is doing.

Adobe’s management intends to improve the generative capabilities over time, which might be more expensive in terms of the generative credits consumed, and management believes this will help drive Adobe’s growth over time

But what will happen over the course of the year and the next few years is that we will be integrating more and more generative capabilities into the existing product workflows. And that will drive — and we’ll be integrating capabilities like video generation, which will cost more than 1 generation, and that will drive a natural inflation in that market and that will become a driver for growth subsequently. 

Adobe’s management believes that Firefly is a great on-ramp for Adobe Express, and a great catalyst for all of Adobe’s products across the spectrum (the same underlying generative AI technology is also a great catalyst for Adobe’s Document Cloud business)

And that sort of brings them as an on-ramp into Express, which would be the other part. Express is certainly the introductory pricing, the ability to get millions more into the fold. And the ability right now, it used to be that Express and other offerings in that is to all worry about do I have the right templates? Well, AI is going to completely change that. We have our own models. And so Firefly will allow anybody to take whatever creative idea that they have and make that available. So I think Firefly really helps with the Express offering.

On the Creative Cloud, David mentioned this. I mean, if you look at the adoption of that functionality and usage that’s being driven, whether it’s in Photoshop right now, Illustrator, as we add video, both in terms of providing greater value, and we certainly will, therefore, have the uplift in pricing as well as the retentive ability for Firefly, that’s where I think you’re going to see a lot of the really interesting aspects of how Firefly will drive both adoption as well as monetization.

And then if you go at the other end of the spectrum to the enterprise, GenStudio, every single marketer that I know and CFO and CMO are all worried about how much am I spending on data? How do I get agility in my campaigns? And the fact that Firefly is integrated into both Express as well as when we do the custom models for them so they can upload their own models and then have the brand consistency that they want. So Firefly really is the fact that we have our own models, a great catalyst for business all across the spectrum…

… And then you take the same technology that we have in Creative and think about its impact in both Document Cloud when we do that and the ability to have summaries and have conversational interfaces with PDF, thereby making every single PDF, as David again said, both for communication, collaboration and creation far more compelling. I think you’re going to see that same kind of uplift in usage and therefore, monetization on the Acrobat side.

DocuSign (NASDAQ: DOCU)

DocuSign’s management will be introducing generative AI enhancements to its CLM (Contract Lifecycle Management) platform; Veeco was an eSignature customer that has started using CLM, and DocuSign’s AI CLM features will help Veeco with surfacing actionable insights from customer contracts

CLM continues to grow well, particularly with North American enterprise customers. And for the fourth year in a row, our CLM solution was recognized as a leader by Gartner in contract life cycle management, noting our strong market understanding, product strategy and road map vision, including upcoming Generative AI enhancements. This quarter, we expanded a relationship that began more than 5 years ago with Veeco USA, who is a leader in workplace innovation. Veeco began using DocuSign eSignature and has added CLM as part of its transformation into a digital services company. Our AI solution will help Veeco streamline and enhance search and review of executed customer contracts with actionable insights to better serve its customers.

MongoDB (NASDAQ: MDB)

MongoDB’s management held a customer feedback session recently and they saw four themes that emerged from the conversations, one of which was that customers of all sizes are interested in AI

This quarter, we held our most recent global Customer Advisory Board meeting where customers across various geographies and industries came together to share feedback and insight about the experience using MongoDB. From these discussions as well as our ongoing C-suite dialogue with our customers, a few themes emerge. First, AI is in nearly every conversation with customers of all sizes.

MongoDB’s management is seeing great early feedback from MongoDB’s partnership with AWS CodeWhisperer; MongoDB’s management also thinks that Microsoft Github Copilot is capable of generating useful code

We’re seeing great early feedback from our partnership with AWS’ CodeWhisperer, the AI-powered coding companion that is now trained on MongoDB data to generate code suggestions based on MongoDB’s best practices from over 15 years of history. Microsoft GitHub Copilot is also proficient at generating code suggestions that reflect best practices, enabling developers to build highly performant applications even faster on MongoDB.

MongoDB’s management is seeing software developers being asked to also build AI functionalities into their applications

And with the recent advances in Gen AI, building applications is no longer the sole domain of AI/ML experts. Increasingly, it’s software developers who are being asked to build powerful AI functionality directly into their applications. We are well positioned to help them do just that.

MongoDB’s Atlas Vector Search – the company’s AI vector search feature – recently received the highest NPS (net promoter score) among vector databases from developers; crucially, the NPS survey was done on the preview version of Vector Search and not even on the generally available version, which is better

In a recent state of AI survey reported by Retool, Atlas Vector Search received by far the highest Net Promoter Score from developers compared to all other vector databases available…

……As I said in the prepared remarks, there was a recent analysis done by a consultancy firm called [ Retool ] that really spoke to lots of customers, and we came out on top in terms of NPS. And by the way, our product was a preview product. It wasn’t even the GA product.

MongoDB’s Atlas Vector Search allows developers to combine vector searches with the other search capabilities available in MongoDB, resulting in the ability to run very complex queries

Moreover, developers can combine vector search with any other query capabilities available in MongoDB, namely analytics, text search, geospatial and time series. This provides powerful ways of defining additional filters on vector-based queries that other solutions just cannot provide. For example, you can run complex AI and rich queries such as “find pants and shoes in my size that look like the outfit in this image within a particular price range and have free shipping” or “find real estate listings with houses that look like this image that were built in the last 5 years and are in an area within 7 miles west of downtown Chicago with top-rated schools.”
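As a rough sketch of what such a combined query might look like, the snippet below builds an Atlas `$vectorSearch` aggregation pipeline that pairs a semantic (vector) query with ordinary filters, in the spirit of the “pants and shoes that look like this image” example. The collection, index name, field names, and embedding values are all made up for illustration; this is not MongoDB’s actual demo code.

```python
# Hypothetical Atlas Vector Search pipeline: a semantic image-similarity
# query combined with regular filters (category, price, shipping).
query_vector = [0.12, -0.08, 0.33]  # stand-in for an image embedding

pipeline = [
    {
        "$vectorSearch": {
            "index": "product_vector_index",  # hypothetical index name
            "path": "image_embedding",        # field holding the embeddings
            "queryVector": query_vector,
            "numCandidates": 200,
            "limit": 10,
            # Ordinary filters applied alongside the semantic search:
            "filter": {
                "category": {"$in": ["pants", "shoes"]},
                "price": {"$gte": 20, "$lte": 80},
                "free_shipping": True,
            },
        }
    },
    # Return only the fields we care about, plus the similarity score.
    {"$project": {"name": 1, "price": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# Against a live Atlas cluster this would run as (not executed here):
# results = db.products.aggregate(pipeline)
```

The point of the quote is that the filter and the vector query live in one pipeline on one platform, rather than being stitched together across a search database and an operational database.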

MongoDB’s Atlas Vector Search allows customers to scale nodes independently, which gives customers the ability to achieve the right level of performance at the most efficient cost, so management thinks this is a very compelling value proposition for customers

One of the announcements we also made was that you can now do workload isolation. So for search or vector search functionality, you can scale those nodes independently of your overall cluster. So what that really does is allow customers to really configure their clusters to have the right level of performance at the most efficient cost. So we’ve been very sensitive on making sure that based on the different use cases, you can scale up and down different nodes based on your application needs. So by definition, that will be a very compelling value proposition for customers…

…[Question] With Vector Search comes quite a bit more data. So how are you making sure that customers don’t receive a surprise bill and end up unhappy?

[Answer] In terms of your question around the amount of data and the data builds, obviously, vectors can be memory-intensive. And the amount of vectors you generate will obviously drive the amount of usage on those nodes. That’s one of the reasons we also introduced dedicated search nodes, so you can asymmetrically scale particular nodes of your application, especially your search nodes, without having to increase the overall size of your cluster. So you’re not, to your point, stuck with a big bill for underlying usage, for nonusage, right? So you only scale the nodes that really need that incremental compute and memory versus nodes that don’t, and that becomes a much more cost-effective way for people to do this. And obviously, that’s another differentiator for MongoDB.

MongoDB’s management believes that customers are aware that their legacy data infrastructure is holding them back from embracing AI (legacy data infrastructure do not allow customers to work with real-time data for AI purposes) but the difficulty in modernising the infrastructure is daunting for them; MongoDB’s management thinks that the modernisation of data infrastructure for AI is still a very early trend but it will be one of the company’s largest long-term opportunities

They are aware that their legacy platforms are holding them back from building modern applications designed for an AI future. However, customers also tell us that they lack the skills and the capacity to modernize. They all want to become modern, but daunted by the challenges as they are aware it’s a complex endeavor that involves technology, process and people. Consequently, customers are increasingly looking to MongoDB to help them modernize successfully…

… There is a lot of focus on data because of AI. Data, in some way, becomes the new code: you can train your models with your proprietary data, which allows you to really drive much more value and build smarter applications. Now the key thing is that it’s operational data because, with applications, this data is always constantly being updated. And for many customers, most of those applications are right now running on legacy platforms, so that operational data is trapped in those legacy platforms. And you can’t really do a batch process of ETL-ing all that data into some sort of warehouse and still be able to leverage the real-time use of that data. That’s why customers are now much more interested in potentially modernizing these legacy platforms than they ever have been before…

…I would say it’s still very, very early days. We definitely believe that this will be one of the largest long-term opportunities for our business; we’re in the very early days.

MongoDB’s management has launched Query Converter, which uses AI to convert a customer’s existing SQL-related workflows to work with MongoDB’s NoSQL database platform, and customers have tried it out successfully

We launched Relational Migrator earlier this year to help customers successfully migrate data from their legacy relational databases to MongoDB. Now we’re looking beyond data migration to the full life cycle of application modernization. At our local London event, we unveiled the query converter, which uses generative AI to analyze existing SQL queries and stored procedures and convert them to work with MongoDB’s query API. Customers have already used the tool successfully to convert decades-old procedures to modernize their back end with minimal need for manual changes.

MongoDB’s management thinks it’s too early to tell how the usage of MongoDB’s AI features by customers will impact MongoDB’s gross margin at maturity

[Question] And then the follow-up is more around AI. So if I look at the demos that you guys have around vector search and how search is getting a lot better, that seems very compelling. And it seems really straightforward for clients to improve the customer experience when they use it for a customer-facing app, for example. What are the implications for gross margins for you, Michael? Do you have to do a lot more compute to be able to handle it?

[Answer] So I think it’s a little too early to tell. There’s obviously plenty of variability in the workloads depending on the nature what the underlying application is. So I think it’s a little early to give a strong direction to that… But I think too early to make a specific call or quantification on the gross margin impacts of AI.

MongoDB’s management thinks that Atlas Vector Search will be a big opportunity for MongoDB, but it’s early days and they find it hard to exactly quantify the revenue opportunity

We’ve seen a lot of demand from customers. And we feel like this is a big, big opportunity. Again, it’s early days. It’s going to take time to materialize, but this is, again, one of the other big growth opportunities for our business. That being said, in terms of the revenue opportunity, it’s really hard to quantify now because the use cases that customers are starting with are still, I would say, early intent, because people are still playing around with the technology. But we are seeing, as I mentioned, UKG is using it to essentially provide an AI-powered assistant for its people. One Energy, a European energy company, is using terabytes of geospatial data and is using vectors to basically get better insights from the images that they’re getting from the work they’re doing in terms of drilling for oil. So it’s still very, very early days, so it’s hard to give you exact numbers.

When it comes to copilot tools for software coding, MongoDB’s management is seeing varying levels of productivity improvement for software developers based on the tools they are using; MongoDB’s management also sees the software written with copilots as being mostly for internal use currently

[Question] As customers began to trial some of these copilot code tools will say. What type of feedback have you gotten from them as it relates to the pace with which they’ve been able to reduce net new workload time to market, how much faster or efficient are customers getting using these tools?

[Answer] We get different answers from a lot of different customers. It really depends on which tool they’re using. Without commenting on who’s better or worse, we definitely see a difference in the quality of the output between the different tools. I think it’s going to take some time for these tools to mature. So I think you’re seeing a lot of customers do a lot of testing and prototyping. I would also tell you that they’re doing a lot of this on internal-facing applications because there are still lots of questions about IP rights and what is potentially copyrightable, and hence licensable, if they offer this as shrink-wrapped software or a service to their end customers. So we’re seeing more of this work on internally facing applications, but the productivity gains really do vary by tool, and they also vary by the sophistication of the app being built. So it’s hard for me to give you a real number. I know there are people out there quoting 30% or 40% improvement. But it really depends on the customer, the use case and the tool that they’re trying to use.

MongoDB’s CEO, Dev Ittycheria, thinks his views – that (1) vector search would become just another functionality in a more holistic database platform, and (2) the database platform that can integrate vector search functionality well into developers’ workflow will win – have played out

I would say that I think 6, 9 months ago, there was a lot of interest in vector databases and there were some point solutions that got a lot of name recognition, and a lot of people were wondering, is there a risk that we could be disrupted by them? And at that point in time, we made it clear that we believe vectors were really another form of an index and that every database platform would ultimately incorporate vectors into their architecture. And the winner really would be the technology that made the vector functionality very integrated and cohesive as part of the developer workflow. I would argue that it’s really played out.

MongoDB’s management saw customers having to work with two databases when performing vector searches for AI purposes; these customers were asking MongoDB to bring vector search capabilities into its database platform because working with one platform helps customers speed up their work and reduce costs

One of the reasons we actually built search is because we got feedback from our customers. In many instances, a lot of our customers were dual-homing data to MongoDB and to some sort of search database. So consequently, they not only had to manage 2 databases and keep that data in sync, but also manage the plumbing that connected those 2 database platforms. And customers told us, “we don’t understand why you’re not offering a solution, because we would much rather have it all in one platform with one API.” And that ultimately drove our desire to build out our search functionality, which is really becoming more and more popular. So the point for customers is that if you can remove friction in terms of how they can use and leverage the platform, and have one set of semantics to address a broad set of use cases, it really simplifies the data architecture. And the more you simplify the data architecture, the more nimble you can be and the more cost-effective you can be, and that’s what’s really resonating with customers.

Okta (NASDAQ: OKTA)

Okta’s management introduced Okta AI during the company’s Oktane event in October; Okta AI is powered by the data that Okta has collected over the years from its 18,800 customers and 7,000+ integrations, and is infused into several of Okta’s products

The headline of the event was the introduction of Okta AI, the identity solution for the next era of computing. Okta AI is AI for Identity. It’s powered by the massive amounts of data the company has accumulated over the years, including anonymized insights crowdsourced from our 18,800 customers and the 7,000+ integrations in the Okta Integration Network, as well as data on usage, policies, threats, and risk signals. Okta AI uses that data to perform powerful, real-time security, developer, and policy actions. Okta AI is also infused into several of our products. It makes our existing products more valuable and new products possible — all while expanding what it means to be integrated and protected.

An example of Okta AI at work is Identity Threat Protection, which enables companies to automatically log users out of apps during a security issue

Identity Threat Protection with Okta AI, a new product that will enable businesses to prevent and respond to threats faster than ever before. It empowers organizations to automate the detection and remediation of Identity threats across the tech ecosystem. It extends adaptive risk evaluation from the point of authentication to any time a user is logged in and helps you quickly prevent and respond to threats. Identity Threat Protection allows for an array of powerful new actions like Universal Logout. For the first time in our industry, it’s possible to automatically log users out of their apps during a security issue. Threat actors might be getting more sophisticated, but we are using the power of AI and our ecosystem to keep our customers safe and a step ahead.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Data Cloud’s introduction was great timing because it coincided with the boom in generative AI and a company can’t make AI useful without data

And Data Cloud, this hyperscale, this real-time customer data platform that is performing incredibly well for us, it’s the foundation of every AI transaction, but it’s the foundation of every large deal that we did this quarter. That is what is so exciting. And in just our third quarter, Data Cloud has ingested an astonishing 6.4 trillion records, 6.4 trillion records. That’s 140% year-over-year increase. It triggered 1.4 trillion activations, a 220% increase year-over-year. This is a monster product. I could not be more excited. And it’s the perfect time, we didn’t really understand that it was going to line up so well with this generative AI revolution. It’s a product we’ve been working on for a couple of years. Just the timing of it has been incredible because listen, if you don’t have your data together, in a company, you’re not going to deliver AI. It’s not like companies are going to run their AI off of Reddit or off of some kind of big public data set. They have to have their data set together to make AI work for them, and that is why the Data Cloud is so powerful for them

Salesforce’s management believes that Salesforce is the No.1 AI CRM and is leading the industry in the current AI innovation cycle; they also believe that the current cycle is unlike anything they have ever seen and it’s a view that’s shared widely

We are the #1 AI CRM. If that isn’t clear already, we’re leading the industry through the unprecedented AI innovation cycle. It’s unlike anything I’ve seen and most of the people that I talk to all over the world feel the same way. 

Salesforce’s management believes that trust is going to be important in the AI era and Salesforce will be protecting customer data with a trust layer so that the data can’t be easily accessed by 3rd-party foundation models

Now as I’ve said before, this AI revolution is going to be a trust revolution. It’s not just about CRM, data or AI. It’s also about trust. And I think the trust layer and the way that we’ve architected our platform mean that our customers are not basically getting taken advantage of by these next-generation large language models, these foundation models. They are so hungry for all of this data, and they want our customers’ data so that they can grow. We’re not going to let them have it. We’re going to separate ourselves from those models through a trust layer so customers can be protected. This is going to be so important for the future of how Salesforce architects itself with artificial intelligence.

Salesforce’s management is seeing customers across the world wanting to invest in AI for more productivity; management also travelled the world and noticed that customers are very excited about AI but at the same time, they are confused about AI’s capabilities – this excitement was not in place a year ago because generative AI apps had not surfaced yet

I’ve been on the road pretty much nonstop especially over the last month. I’ve been in — throughout Europe. I’ve been now in Asia. I’ve been throughout the United States. And I just continue to see these same trends, which is customers are investing for the future and they’re investing and inspired by AI to give them more productivity. Look, they realize unemployment is just so low. Where are they going to hire more people? It’s so hard for them to hire, they’re going to have to get more productivity from their employees. They’re going to do that through this great new technology, and we’re going to help them make that happen…

…And on a global basis, and like I said, in some of these customers in the last 30 days, I was in — I can give you my direct experience. I was in San Francisco, Los Angeles, Las Vegas, Stuttgart, Germany, I was in Nice, Monaco. I visited with our customers throughout that area. And also, I went up to Amsterdam, to France. I had a large customer dinner in the U.K. in London. I went to the U.K. Safety Summit. I then came back and went to Japan. I think I see something very consistently, which is customers are extremely excited about AI everywhere we go. It could be government, it could be commercial organizations. It could be technologists. Everyone is excited about AI. At the same time, there is a lot of confusion about what AI can and cannot do…

… And this excitement, this energy, these ideas of innovation of AI were not in place a year ago. Because don’t forget, a year ago, I don’t think any of us have used ChatGPT or Bard or Anthropic or Cohere or Adapt or any of the new AI companies. None of us had really had our hands on or envisioned what it really meant to us or that we would have Copilots, and that those Copilots would give us the ability to do all kinds of next-generation capabilities. But a year later, it’s a technology revolution. 

Salesforce has been deploying its own generative AI tools at a quick pace and management thinks the results have been excellent

I’ve been impressed with how quickly we deployed our own trusted generative AI tools and applications internally. We’ve launched Sales GPT and Slack Sales Elevate internally, and our global support team is live with Service GPT, and we’re seeing incredible results. We’ve streamlined our quoting process with automation, eliminating over 200,000 manual approvals so far this year. And since the introduction in September, our AI-driven chatbot has autonomously resolved thousands of employee-related queries without the need for human involvement.

Salesforce’s management thinks that every customer’s AI transformation is going to begin and end with data 

What I’ll tell you is you’re seeing something that we have been seeing and calling out for the last few quarters, but we probably have not been able to illuminate it to the level that you see now in the numbers, which is that every customer and every customer transformation and every customer AI transformation is going to begin and end with data. And for us to achieve that goal, those customers are going to have to get to another level of excellence with their data. 

Salesforce’s management thinks that there’s still a lot that AI-companies need to do to make AI safe for customers, but it’s getting better over time

We have — we still have a lot of work, as everyone does in our industry, on AI and making it safe for our customers. This is going to be incredibly important. I think for a lot of customers, they realize that they’d like to just let this AI unleashed autonomously but it still hallucinates a huge amount and it also is quite toxic. So we’re not quite ready for that revolution. But every day, it’s getting a little better. 

Salesforce’s management thinks that the movie Minority Report contains a good scene on how AI can be used to automate the personalised customer experience – management also thinks that this is something that many of Salesforce’s customers want to achieve for their own customer experience

And when I was going through the streets of Tokyo, it was not quite Minority Report, which is a movie that was partly written by our futurist, Peter Schwartz, but it’s getting closer to that idea. And when I walked into some of these stores, there’s definitely a lot more automation based on my customer record, but not quite the level of automation that Tom Cruise felt when he walked into that Gap store, if you remember that scene, which was so amazing, which is very much front of mind for a lot of our customers because they want to have that capability and they want us to deliver that for them.

Salesforce’s management explained how Data Cloud can be very useful for companies that are deploying AI: Companies can use their own data, via Data Cloud, to augment generative AI models to produce personalised and commercially-useful output that otherwise could not be done

But they’re going to get frustrated when the Copilots that they are given from other companies don’t have any data. They just have data grounded to maybe the application that’s sitting in front of them, but they don’t have a normalized data framework integrated into the Copilot. So while I think Copilots on productivity applications are exciting because you can tap into these kinds of broad consumer databases that we’ve been using, as an example: I’m writing an e-mail, so now I’m saying to the Copilot, hey, can you rewrite this email for me, or make this 50% shorter, or put it into the words of William Shakespeare. That’s all possible, and sometimes it’s a cool party trick.

It’s a whole different situation when we say, “I want to write an e-mail to this customer about their contract renewal. And I want this e-mail to really reference the huge value that they receive from our product and their log-in rates. And I also want to emphasize how the success of all the agreements that we have signed with them has impacted them, and that we’re able to provide this rich data to the Copilot and, through the prompt and the prompt engineering, it is able to deliver tremendous value back to the customer.” And this data, this customer value, will only be provided by companies who have the data. And we are just very fortunate to be a company with a lot of data. And we’re getting a lot more data than we’ve ever had. And a lot of that is coming from the Data Cloud because it’s amplifying the capabilities of all the other data we have.

Salesforce’s management thinks that there will be significant improvements to Salesforce’s AI features in the near future

I think the demonstrations at Dreamforce were outstanding. The demonstrations that we’ll deliver in our February release will be mind-boggling for our customers of what they will be able to get done. And I think that by the time we get to Dreamforce ’25 or ’24 in September ’24, what we’ll see is nothing that we could have possibly imagined just 24 months earlier before these breakthroughs in generative AI have really taken hold through the whole industry.

Salesforce’s management thinks that no single company will control the development of AI because they think that open source AI models are now as strong as proprietary models and will lead the way; management also thinks that unlike the development of mobile operating systems which is controlled by 2 companies, there are thousands of companies that are working on open-source AI and this will lead to rapid innovation

No one company has a hold on this. I think it’s pretty clear at this point that because of the way AI is built through open source, that these models are very much commodity models, and these responses are very much commodity responses. So we’ve always felt that way about AI for more than a decade. We said that its growth has really been amplified by open source development. Because these open source models now are as strong as commercial models are or proprietary models, I think that what we really can see is that, that is going to accelerate this through every customer. There’s not going to be any kind of restrictions because of the proprietariness or the cost structures of these models. We’re going to see this go much faster than any other technology.

The reference point, as I’ve been using as I travel around, is really mobile operating systems. Mobile operating systems are very important, and we all have one on our desk or in our pocket right now. But really, the development of mobile operating systems has been quite constrained because they’re really held mostly by 2 companies and 2 sets of engineering teams. That’s not how this technology is being built. This technology is highly federated across thousands of companies and thousands of engineering teams who are sharing this technology. And because of that, you’re ending up with a rate of innovation unlike anything we’ve seen in the history of our industry and is moving us into areas very quickly that could become uncomfortable. So this is an exciting moment.

Veeva Systems (NYSE: VEEV)

Veeva’s management has not seen a big impact on the clinical side of Veeva’s business from generative AI

In terms of generative AI, honestly, I haven’t seen a big impact in clinical. There was good experimentation and projects around helping to write or evaluate protocols, for example, but not using things like generative AI to do statistical analysis or predict where the patients are. I think there, the more appropriate tool, which people are using and continue to use more and more, is data science: really having the right data, running the right algorithms, being systematic about it. So yes, I just haven’t seen that impact of generative AI. You see it more in other areas that relate to content creation and the asking of questions, writing safety narratives, things like that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

Investing Like a Business Owner

We often forget that investing in stocks is investing in businesses. As such, we need to think like business owners to succeed.

Rob Vinall is one of the top-performing fund managers of the past decade and a half.

Vinall manages a fund named the Business Owner Fund. Since inception 15 years ago, the Business Owner Fund has returned 589%, or an annualised rate of 13.7%, in euro terms. One thing about Vinall that stands out to me is that as his fund’s name suggests, he strives to invest like a business owner.

Too often, investors look at stocks as just prices that move up and down and make investment decisions based on these prices. They often forget that there are businesses and cash flows behind these stock prices and tickers.

Step into the shoes of a business owner

Imagine you are starting a restaurant business. There are two big financial numbers you need to consider before you start: (1) how much do you need to put into the business, and (2) how much can you get out of it over time?

For instance, let’s say the initial start-up cost is $1 million, but you can take out $200,000 in dividends every year for the next 20 years. Given these projections, you can decide if it is worthwhile to start your restaurant business. In this case, you can calculate that over 20 years, you would have quadrupled your money.
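The arithmetic above can be sketched in a few lines of Python. The figures ($1 million upfront, $200,000 a year for 20 years) are the hypothetical ones from the example, and the internal-rate-of-return solver is a simple hand-rolled bisection, not a library function:

```python
# Hypothetical restaurant example: $1m upfront, $200k taken out yearly for 20 years.
initial_cost = 1_000_000
annual_dividend = 200_000
years = 20

total_taken_out = annual_dividend * years
print(total_taken_out / initial_cost)  # money multiple: 4.0x over 20 years

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Internal rate of return via bisection.

    Works here because NPV falls as the rate rises for this
    invest-then-collect cash-flow pattern.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-initial_cost] + [annual_dividend] * years
print(round(irr(flows), 3))  # annualised return of roughly 19% a year
```

Quadrupling your money sounds dramatic, but expressing the same stream as an annualised rate of return is what lets you compare the restaurant against any other use of the $1 million.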

Investing in stocks should involve the same thinking: how much can we get out of the stock over the lifespan of the business? That is, how much in dividends per share can we collect over the lifespan of the business, and will that cover what we paid for the shares?

But what about selling the stock?

A business owner who owns her own restaurant may not have an opportunity to sell the restaurant. As such, the only way to receive any returns is from the profits of the business. This means the business owner naturally places emphasis on ensuring that the profits the business generates exceed how much she puts in.

On the other hand, when we invest in stocks, we can sell the stock. This is both a blessing and a curse in my opinion. It’s good because it provides us with liquidity if we need the cash. But it’s bad because investors then tend to focus on the stock price and not the business fundamentals.

Like a business owner, stock investors should be focused on the cash flow of the business rather than its share price. This means looking at future cash flow per share, and ultimately how much in dividends they can receive over the lifespan of the business.

While a company may not be paying dividends yet, its earnings and cash flows allow it to eventually pay dividends, which over the long term should more than offset the amount you paid for your investment.
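That idea can be sketched with a simple dividend-discount calculation for a hypothetical company that pays nothing at first and only starts paying later. Every number below (the 5-year wait, the $5 starting dividend, the 3% growth, the 10% discount rate, the 30-year horizon) is a made-up assumption for illustration, not a valuation of any real company:

```python
def present_value_of_dividends(first_dividend, growth, discount, start_year, horizon):
    """Discount a dividend stream that only begins in `start_year` back to today."""
    value = 0.0
    dividend = first_dividend
    for year in range(start_year, horizon + 1):
        value += dividend / (1 + discount) ** year
        dividend *= 1 + growth
    return value

# Hypothetical: no dividends in years 1-4, then $5/share in year 5,
# growing 3% a year through year 30, discounted at 10% a year.
fair_value = present_value_of_dividends(
    first_dividend=5.0, growth=0.03, discount=0.10, start_year=5, horizon=30
)
print(round(fair_value, 2))  # rough per-share value of the delayed dividend stream
```

Note how the years with no payout simply contribute nothing: the delay is accounted for by discounting the later dividends more heavily, which is exactly the adjustment the paragraph above describes.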

Final words

Investing in the stock market should be similar to being a business owner. We should focus on how much profit a company can return to us instead of how much we can sell the stock for at a future date.

The quoted stock price on the stock market can fluctuate wildly and depends greatly on external factors such as the risk-free rate or how Wall Street views the company. This can distract us from what is truly important and why we really invested in the company.

By focusing on the cash flows of the business, we can more safely predict our returns instead of being beholden to external factors that may affect our sale price.

Ultimately, just like a business owner, we should focus on the returns from our dividends instead of wasting energy hoping that the share price goes up. The share price is often outside our control; if it rises, great, but if it doesn’t, it shouldn’t matter, as the returns from the business’s cash flow should be enough for us to earn a positive return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.