Having a Margin of Safety

How do we buy stocks with a margin of safety and how wide of a margin do we need?

Warren Buffett once said that we should invest only when the price provides us with a margin of safety. But what does a margin of safety really mean? Let’s break it down.

Accounting for shortfalls in forecasts

Investing is a game of probability. 

It is impossible to forecast the exact cash flows and dividends that a company will pay in the future. This is where the concept of a margin of safety comes in. Morgan Housel once wrote:

“Margin of safety is simply the distance between your predictions coming true and needing those predictions to come true. You can still try to predict the future, but a margin of safety gives you room to be wrong.”

For instance, we may forecast a company to provide us with $1 per share in dividends for 10 years and then close down after the 10 years are over.

Using a dividend discount model and a 10% required rate of return, we can calculate that the value of the shares should be $6.14 each. In other words, if we pay $6.14, it will give us a 10% annual return based on the expected dividends we can receive over time.
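As a quick arithmetic check, here is a minimal sketch of that dividend discount model in Python (the function name is my own, purely illustrative):

```python
# A minimal sketch of the dividend discount model described above:
# discount each year's $1 dividend back to today at the 10% required
# rate of return, then sum the discounted values.
def present_value_of_dividends(dividend, years, rate):
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

print(round(present_value_of_dividends(dividend=1.00, years=10, rate=0.10), 2))
# 6.14
```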

But what if our forecast falls short? Say the company ends up paying a dividend of only $0.80 per share each year. In this case, paying $6.14 for the company’s shares will not get us our desired return of 10% per year.

To account for this potential 20% shortfall in dividends per share, we should build in a margin of safety. The present value of a $0.80 annual dividend over 10 years at a 10% discount rate is $4.92, so we should only buy the stock if the price is $4.92 or less. That way, we still earn our desired 10% annual return even if our forecast falls short.
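Re-running the same sketch with the lower dividend shows where the $4.92 comes from:

```python
# The same dividend discount model, re-run with the 20% shortfall:
# $0.80 per share for 10 years, still discounted at 10% per year.
print(round(sum(0.80 / 1.10 ** t for t in range(1, 11)), 2))  # 4.92
```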

Accounting for different discount rates

But a margin of safety does not only mean that we should account for the company’s actual results deviating from our forecasts. There is another crucial factor that comes into play.

If we intend to sell the stock, we also need to factor in our sale price, which will depend on the buyer’s required rate of return, or discount rate.

For instance, say we want to buy the same company as above, but instead of buying and holding for the full 10 years, we intend to sell the shares after just 5 years.

If we are buying the stock for the full 10 years, we can pay $6.14 per share, knowing that we will get a 10% return simply by collecting the dividends and reinvesting them at a 10% rate.

But if we intend to sell the shares after 5 years, another factor comes into play – the sale price of the shares at the 5-year mark. Obviously, if we can’t get a good price during the sale, our returns will be subpar.

If the person buying the stock from us at the 5-year mark also requires a 10% rate of return, we can sell the stock at “his price” ($3.79) and still receive a 10% annualised return.

However, if the person that we are selling the stock to requires a 12% rate of return, he will only be willing to pay us $3.60 for the shares. In this case, we will receive less than a 10% annual return over our 5-year holding period.

So instead of paying $6.14 per share, we should only pay $6.03 per share — that is, the value of the five $1 dividends plus the $3.60 sale price, all discounted at our 10% required return. Buying at or below that price protects our 10% annual return in case the required rate of return of the buyer goes up to 12% at our point of sale.
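Here is a minimal sketch of this 5-year scenario (the helper name is my own, purely illustrative); it reproduces the $3.79 and $3.60 sale prices and the $6.03 maximum purchase price:

```python
# Present value of a fixed annual dividend over a number of years.
def annuity_pv(years, rate, dividend=1.00):
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

# What a buyer at the 5-year mark would pay for the remaining
# 5 years of $1 dividends, at his own required rate of return.
sale_at_10pct = annuity_pv(5, 0.10)  # buyer requires 10%
sale_at_12pct = annuity_pv(5, 0.12)  # buyer requires 12%

# The most we can pay today and still earn 10% a year if the buyer
# demands 12%: five $1 dividends plus the $3.60 sale price, all
# discounted at our own 10% required rate of return.
max_price = annuity_pv(5, 0.10) + sale_at_12pct / 1.10 ** 5

print(round(sale_at_10pct, 2))  # 3.79
print(round(sale_at_12pct, 2))  # 3.6
print(round(max_price, 2))      # 6.03
```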

Margin for upside

Factoring in a margin of safety gives us comfort that we can achieve our desired rate of return. In addition, if things go smoothly, there is the potential to earn even more than our required rate of return.

But while the concept seems straightforward, its application is a bit more challenging. It requires a keen understanding of the business and a valuation that is conservative enough to leave genuine room for error.

It also requires some judgement on our part. How much of a margin of safety is enough? For companies with very stable and predictable dividend streams, our margin of safety can be narrower. But for companies with less predictable dividend streams, we may want to factor in a larger margin of safety.

I also prefer to demand a relatively high rate of return so that it is unlikely that the required rate of return by the buyer at the point of sale will negatively impact my return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the third quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees generative AI as an opportunity to reimagine the company’s product and transform Airbnb into the ultimate travel agent

First, I think that we are thinking about generative AI as an opportunity to reimagine much of our product category and product catalog. So if you think about how you can sell a lot of different types of products and new offerings, generative AI could be really, really powerful.  It can match you in a way you’ve never seen before. So imagine Airbnb being almost like the ultimate travel agent as an app. We think this can unlock opportunities that we’ve never seen. 

Airbnb’s management believes that digital-first travel companies will benefit from AI faster than physical-first travel companies

So Airbnb and OTAs are probably going to benefit more quickly from AI than, say, a hotel will just because Airbnb and OTAs are more digital. And so the transformation will happen at the digital surface sooner.

Airbnb’s management believes that Airbnb’s customer service can improve significantly by placing an AI agent between a traveller and her foreign host

One of the areas where we’re specifically going to benefit is customer service. Right now, customer service in Airbnb is really, really hard, especially compared to hotels. The problem is, imagine you have a Japanese host booking with — hosting a German guest and there’s a problem, and you have these 2 people speaking different languages calling customer service. There’s a myriad of issues, there’s no front desk, we can’t go on-premise. We don’t understand the inventory, and we need to try to adjudicate an issue based on 70 different policies that can be up to 100 pages long. AI can literally start to solve these problems, where agents can supervise a model that can, in seconds, come up with a better resolution and provide front desk level support in nearly every community in the world.

Airbnb’s management believes that AI can lead to a fundamentally different search experience for travellers

But probably more importantly, Kevin, is what we can do by reimagining the search experience. Travel search has not really changed much in 25 years since really Expedia, Hotels.com, it’s pretty much the same as it’s been. And Airbnb, we fit that paradigm. There’s a search box, you enter a date location, you refine your results and you book something. And it really hasn’t changed much for a couple of decades. I think now with AI, there can be entirely different booking models. And I think this is like a Cambrian moment for like the Internet or mobile for travel where suddenly an app could actually learn more about you. They could ask you questions and they could offer you a significantly greater personalized service. Before the Internet, there were travel agents, and they actually used to learn about you. And then travel got unbundled, it became self-service and it became all about price. But we do think that there’s a way that travel could change and AI could lead the way with that. 

Airbnb’s management believes that all travel apps will eventually trend towards being an AI travel agent

And I generally think for sure, as Airbnb becomes a little more of a so-called like AI travel agent, which is what I think all travel apps will trend towards to some extent.

Alphabet (NASDAQ: GOOG)

Alphabet’s management has learnt a lot from trials of Search Generative Experience (SGE), and the company has added new capabilities (videos and images); Search Generative Experience has positive user feedback and strong adoption

This includes our work with the Search Generative Experience, which is our experiment to bring generative AI capabilities into Search. We have learned a lot from people trying it, and we have added new capabilities like incorporating videos and images into responses and generating imagery. We have also made it easier to understand and debug generated code. Direct user feedback has been positive with strong growth in adoption.

SGE allows Alphabet to serve a wider range of information needs and provide more links; ads will continue to be relevant in SGE and users actually find ads useful in SGE; Alphabet wants to experiment with SGE-native ad formats

With generative AI applied to Search, we can serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. We are surfacing more links with SGE and linking to a wider range of sources on the results page, creating new opportunities for content to be discovered. Of course, ads will continue to play an important role in this new Search experience. People are finding ads helpful here as they provide useful options to take action and connect with businesses. We’ll experiment with new formats native to SGE that use generative AI to create relevant, high-quality ads customized to every step of the Search journey.

Alphabet’s management thinks SGE could be a subscription service; it’s still very early days in the roll-out of SGE and management wants to get the user experience correct (Alphabet has gone through similar transitions before, so management is confident about this)

And I do think over time, there will be newer paths, just like we have done on YouTube. I think with the AI work, there are subscription models as a possible path as well. And obviously, all of the AI investments we are doing applies across Cloud, too, and I’m pretty optimistic about what’s ahead there as well…

…On the first part about SGE, we are still in very, very early days in terms of how much we have rolled it out, but we have definitely gotten it out to enough people both geographically across user segments and enough to know that the product is working well, it improves the experience and — but there are areas to improve, which we are fine-tuning. Our true north here is getting to the right user experience we want, and I’m pretty comfortable seeing the trajectory. And we’ve always worked through these transitions, be it from desktop to mobile or now from mobile to AI and then to experience. And so it’s nothing new.

Alphabet is making it easier for people to identify AI-generated content through digital watermarks

One area we are focused on is making sure people can more easily identify when they are encountering AI-generated content online. Using new technology powered by Google DeepMind SynthID, images generated by Vertex AI can be watermarked in a way that is invisible to the human eye without reducing the image quality. Underlying all this work is the foundational research done by our teams at Google DeepMind and Google Research. 

Alphabet’s management is committed to changing Alphabet’s cost base to accommodate AI investments; Alphabet has, for a long time, driven its cost curves down spectacularly, and management is confident that it will be the same for the current build-out of AI infrastructure

As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts. We remain committed to durably reengineering our cost base in order to help create capacity for these investments in support of long-term sustainable financial value. Across Alphabet, teams are looking at ways to operate as effectively as possible focused on their biggest priorities…

…When I look at the strength of the work we have done across our technical infrastructure as a company at various stages: at a given moment in time when we adopted new generations of technology, we have looked at the cost of it. But then the efficiency curves we have driven on top of it have always been phenomenal to see. And I see the current moment as no different. Already through this year, we are driving significant efficiencies both in our models, in training costs and serving costs, and in our ability to adapt what’s needed to the right use case.

Alphabet has new tools (including those powered by AI) that make it easier for (1) creators to produce content for YouTube’s various formats, (2) creators to connect with advertisers, and (3) advertisers to drive higher ROI on their advertising

At Made On YouTube in September, we announced new tools that make it easier to create engaging content. Dream Screen is an experimental feature that allows creators to add AI-generated video or image backgrounds to Shorts. And YouTube Create is a new mobile app with a suite of production tools for editing Shorts, longer videos or both…

…AI will do wonders for creation and storytelling. From Dream Screen and YouTube Create, which Sundar talked about, to features that dub content in multiple languages, flip and trim existing assets, remix and clip videos and more, we’re just getting started. We’re also helping brands break through with speed and scale across the funnel to drive results. Spotlight Moments launched last week. It uses AI to identify trending content around major cultural moments for brand sponsorship opportunities. There’s video reach campaigns, which are expanding to in-feed and Shorts, and will be generally available in November. AI is helping advertisers find as many people as possible in their ideal audience for the lowest possible price. Early tests are delivering 54% more reach at 42% lower cost. And then with video view campaigns, AI is serving skippable ads across in-stream, in-feed and Shorts and helping advertisers earn the maximum number of views at the lowest possible cost. So far, they’re driving 40% more views on average versus in-stream alone. Then for YouTube and other feed-based services, there’s our new demand gen campaign, which launched in April, rolled out worldwide last week and was designed for the needs of today’s social marketers to engage people as they stream, scroll and connect. It combines video and image ads in one campaign with access to 3 billion users across YouTube and Google and the ability to optimize and measure across the funnel using Google AI. Demand gen is already driving success for brands like Samsung and Toyota.

Alphabet’s management believes that Google Cloud offers optimised infrastructure for AI training and inference, and more than 50% of all generative AI start-ups are using Google Cloud; Alphabet’s TPUs (tensor processing units) are winning customers; Google Cloud’s Vertex AI platform offers more than 100 AI models and the number of active generative AI projects built on Vertex AI grew by seven times sequentially

We offer advanced AI optimized infrastructure to train and serve models at scale. And today, more than half of all funded generative AI start-ups are Google Cloud customers. This includes AI21 Labs, Contextual, Elemental Cognition, Writer and more. We continue to provide the widest choice of accelerator options. Our A3 VMs [virtual machines] powered by NVIDIA’s H100 GPU are generally available, and we are winning customers with Cloud TPU v5e, our most cost efficient and versatile accelerator to date. On top of our infrastructure, our Vertex AI platform helps customers build, deploy and scale AI-powered applications. We offer more than 100 models, including popular third-party and open source models, as well as tools to quickly build search and conversation use cases. From Q2 to Q3, the number of active generative AI projects on Vertex AI grew by 7x, including Highmark Health, which is creating more personalized member materials.

Duet AI, Alphabet’s AI assistant, is built on Google’s large foundation models and is used by large companies to boost developer productivity and smaller companies to help with data analytics; more than 1 million testers have used Duet AI in Google Workspace

Duet AI was created using Google’s leading large foundation models and is specially trained to help users to be more productive on Google Cloud. We continue expanding its capabilities and integrating it across a wide range of cloud products and services. With Duet AI, we are helping leading brands like PayPal and Deutsche Bank boost developer productivity, and we are enabling retailers like Aritzia and Gymshark to gain new insights for better and faster business results…

…In Workspace, thousands of companies and more than 1 million trusted testers have used Duet AI. They are writing and refining content in Gmail and Docs, creating original images from text within Slides, organizing data in Sheets and more.

Alphabet’s new consumer hardware products have an AI chip – Tensor G3 – built in them

Our portfolio of Pixel products are brought to life, thanks to our combination of foundational technologies AI, Android and Google Tensor. Google Tensor G3 is the third generation of our tailor-built chip. It’s designed to power transformative experiences by bringing the latest in Google AI research directly to our newest phones. 

Gemini is the foundation of the next-generation AI models that Google Deepmind will be releasing throughout 2024; Gemini will be multi-modal and will be used internally across all of Alphabet’s products as well as offered externally via Vertex 

On Gemini, obviously, it’s an effort from our combined Google DeepMind team. I’m very excited at the progress there as we’re working through getting the model ready. To me, more importantly, we are just really laying the foundation of what I think of as the next-generation series of models we’ll be launching throughout 2024. The pace of innovation is extraordinarily impressive to see. We are creating it from the ground up to be multimodal, with highly efficient tool and API integrations and, more importantly, laying the platform to enable future innovations as well. And we are developing Gemini in a way that it is going to be available at various sizes and capabilities, and we’ll be using it immediately across all our products internally as well as bringing it out to both developers and cloud customers through Vertex. So I view it as a journey, and each generation is going to be better than the last. And we are definitely investing, and the early results are very promising.

Alphabet’s AI tools are very well received by advertisers and nearly 80% of advertisers use at least one AI-powered search ads product

Our AI tools are very well received. AI and gen AI are top of mind for everybody, really. There’s a ton of excitement, lots of questions about it. Many understand the value. Nearly 80% of our advertisers already use at least one AI-powered search ads product. And yes, we’re hearing a lot of good feedback on, number one, our ads AI Essentials, which are really helping to unlock the power of AI and set up for durable ROI growth on the advertiser side. Those are products like the foundation for data and measurement, things like the Google tag, consent mode and so on; and obviously, Search and PMax, we talked about it; and then all the gen AI products, all those different ones. So there’s a whole lot of interest in those products, yes.

Amazon (NASDAQ: AMZN)

Anthropic, a high-profile AI startup, recently chose AWS as its primary cloud provider, and Anthropic will work with Amazon to further develop Amazon’s Trainium (for training AI models) and Inferentia (for AI inference work) chips; Amazon’s management believes the collaboration with Anthropic will help Amazon bring further price performance advantages to Trainium and Inferentia

Recently, we announced the leading LLM maker Anthropic chose AWS as its primary cloud provider, and will use Trainium and Inferentia to build, train and deploy its future LLMs. As part of this partnership, AWS and Anthropic will collaborate on the future development of Trainium and Inferentia technology. We believe this collaboration will be helpful in continuing to accelerate the price performance advantages that Trainium and Inferentia deliver for customers.

Perplexity is another AI startup that chose to run their models with Trainium and Inferentia

We are also seeing success with generative AI start-ups like Perplexity AI, who chose to go all in with AWS, including running future models on Trainium and Inferentia.

Amazon’s management believes that Amazon’s Trainium and Inferentia chips are very attractive to people in the industry because they offer better price-performance characteristics and they can meet demand; Anthropic and Perplexity’s decisions to go with Trainium and Inferentia are statements to that effect

I would also say our chips, Trainium and Inferentia, as most people know, there’s a real shortage right now in the industry in chips. It’s really hard to get the amount of GPUs that everybody wants. And so it’s just another reason why Trainium and Inferentia are so attractive to people. They have better price performance characteristics than the other options out there, but also the fact that you can get access to them. And we’ve done, I think, a pretty good job providing supply there and ordering meaningfully in advance as well. And so you’re seeing very large LLM providers make big bets on those chips. I think Anthropic deciding to train their future LLM model on Trainium and using Inferentia as well is really a statement. And then you look at the really hot start-up Perplexity AI, who also just made a decision to do all their training and inference on top of Trainium and Inferentia. So those are two examples.

Amazon recently announced the general availability of Amazon Bedrock (AWS’s LLMs-as-a-service), which gives access to a variety of 3rd-party large language models (LLMs) as well as Amazon’s own LLM called Titan; Meta’s Llama-2 LLM will also be on Bedrock, the first time it is available through a fully-managed service

In the middle layer, which we think of as large language models as a service, we recently introduced general availability for Amazon Bedrock, which offers customers access to leading LLMs from third-party providers like Anthropic, Stability AI, Cohere and AI21, as well as from Amazon’s own LLM called Titan. Customers can take those models, customize them using their own data, but without leaking that data back into the generalized LLM, and have access to the same security, access control and features that they run the rest of their applications with in AWS, all through a managed service. In the last couple of months, we’ve announced the imminent addition of Meta’s Llama 2 model to Bedrock, the first time it’s being made available through a fully managed service.

Amazon’s management believes that Bedrock helps customers experiment rapidly with different LLMs and is the easiest way to build and scale enterprise-ready generative AI applications; customer reaction to Bedrock has been very positive

Also, through our expanded collaboration with Anthropic, customers will gain access to future Anthropic models through Bedrock, with exclusive early access to unique features, model customization and the ability to fine-tune the models. And Bedrock has added several new compelling features, including the ability to create agents which can be programmed to accomplish tasks like answering questions or automating workflows. In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change. Bedrock helps customers with this fluidity, allowing them to rapidly experiment with, and move between, model types and sizes, and enabling them to pick the right tool for the right job. The customer reaction to Bedrock has been very positive and general availability has buoyed that further. Bedrock is the easiest way to build and scale enterprise-ready generative AI applications and a real game changer for developers and companies trying to get value out of this new technology…

Bedrock’s ability to let customers conduct fast experiments is very useful because customers sometimes get surprised at the true costs of running certain AI models

Because what happens is you try a model, you test the model, you like the results of the model and then you plug it into your application, and what a lot of companies figure out quickly is that using the really large — the large models and the large sizes ends up often being more expensive than what they anticipated and what they want to spend on that application. And sometimes there’s too much latency in getting the answers as it shovels through the really large models. And so customers are experimenting with lots of different types of models and then different model sizes to get the cost and latency characteristics that they need for different use cases. It’s one of the things that I think is so useful about Bedrock: customers are trying so many variants right now, and to have a service that not only lets you leverage lots of third-party as well as Amazon large language models, but also lots of different sizes, and then makes it easy to move those workloads between them, is very advantageous.

Amazon CodeWhisperer, AWS’s coding companion, has a lot of early traction and has become more powerful recently by having the capability to be customised on a customer’s own code base (a first-of-its-kind feature)

Generative AI coding companion Amazon CodeWhisperer has gotten a lot of early traction and got a lot more powerful recently with the launch of its new customization capability. The #1 enterprise request for coding companions has been wanting these companions to be familiar with customers’ proprietary code bases, not just having code companions trained on open-source code. Companies want the equivalent of a long-time senior engineer who knows their code base well. That’s what CodeWhisperer just launched, another first of its kind out there in its current form, and customers are excited about it.

Amazon’s management believes that customers want to bring AI models to their data, not the other way around – and this is an advantage for AWS as customers’ data resides within AWS

It’s also worth remembering that customers want to bring the models to their data, not the other way around. And much of that data resides in AWS as the clear market segment leader in cloud infrastructure. 

There are many companies that are building generative AI apps on AWS and this number is growing fast

The number of companies building generative AI apps on AWS is substantial and growing very quickly, including Adidas, Booking.com, Bridgewater, Clariant, GoDaddy, LexisNexis, Merck, Royal Philips and United Airlines, to name a few.

Generative AI’s growth rate within AWS is very fast – even faster than Amazon’s management expected – and management believes that the absolute amount of generative AI business within AWS compares very favourably with other cloud providers

You can see it also in just the growth rate for us in generative AI, which is very fast. Again, I have seen a lot of different numbers publicly. It’s real hard to measure apples-to-apples. But in our best estimation, the amount of growth we’re seeing in the absolute amount of generative AI business we’re seeing compares very favorably with anything else I’ve seen externally.

Generative AI is already a pretty significant business for AWS, but it’s still early days

What I would tell you is that we have been surprised at the pace of growth in generative AI. Our generative AI business is growing very, very quickly, as I mentioned earlier. And almost by any measure, it’s a pretty significant business for us already. And yet I would also say that companies are still in the relatively early stages.

All of Amazon’s significant businesses are working on generative AI applications, with examples including using generative AI to (1) help consumers discover products, (2) forecast inventory in various locations, (3) help 3rd-party sellers create new product pages, (4) help advertisers with image generation for ads, and (5) improve Alexa

Beyond AWS, all of our significant businesses are working on generative AI applications to transform their customer experiences. There are too many for me to name on this call, but a few examples include: in our stores business, we’re using generative AI to help people better discover products they want and to more easily access the information needed to make decisions. We use generative AI models to forecast inventory we need in our various locations and to derive optimal last-mile transportation routes for drivers to employ. We’re also making it much easier for our third-party sellers to create new product pages by entering much less information and getting the models to do the rest. In advertising, we just launched a generative AI image generation tool, where all brands need to do is upload a product photo and description to quickly create unique lifestyle images that will help customers discover products they love. And in Alexa, we built a much more expansive LLM and previewed the early version of this. Apart from being a more intelligent version of herself, Alexa’s new conversational AI capabilities include the ability to make multiple requests at once as well as more natural and conversational requests without having to use specific phrases.

Amazon’s management still believes in the importance of building the world’s best personal assistant and they think Alexa could be one of these assistants

We continue to be convicted that the vision of being the world’s best personal assistant is a compelling and viable one and that Alexa has a good chance to be one of the long-term winners in this arena. 

While Amazon’s management is pulling back Amazon’s capital expenditure on other areas, they are increasing capital expenditure for AI-related infrastructure

For the full year 2023, we expect capital investments to be approximately $50 billion compared to $59 billion in 2022. We expect fulfillment and transportation CapEx to be down year-over-year, partially offset by increased infrastructure CapEx to support growth of our AWS business, including additional investments related to generative AI and large language model efforts.

Apple (NASDAQ: AAPL)

Apple’s management sees AI and machine learning as fundamental technologies to the company and they’re integrated in virtually every product that Apple ships

If you kind of zoom out and look at what we’ve done on AI and machine learning and how we’ve used it, we view AI and machine learning as fundamental technologies, and they’re integral to virtually every product that we ship. 

Apple’s AI-powered features include Personal Voice in iOS17, and fall detection, crash detection, and ECG on the Apple Watch; Apple’s management does not want to label Apple’s AI-powered features with “AI” – instead the features are labelled as consumer benefits

And so just recently, when we shipped iOS 17, it had features like Personal Voice and Live Voicemail. AI is at the heart of these features. And then you can go all the way to then life-saving features on the Watch and the phone like fall detection, crash detection, ECG on the watch. These would not be possible without AI. And so we don’t label them as such, if you will. We label them as to what their consumer benefit is, but the fundamental technology behind it is AI and machine learning.

Apple is investing in generative AI but management has no details to share yet

In terms of generative AI, we have — obviously, we have work going on. I’m not going to get into details about what it is because as you know, we really don’t do that. But you can bet that we’re investing, we’re investing quite a bit. We are going to do it responsibly. And it will — you will see product advancements over time where those technologies are at the heart of them.

Arista Networks (NYSE: ANET)

From the vantage point of Arista Networks’ management, Oracle has become an important AI data centre company

Our historic classification of our Cloud Titan customers has been based on an industry definition of customers with, or likely to attain, greater than 1 million installed compute servers. Looking ahead, we will combine Cloud and AI customer spend into one category called the Cloud and AI Titan sector. And as a result of this combination, Oracle OCI becomes a new member of the sector, while Apple shifts to cloud specialty providers…

…So I think OCI has become a meaningful top-tier cloud customer, and they belong in the Cloud Titan category, in addition to their AI investments as well. So for reasons of classification and definition, the change is very warranted. And yes, they happen to be a good customer of Arista; that’s nice as well.

Arista Networks’ management has observed that its large customers have different needs when it comes to AI and non-AI networking technologies 

During the past year, our Cloud Titan customers have been planning a different mix of AI networking and classic cloud networking for their compute and storage clusters.

Arista Networks’ management believes that the company’s recent deal with a public sector organisation to provide Ethernet networking technology for the organisation’s AI initiative is an example of why Ethernet is important in AI

Our next [ one ] showcases our expansion of Arista in the public sector with their AI initiative. This grant-funded project utilizes Arista’s simplified operational models with CloudVision. New AI workloads require high scale, high radix, high bandwidth and low latency, as well as granular visibility. This build-out of a single EVPN-VXLAN based 400-gig fabric is based on deep-buffer spines and underscores the importance of a lossless architecture for AI networking.

Arista Networks’ management is seeing its customers prioritise AI in their data centre spending right now, but demand for other forms of data centre-related spending will follow

We’ve always looked at the cloud network as a front end and a back end. And as we said last year, many of our cloud customers are favoring spending more on the back end with AI, which doesn’t mean they stop spending on the front end, but they’ve clearly prioritized and doubled down on AI this year. My guess is, as we look at the next few years, they’ll continue to double down on AI. But you cannot build an AI back-end cluster without thinking of the front end. So we’ll see a full cycle here; while today the focus is greatly on AI and the back end of the network, in the future, we expect to see more investments in the front end as well.

Arista Networks’ management sees AI networking as being dominated by Infiniband today- with some room for a combination of Infiniband and Ethernet – but they still believe that AI networking will trend toward Ethernet over time, with 2025 being a potential inflection point

Today, if I look at the 5 major designs for AI networking, one of them is still very InfiniBand dominated; all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So I think AI networking is going to become more and more favorable to Ethernet, particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec. You’re going to see more products based on UEC. You’re going to see more of a connection between the back end and the front end using IP as a singular protocol. And so we’re feeling very encouraged that, especially in 2025, there will be a lot of production rollout of the back end and, of course, the front end based on Ethernet.

Arista Networks’ management sees networking spend as contributing to 10%-15% of the total cost of an AI data centre 

Coming back to this networking spend versus the rest of the GPUs, et cetera, I would say it started to get higher and higher with 100-gig, 400-gig and 800-gig, where the optics and the switches are more than 10%, perhaps even 15%, in some cases 20%; a lot of it is governed by the cables and optics too. But the percentage hasn’t changed a lot in high-speed networking. In other words, it’s not too different between 10, 100, 200, 400 and 800. So you’ll continue to see that 10% to 15% range.

Arista Networks’ management sees diversified activity when it comes to the development of AI data centres

[Question]  And just what you’re seeing in terms of other people kind of building out some of these AI clusters, if you classify some of those customers as largely focused on back end today, and those represent opportunities going forward? Or just kind of what the discussion is outside of the Cloud Titans amongst some of these other guys that are building very large networks?

[Answer] The Tier 2 cloud providers are doing exactly what the Tier 1 is doing, just at a smaller scale. So the activity is out there. Many companies are trying to build these clusters, maybe not hundreds of thousands of GPUs but thousands of GPUs together, in their real estate, if they can get them. But the designs that we’re working on with them, the type of sort of features, fine-tuning, is actually very, very similar to the cloud, just at a smaller scale. So we’re very happy with that activity, and this is across the board. It’s very positive to see this in the ecosystem; it’s not limited to just 4 or 5 customers.

Arista Networks’ management is observing that data centre companies are facing a shortage of GPUs (graphics processing units) and they are trying to develop AI with smaller GPU clusters

I think they’re also waiting for GPUs like everyone else is. So there’s that common problem; we’re not the only one with lead time issues. But to clarify the comment on scale, Anshul and I are also seeing some very interesting enterprise projects at smaller scale. So a lot of customers are trying AI for small clusters, not too different from what we saw with HPC clusters back in the day.

Arista Networks’ management believes that good networking technology for AI requires not just good silicon, but the right software, so they are not concerned about Arista Networks’ suppliers moving up the stack

It’s not just the merchant silicon but how you can enable the merchant silicon with the right software and drivers, and this is an area where Arista really excels. If you just have chips, you can’t build the system. Our system-wide features, whether it’s dynamic load balancing or the latency analyzer, to really improve job completion time and deal with the frequent communication in generative AI, are also fundamentally important…

… [Question] So I think there was a mention on merchant silicon earlier in the Q&A. And one of your merchant silicon partners has actually moved up the stack towards the service provider routing. I’m just curious if there’s any intention on going after that piece if that chip is made available to you?

[Answer] I believe you are referring to the latest announcement from Broadcom on their 25.6T Jericho chip that was announced recently.

[Question] Yes, the Qumran3D.

[Answer] Qumran3D, exactly. So it’s the same family, same features. And as you know, we’ve been a great partner of Broadcom for a long time, and we will continue to build new products. This is not a new entry, so to speak. We’ve been building these products that can be used as switches or routers for a while, and that bandwidth just doubled, going to 25.6T now. So you can expect some products from us in the future with those variants as well. But really — nothing really changed…

…And the investment we have made in our routing stack over the last 10 years, I want to say, has just gotten better and stronger. Powering the Internet, powering the cloud, powering AI, these are hard problems. And they require thousands of engineers of investment to build the right VXLAN, BGP routing, EVPN, et cetera. So it’s not just a chip. It’s how we enable the chip to do these complicated routing algorithms.

AI is becoming a really important component of Arista Networks’ customers

We’re simply seeing that AI is going to become such an important component of all our Cloud Titans that it is now a combined vertical.

Datadog (NASDAQ: DDOG)

Datadog’s management is excited about generative AI and large language models and they believe that the adoption of AI will lead to additional growth in cloud workloads

Finally, we continue to be excited about the opportunity in generative AI and Large Language Models. First, we believe adopting NextGen AI will require the use of cloud and other modern technologies and drive additional growth in cloud workloads.

Datadog is building LLM observability products

So we are continuing to invest by integrating with more components at every layer of the new AI stack and by developing our own LLM observability products. 

Datadog’s management is seeing adoption of AI across many of its customers, but the activity is concentrated in AI-native customers

And while we see signs of AI adoption across large parts of our customer base, in the near term, we continue to see AI-related usage manifest itself most acutely with next-gen AI-native customers, who contributed about 2.5% of our ARR this quarter.

Datadog is adding value to its own platform using AI, with one example being Bits AI, Datadog’s AI assistant

Besides observing the AI stack, we also expect to keep adding value to our own platform using AI. Datadog’s unified platform and purely SaaS model, combined with strong multiproduct adoption by our customers, generates a large amount of deep and precise observability data. We believe combining AI capabilities with this broad data set will allow us to deliver differentiated value to customers. And we are working to productise differentiated value through recently announced capabilities such as our Bits AI assistant, AI-generated synthetic tests and AI-led error analysis and resolution, and we expect to deliver many more related innovations to customers over time.

Datadog’s management is seeing that AI-native customers are using Amazon’s AWS whereas the larger enterprises that are using AI are using Microsoft’s Azure

Interestingly enough, when we look at our cohort of customers that we consider to be AI-native and built largely on AI, across all AI providers, they tend to be on different clouds. What we see is that the majority of those companies actually have a lot of their usage on AWS today; the larger part of the usage, or the larger of these customers, are on Azure. So we see really several different adoption trends there that I think are interesting to the broader market.

Datadog’s management is seeing broad usage of AI across Datadog’s customers, but the customers are adopting AI only at low volumes

Whereas we see broad usage of AI functionality across the customer base, but at low volumes, and it corresponds to the fact that for most customers or most enterprises really, they’re still in the early stages of developing and shipping applications. So for now, the usage is concentrated among the model providers.

Datadog’s management sees a lot of opportunity for Datadog as AI usage proliferates – for example, management believes that the widespread use of AI will result in the creation of a lot of code and this code will need to be monitored

So on the DevSecOps side, I think it’s too early to tell how much revenue opportunity there is in the tooling specifically there. When you think of the whole spectrum of tools, the closer you get to the developer side, the harder it is to monetize, and the further you get towards operations and infrastructure, the easier it is to monetize. You can ship things that are very useful and very accretive to our platform, because they get you a lot of users, a lot of attention and a lot of stickiness, that are harder to monetize. So we’ll see where on the spectrum that is. What we know, though, is that broader generative AI up and down the stack, from the components themselves, the GPUs, all the way up to the models and the various things that are used to orchestrate them and store the data and move the data around, all of that is going to generate a lot of opportunity for us. We said right now it’s concentrated among the AI natives, largely model providers. But we see that it’s going to broaden and concern a lot more of our customers down the road…

…So in general, the more complexity there is, the more useful observability is, and the more the value shifts from writing code to actually understanding it and observing it. So to caricature, if you spend a whole year writing 5 lines of code that are really very deep, you actually know those 5 lines pretty well; maybe you don’t need observability because you understand exactly how they work and what’s going on with them. On the other hand, if, thanks to all the major advances of technology and AI, you can just very quickly generate thousands of lines of code, ship them and start operating them, you actually have no idea how these work and what they do. And you need a lot of tooling and observability to actually understand that and keep driving that and secure it and do everything you need to do with it over time. So we think that, overall, these increases in productivity are going to favor observability.

Datadog’s management is also trying to guess how transformative AI will be, but there are signs that AI’s impact will be truly huge

In terms of the future growth of AI, look, I think like everyone, we’re trying to guess how transformative it’s going to be. It looks like it’s going to be pretty big, if you judge just from how much of that technology we are adopting internally and how much of a productivity impact it seems to be having.

AI-related use cases are still just a small fraction of the overall usage of Datadog’s products, but Datadog’s management thinks that AI will drive a lot of the company’s growth in the future 

So again, today, we only see a tiny bit of it, which is early adoption by model providers and a lot of companies that are trying to scale up and experiment and figure out how it applies to their businesses and what they can ship to use the technology. But we think it’s going to drive a lot of growth in the years to come.

Datadog’s management can’t tell when Datadog’s broader customer base will start ramping up AI workloads but they are experimenting; most of the innovation happening right now is concentrated among the model providers

[Question] Olivier, you called out the 2.5 points from AI-native customers a few times, but you’ve also said that the broader customer base should start adding AI workloads to your platform over time. When do you think that actually takes place and the broader customer base starts to impact that AI growth in more earnest?

[Answer] We don’t know. And I think it’s too early to tell. For one part, there’s some uncertainty in terms of — these customers are beginning to figure out what it is they are going to ship to their own customers. I think everybody is trying to learn that right now and experiment with it. But the other part is also that right now, the innovation is largely concentrated among the model providers. And so it’s rational right now for most customers to rely on those instead of deploying their own infrastructure. Again, we think it’s likely going to change. We see a lot of demand and interest in other ways to host models and run models and all those things like that. But today, these are the trends of the market, basically.

Etsy (NASDAQ: ETSY)

Etsy’s management is improving the company’s search function by combining humans and machine learning technology to better identify the quality of each product listing on the Etsy platform

We’re moving beyond relevance to the next frontier of search focused on better identifying the quality of each Etsy listing, utilizing humans and ML technology so that from a highly relevant result set, we bring the very best of Etsy to the top, personalized to what we understand of your tastes and preferences. For example, from the start of the year, we’re tracking to a ninefold increase in the number of human-curated listings on Etsy to over 1.5 million listings by year-end. We’re also utilizing ML models designed to determine the visual appeal of items and incorporating that information into our search algorithms. 

Etsy’s management is using generative AI to improve the Etsy search-experience when buyers enter open-ended queries, which helps build purchase-frequency

There’s also a huge opportunity to evolve the Etsy experience so that we show buyers a more diverse set of options when they search for open-ended head query items such as back-to-school. On the left of this slide, you can see an example of how a search for back-to-school items looks on Etsy. We generally show multiple very similar versions of customized pencils, stickers, lawn signs and so on, all mixed together. This is suboptimal as it offers buyers only a few main ideas on the first page of search and requires a ton of cognitive load to distinguish between virtually identical items. We’ve recently launched a variety of experiments with the help of Gen AI to evolve these types of head query searches. As we move into 2024, when a buyer searches for broad queries, we expect to be able to show a far more diverse and compelling set of ideas, all beautifully curated. By organizing search results into a number of ideas for you that are truly different and helping to elevate the very best items within each of these ideas, we can take a lot of the hard work out of finding exactly the perfect item, and help build frequency as we highlight the wide range of merchandise available on Etsy.

Etsy’s management is using machine learning to identify product-listings that are not conforming to the company’s product policies, and listing-takedowns are already up 140% year-on-year 

We’ve hired a lot of people, and we have also been investing a lot in machine learning, and machine learning is really helping us to identify, among the 120 million listings on Etsy, those that may not conform with our policies. Takedowns are up 140% year-over-year.

Fiverr (NYSE: FVRR)

Fiverr’s management has developed Fiverr Neo, a generative AI tool that helps customers scope their projects better and match them with suitable freelance talent, just like a human recruiter would, just better; management believes that Fiverr Neo will help save customers time when they are looking for freelance talent

The vision for Fiverr Neo is quite wild – we imagine Neo will serve as a personalized recruiting expert that can help our customers more accurately scope their projects and get matched with freelance talent, just like a human recruiter, only with more data and more brain power. What we have done so far is leverage the existing LLM engines to allow customers to express their project needs in natural language, which Neo will synthesize and define the scope before matching the client with a short list of choices pulled from the entire Fiverr freelancer database. It’s a substantial step forward from the existing experience and streamlines the time the customer needs to make an informed decision.

Fiverr’s management used a combination of Fiverr’s own software and LLMs from other companies to build Fiverr Neo

So there’s a lot of learning as we build this product. And what we’re doing is really a hybrid of technologies. Some of them are being developed by us; some are off the shelf, from most of the leading companies that are developing LLMs, which have partnered with us. And we’re putting this to the maximum. I think a lot of these systems are not yet optimized for large scale and high performance, but we find our own ways of developing a lot of this technology to provide a very smooth experience to our customers.

Fiverr Neo is still new, but users are already experiencing more accurate matches

In terms of Fiverr Neo, we’re very pleased with the rollout. Obviously, it is a very, very young product, but we’re seeing over 100,000 users trying the product. And what we’re seeing from their experience is that we’re able to provide more accurate matches, which is basically what we wanted to do, and higher engagement and satisfaction levels, which we’re very happy with, and the beginning of repeat usage of the product.

Fiverr’s management thinks that AI has a positive impact on the product categories that Fiverr can introduce to its marketplace and management is ensuring that Fiverr’s catalog will contain any new skills that the AI-age will require; management thinks that a lot of AI hype at the beginning of the year has died down and the world is looking for killer AI applications

So I did address this also in how we think about next year, and the fact that AI both impacts the efficiency of how we work and allows us to do pretty incredible things in our product. It also has a positive impact on the categories that we can introduce. So again, we’re not getting into specific category breakdown. But what we’re seeing on the buyer side is that, where we’ve introduced these categories, these categories continue growing. I think that a lot of the hype that surrounded AI in the beginning of the year has subsided, and right now, it’s really about looking for the killer applications that could be developed with AI, and we’re developing some of them and our customers are as well. So these are definitely areas where we continue seeing growth, but not just that: we continue investing in the catalog side to ensure that the new types of skills that pop up are going to be addressed on the Fiverr marketplace.

Mastercard (NYSE: MA)

Mastercard’s management is using AI to improve the company’s fraud-related solutions and has signed agreements in Argentina, Saudi Arabia, and Nigeria in this area

AI also continues to play a critical role powering our products and fueling our network intelligence. We’re scaling our AI-powered transaction fraud monitoring solution, which delivers real-time predictive scores based on a unique blend of customer and network level insights. This powerful solution gives our customers the ability to take preventive action before the transaction is authorized. This quarter alone, we signed agreements in Argentina, Saudi Arabia and Nigeria with financial institutions and fintechs who will benefit from early fraud detection and with merchants who will experience less friction and higher approval rates.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about AI and how it can help MercadoLibre improve the user experience and its business operations

As you know, we don’t guide, but there are many exciting things going on, particularly, obviously, AI. That hopefully will enable us to provide our users a better experience, enable us to launch innovative ideas, and also scale and gain efficiencies, whether it is in customer service, or whether it is in fraud prevention or whether it is in the way our developers, 15,000 developers, go about developing and performing quality control, et cetera. So obviously, looking forward for the next 3 years, I think that’s a key thing to look into.

MercadoLibre’s management is working on using AI to improve the company’s product-search function and they are happy with the progress so far 

Last question in terms of AI and search, we are working on that. I mean we are putting a lot of effort into building solutions around AI. I think we don’t have much to disclose as of now, but search, reviews, questions and answers, buy box and products, as Marcos was saying, copilot for our developers. We’re looking at the broad range of AI uses for MercadoLibre to boost consumer demand and efficiency. And we’re happy with the progress that we have so far, but not much to be said yet.

MercadoLibre’s management has been using AI for many years in fraud prevention and credit scoring for the company’s services

We have been using AI for a long time now, for many, many years, both in terms of fraud prevention and credit scoring. Both instances are pretty much use cases which are ideal for AI because, in the case of fraud prevention, we have millions of transactions every day with a clear outcome, either fraud or not fraud. So with the right variables, we can build a very strong model that is predictive and gives us really best-in-class fraud prevention. And with that knowledge, and given the experience we have been building on credits, we have also built our credit scoring models leveraging AI.
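
The setup described in the quote (millions of labelled transactions, “either fraud or not fraud”, plus “the right variables”) is classic supervised learning. Here is a minimal, hypothetical sketch of that shape; the features, data, and the choice of a logistic-regression classifier are illustrative assumptions of ours, not MercadoLibre’s disclosed design.

```python
# Minimal sketch of the supervised setup described on the call: labelled
# transactions in, a fraud-probability score out. All features and data
# below are toy stand-ins; MercadoLibre's actual variables are not public.
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, is_new_device, country_mismatch]; label 1 = fraud.
X = [
    [10.0, 0, 0], [950.0, 1, 1], [35.5, 0, 0], [780.0, 1, 0],
    [12.0, 0, 1], [999.0, 1, 1], [55.0, 0, 0], [640.0, 0, 1],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Score a new transaction before it is approved.
fraud_probability = model.predict_proba([[900.0, 1, 1]])[0][1]
print(f"Estimated fraud probability: {fraud_probability:.2f}")
```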

Meta Platforms (NASDAQ: META)

The next-generation Ray-Ban Meta smart glasses have embedded AI

The next generation of Ray-Ban Meta smart glasses, which are the first smart glasses with our Meta AI built in.

Meta Platforms’ management thinks glasses are an ideal form-factor for an AI device as it can see exactly what you see and hear what you hear

And in many ways, glasses are the ideal form factor for an AI device because they enable your AI assistant to see what you see and hear what you hear. 

Llama 2 is now the leading open source AI model with >30 million downloads last month

We’re also building foundation models like Llama 2, which we believe is now the leading open source model with more than 30 million Llama downloads last month.

Beyond generative AI, Meta Platforms’ management is using recommendation AI systems for the company’s Feeds, Reels, ads, and integrity systems and these AI systems are very important to the company; AI feed recommendations led to increases in time spent on Facebook (7%) and Instagram (6%)

Beyond that, there was also a different set of sophisticated recommendation AI systems that powers our Feeds, Reels, ads and integrity systems. And this technology has less hype right now than generative AI but it is also very important and improving very quickly. AI-driven feed recommendations continue to grow their impact on incremental engagement. This year alone, we’ve seen a 7% increase in time spent on Facebook and a 6% increase on Instagram as a result of recommendation improvements. 

Meta Platforms’ AI tools for advertisers have helped drive its Advantage+ advertising product to reach a US$10 billion revenue run-rate, with more than 50% of the company’s advertisers using Advantage+ creative tools

Our AI tools for advertisers are also driving results with Advantage+ shopping campaigns reaching a $10 billion run rate and more than half of our advertisers using our Advantage+ creative tools to optimize images and text and their ads creative.

AI-recommended content has become increasingly incremental to engagement on Meta Platforms’ properties

AI-recommended content from unconnected accounts and feed continues to become increasingly incremental to engagement, including in the U.S. and Canada. These gains are being driven by improvements to our recommendation systems, and we see additional opportunities to advance our systems even further in the future as we deploy more advanced models.

Meta Platforms’ management believes that the company’s Business AIs can easily help businesses set up AIs to communicate with consumers at very low cost, which is important in developed economies where cost of labour is high (businesses in developing economies tend to hire humans to communicate with consumers)

Now I think that this is going to be a really big opportunity for our new Business AIs that I talked about earlier that we hope will enable any business to easily set up an AI that people can message to help with commerce and support. Today, most commerce and messaging is in countries where the cost of labor is low enough that it makes sense for businesses to have people corresponding with customers over text. And in those countries like Thailand or Vietnam, there’s a huge amount of commerce that happens in this way. But in lots of parts of the world, the cost of labor is too expensive for this to be viable. But with business AIs, we have the opportunity to bring down that cost and expand commerce and messaging into larger economies across the world. So making business AIs work for more businesses is going to be an important focus for us into 2024.

Meta Platforms’ management has started testing the company’s AI capabilities with a few partners in business messaging

We’ve recently started testing AI capabilities with a few partners and we’ll take our time to get the experience right, but we believe this will be a big unlock for business messaging in the future.

Meta Platforms’ management still believes in the benefits of open-sourcing Meta’s AI models: It increases adoption (which benefits the company as the security features and cost-efficiency of the models improves) and talent is more attracted to Meta Platforms

We have a pretty long history of open sourcing parts of our infrastructure that are not kind of the direct product code. And a lot of the reason why we do this is because it increases adoption and creates a standard around the industry, which often drives forward innovation faster so we benefit and our products benefit as well as there’s more scrutiny on kind of security and safety-related things so we think that there’s a benefit there.

And sometimes, more companies running models or infrastructure can make it run more efficiently, which helps reduce our costs as well, which is something that we’ve seen with open compute. So I think that there’s a good chance that, that happens here over time. And obviously, our CapEx expenses are a big driver of our costs, so any aid in innovating on efficiency is sort of a big thing there.

The other piece is just that over time with our AI efforts, we’ve tried to distinguish ourselves as being a place that does work that will be shared with the industry and that attracts a lot of the best people to come work here. So a lot of people want to go to the place to work where their work is going to touch most people. One way to do that is by building products that billions of people use. But if you’re really a focused engineer or researcher in this area, you also want to build the thing that’s going to be the standard for the industry. So that’s pretty exciting and it helps us do leading work.

Meta Platforms’ management thinks the AI characters that the company introduced recently could lead to a new kind of medium and art form and ultimately drive increasing engagement for users of the company’s social apps

We’re designing these to make it so that they can help facilitate and encourage interactions between people and make things more fun by making it so you can drop in some of these AIs into group chats and things like that just to make the experiences more engaging. So this should be incremental and create additional engagement. The AIs also have profiles in Instagram and Facebook and can produce content, and over time, going to be able to interact with each other. And I think that’s going to be an interesting dynamic and an interesting, almost a new kind of medium and art form. So I think that will be an interesting vector for increasing engagement and entertainment as well.

Meta Platforms’ management thinks that generative AI is a really exciting technology that changes everything; although it’s hard to predict generative AI’s impact on how people use Meta’s services, they still think it’s worth investing in

In terms of how big this is going to be, it’s hard to predict because I don’t think that anyone has built what we’re building here. I mean, there’s some analogy like what OpenAI is doing with ChatGPT, but that’s pretty different from what we’re trying to do. Maybe the Meta AI part of what we’re doing overlaps with the type of work that they’re doing, but the AI characters piece, there’s a consumer part of that, there’s a business part, there’s a creators part. I’m just not sure that anyone else is doing this. And when we’re working on things like Stories and Reels, there were some market precedents before that. Here, there’s technology which is extremely exciting. But I think part of what leading in an area and developing a new thing means is you don’t quite know how big it’s going to be. But what I predict is that I do think that the fundamental technology around generative AI is going to transform meaningfully how people use each of the different apps that we build…

…So I think you’re basically seeing that there are going to be — this is a very broad and exciting technology. And frankly, I think that this is partially why working in the technology industry is so awesome, right, is that every once in a while, something comes along like this, that like changes everything and just makes everything a lot better and your ability to just be creative and kind of rethink the things that you’re doing to be better for all the people you serve…

…But yes, it’s hard sitting here now to be able to predict what the metrics are going to be around, like what’s the balance of messaging between AIs and people, or what the balance in Feeds between AI content and people content, or anything like that. But I mean, I’m highly confident that this is going to be a thing and I think it’s worth investing in.

Meta Platforms’ management believes that generative AI will have a big impact on the digital advertising industry

It’s going to change advertising in a big way. It’s going to make it so much easier to run ads. Businesses that basically before would have had to create their own creative or images now won’t have to do that. They’ll be able to test more versions of creative, whether it’s images or eventually video or text. That’s really exciting, especially when paired with the recommendation AI.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is making AI real for everyone through the introduction of Copilots

With Copilots, we are making the age of AI real for people and businesses everywhere. We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.

Microsoft’s management believes that Azure has the best AI infrastructure for both training and inference

We have the most comprehensive cloud footprint with more than 60 data center regions worldwide as well as the best AI infrastructure for both training and inference. And we also have our AI services deployed in more regions than any other cloud provider.

Azure AI provides access to models from OpenAI and open-sourced models (including Meta’s) and 18,000 organisations now use Azure OpenAI

Azure AI provides access to best-in-class frontier models from OpenAI and open-source models, including our own as well as from Meta and Hugging Face, which customers can use to build their own AI apps while meeting specific cost, latency and performance needs. Because of our overall differentiation, more than 18,000 organizations now use Azure OpenAI service, including new to Azure customers.

GitHub Copilot increases developer productivity by up to 55%; there are more than 1 million paid Copilot users and more than 37,000 organisations that subscribe to Copilot for business (up 40% sequentially)

With GitHub Copilot, we are increasing developer productivity by up to 55% while helping them stay in the flow and bringing the joy back to coding. We have over 1 million paid Copilot users and more than 37,000 organizations that subscribe to Copilot for business, up 40% quarter-over-quarter, with significant traction outside the United States.

Microsoft’s management is using AI to improve the healthcare industry: Dragon Ambient Experience (from the Nuance acquisition) has been used in more than 10 million patient interactions to-date to automatically document the interactions, and DAX Copilot can draft clinical notes in seconds, saving physicians up to 40 minutes of documentation time daily

In health care, our Dragon Ambient Experience solution helps clinicians automatically document patient interactions at the point of care. It’s been used across more than 10 million interactions to date. And with DAX Copilot, we are applying generative models to draft high-quality clinical notes in seconds, increasing physician productivity and reducing burnout. For example, Atrium Health, a leading provider in the Southeastern United States, credits DAX Copilot with helping its physicians each save up to 40 minutes per day in documentation time.

Microsoft’s management has infused Copilot across Microsoft’s work-productivity products and tens of thousands of users are already using Copilot in early access

Copilot is your everyday AI assistant, helping you be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams. Tens of thousands of employees at customers like Bayer, KPMG, Mayo Clinic, Suncorp and Visa, including 40% of the Fortune 100, are using Copilot as part of our early access program.

Users find Copilot amazing and have enjoyed similar productivity gains as developers did with GitHub Copilot

Customers tell us that once they use Copilot, they can’t imagine work without it, and we are excited to make it generally available for enterprise customers next week. This quarter, we also introduced a new hero experience in Copilot, helping employees tap into their entire universe of work, data and knowledge using chat. And the new Copilot Lab helps employees build their own work habits for this era of AI by helping them turn good prompts into great ones…

…And in fact, the interesting thing is it’s not any one tool, right? The feedback is very clear that it’s the all-up experience. You just keep hitting the Copilot button across every surface, right, whether it’s in Word to create documents, in Excel to do analysis or PowerPoint or Outlook or Teams. Like clearly, the Teams Meeting recap, which is an intelligent recap, right? It’s not just a dumb transcript. It’s like having a knowledge base of all your meetings that you can query and add to, essentially, the knowledge base of your enterprise. And so we are seeing broad usage across, and the interesting thing is by different functions, whether it’s in finance or in sales, by roles. We have seen productivity gains like we saw with developers in GitHub Copilot.

At the end of the day, Microsoft management is still grounded about the rate of adoption of Copilot in Office, since it is an enterprise product

And of course, this is an enterprise product. I mean at the end of the day, we are grounded on enterprise cycle times in terms of adoption and ramp. And it’s incrementally priced. So therefore, that all will apply still. But at least for something completely new, to have this level of usage already and this level of excitement is something we’re very, very pleased with.

Microsoft’s management recently introduced Security Copilot, the world’s first generative AI cybersecurity product, and it is seeing high demand

 We see high demand for Security Copilot, the industry’s first and most advanced generative AI product, which is now seamlessly integrated with Microsoft 365 Defender. Dozens of organizations, including Bridgewater, Fidelity National Financial and Government of Alberta, have been using Copilot in preview and early feedback has been positive.

Bing users have engaged in over 1.9 billion chats and Bing has a new personalised answers feature, and better support for DALL-E-3 (more than 1.8 billion images have been created with DALL-E-3 to-date)

Bing users have engaged in more than 1.9 billion chats, and Microsoft Edge has now gained share for 10 consecutive quarters. This quarter, we introduced new personalized answers as well as support for DALL-E 3, helping people get more relevant answers and to create incredibly realistic images. More than 1.8 billion images have been created to date.

Bing is now incorporated into Meta’s AI chat experience

We’re also expanding to new end points, bringing Bing to Meta’s AI chat experience in order to provide more up-to-date answers as well as access to real-time search information. 

Azure saw higher-than-expected AI consumption

In Azure, as expected, the optimization trends were similar to Q4. Higher-than-expected AI consumption contributed to revenue growth in Azure.

Microsoft’s management is seeing new AI project starts in Azure, and these bring other cloud projects

Given our leadership position, we are seeing complete new project starts, which are AI projects. And as you know, AI projects are not just about AI meters. They have lots of other cloud meters as well. So that sort of gives you one side of what’s happening in terms of enterprise.

Microsoft’s management believes the company has very high operating leverage with AI, since the company is using one model across its entire stack of products, and this operating leverage goes down to the silicon level

Yes, it is true that the approach we have taken is a full-stack approach, all the way from whether it’s ChatGPT or Bing chat or all our Copilots, all sharing the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we used, which we trained, and then the one model that we are doing inferencing on at scale. And that advantage sort of trickles down all the way to both utilization internally and utilization by third parties. And also over time, you can see that sort of stack optimization all the way to the silicon because the abstraction layer to which the developers are writing is much higher up than low-level kernels, if you will. So therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and the Copilot stack all available. That doesn’t mean we don’t have people doing training for open-source models or proprietary models. We also have a bunch of open-source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there’s all kinds of ways people use it, but the thing is we have scale leverage of one large model that was trained and one large model that’s been used for inference across all our first-party SaaS apps as well as our API in our Azure AI service…

…In addition, what Satya mentioned earlier in a question, and I just want to take every chance to reiterate it, if you have a consistent infrastructure from the platform all the way up through its layers that every capital dollar we spend, if we optimize revenue against it, we will have great leverage because wherever demand shows up in the layers, whether it’s at the SaaS layer, whether it’s at the infrastructure layer, whether it’s for training workloads, we’re able to quickly put our infrastructure to work generating revenue on our BEAM workloads. I mean I should have mentioned all the consumer workloads use the same frame.

Microsoft’s management believes that having the discipline to concentrate Microsoft’s tech stack and capital spend is important because the costs of developing and using AI can run up really quickly

I think, is very important for us to be very disciplined on both, I’ll call it, our tech stack as well as our capital spend, all to be concentrated. The lesson learned from the cloud side is this: we’re not running a conglomerate of different businesses. It’s all one tech stack up and down Microsoft’s portfolio. And that I think is going to be very important because that discipline, given what the spend will look like for this AI transition, any business that’s not disciplined about their capital spend accruing across all their businesses could run into trouble.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that its chips, together with the Infiniband networking technology, are the reference architecture for AI

NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures.

Inferencing is now a major workload for Nvidia chips

Inferencing is now a major workload for NVIDIA AI compute.

Nvidia’s management is seeing major consumer internet companies ramping up generative AI deployment, and enterprise software companies starting to

Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistance with their pipelines.

Recent US export controls have affected Nvidia’s chip exports to China, Vietnam, and parts of the Middle East

Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe it will be more than offset by strong growth in other regions.

Many countries are keen to invest in sovereign AI infrastructure, and Nvidia’s management is helping them do so as it is a multibillion-dollar economic opportunity

Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystem. For example, we are working with the Indian government and its largest tech companies, including Infosys, Reliance and Tata, to boost their sovereign AI infrastructure. And French private cloud provider Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years…

…The U.K. government announced it will build one of the world’s fastest AI supercomputers, called Isambard-AI, with almost 5,500 Grace Hopper Superchips. The Jülich Supercomputing Centre in Germany also announced that it will build its next-generation AI supercomputer with close to 24,000 Grace Hopper Superchips and Quantum-2 InfiniBand, making it the world’s most powerful AI supercomputer with over 90 exaflops of AI performance…

…You’re seeing sovereign AI infrastructures: countries that now recognize that they have to utilize their own data, keep their own data, keep their own culture, process that data and develop their own AI.

Nvidia has a new chip with inference speeds that are 2x faster than the company’s flagship H100 GPUs (graphics processing units)

We also announced the latest member of the Hopper family, the H200, which will be the first GPU to offer HBM3E, faster, larger memory to further accelerate generative AI and LLMs. It moves inference speed up to 2x compared to H100 GPUs for running LLMs like [indiscernible].

Major cloud computing services providers will soon begin to offer instances for Nvidia’s next-generation GPU, the H200  

Compared to the H100, H200 delivers an 18x performance increase for inferencing models like GPT-3, allowing customers to move to larger models with no increase in latency. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.

Nvidia’s management is seeing very strong demand for InfiniBand; management believes that InfiniBand is critical in the deployment of LLMs (large language models); management believes that the vast majority of large-scale AI factories have standardised on InfiniBand because of InfiniBand’s vastly superior value proposition compared to Ethernet (data-traffic patterns are very different for AI and for typical hyperscale cloud environments)

Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gain the scale and performance needed for training LLMs. Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe…

…The vast majority of the dedicated large-scale AI factories standardized on InfiniBand. And the reason for that is really because of its data rate and not only just the latency, but the way that it moves traffic around the network is really important. The way that you process AI versus a multi-tenant hyperscale Ethernet environment: the traffic pattern is just radically different. And with InfiniBand and with software-defined networks, we could do congestion control, adaptive routing, performance isolation and noise isolation, not to mention, of course, the data rate, the low latency and the very low overhead that’s a natural part of InfiniBand.

And so InfiniBand is not so much just a network. It’s also a computing fabric. We put a lot of software-defined capabilities into the fabric, including computation. We do floating point calculations and computation right on the switch and right in the fabric itself. And so that’s the reason why the difference between InfiniBand and Ethernet for AI factories is so dramatic. The difference is profound because if you’ve just invested in a $2 billion infrastructure for AI factories, a 20%, 25%, 30% difference in overall effectiveness, especially as you scale up, is measured in hundreds of millions of dollars of value. And if you were renting that infrastructure over the course of 4 or 5 years, it really adds up. And so InfiniBand’s value proposition is undeniable for AI factories.
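
The arithmetic behind that value claim is worth spelling out. Below is a back-of-envelope sketch; the only figures taken from the quote are the $2 billion infrastructure cost and the 20% to 30% effectiveness range, and the assumption that a cluster’s value scales roughly linearly with fabric effectiveness is ours, for illustration.

```python
# Back-of-envelope sketch of the economics described above: on a fixed-cost
# AI factory, a 20%-30% gap in overall effectiveness between network fabrics
# is worth hundreds of millions of dollars, assuming value scales with output.
INFRA_COST_USD = 2_000_000_000  # the "$2 billion infrastructure" from the quote

for gap in (0.20, 0.25, 0.30):  # the 20%, 25%, 30% range from the quote
    value_of_gap = INFRA_COST_USD * gap
    print(f"{gap:.0%} effectiveness gap ~ ${value_of_gap / 1e6:,.0f}M of value")
```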

Nvidia’s management is expanding the company into Ethernet and Nvidia’s Ethernet technology performs better than traditional offerings; management’s go-to-market strategy for Nvidia’s new Ethernet technology is to collaborate with the company’s large enterprise partners

We are expanding NVIDIA networking into the Ethernet space. Our new Spectrum-X end-to-end Ethernet offering, with technologies purpose-built for AI, will be available in Q1 next year. We have support from leading OEMs, including Dell, HP and Lenovo. Spectrum-X can achieve 1.6x higher networking performance for AI communication compared to traditional Ethernet offerings…

…And a company’s own AI, for all of its employees, doesn’t have to be as high performance as the AI factories we use to train the models. And so we would like the AI to be able to run in an Ethernet environment. And so what we’ve done is we invented this new platform that extends Ethernet. It doesn’t replace Ethernet, it’s 100% compliant with Ethernet, and it’s optimized for east-west traffic, which is where the computing fabric is. It adds to Ethernet an end-to-end solution with BlueField as well as our Spectrum switch that allows us to perform some of the capabilities that we have in InfiniBand, not all but some, and we achieved excellent results.

And the way we go to market is we go to market with our large enterprise partners who already offer our computing solution. And so Dell, HP and Lenovo have the NVIDIA AI stack, the NVIDIA enterprise software stack. And now they integrate with BlueField, as well as bundle and take to market their Spectrum switch, and they’ll be able to offer enterprise customers all over the world, with their vast sales force and vast network of resellers, a fully integrated, if you will, fully optimized, end-to-end AI solution. And so that’s basically bringing AI to Ethernet for the world’s enterprises.

Nvidia’s management believes that there’s a new class of data centres emerging, which they have named “AI factories”; these AI factories are being built all across the world

This is the traditional data centers that you were just talking about, where we represent about 1/3 of that. But there’s a new class of data centers. And this new class of data centers, unlike the data centers of the past, where you have a lot of applications running used by a great many people that are different tenants that are using the same infrastructure and the data center stores a lot of files, these new data centers run very few applications, if not one application, used by basically one tenant. And it processes data. It trains models and it generates tokens, it generates AI. And we call these new data centers AI factories. We’re seeing AI factories being built out everywhere, in just about every country.

Nvidia’s management is seeing the appearance of CSPs (cloud services providers) that specialise only in GPUs and processing AI

You’re seeing GPU-specialized CSPs cropping up all over the world, and they’re dedicated to doing really one thing, which is processing AI.

Nvidia’s management is seeing an AI adoption-wave moving from startups and CSPs to consumer internet companies, and then to enterprise software companies, and then to industrial companies

And so we’re seeing the waves of generative AI starting from the start-ups and CSPs, moving to consumer Internet companies, moving to enterprise software platforms, moving to enterprise companies. And then, ultimately, one of the areas that you guys have seen us spend a lot of energy on has to do with industrial generative AI. This is where NVIDIA AI and NVIDIA Omniverse come together. And that is really, really exciting work. And so I think we’re at the beginning of a basically across-the-board industrial transition to generative AI, to accelerated computing. This is going to affect every company, every industry, every country.

Nvidia’s management believes that Nvidia’s AI Enterprise service – where the company helps its customers develop custom AI models that the customers are then free to monetise in whatever manner they deem fit – will become a very large business for Nvidia

Our monetization model is that with each one of our partners, they rent a sandbox on DGX Cloud where we work together. They bring their data. They bring their domain expertise. We’ve got our researchers and engineers. We help them build their custom AI. We help them make that custom AI incredible. Then that custom AI becomes theirs, and they deploy it on a runtime that is enterprise-grade, enterprise-optimized and performance-optimized, and runs across everything NVIDIA. We have a giant installed base in the cloud, on-prem, anywhere. And it’s secure, securely patched, constantly patched and optimized and supported. And we call that NVIDIA AI Enterprise.

NVIDIA AI Enterprise is $4,500 per GPU per year. That’s our business model. Our business model is basically a license. Our customers, with that basic license, can then build their monetization model on top. In a lot of ways we’re wholesale, they become retail. They could have a per-subscription license. They could charge per instance, or they could charge per usage. There’s a lot of different ways that they could take to create their own business model, but ours is basically like a software license, like an operating system. And so our business model is: we help you create your custom models, you run those custom models on NVIDIA AI Enterprise. And it’s off to a great start. NVIDIA AI Enterprise is going to be a very large business for us.

PayPal (NASDAQ: PYPL)

PayPal’s management wants to use AI and the data collected from the company’s Rewards program to drive a shopping recommendation engine

For example, our PayPal Cashback Mastercard provides 3% cash back on PayPal purchases as well as cash back on all other purchases. Customers with this card make, on average, 56 more purchases with PayPal in the year after they adopt the product than they did the year before. Over 25 million consumers have used PayPal Rewards in the past 12 months, and we’ve put more than $200 million back in our customers’ pockets with cashback and savings during that time. But even more interesting, through our Rewards product, we have an active database of over 300 million SKUs of inventory from our merchant partners. These data points can help us use AI to power a robust shopping recommendation engine, to provide more relevant rewards and savings back to our customers.
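
PayPal has not said how such an engine would be built, but a SKU-level catalog plus purchase history is the classic input to a content-based recommender. The sketch below is purely illustrative: the catalog, the category tags and the overlap scoring are invented to show the shape of the idea, not PayPal’s design.

```python
# Hypothetical sketch of a SKU-based shopping recommender: score unpurchased
# SKUs by how well their category tags overlap with the user's purchase
# history. The catalog is a toy stand-in for the 300-million-SKU database.
from collections import Counter

CATALOG = {
    "sku-001": {"shoes", "running"},
    "sku-002": {"shoes", "casual"},
    "sku-003": {"electronics", "audio"},
    "sku-004": {"running", "apparel"},
}

def recommend(purchase_history: list[str], k: int = 2) -> list[str]:
    """Rank SKUs the user hasn't bought by overlap with their category profile."""
    profile = Counter()
    for sku in purchase_history:
        profile.update(CATALOG.get(sku, set()))
    scored = [
        (sum(profile[tag] for tag in tags), sku)
        for sku, tags in CATALOG.items()
        if sku not in purchase_history
    ]
    return [sku for score, sku in sorted(scored, reverse=True)[:k] if score > 0]

print(recommend(["sku-001"]))  # ['sku-004', 'sku-002']: each shares a tag with the purchase
```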

PayPal’s management believes that machine learning and generative AI can be applied to the company’s data to improve fraud protection and better connect merchants and consumers

 Our machine learning capabilities combine hundreds of risk and fraud models with dozens of real-time analytics engines and petabytes of payments data to generate insights by learning users’ behaviors, relationships, interests and spending habits. This scale gives us a very unique advantage in the market. Our ability to create meaningful profiles with the help of AI is exceptionally promising. You will see us using our data and the advances in generative AI in responsible ways to further connect our merchants and consumers together in a tight flywheel.

Shopify (NASDAQ: SHOP)

Shopify’s management has integrated Shopify Magic – the company’s suite of free AI features – across its products

At Shopify, we believe AI is for everyone, and its capabilities should be captured and embedded across the entirety of a business. We’ve integrated Shopify Magic, our suite of free AI-enabled features, across our products and workflows.

Shopify Magic can help merchants craft personalised pages and content, and is designed specifically for commerce

Shopify Magic combines the power of Shopify and a merchant’s own data to make it work better for them, whether it’s enabling unique personalized page and content generation, like instantly crafting an About Us page in your brand voice and tone, or building a custom page to showcase all the sizes available in your latest product collection…

…Now unlike other AI products, the difference with Shopify Magic is it’s designed specifically for commerce. And it’s not necessarily just one feature or one product. It’s really embedded across Shopify to make these workflows in our products just easier to use. It makes it easier for merchants to run and scale their businesses. And of course, we think it’s going to unlock a ton of possibilities for not just small merchants, but merchants of all sizes. And we’re going to continue to work on that over time. It’s just going to get better and better.

Shopify’s management is using AI internally so that the company can make better decisions and improve its customer support

We ourselves are using AI inside of Shopify to make better decisions, but also for things like our support team using it so that questions like domain reconfiguration, or a new password, or I don’t know what my password is, should not necessarily require high-touch communication. What that does is it means that our support team are able to have much higher-quality conversations and act as business coaches for the merchants on Shopify.

Shopify’s management believes that Shopify is uniquely positioned to harness the power of AI because commerce and the company represent the intersection of humans and technology, and that is the domain of AI

If you kind of think about commerce and Shopify, we kind of interact at the intersection of humans and technology, and that’s exactly what AI is really, really good at. So we think we’re uniquely positioned to harness the power of AI, and the ultimate result of it will be these capabilities for our merchants to grow their businesses.

Shopify has AI-powered language translations for merchants within its software products

This includes things like launching shipping guidance for merchants, navigating them through streamlined privacy guidance, initiating localization experiments across various marketing channels and bringing localization tools and AI-backed language translations to the Shopify App Store.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees strong AI-related demand for its chips, but it’s not enough to offset cyclicality in its business 

Moving into fourth quarter 2023. While AI-related demand continues to be strong, it is not enough to offset the overall cyclicality of our business. We expect our business in the fourth quarter to be supported by the continued strong ramp of our 3-nanometer technology, partially offset by customers’ continued inventory adjustment.

TSMC’s management is seeing strong customer interest in its N2 technology node because the surge in AI-related demand leads to demand for energy-efficient computing, and TSMC’s technology platform goes beyond geometry shrink (making transistors smaller) to help with power efficiency

The recent surge in AI-related demand supports our already strong conviction that demand for energy-efficient computing will accelerate in an intelligent and connected world. The value of our technology platform is expanding beyond the scope of geometry shrink alone and increasing toward greater power efficiency. In addition, as process technology complexity increases, the lead time and engagement with customers also start much earlier. As a result, we are observing a strong level of customer interest and engagement at N2 that is similar to or higher than N3 at a similar stage, from both HPC and smartphone applications.

TSMC’s management is seeing its customers add AI capabilities into smartphones and PCs and expects more of this phenomenon over time

We do see some activities from customers who add AI capability in end devices such as smartphones and PCs [as a new growth] engine. And we certainly hope that this will add to the growth and help TSMC further strengthen our AI business…

…It is starting right now, and we expect that more and more customers will put AI capability into their end devices, into their products.

TSMC’s management is seeing AI-related demand growing stronger and stronger and TSMC has to grow its manufacturing capacity to support this

The AI demand continues to grow stronger and stronger. So from TSMC’s point of view, we now have a capacity limitation in supporting the demand. We are working hard to increase the capacity to meet their demand, that’s for one thing.

TSMC’s management believes that any kind of AI-related chip will require leading edge chip technology and this is where TSMC excels

Whether customers develop CPUs, GPUs, AI accelerators or ASICs for all types of AI applications, the commonality is that they all require usage of leading-edge technology with stable yield delivery to support larger die sizes and a strong foundry design ecosystem. All of those are TSMC’s strengths. So we are able to address and capture a major portion of the market in terms of the semiconductor component in AI.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasing the company’s investments in its AI models and management wants to use AI for the company’s own benefit as well as that of society and its customers

We are increasing investment in our AI models, providing new features to our products and enhancing our targeting capabilities for both content and advertising. We aspire to position our leading AI capability, not only as a growth multiplier for ourselves, but also as a value provider to our enterprise customers and the society at large.

Tencent’s management recently upgraded the size and capabilities of the company’s foundational model – Tencent Hunyuan – which is now available to customers on a limited basis and deployed in some of Tencent’s cloud services

For cloud, we upgraded the size and capabilities of our proprietary foundation model, Tencent Hunyuan. We are making Hunyuan available on a limited basis to the public and to customers, and deploying it in Tencent Meeting and Tencent Docs…

…We have upgraded our proprietary foundation model, Tencent Hunyuan. We have made the Tencent Hunyuan bot initially available to a smaller but expanding number of users via a mini program. Hunyuan is also now powering meeting summarization in Tencent Meeting and content generation in Tencent Docs. And externally, we’re enabling enterprise customers to utilize our large language model via APIs or Model-as-a-Service solutions in our cloud, in functions such as coding, data analysis and customer service automation.

Tencent’s management believes that Tencent is one of China’s AI leaders with the development of Hunyuan

In terms of Hunyuan and the overall AI strategy, I would say we have been pretty far along in terms of building up Hunyuan, and we feel that we are one of the leaders within China. We are also continuously increasing the size of the model and preparing for the next generation of our Hunyuan model, which is going to be a mixture-of-experts architecture, which we believe will further improve the performance of our Hunyuan model. And by building up Hunyuan, we have actually really built up our capability in general AI across the board. Because Hunyuan, a transformer-based model, involves the handling of a large amount of training data, a large computing cluster and a very dedicated fine-tuning process in terms of improving the AI’s performance.

Tencent’s management is using AI to improve the company’s advertising offerings, in areas such as ad targeting, attribution accuracy, and the generation of advertising visuals – management sees this as evidence that Tencent’s AI investments are already generating tangible results

We have expanded our AI models with more parameters to increase their ad targeting and attribution accuracy, contributing to our ad revenue growth. We’re also starting to provide generative AI tools to advertiser partners, which enables them to dynamically generate ad visuals based on text prompts and to optimize ad sizes for different inventories, which should help advertisers create more appealing advertisements with higher click-through rates, boosting their transactions and our revenue…

…And the general AI capability is actually helping us quite a bit in terms of the targeting technology related to advertising and our content provisioning service. So in short video, by improving our AI capability, we can actually ramp up our video accounts at a faster clip. And in terms of the advertising business, by increasing the targeting capability, we are actually increasing our ad revenue by delivering better results to our customers. So our AI capabilities are generating tangible results at this point in time.

Tencent’s management wants to build an AI-powered consumer-facing smart agent down the road, but they are wary about the costs of inference

And we also feel that further in the future, there could actually be a consumer-facing product that is more like a smart agent for people. Right now, that is further down the road, but it actually carries quite a bit of room for imagination…

…Now in terms of Hunyuan and, in the future, the potential of an AI assistant, I think it’s fair to say it’s still in a very, very early stage of concept design. So definitely not at the stage of product design yet and definitely not at the stage of thinking about monetization yet. But of course, if you look at any of these generative AI technologies at this point in time, inference cost is a real variable cost, which needs to be considered in the entire equation. And that, to some extent, adds to the challenge of the product design, too. So I would say, at this point in time, it’s actually very early stage. There is promise and room for imagination for the future.

Tencent’s management believes that the company has sufficient amount of chips for the company’s AI-related development work for a couple more generations; the US’s recent semiconductor bans will not affect the development of Tencent’s AI models, but it could affect Tencent’s ability to rent out these chips through Tencent Cloud

Now in terms of the chip situation, right now, we actually have one of the largest inventories of AI chips in China among all the players. And one of the key things that we have done was actually we were the first to put in an order for H800, and that allowed us to have a pretty good inventory of H800 chips. So we have enough chips to continue our development of Hunyuan for at least a couple more generations. And the ban does not really affect the development of Hunyuan and our AI capability in the near future. Going forward, we feel that the ban on shipments does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted.

Tencent’s management wants to explore the use of lower-performance chips for AI inference purposes and they are also exploring domestic suppliers of chips

Going forward, we feel that the ban on shipments does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted. And going forward, we will have to figure out ways to make the usage of our AI chips more efficient. We’ll try to see whether we can offload a lot of the inference capability to lower-performance chips so that we can retain the majority of our high-performance AI chips for training purposes. And we will also try to look for domestic sources for these training chips.

Tencent’s management believes that AI can bring significant improvement to a digital ad’s current average click-through rate of 1%

Today, a typical click-through rate might be around 1%. As you deploy large language models, then you can make more use of the thousands of discrete data points that we have potentially for targeting and bring them to bear and turn them into reality. And you can get pretty substantial uplifts in click-through rate and therefore, in revenue, which is what the big U.S. social networks are now starting to see.
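
The revenue mechanics in that quote are easy to make concrete: for a fixed pool of impressions priced per click, revenue moves roughly in proportion to click-through rate. In the minimal sketch below, only the 1% baseline comes from the quote; the impression count, cost-per-click and uplift levels are assumed for illustration.

```python
# Sketch of the CTR-to-revenue arithmetic implied above: with a fixed number
# of impressions priced per click, revenue scales with click-through rate.
# Only the 1% baseline is from the quote; the other numbers are illustrative.
impressions = 1_000_000
price_per_click = 0.50  # assumed cost-per-click, in dollars

for ctr in (0.010, 0.012, 0.015):  # 1% baseline, then 20% and 50% uplifts
    revenue = impressions * ctr * price_per_click
    print(f"CTR {ctr:.1%} -> revenue ${revenue:,.0f}")
```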

Tesla (NASDAQ: TSLA)

Tesla vehicles have now driven over 0.5 billion miles with FSD (Full Self Driving) Beta and the mileage is growing

Regarding Autopilot and AI, our vehicle has now driven over 0.5 billion miles with FSD Beta, full self-driving beta, and that number is growing rapidly.

Tesla’s management sees significant promise with FSD v.12

We’re also seeing significant promise with FSD version 12. This is the end-to-end AI where it’s a photon count in, controls out or really you can think of it as there’s just a large bit stream coming in and a tiny bit stream going out, compressing reality into a very small set of outputs, which is actually kind of how humans work. The vast majority of human data input is optics, from our eyes. And so we are, like the car, photons in, controls out with neural nets, just neural nets, in the middle. It’s really interesting to think about that.

Tesla recently completed building a 10,000 GPU cluster of Nvidia’s H100 chips and has brought the cluster into operation faster than anyone has done (the H100s will help with the development of Tesla’s full self-driving efforts)

We recently completed a 10,000 GPU cluster of H100s. We think we probably brought it into operation faster than anyone’s ever brought that much compute per unit time into production, since training is the fundamental limiting factor on progress with full self-driving and vehicle autonomy.

Tesla’s management believes that AI is a game changer and wants the company to continue to invest in AI 

We will continue to invest significantly in AI development as this is really the massive game changer, and I mean, success in this regard in the long term, I think has the potential to make Tesla the most valuable company in the world by far.

Tesla’s management believes that the company’s AI team is the best in the world

The Tesla AI team is, I think, one of the world’s best, and I think it is actually by far the world’s best when it comes to real-world AI. But I’ll say that again: Tesla has the best real-world AI team on earth, period, and it’s getting better.

Tesla’s management is very excited about the company’s progress with autonomous driving and it is already driving them around with no human intervention

I guess, I am very excited about our progress with autonomy. The end-to-end, nothing-but-net self-driving software is amazing. It drives me around Austin with no interventions. So it’s clearly the right move. So it’s really pretty amazing.

Tesla’s management believes that the company’s work in developing autonomous driving can also be applied to Optimus (the company’s autonomous robots)

And obviously, that same software and approach will enable Optimus to do useful things and enable Optimus to learn how to do things simply by looking. So extremely exciting in the long term.

Tesla’s management believes that Optimus will have a huge positive economic impact on the world and that Tesla is at the forefront of developing autonomous robots; Tesla’s management is aware of the potential dangers to humankind that an autonomous robot such as Optimus can pose, so they are designing the robot carefully

As I’ve mentioned before, given that economic output is the number of people times productivity, if you no longer have a constraint on people, because effectively you’ve got humanoid robots that can do as much as you’d like, your economy is quasi-infinite, or infinite for all intents and purposes. So I don’t think anyone is going to do it better than Tesla, not by a long shot. Boston Dynamics is impressive, but their robot lacks the brain. They’re like the Wizard of Oz or whatever; yes, it lacks the brain. And then you also need to be able to design the humanoid robot in such a way that it can be mass manufactured. And then at some point, the robots will manufacture the robots.

And obviously, we need to make sure that it’s a good place for humans in that future, that we do not create some variant of the Terminator outcome. So we’re going to put a lot of effort into localized control of the humanoid robot. So basically, anyone will be able to shut it off locally, and you can’t change that, even with a software update. It has to be hard-coded.

Tesla’s management believes that Mercedes can easily accept legal liability for any FSD failures because Mercedes’ FSD is very limited whereas Tesla’s FSD has far fewer limitations

[Question] Mercedes is accepting legal liability for when it’s Level 3 autonomous driving system drive pilot is active. Is Tesla planning to accept legal liability for FSD? And if so, when?

[Answer] I mean I think it’s important to remember for everyone that Mercedes’ system is limited to roads in Nevada and some certain cities in California, and doesn’t work in the snow or the fog. It must have a [indiscernible] car in front, and only at 40 miles per hour. Our system is meant to be holistic and drive in any conditions, so we obviously have a much more capable approach. But with those kinds of limitations, it’s really not very useful.

Tesla’s management believes that technological progress building on technological progress is what will eventually lead to full self driving

I would characterize our progress in real-world AI as a series of stacked log curves. I think that’s also true in other parts of AI, like [LLMs] and whatnot, a series of stacked log curves. Each log curve gets higher than the last one. So if we keep stacking them, we keep stacking logs, eventually, we get to FSD.

The Trade Desk (NASDAQ: TTD)

The Trade Desk’s management believes that AI will change the world, but not everyone working on AI is delivering meaningful impact

AI has immense promise. It will change the world again. But not everyone talking about AI is delivering something real or impactful.

The Trade Desk’s management is not focusing the company’s AI-related investments on LLMs (large language models) – instead, they are investing in deep-learning models to improve bidding, pricing, value, and ad relevance for The Trade Desk’s services

Large Language Models (the basis of ChatGPT) aren’t the highest priority places for us to make our investments in AI right now. Deep learning models pointed at bidding, pricing, value, and ad relevance are perfect places for us to concentrate our investments in AI—all four categories have private betas and some of the best engineers in the world pointed at these opportunities.
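
The Trade Desk has not published what those deep-learning models look like internally, but the general shape of “bidding, pricing, value, and ad relevance” models is well understood: estimate the probability that an impression produces the outcome you value, then bid its expected value. Below is a deliberately simplified sketch, with invented weights and feature names standing in for a real learned model.

```python
import math

# Simplified sketch of a learned-relevance-to-bid pipeline: a toy model
# estimates click probability, and the bid is the expected value of the
# impression. Weights, features and names are invented for illustration.
WEIGHTS = {"bias": -3.0, "audience_match": 2.0, "context_match": 1.5}

def p_click(features: dict[str, float]) -> float:
    """Logistic score standing in for a deep model's click-probability output."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def bid_usd(value_per_click: float, features: dict[str, float]) -> float:
    """Expected-value bid: what a click is worth times the chance of getting one."""
    return value_per_click * p_click(features)

# A well-matched impression when a click is worth $2.00 to the advertiser.
print(f"${bid_usd(2.00, {'audience_match': 1.0, 'context_match': 1.0}):.2f}")
```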

The Trade Desk’s management believes that there are many areas where AI can be infused into the digital advertising dataset that the company holds

Second is the innovation coming from AI and the many, many opportunities we have ahead of us to find places to inject AI into what may be the most rich and underappreciated data asset on the Internet, which we have here at The Trade Desk.

The Trade Desk’s management believes that traders in the digital advertising industry will not lose their jobs to AI, but they might lose their jobs to traders who know how to work with AI

Traders know that their jobs are not going to be taken away by AI. But instead, they have to compete with each other. So their job could be taken away by a trader who knows how to use AI really well, so all of them are looking at ways to use the tools that are fueled by AI, where AI is essentially doing one of two things. It’s either doing the math for them, if you will, of course, with very advanced learning models or, in other cases, it’s actually their copilot.

Old Navy achieved a 70% reduction in cost to reach each unique household using The Trade Desk’s AI, Koa

A great example of an advertiser pioneering new approaches to TV advertising with a focus on live sports is Old Navy…  But as Old Navy quickly found out, programmatic guaranteed has limitations. Programmatic guaranteed, or PG, does not allow Old Navy to get the full value of programmatic such as frequency management, audience targeting and the ability to layer on their first-party data. So they took the next step in the form of decision biddable buying within the private marketplace and focused on live sports inventory. CTV live sports advertising was appealing because it offered an opportunity to expose their brand against very high premium content that might be more restrictive and expensive in a traditional linear environment. They were able to use Koa, The Trade Desk’s AI, to optimize pacing and frequency management across the highest-performing inventory. As a result, they saw a 70% reduction in the cost to reach each unique household versus their programmatic guaranteed performance. 
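
The metric behind that 70% figure is cost per unique household reached, and frequency management is what moves it: the same budget stretches across more households once you stop paying to reach the same one repeatedly. The sketch below uses made-up spend and frequency numbers, chosen so the reduction happens to land near 70%; only the metric’s logic is implied by the passage.

```python
# Toy sketch of the cost-per-unique-household arithmetic behind the Old Navy
# example: capping frequency spreads the same budget over more households.
# The spend, impressions and frequencies below are made-up illustrations.
def cost_per_unique_household(spend: float, impressions: int, avg_frequency: float) -> float:
    unique_households = impressions / avg_frequency
    return spend / unique_households

spend, impressions = 100_000.0, 2_000_000
pg = cost_per_unique_household(spend, impressions, avg_frequency=10.0)       # uncapped PG buy
managed = cost_per_unique_household(spend, impressions, avg_frequency=3.0)   # frequency-managed
print(f"PG: ${pg:.2f}/household, managed: ${managed:.2f}/household, "
      f"reduction: {1 - managed / pg:.0%}")
```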

Wix (NASDAQ: WIX)

Users of Wix’s Wix Studio product are enjoying its AI features

Users particularly [indiscernible] Studio’s responsive AI technology, which simplifies high-touch and time-sensitive tasks such as ensuring consistent design across web pages on different screen sizes. They are also enjoying the AI code assistant inside the new Wix IDE [integrated development environment], which allows them to write clean code and detect errors easily.

Wix recently released new AI products: (1) an SEO tool powered by AI called AI Meta Tags Creator, and (2) AI Chat Experience for Business, which allows new users to chat with an AI who will walk them through the Wix onboarding process; AI Chat Experience for Business is in its early days, but it has already driven a positive impact on Wix’s conversion and revenue

Earlier this week, we released our latest AI products. The first was AI Meta Tags Creator, a groundbreaking SEO tool powered by AI and our first AI-powered feature within our collection of SEO tools. Both self creators looking to generate SEO-friendly tags for each of their pages and professionals looking to enhance their efficiency and make real-time adjustments will benefit from this product. The second was our Conversational AI Chat Experience for Business. This feature, which is now live, paves the way to accelerate onboarding using AI in order to get businesses online more quickly and efficiently. These new tools continue to demonstrate our leadership in utilizing AI to help users of all types to succeed online… 

…Avishai spoke about the AI chat experience for business and its early weeks — and in its early weeks, we have already seen its positive impact on conversion and revenue.

Wix’s management expects Wix’s AI products to drive higher conversion, monetisation, and retention in the company’s Self Creators business

Compounding Partners growth is complemented by re-accelerating growth in our stable and profitable Self Creators business, which we saw once again this quarter. We expect our market-leading product innovation as well as our powerful AI products and technology to drive higher conversion, monetization and retention as we maintain our leadership position in the website building space.

Wix’s management believes that Wix’s AI products are helping to improve conversion because the new AI tools help to generate content for users, which reduces the inertia to create a website

I believe your second question was in regards to what kind of effect we are seeing from different AI products that we are launching, and mostly in regards to improvement in conversion. And we do actually see an improvement in conversion, which is probably the most important KPI by which we measure our success in deploying new products. The reason for that is that with AI, we are able to ask the user better questions and to understand in a smarter way what it is that the user is trying to achieve. From that, we are able to generate a better starting point for their business on top of Wix. And that is not just the skeleton, we are also able to fill in a lot of information, a lot of the content that the user would normally have to fill in manually. The result is that the amount of effort and knowledge that you need to create a website for your business on Wix is dramatically reduced. And from that, we are able to see very good results in terms of improvement of conversion.

The use of AI tools internally has helped to improve Wix’s margins

So we saw this year a tremendous improvement in margins — in gross margin. And it came mostly from 2 places. The first one is a lot of improvements and savings that we have with our infrastructure, most of you know the hosting activity. So we had a lot of savings over there, but also from our core organization, for example, benefiting from all kinds of AI tools that enable us to be more efficient.

Wix’s management believes that the company’s AI features help users with website-creation when it would normally take specialists to do so

And then because of the power of the AI tools, you can create very strong, very professional websites because the AI will continue and finish for you the things that would normally require specialists in different variations of web design.

Zoom Video Communications (NASDAQ: ZM)

Zoom AI Companion, which helps create call summaries, is included in Zoom’s paid plans at no additional cost to customers, and more than 220,000 accounts have enabled it, with 2.8 million meeting summaries created to date

We also showcased newly-released innovations like Zoom AI Companion, as well as Zoom AI Expert Assist and Quality Management for the Contact Center. Zoom AI Companion is especially noteworthy for being included at no additional cost to our paid plans, and has fared tremendously well with over 220,000 accounts enabling it and 2.8 million meeting summaries created as of today.

Zoom’s management believes that Zoom AI Companion’s meeting-summary feature is really accurate and really fast; management attributes the good performance to the company’s use of multiple AI models within Zoom AI Companion

I think we are very, very proud of our team’s progress since we launched the Zoom AI Companion, as I mentioned earlier, right, a lot of accounts enabled that. Remember, this is no additional cost to [ outpay ] the customer. A lot of features. One feature of that is like take a meeting summary, for example. Amazingly, it’s very accurate and it really saves the meeting host a lot of time. And also, our federated AI approach really contributed to that success because we do not count on a single AI model, and in terms of latency, accuracy, and also the response, the speed and so on and so forth, I think, it really helped our AI Companion.

Free users of Zoom are unable to access Zoom AI Companion

For sure, for free users, they do not — they cannot enjoy this AI Companion, for sure, it’s a [ data health ] for those who free to approve for online upgrade. So anyway, so we keep innovating on AI Companion. We have high confidence. That’s a true differentiation compared to any other AI features, functionalities offered by some of our competitors.

Zoom’s management thinks that Zoom’s AI features for customers will be a key differentiator and a retention tool

But I think what Eric was just mentioning about AI is probably really going to be a key differentiator and a retention — retention tool in the future, because as a reminder, all of the AI Companion features come included for our free — sorry, for our paid users. So we’re seeing it not only help with conversion, but we really believe that for the long term, it will help with retention as well.

Zoom’s management believes that Zoom’s AI features will help to reaccelerate Zoom’s net dollar expansion rate for enterprise customers

[Question] You’re showing stabilization here on some of the major metrics, the Enterprise expansion metric took a step down to 105%. And so just wondering what it takes for that metric to similarly show stabilization as given like in Q1 renewal cohort and kind of walking through that. Anything on the product side for us to consider or just any other commentary there is helpful.

[Answer] Well, as a reminder, it’s a trailing 12-month metric. So as we’ve worsely seen our growth rates come down this year that’s following behind it. But absolutely, we believe that AI Companion in general as well as the success that we are seeing in Zoom Phone, in Zoom Contact Center, Zoom Virtual Agent, all of those will be key contributors to seeing that metric start to reaccelerate again as we see our growth rate starting to reaccelerate as well.

Zoom’s management thinks that Zoom’s gross margin could decline – but only slightly – due to the AI features in Zoom’s products being given away for free at the moment

[Question] As I look at gross margins, how sustainable is it keeping at these levels? I know AI Companion is being given away as part of the package, I guess, to paid users. But if you think about the cost to run these models, the margin profile of Contact Center and Phone. How durable is it to kind of sustain these levels?

[Answer] But we do expect there’s going to be some impact on gross margins. I mean we — I don’t think it’s going to be significant because the team will continue to operate in the very efficient manner that they do and run our co-los [co-locateds] that way, but we do expect there’s going to be some impact to our gross margin as we move forward.

Zoom’s management wants to leverage AI Companion across the entire Zoom platform

So again, it’s a lot of other features as well. And like for me, I also use our — the client, [indiscernible] client, connect and other services, right? You can have it compose e-mail as well, right? It’s a lot of features, right? And down the road, the Whiteboard with AI Companion as well. Almost every service, the entire platform, we’re going to leverage the AI Companion. So, a lot of features in the AI Companion.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

Jensen Huang’s Wisdom

Nvidia’s co-founder and CEO was interviewed recently and there was plenty to learn from his sharing.

I listen to, or read the transcripts of, podcasts regularly. One of my favourite podcast episodes this year was Jensen Huang’s appearance earlier this month in an episode of the Acquired FM podcast hosted by Ben Gilbert and David Rosenthal. Huang is the co-founder and CEO of Nvidia, a chip designer with US$32.7 billion in trailing revenue that’s at the epicenter of the AI revolution today. During his 1.5-hour interview with Gilbert and Rosenthal, Huang shared many pieces of wisdom – the passages below in italics are my favourites.

On how he sped up Nvidia’s chip development process by simulating the future

Jensen: We also made the decision to use this technology called emulation. There was a company called ICOS. On the day that I called them, they were just shutting the company down because they had no customers. I said, hey, look. I’ll buy what you have in inventory. No promises are necessary.

The reason why we needed that emulator is because if you figure out how much money that we have, if we taped out a chip and we got it back from the fab and we started working on our software, by the time that we found all the bugs because we did the software, then we taped out the chip again. We would’ve been out of business already.

David: And your competitors would’ve caught up.

Jensen: Well, not to mention we would’ve been out of business.

David: Who cares?

Jensen: Exactly. If you’re going to be out of business anyway, that plan obviously wasn’t the plan. The plan that companies normally go through—build a chip, write the software, fix the bugs, tape out a new chip, so on and so forth—that method wasn’t going to work. The question is, if we only had six months and you get to tape out just one time, then obviously you’re going to tape out a perfect chip.

I remember having a conversation with our leaders and they said, but Jensen, how do you know it’s going to be perfect? I said, I know it’s going to be perfect, because if it’s not, we’ll be out of business. So let’s make it perfect. We get one shot.

We essentially virtually prototyped the chip by buying this emulator. Dwight and the software team wrote our software, the entire stack, ran it on this emulator, and just sat in the lab waiting for Windows to paint.

David: It was like 60 seconds for a frame or something like that.

Jensen: Oh, easily. I actually think that it was an hour per frame, something like that. We would just sit there and watch it paint. On the day that we decided to tape out, I assumed that the chip was perfect. Everything that we could have tested, we tested in advance, and told everybody this is it. We’re going to tape out the chip. It’s going to be perfect.

Well, if you’re going to tape out a chip and you know it’s perfect, then what else would you do? That’s actually a good question. If you knew that you hit enter, you tape out a chip, and you knew it was going to be perfect, then what else would you do? Well, the answer, obviously, go to production.

Ben: And marketing blitz. And developer relations.

Jensen: Kick everything off because you got a perfect chip. We got in our head that we have a perfect chip.

David: How much of this was you and how much of this was your co-founders, the rest of the company, the board? Was everybody telling you you were crazy?

Jensen: No. Everybody was clear we had no shot. Not doing it would be crazy.

David: Otherwise, you might as well go home.

Jensen: Yeah, you’re going to be out of business anyway, so anything aside from that is crazy. It seemed like a fairly logical thing. Quite frankly, right now as I’m describing it, you’re probably thinking yeah, it’s pretty sensible.

David: Well, it worked.

Jensen: Yeah, so we taped that out and went directly to production.

Ben: So is the lesson for founders out there when you have conviction on something like the RIVA 128 or CUDA, go bet the company on it. This keeps working for you. It seems like your lesson learned from this is yes, keep pushing all the chips in because so far it’s worked every time. How do you think about that?

Jensen: No, no. When you push your chips in I know it’s going to work. Notice we assumed that we taped out a perfect chip. The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out. We developed the entire software stack. We ran QA on all the drivers and all the software. We ran all the games we had. We ran every VGA application we had.

When you push your chips in, what you’re really doing is, when you bet the farm you’re saying, I’m going to take everything in the future, all the risky things, and I pull in in advance. That is probably the lesson. To this day, everything that we can prefetch, everything in the future that we can simulate today, we prefetch it.

On Nvidia’s corporate culture and architecture and why it works

Ben: We have some questions we want to ask you. Some are cultural about Nvidia, but others are generalizable to company-building broadly. The first one that we wanted to ask is that we’ve heard that you have 40+ direct reports, and that this org chart works a lot differently than a traditional company org chart.

Do you think there’s something special about Nvidia that makes you able to have so many direct reports, not worry about coddling or focusing on career growth of your executives, and you’re like, no, you’re just here to do your fricking best work and the most important thing in the world. Now go. (a) Is that correct? and (b) is there something special about Nvidia that enables that?

Jensen: I don’t think it’s something special in Nvidia. I think that we had the courage to build a system like this. Nvidia’s not built like a military. It’s not built like the armed forces, where you have generals and colonels. We’re not set up like that. We’re not set up in a command and control and information distribution system from the top down.

We’re really built much more like a computing stack. The lowest layer is our architecture, then there’s our chip, then there’s our software, and on top of it there are all these different modules. Each one of these layers of modules are people.

The architecture of the company (to me) is a computer with a computing stack, with people managing different parts of the system. Who reports to whom and your title are not related to where you are in the stack. It just happens to be whoever is the best at running that module, on that function, on that layer, who is in charge. That person is the pilot in command. That’s one characteristic.

David: Have you always thought about the company this way, even from the earliest days?

Jensen: Yeah, pretty much. The reason for that is because your organization should be the architecture of the machinery of building the product. That’s what a company is. And yet, everybody’s company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I’m saying?

How you make fried chicken versus how you flip burgers versus how you make Chinese fried rice is different. Why would the machinery, why would the process be exactly the same?

It’s not sensible to me that if you look at the org charts of most companies, it all looks like this. Then you have one group that’s for a business, and you have another for another business, you have another for another business, and they’re all supposedly autonomous.

None of that stuff makes any sense to me. It just depends on what it is that we’re trying to build and what architecture of the company best suits us to go build it. That’s number one.

In terms of information systems and how you enable collaboration, we’re wired up like a neural network. The way that we say this is that there’s a phrase in the company called ‘mission is the boss.’ We figure out what the mission is, and we go wire up the best skills, the best teams, and the best resources to achieve that mission. It cuts across the entire organization in a way that doesn’t make any sense, but it looks a little bit like a neural network.

David: And when you say mission, do you mean Nvidia’s mission is…

Jensen: Build Hopper.

David: Okay, so it’s not like further accelerated computing? It’s like we’re shipping DGX Cloud.

Jensen: No. Build Hopper, or somebody else’s mission is to build a system for Hopper. Somebody has built CUDA for Hopper. Somebody’s job is to build cuDNN for CUDA for Hopper. Somebody’s job is the mission. Your mission is to do something.

Ben: What are the trade-offs associated with that versus the traditional structure?

Jensen: The downside is the pressure on the leaders is fairly high. The reason for that is because in a command and control system, the person who you report to has more power than you. The reason why they have more power than you is because they’re closer to the source of information than you are.

In our company, the information is disseminated fairly quickly to a lot of different people. It’s usually at a team level. For example, just now I was in our robotics meeting. We’re talking about certain things and we’re making some decisions.

There are new college grads in the room. There are three vice-presidents in the room, there are two e-staff in the room. At the moment that we decided together, we reasoned through some stuff, we made a decision, everybody heard it exactly the same time. Nobody has more power than anybody else. Does that make sense? The new college grad learned at exactly the same time as the e-staff.

The executive staff, the leaders that work for me, and myself, you earned the right to have your job based on your ability to reason through problems and help other people succeed. It’s not because you have some privileged information that I knew the answer was 3.7, and only I knew. Everybody knew.

On the right way to learn from business books

Jensen: In the last 30 years I’ve read my fair share of business books. As in everything you read, you’re supposed to first of all enjoy it, be inspired by it, but not to adopt it. That’s not the whole point of these books. The whole point of these books is to share their experiences.

You’re supposed to ask, what does it mean to me in my world, and what does it mean to me in the context of what I’m going through? What does this mean to me and the environment that I’m in? What does this mean to me in what I’m trying to achieve? What does this mean to Nvidia and the age of our company and the capability of our company?

You’re supposed to ask yourself, what does it mean to you? From that point, being informed by all these different things that we’re learning, we’re supposed to come up with our own strategies.

What I just described is how I go about everything. You’re supposed to be inspired and learn from everybody else. The education’s free. When somebody talks about a new product, you’re supposed to go listen to it. You’re not supposed to ignore it. You’re supposed to go learn from it.

It could be a competitor, it could be an adjacent industry, it could be nothing to do with us. The more we learn from what’s happening out in the world, the better. But then, you’re supposed to come back and ask yourself, what does this mean to us?

David: You don’t just want to imitate them.

Jensen: That’s right.

On the job of the CEO in a company

Jensen: That’s right. You want to pave the way to future opportunities. You can’t wait until the opportunity is sitting in front of you for you to reach out for it, so you have to anticipate.

Our job as CEO is to look around corners and to anticipate where will opportunities be someday. Even if I’m not exactly sure what and when, how do I position the company to be near it, to be just standing under the tree, and we can do a diving catch when the apple falls. You guys know what I’m saying? But you’ve got to be close enough to do the diving catch.

On seeing the future of computing and AI before others did

Ben: Speaking of the speed of light—David’s begging me to go here—you totally saw that InfiniBand would be way more useful way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company. Why did you see that when no one else saw that?

Jensen: There were several reasons for that. First, if you want to be a data center company, building the processing chip isn’t the way to do it. A data center is distinguished from a desktop computer or a cell phone not by the processor in it.

A desktop computer in a data center uses the same CPUs, uses the same GPUs, apparently. Very close. It’s not the processing chip that describes it, but it’s the networking of it, it’s the infrastructure of it. It’s how the computing is distributed, how security is provided, how networking is done, and so on and so forth. Those characteristics are associated with Mellanox, not Nvidia.

The day that I concluded that really Nvidia wants to build computers of the future, and computers of the future are going to be data centers, embodied in data centers, then if we want to be a data center–oriented company, then we really need to get into networking. That was one.

The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one training job is orchestrated across millions of processors.

It’s the inverse of hyperscale, almost. The way that you design a hyperscale computer with off-the-shelf commodity ethernet, which is just fine for Hadoop, it’s just fine for search queries, it’s just fine for all of those things—

Ben: But not when you’re sharding a model across.

Jensen: Not when you’re sharding a model across, right. That observation says that the type of networking you want to do is not exactly ethernet. The way that we do networking for supercomputing is really quite ideal.

The combination of those two ideas convinced me that Mellanox is absolutely the right company, because they’re the world’s leading high-performance networking company. We worked with them in so many different areas in high performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3200 people there now, and it was one of the best strategic decisions I’ve ever made.

David: When we were researching, particularly part three of our Nvidia series, we talked to a lot of people. Many people told us the Mellanox acquisition is one of, if not the best of all time by any technology company.

Jensen: I think so, too. It’s so disconnected from the work that we normally do, it was surprising to everybody.

Ben: But framed this way, you were standing near where the action was, so you could figure out as soon as that apple becomes available to purchase, like, oh, LLMs are about to blow up, I’m going to need that. Everyone’s going to need that. I think I know that before anyone else does.

Jensen: You want to position yourself near opportunities. You don’t have to be that perfect. You want to position yourself near the tree. Even if you don’t catch the apple before it hits the ground, so long as you’re the first one to pick it up. You want to position yourself close to the opportunities.

That’s kind of a lot of my work, is positioning the company near opportunities, and the company having the skills to monetize each one of the steps along the way so that we can be sustainable.

On why zero-billion dollar markets are better than $10 billion markets

David: I’ve heard you or others in Nvidia (I think) used the phrase zero billion dollar—

Jensen: That’s exactly right. It’s our way of saying there’s no market yet, but we believe there will be one. Usually when you’re positioned there, everybody’s trying to figure out why you are here. When we first got into automotive, it was because we believed that, in the future, the car is going to be largely software. If it’s going to be largely software, a really incredible computer is necessary.

When we positioned ourselves there, I still remember one of the CTOs told me, you know what? Cars cannot tolerate the blue screen of death. I said, I don’t think anybody can tolerate that, but that doesn’t change the fact that someday every car will be a software-defined car. I think 15 years later we’re largely right.

Oftentimes there’s non-consumption, and we like to navigate our company there. By doing that, by the time that the market emerges, it’s very likely there aren’t that many competitors shaped that way.

We were early in PC gaming, and today Nvidia’s very large in PC gaming. We reimagined what a design workstation would be like. Today, just about every workstation on the planet uses Nvidia’s technology. We reimagined how supercomputing ought to be done and who should benefit from supercomputing, that we would democratize it. And look today, Nvidia’s accelerated computing is quite large.

We reimagined how software would be done, and today it’s called machine learning, and how computing would be done, which we call AI. We reimagined these things and tried to do that about a decade in advance. We spent about a decade in zero billion dollar markets, and today I spend a lot of time on Omniverse. Omniverse is a classic example of a zero billion dollar business.

Ben: There are like 40 customers now? Something like that?

David: Amazon, BMW.

Jensen: Yeah, I know. It’s cool.

On protecting a company’s moat (or competitive advantage)

Jensen: Oftentimes, if you created the market, you ended up having what people describe as moats, because if you build your product right and it’s enabled an entire ecosystem around you to help serve that end market, you’ve essentially created a platform.

Sometimes it’s a product-based platform. Sometimes it’s a service-based platform. Sometimes it’s a technology-based platform. But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers and customers who are built around you. That network is essentially your moat.

I don’t love thinking about it in the context of a moat. The reason for that is because you’re now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. That network is about enabling other people to enjoy the success of the final market. That you’re not the only company that enjoys it, but you’re enjoying it with a whole bunch of other people.

On the importance of luck in a company’s success

David: Is it fair to say, though, maybe on the luck side of the equation, thinking back to 1997, that that was the moment where consumers tipped to really, really valuing 3D graphical performance in games?

Jensen: Oh yeah. For example, luck. Let’s talk about luck. If Carmack had decided to use acceleration, because remember, Doom was completely software-rendered.

The Nvidia philosophy was that although general-purpose computing is a fabulous thing and it’s going to enable software and IT and everything, we felt that there were applications that wouldn’t be possible or it would be costly if it wasn’t accelerated. It should be accelerated. 3D graphics was one of them, but it wasn’t the only one. It just happens to be the first one and a really great one.

I still remember the first times we met John. He was quite emphatic about using CPUs and his software render was really good. Quite frankly, if you look at Doom, the performance of Doom was really hard to achieve even with accelerators at the time. If you didn’t have to do bilinear filtering, it did a pretty good job.

David: The problem with Doom, though, was you needed Carmack to program it.

Jensen: Exactly. It was a genius piece of code, but nonetheless, software renders did a really good job. If he hadn’t decided to go to OpenGL and accelerate for Quake, frankly what would be the killer app that put us here? Carmack and Sweeney, both between Unreal and Quake, created the first two killer applications for consumer 3D, so I owe them a great deal.

On the importance of having an ecosystem of 3rd-party developers surrounding your company

David: I want to come back real quick to you told these stories and you’re like, well, I don’t know what founders can take from that. I actually do think if you look at all the big tech companies today, perhaps with the exception of Google, they did all start—and understanding this now about you—by addressing developers, planning to build a platform, and tools for developers.

All of them—Apple, not Amazon. […] That’s how AWS started. I think that actually is a lesson to your point of, that won’t guarantee success by any means, but that’ll get you hanging around a tree if the apple falls.

Jensen: As many good ideas as we have, you don’t have all the world’s good ideas, and the benefit of having developers is you get to see a lot of good ideas.

On keeping AI safe, and how AI can change the world for the better

Ben: I want to think about the future a little bit. I’m sure you spend a lot of time on this being on the cutting edge of AI.

We’re moving into an era where the productivity that software can accomplish when a person is using software can massively amplify the impact and the value that they’re creating, which has to be amazing for humanity in the long run. In the short term, it’s going to be inevitably bumpy as we figure out what that means.

What do you think some of the solutions are as AI gets more and more powerful and better at accelerating productivity for all the displaced jobs that are going to come from it?

Jensen: First of all, we have to keep AI safe. There are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there’s a whole field of AI safety. We’ve dedicated ourselves to functional and active safety, and all kinds of different areas of safety. When to apply human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where increasingly the human doesn’t have to be in the loop, but the human is largely in the loop?

In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention.

You’ve seen some of the work that we’ve done; instead of scraping the Internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying artificial intelligence, generative AI.

In the area of large language models and the future of increasingly greater agency AI, clearly the answer is, for as long as it’s sensible—and I think it’s going to be sensible for a long time—human in the loop. The ability for an AI to self-learn, improve, and change out in the wild in a digital form should be avoided. We should collect data. We should carry the data. We should train the model. We should test the model, validate the model before we release it in the wild again. So human is in the loop.

There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane, two-pilot system, then air traffic control, redundancy and diversity, and all of the basic philosophies of designing safe systems apply as well in self-driving cars, and so on and so forth. I think there are a lot of models of creating safe AI, and I think we need to apply them.

With respect to automation, my feeling is that—and we’ll see—it is more likely that AI is going to create more jobs in the near term. The question is what’s the definition of near term? And the reason for that is the first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people because they want to expand into more areas.

So the question is, if you think about a company and say, okay, if we improve the productivity, then we need fewer people. Well, that’s because the company has no more ideas. But that’s not true for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.

So long as we believe that there are more areas to expand into, that there are more ideas in drugs and drug discovery, more ideas in transportation, more ideas in retail, more ideas in entertainment, more ideas in technology, so long as we believe that there are more ideas, the prosperity of the industry, which comes from improved productivity, results in hiring more people, and more ideas.

Now you go back in history. We can fairly say that today’s industry is larger than the world’s industry a thousand years ago. The reason for that is because obviously, humans have a lot of ideas. I think that there are plenty of ideas yet for prosperity and plenty of ideas that can be begat from productivity improvements, but my sense is that it’s likely to generate jobs.

Now obviously, net generation of jobs doesn’t guarantee that any one human doesn’t get fired. That’s obviously true. It’s more likely that someone will lose a job to someone else, some other human that uses an AI. Not likely to an AI, but to some other human that uses an AI.

I think the first thing that everybody should do is learn how to use AI, so that they can augment their own productivity. Every company should augment their own productivity to be more productive, so that they can have more prosperity, hire more people.

I think jobs will change. My guess is that we’ll actually have higher employment, we’ll create more jobs. I think industries will be more productive. Many of the industries that are currently suffering from a lack of labor and workforce are likely to use AI to get themselves back on their feet and get back to growth and prosperity. I see it a little bit differently, but I do think that jobs will be affected, and I’d encourage everybody just to learn AI.

David: This is appropriate. There’s a version of something we talked about a lot on Acquired, we call it the Moritz corollary to Moore’s law, after Mike Moritz from Sequoia.

Jensen: Sequoia was the first investor in our company.

David: Of course, yeah. The great story behind it is that when Mike was taking over for Don Valentine with Doug, he was sitting and looking at Sequoia’s returns. He was looking at fund three or four, I think it was four maybe that had Cisco in it. He was like, how are we ever going to top that? Don’s going to have us beat. We’re never going to beat that.

He thought about it and he realized that, well, as compute gets cheaper, it can access more areas of the economy and get adopted more widely, and then the markets that we can address should get bigger. Your argument is basically AI will do the same thing. The cycle will continue.

Jensen: Exactly. I just gave you exactly the same example that in fact, productivity doesn’t result in us doing less. Productivity usually results in us doing more. Everything we do will be easier, but we’ll end up doing more. Because we have infinite ambition. The world has infinite ambition. If a company is more profitable, they tend to hire more people to do more.

On the importance of prioritising your daily activities

David: What is something that you believe today that 40-year-old Jensen would’ve pushed back on and said, no, I disagree.

Jensen: There’s plenty of time. If you prioritize yourself properly and you make sure that you don’t let Outlook be the controller of your time, there’s plenty of time.

David: Plenty of time in the day? Plenty of time to achieve this thing?

Jensen: To do anything. Just don’t do everything. Prioritize your life. Make sacrifices. Don’t let Outlook control what you do every day.

Notice I was late to our meeting, and the reason for that, by the time I looked up, oh my gosh. Ben and David are waiting.

David: We have time.

Jensen: Exactly.

David: Didn’t stop this from being your day job.

Jensen: No, but you have to prioritize your time really carefully, and don’t let Outlook determine that.

On what is the really important thing in a business plan: The problem you want to solve

Jensen: I didn’t know how to write a business plan.

Ben: Which it turns out is not actually important.

Jensen: No. It turns out that making a financial forecast that nobody knows is going to be right or wrong turns out not to be that important. But as for the important things that a business plan probably could have teased out, I think that the art of writing a business plan ought to be much, much shorter.

It forces you to condense what is the true problem you’re trying to solve? What is the unmet need that you believe will emerge? And what is it that you’re going to do that is sufficiently hard, that when everybody else finds out is a good idea, they’re not going to swarm it and make you obsolete? It has to be sufficiently hard to do.

There are a whole bunch of other skills that are involved in just product positioning, pricing, go to market and all that stuff. But those are skills, and you can learn those things easily. The stuff that is really, really hard is the essence of what I described.

I did that okay, but I had no idea how to write the business plan. I was fortunate that Wilf Corrigan was so pleased with me in the work that I did when I was at LSI Logic, he called up Don Valentine and told Don, invest in this kid. He’s going to come your way. I was set up for success from that moment and got us off the ground.

On entrepreneurs’ superpower

David: Well, and that being our final question for you. It’s 2023, 30 years anniversary of the founding of Nvidia. If you were magically 30 years old again today in 2023, and you were going to Denny’s with your two best friends who are the two smartest people you know, and you’re talking about starting a company, what are you talking about starting?

Jensen: I wouldn’t do it. I know. The reason for that is really quite simple. Ignoring the company that we would start, first of all, I’m not exactly sure. The reason why I wouldn’t do it, and it goes back to why it’s so hard, is building a company and building Nvidia turned out to have been a million times harder than I expected it to be, any of us expected it to be.

At that time, if we realized the pain and suffering, just how vulnerable you’re going to feel, and the challenges that you’re going to endure, the embarrassment and the shame, and the list of all the things that go wrong, I don’t think anybody would start a company. Nobody in their right mind would do it.

I think that that’s the superpower of an entrepreneur. They don’t know how hard it is, and they only ask themselves how hard can it be? To this day, I trick my brain into thinking, how hard can it be? Because you have to.

On the importance of self-belief

David: I know how meaningful that is in any company, but for you, I feel like the Nvidia journey is particularly amplified on these dimensions. You went through two, if not three, 80%-plus drawdowns in the public markets, and to have investors who’ve stuck with you from day one through that, must be just so much support.

Jensen: It is incredible. You hate that any of that stuff happened. Most of it is out of your control, but 80% fall, it’s an extraordinary thing no matter how you look at it.

I forget exactly, but we traded down to about $2–$3 billion in market value for a while because of the decision we made in going into CUDA and all that work. Your belief system has to be really, really strong. You have to really, really believe it and really, really want it.

Otherwise, it’s just too much to endure because everybody’s questioning you. Employees aren’t questioning you, but employees have questions. People outside are questioning you, and it’s a little embarrassing.

It’s like when your stock price gets hit, it’s embarrassing no matter how you think about it. It’s hard to explain. There are no good answers to any of that stuff. The CEOs are humans and companies are built of humans. These challenges are hard to endure.

On how technology transforms and grows economic opportunities

Jensen: This is the extraordinary thing about technology right now. Technology is a tool and it’s only so large. What’s unique about our current circumstance today is that we’re in the manufacturing of intelligence. We’re in the manufacturing of work world. That’s AI. The world of tasks doing work—productive, generative AI work, generative intelligent work—that market size is enormous. It’s measured in trillions.

One way to think about that is if you built a chip for a car, how many cars are there and how many chips would they consume? That’s one way to think about that. However, if you build a system that, whenever needed, assisted in the driving of the car, what’s the value of an autonomous chauffeur every now and then?

Obviously, the problem becomes much larger, the opportunity becomes larger. What would it be like if we were to magically conjure up a chauffeur for everybody who has a car, and how big is that market? Obviously, that’s a much, much larger market.

What the technology industry has discovered, what Nvidia has discovered, and what some others have discovered, is that by separating yourself from being a chip company and building on top of the chip, so that you’re now an AI company, the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future, because what you produce is something very different. That’s the way to think about how large your opportunity can be, how large you can be. It has everything to do with the size of the opportunity.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

When DCFF Models Fail

Investors may fall into the trap of valuing a company based on cash flow to the firm. But cash flow to the firm is different from cash flow to shareholders.

Investing is based on the premise that an asset’s value is the cash flow that it generates over its lifetime, discounted to the present. This applies to all asset classes.

For real estate, the cash flow generated is rent. For bonds, it’s the coupon. For companies, it is profits.

In the case of stocks, investors may use cash flow to the firm to value a company. Let’s call this the DCFF (discounted cash flow to the firm) model. But valuing a stock based on cash flow to the firm may not always be accurate for shareholders.

This is because free cash flow generated by the firm does not equate to cash returned to the shareholder.

Take, for instance, two identical companies. Both generate $1 per share in free cash flow for 10 years. Company A hoards all the cash for 10 years before finally returning it to shareholders. Company B, however, returns the $1 in free cash flow generated to shareholders at the end of each year.

Investors who use a DCFF model will value both companies equally. But the actual cash returned to shareholders is different for the two companies. Company B should be more valuable to shareholders as they are receiving cash on a more timely basis. 
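To make the timing difference concrete, here is a minimal sketch in Python. The 10% discount rate is an assumption for illustration, as is the (important) simplification that Company A’s hoarded cash earns nothing while it sits on the balance sheet:

```python
# Minimal sketch of the timing argument above. Assumptions are mine:
# a 10% discount rate, and Company A's hoarded cash earning nothing
# while it sits on the balance sheet.

r = 0.10
years = range(1, 11)

# DCFF view: both firms generate the same $1 of free cash flow per share
# each year, so a DCFF model assigns them the same value.
dcff_value = sum(1 / (1 + r) ** t for t in years)

# DDM view: discount the cash actually paid to shareholders.
value_b = sum(1 / (1 + r) ** t for t in years)  # $1 dividend every year
value_a = 10 / (1 + r) ** 10                    # one $10 dividend at year 10

print(f"DCFF value (both firms): ${dcff_value:.2f}")  # ~$6.14
print(f"DDM value, Company B:    ${value_b:.2f}")     # ~$6.14
print(f"DDM value, Company A:    ${value_a:.2f}")     # ~$3.86
```

Under these assumptions, a DCFF model prices both companies identically at about $6.14 per share, while the cash that Company A’s shareholders actually receive is worth only about $3.86 today.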

To avoid falling for this “valuation trap”, we should use a dividend discount model instead of a DCFF model.

Companies trading below net cash

The timing of cash returned to shareholders matters a lot to the value of a stock.

This is also why we occasionally see companies trading below the net cash on their balance sheets.

If you use a DCFF model, cash on the balance sheet is not discounted. As such, a company that will generate positive cash flows over its lifetime should technically never be valued below its net cash if you are relying on a DCFF model.

However, this again assumes that shareholders will be paid out immediately from the balance sheet. The reality is often very different. Companies may withhold payment to shareholders, leaving shareholders waiting for years to receive the cash.
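A small, hypothetical illustration of why that waiting matters, with a 10% required rate of return assumed: $10 of net cash per share that only reaches shareholders in year 10 is worth far less than $10 today.

```python
# Hypothetical numbers, not from any real company: $10 of net cash per
# share paid out to shareholders only in year 10, discounted at an
# assumed 10% required rate of return, with the cash sitting idle.

r = 0.10
net_cash_per_share = 10.0
payout_year = 10

present_value = net_cash_per_share / (1 + r) ** payout_year
print(f"PV of the $10 net cash: ${present_value:.2f}")  # ~$3.86, well below $10
```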

Double counting

Using the DCFF model may also result in double counting.

For instance, a company may generate $1 per share in free cash flow but use that cash to acquire another company for growth. For valuation purposes, that $1 has been invested, so it should not be included when valuing the asset.

Including this free cash flow generated in a DCFF model results in double counting the cash.
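Here is a hypothetical sketch of the double count. The numbers are mine: the firm generates $1 per share of free cash flow in year 1, spends all of it acquiring a business that pays $0.15 per share a year from year 2 to year 10, and everything is discounted at an assumed 10%:

```python
# Hypothetical numbers illustrating the double count described above.

r = 0.10

# Stream of cash flows bought with the $1 spent on the acquisition.
acquired_stream = sum(0.15 / (1 + r) ** t for t in range(2, 11))

# Counting both the $1 of FCF and the stream it bought double counts:
naive_value = 1 / (1 + r) + acquired_stream
# The $1 was reinvested, not paid out, so only the stream belongs here:
correct_value = acquired_stream

print(f"Double-counted value: ${naive_value:.2f}")    # ~$1.69
print(f"Correct value:        ${correct_value:.2f}")  # ~$0.79
```

Only one of the two, the $1 spent or the stream it bought, can belong to shareholders; counting both overstates the value.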

Don’t forget the taxes

Not only is the DCFF model an inaccurate proxy for cash flow to shareholders, but investors also often forget that shareholders may have to pay taxes on dividends earned.

This tax eats into shareholder returns and should be included in all models. For instance, non-residents of America have to pay withholding taxes of up to 30% on all dividends earned from US stocks.

When modelling the value of a company, we should factor these withholding taxes into our valuation model.

This is especially important for long-term investors who want to hold the stock for long periods or even in perpetuity. In this case, returns are based solely on dividends, rather than on selling the stock.
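As a rough sketch of the adjustment, we can reuse the $1-per-year, 10-year dividend stream from earlier with an assumed 10% discount rate. The 30% figure is the maximum US withholding rate; the rate you actually pay depends on tax treaties, so treat this as illustrative only:

```python
# Rough sketch of a withholding-tax adjustment to a dividend discount
# model. Assumptions are mine: $1 of dividends per year for 10 years,
# a 10% discount rate, and the maximum 30% US withholding rate.

r = 0.10
withholding = 0.30
dividends = [1.00] * 10

pre_tax_value = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
after_tax_value = pre_tax_value * (1 - withholding)  # tax scales each payment equally

print(f"Value of pre-tax dividends:   ${pre_tax_value:.2f}")    # ~$6.14
print(f"Value of after-tax dividends: ${after_tax_value:.2f}")  # ~$4.30
```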

The challenges of the DDM

To me, the dividend discount model is the better way to value a stock as a shareholder. However, using the dividend discount model effectively has its own challenges.

For one, dividends are not easy to predict. Many companies in their growth phase are not actively paying a dividend, making it difficult for investors to predict the pattern of future dividend payments.

Our best guess is to look at the revenue growth trajectory and make a reasonable estimate as to when management will decide to start paying a dividend.

In some cases, companies may have a current policy of using all their cash flow to buy back shares. This is another form of growth investment for the firm, as it decreases the number of outstanding shares.

We should also factor these capital allocation policies into our models to make a better guess of how much will be paid out in dividends in the future, which determines the true value of the company today.
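Here is a toy model, built entirely on assumptions of my own, of how buybacks can feed into a dividend discount model: the firm generates $100 of free cash flow a year, spends all of it on buybacks for years 1 to 5 (retiring 5% of the shares each year at an assumed fixed price), then pays everything out as dividends from year 6 to year 10, discounted at 10%:

```python
# Toy model of buybacks feeding a dividend discount model. All numbers
# are assumptions for illustration only.

r = 0.10
fcf = 100.0      # total free cash flow generated each year
shares = 100.0   # shares outstanding at the start

value_per_share = 0.0
for year in range(1, 11):
    if year <= 5:
        shares *= 0.95  # buybacks retire 5% of shares; no dividend yet
    else:
        dividend_per_share = fcf / shares  # fewer shares, bigger dividend each
        value_per_share += dividend_per_share / (1 + r) ** year

print(f"Shares remaining after buybacks: {shares:.1f}")  # ~77.4
print(f"DDM value per share today:       ${value_per_share:.2f}")
```

The buybacks produce no dividends early on, but each remaining share collects a larger dividend later; the model simply asks whether that trade-off is worth it at the chosen discount rate.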


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q3 2023

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the third quarter of 2023.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings release and conference call – for the third quarter of 2023 – happened just last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: consumer spending and the overall economic environment are solid, but there are substantial risks on the horizon.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. Consumer spending is stable, but consumers are now spending their cash buffers down to pre-pandemic levels

Consumer spend growth has now reverted to pre-pandemic trends with nominal spend per customer stable and relatively flat year-on-year. Cash buffers continue to normalize to pre-pandemic levels with lower income groups normalizing faster.

2. Auto loan originations and auto loan growth were strong

And in Auto, originations were $10.2 billion, up 36% year-on-year as we saw competitors pull back and we gained market share…

…In Auto, we’ve also seen pretty robust loan growth recently, both as a function of sort of slightly more competitive pricing on our side as the industry was a little bit slow to raise rates. And so we lost some share previously, and that’s come back now. And generally, the supply chain situation is better, so that’s been supportive. As we look forward there, it should be a little bit more muted.

3. Businesses have a healthy appetite for funding from capital markets…

In terms of the outlook, we’re encouraged by the level of capital markets activity in September, and we have a healthy pipeline going into the fourth quarter.

4. …although loan demand from businesses appears to be relatively muted

And I think generally in Wholesale, the loan growth story is going to be driven just by the economic environment. So depending on what you believe about soft landing, mild recession, or no landing, we have slightly lower or slightly higher loan growth. But in any case, I would expect it to be relatively muted.

5. Loan losses (a.k.a. the net charge-off rate) for credit cards are improving, with the prior expectation for the 2023 Card net charge-off rate at 2.6% compared to the current expectation of 2.5%…

On credit, we now expect the 2023 Card net charge-off rate to be approximately 2.5%, mostly driven by denominator effects due to recent balance growth.
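The arithmetic behind that “denominator effect” is worth a quick sketch with made-up numbers: the net charge-off rate is dollar charge-offs divided by loan balances, so the same dollar losses on a grown loan book produce a lower rate.

```python
# Made-up numbers that mirror the 2.6% -> 2.5% revision: holding dollar
# charge-offs constant while loan balances grow lowers the charge-off rate.

charge_offs = 2.6        # net dollars written off (held constant here)
balances_before = 100.0  # average card loan balances before the recent growth
balances_after = 104.0   # balances after the recent growth

print(f"Rate before balance growth: {charge_offs / balances_before:.2%}")  # 2.60%
print(f"Rate after balance growth:  {charge_offs / balances_after:.2%}")   # 2.50%
```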

6. …and loan growth in credit cards is still robust, although it has tracked down somewhat

So we were seeing very robust loan growth in Card, and that’s coming from both spending growth and the normalization of revolving balances. As we look forward, we’re still optimistic about that, but it will probably be a little bit more muted than it has been during this normalization period.

7. The near-term outlook for the US economy has improved

I think our U.S. economists had their central case outlook to include a very mild recession with, I think, 2 quarters of negative 0.5% of GDP growth in the fourth quarter and first quarter of this year. And that then got revised out early this quarter to now have sort of modest growth, I think around 1% for a few quarters into 2024.

8. There is no weakness from either consumers or businesses in meeting debt obligations

And I think your other question was, where am I seeing softness in credit? And I think the answer to that is actually nowhere, roughly, or certainly nowhere that’s not expected. Meaning we continue to see the normalization story play out in consumer more or less exactly as expected. And then, of course, we are seeing a trickle of charge-offs coming through the office space. You see that in the charge-off number of the Commercial Bank. But the numbers are very small and more or less just the realization of the allowance that we’ve already built there.

9. Demand for housing loans is constrained

And of course, Home Lending remains fairly constrained both by rates and market conditions.

10. Overall economic picture looks solid, but there are reasons for caution – in fact, JPMorgan’s CEO, Jamie Dimon, thinks the world may be in the most dangerous environment seen in decades 

And of course, the overall economic picture, at least currently, looks solid. The sort of immaculate disinflation trade is actually happening. So those are all reasons to be a little bit optimistic in the near term, but it’s tempered with quite a bit of caution…

…However, persistently tight labor markets as well as extremely high government debt levels with the largest peacetime fiscal deficits ever are increasing the risks that inflation remains elevated and that interest rates rise further from here. Additionally, we still do not know the longer-term consequences of quantitative tightening, which reduces liquidity in the system at a time when market-making capabilities are increasingly limited by regulations. Furthermore, the war in Ukraine compounded by last week’s attacks on Israel may have far-reaching impacts on energy and food markets, global trade, and geopolitical relationships. This may be the most dangerous time the world has seen in decades. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Crises and Stocks

Mankind’s desire for progress is ultimately what fuels the global economy and financial markets.

The past few years may seem especially tumultuous because of the crises that have occurred. 

For example, in 2020, there was the COVID pandemic and oil prices turned negative for the first time in recorded history. In 2021, inflation in the USA rose to a level last seen in the early 1980s. In 2022, Russia invaded Ukraine. This year, there were the high-profile collapses of Silicon Valley Bank and First Republic Bank in the USA, and Credit Suisse in Europe; and just a few days ago, Israel was attacked by Hamas and Hezbollah militants.

But without downplaying the human tragedies, it’s worth noting that crises are common. Here’s a (partial!) list of major crises in every year stretching back to 1990 that I’ve borrowed and added to (the additions are in square brackets) from an old Morgan Housel article for The Motley Fool:

[2023 (so far): Collapse of Silicon Valley Bank and First Republic Bank in the USA; firesale of Credit Suisse to UBS; Israel gets attacked by Hamas and Hezbollah militants

2022: Russia invades Ukraine

2021: Inflation in the USA rises to a level not seen since early 1980s

2020: COVID pandemic; oil prices turn negative for first time in history 

2019: Australia bush fires; US president impeachment; first sign of COVID

2018: US-China trade war

2017: Bank of England hikes interest rates for first time in 10 years; UK inflation rises to five-year high

2016: Brexit; Italy banking system crises

2015: Euro currency crashes against the Swiss franc; Greece defaults on loan to European Central Bank

2014: Oil prices collapse

2013: Cyprus bank bailouts; US government shuts down; Thai uprising

2012: Speculation of Greek exit from Eurozone; Hurricane Sandy]

“2011: Japan earthquake, Middle East uprising.

2010: European debt crisis; BP oil spill; flash crash.

2009: Global economy nears collapse.

2008: Oil spikes; Wall Street bailouts; Madoff scandal.

2007: Iraq war surge; beginning of financial crisis.

2006: North Korea tests nuclear weapon; Mumbai train bombings; Israel-Lebanon conflict.

2005: Hurricane Katrina; London terrorist attacks.

2004: Tsunami hits South Asia; Madrid train bombings.

2003: Iraq war; SARS panic.

2002: Post 9/11 fear; recession; WorldCom bankrupt; Bali bombings.  

2001: 9/11 terrorist attacks; Afghanistan war; Enron bankrupt; Anthrax attacks.  

2000: Dot-com bubble pops; presidential election snafu; USS Cole bombed.  

1999: Y2K panic; NATO bombing of Yugoslavia.

1998: Russia defaults on debt; LTCM hedge fund meltdown; Clinton impeachment; Iraq bombing. 

1997: Asian financial crisis.

1996: U.S. government shuts down; Olympic park bombing.

1995: U.S. government shuts down; Oklahoma City bombing; Kobe earthquake; Barings Bank collapse.

1994: Rwandan genocide; Mexican peso crisis; Northridge quake strikes Los Angeles; Orange County defaults.

1993: World Trade Center bombing.

1992: Los Angeles riots; Hurricane Andrew.

1991: Real estate downturn; Soviet Union breaks up.

1990: Persian Gulf war; oil spike; recession.”

Yet through it all, the MSCI World Index, a good proxy for global stocks, is up by more than 400% in price alone (in US dollar terms) from January 1990 to 9 October this year, as shown in the chart below. 

Source: MSCI

To me, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. There are more than 8.0 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but I have faith that humanity can fix these problems. 

The trailing price-to-earnings (P/E) ratio of the MSCI World Index was roughly the same at the start and end points of the chart shown above. This means that the index’s rise over time was predominantly the result of the underlying earnings growth of its constituent companies. It is a testament to how human ingenuity always finds a way, and to how stocks reflect this over the long run.
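
To make that logic explicit, a price return can be split into two parts – earnings growth and the change in the valuation multiple. This is a standard identity rather than anything specific to the MSCI data:

$$\frac{P_{\text{end}}}{P_{\text{start}}} = \frac{E_{\text{end}}}{E_{\text{start}}} \times \frac{(P/E)_{\text{end}}}{(P/E)_{\text{start}}}$$

With the P/E ratio roughly unchanged, the second factor is close to 1, so the index’s 400%-plus price gain had to come almost entirely from growth in earnings.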


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

7 Investing Mistakes to Avoid 

Investing is a negative art. It’s more important to avoid mistakes than it is to find ways to win.

From what I see, most investors are often on the lookout for ways to win in the stock market. But that may be the wrong focus, as economist Erik Falkenstein writes:

“In expert tennis, 80% of the points are won, while in amateur tennis, 80% are lost. The same is true for wrestling, chess, and investing: Beginners should focus on avoiding mistakes, experts on making great moves.”

In keeping with the spirit of Falkenstein’s thinking, here are some big investing blunders to avoid.

1. Not realising how common volatility is even with the stock market’s biggest long-term winners

From 1971 to 1980, the American retailer Walmart produced breathtaking business growth. Table 1 below shows the near-30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not translate into strong short-term returns for the stock.

Based on the earliest data I could find, Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the US stock market, represented by the S&P 500, was down by ‘only’ 40%. 

Table 1; Source: Walmart annual reports

But by the end of 1979, Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972. Still, the 2x-plus increase in Walmart’s stock price was far below the huge increase in earnings per share the company generated.

This is where the passage of time helped – as more years passed, the weighing machine clicked into gear (I’m borrowing from Ben Graham’s brilliant analogy of the stock market being a voting machine in the short run but a weighing machine in the long run). At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times where it was in late-August 1972. 
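
For anyone who wants to check the arithmetic, the annualised rate comes from the standard compound-growth formula. Using the rounded endpoints quoted above, and the roughly 17.3 years between late-August 1972 and the end of 1989:

$$\left(\frac{3.70}{0.04}\right)^{1/17.3} - 1 \approx 30\% \text{ per year}$$

The actual late-August 1972 price was a little below US$0.04, which is what nudges the precise figure into the low-30s region cited above.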

Volatility is a common thing in the stock market. It does not necessarily mean that anything is broken.

2. Mixing investing with economics

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. 

Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 

Figure 1; Source: Duncan Lamont

Why did the stock prices of Chinese companies behave the way they did? It turns out that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

Economic trends and investing results can at times be worlds apart. The gap exists because there can be a huge difference between a company’s business performance and the macroeconomic trend – and what ultimately matters to a company’s stock price is its business performance.

3. Anchoring on past stock prices

A 2014 study by JP Morgan showed that 40% of all stocks in the Russell 3000 index in the US from 1980 to 2014 suffered a permanent decline of 70% or more from their peak values.

There are stocks that fall hard – and then stay there. Thinking that a stock will return to a particular price just because it had once been there can be a terrible mistake to make. 

4. Thinking a stock is cheap based on superficial valuation metrics

My friend Chin Hui Leong from The Smart Investor has suffered through this mistake before, and he has graciously shared his experience so that others can learn from it. In an April 2020 article, he wrote:

“The other company I bought in May 2009, American Oriental Bioengineering, has shrunk to such a tiny figure, making it a total loss…

…In contrast, American Oriental Bioengineering’s revenue fell from around $300 million in 2009 to about US$120 million by 2013. The company also recorded a huge loss of US$91 million in 2013…

…Case in point: when I bought American Oriental Bioengineering, the stock was only trading at seven times its earnings. And yet, the low valuation did not yield a good outcome in the end.”

Superficial valuation metrics can’t really tell us if a stock is a bargain. Ultimately, it’s the business that matters.

5. Not investing due to fears of a recession

Many investors I’ve spoken to prefer to hold off investing in stocks if they fear a recession is around the corner, and jump back in only when the coast is clear. This is a mistake.

According to data from Michael Batnick, the Director of Research at Ritholtz Wealth Management, a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and done nothing. But if you had invested the same dollar in US stocks at the start of 1980 and side-stepped the ensuing recessions perfectly, you would have less than $32 at the same endpoint.
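
In annualised terms – this is my own arithmetic, using the roughly 39 years between the start of 1980 and the end of 2018 – the two outcomes work out to:

$$78^{1/39} - 1 \approx 11.8\% \text{ per year} \quad \text{versus} \quad 32^{1/39} - 1 \approx 9.3\% \text{ per year}$$

A drag of around 2.5 percentage points a year, compounded over nearly four decades, is the price of dodging every recession perfectly.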

Said another way, history’s verdict is that even flawlessly avoiding recessions would have caused serious harm to your investment returns.

6. Following big investors blindly

Morgan Housel is currently a partner with the venture capital firm Collaborative Fund. Prior to this, he was a writer for The Motley Fool for many years. Here’s what Housel wrote in a 2014 article for the Fool (emphasis is mine):

“I made my worst investment seven years ago.

The housing market was crumbling, and a smart value investor I idolized began purchasing shares in a small, battered specialty lender. I didn’t know anything about the company, but I followed him anyway, buying shares myself. It became my largest holding — which was unfortunate when the company went bankrupt less than a year later.

Only later did I learn the full story. As part of his investment, the guru I followed also controlled a large portion of the company’s debt and preferred stock, purchased at special terms that effectively gave him control over its assets when it went out of business. The company’s stock also made up one-fifth the weighting in his portfolio as it did in mine. I lost everything. He made a decent investment.”

We may never be able to know what a famous investor’s true motives are for making any particular investment. And for that reason, it’s important to never follow anyone blindly into the stock market.

7. Not recognising how powerful simple, common-sense financial advice can be

Robert Weinberg is an expert on cancer research from the Massachusetts Institute of Technology. In the documentary The Emperor of All Maladies, Weinberg said (emphases are mine):

“If you don’t get cancer, you’re not going to die from it. That’s a simple truth that we [doctors and medical researchers] sometimes overlook because it’s intellectually not very stimulating and exciting.

Persuading somebody to quit smoking is a psychological exercise. It has nothing to do with molecules and genes and cells, and so people like me are essentially uninterested in it — in spite of the fact that stopping people from smoking will have vastly more effect on cancer mortality than anything I could hope to do in my own lifetime.”

I think Weinberg’s lesson can be analogised to investing. Ben Carlson is the Director of Institutional Asset Management at Ritholtz Wealth Management. In a 2017 blog post, Carlson compared the long-term returns of US college endowment funds against a simple portfolio he called the Bogle Model.

The Bogle Model was named after the late index fund legend John Bogle. It consisted of three simple, low-cost Vanguard funds that track US stocks, stocks outside of the US, and bonds. In the Bogle Model, the funds were held in these weightings: 40% for the US stocks fund, 20% for the international stocks fund, and 40% for the bonds fund. Meanwhile, the college endowment funds were dizzyingly complex, as Carlson describes:

“These funds are invested in venture capital, private equity, infrastructure, private real estate, timber, the best hedge funds money can buy; they have access to the best stock and bond fund managers; they use leverage; they invest in complicated derivatives; they use the biggest and most connected consultants…”

Over the 10 years ended 30 June 2016, the Bogle Model produced an annual return of 6.0%. But even the college endowment funds in the top decile of returns produced an annual gain of just 5.4% on average. The simple Bogle Model had bested nearly all the fancy-pants college endowment funds in the US.
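
The 0.6 percentage point gap may look trivial, but compounding makes it meaningful. Per dollar invested over the same 10 years (again, my own arithmetic):

$$1.060^{10} \approx 1.79 \quad \text{versus} \quad 1.054^{10} \approx 1.69$$

So a dollar in the simple Bogle Model finished roughly 6% ahead of a dollar in the average top-decile endowment fund.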

Simple advice can be very useful and powerful for many investors. But it is sometimes ignored precisely because it seems too simple, despite how effective it can be. Don’t make this mistake.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI. In it, I shared commentary in earnings conference calls for the second quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s second quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe is using its rich datasets to create foundation models in areas where the company has expertise; Firefly has generated >2 billion images in 6 months 

Our rich datasets enable us to create foundation models in categories where we have deep domain expertise. In the 6 months since launch, Firefly has captivated people around the world who have generated over 2 billion images.

Adobe will allow users to create custom AI models using their proprietary data as well as offer Firefly APIs so that users can embed Firefly into their workflows

Adobe will empower customers to create custom models using proprietary assets to generate branded content and offer access to Firefly APIs so customers can embed the power of Firefly into their own content creation and automation workflows.

Adobe is monetising its generative AI features through generative credits; the generative credits have limits to them, but the limits are set in a way where users can really try out Adobe’s generative AI functions and build the use of generative AI into a habit

We announced subscription offerings, including new generative AI credits with the goal of enabling broad access and user adoption. Generative credits are tokens that enable customers to turn text-based prompts into images, vectors and text effects, with other content types to follow. Free and trial plans include a small number of monthly fast generative credits that will expose a broad base of prospects to the power of Adobe’s generative AI, expanding our top of funnel. Paid Firefly, Express and Creative Cloud plans will include a further allocation of fast generative credits. After the plan-specific number of generative credits is reached, users will have an opportunity to buy additional fast generative credits subscription packs…

…First of all, it was a very thoughtful, deliberate decision to go with the generative credit model. And the limits, as you can imagine, were very, very considered in terms of how we set them. The limits are, of course, fairly low for free users. The goal there is to give them a flavor of it and then help them convert. And for paid users, especially for people in our Single Apps and All Apps plans, one of the things we really intended to do is try and drive real proliferation of the usage. We didn’t want there to be generation anxiety, put in that way. We wanted them to use the product. We wanted the Generative Fill and Generative Expand. We wanted the vector creation. We want to build the habits of using it. And then what will happen over time as we introduce 3D, as we introduce video and design and vectors, and as we introduce these Acrobat capabilities that Shantanu was talking about, the generative credits that are used in any given month continue to go up because they’re getting more value out of it. And so that’s the key thing. We want people to just start using it very actively right now and build those habits.

Brands around the world are using Adobe’s generative AI – through products such as Adobe GenStudio – to create personalised customer experiences at scale; management sees Adobe GenStudio as a huge new opportunity; Adobe itself is using GenStudio for marketing its own products successfully and it’s using its own success as a selling point

Brands around the globe are working with Adobe to accelerate personalization at scale through generative AI. With the announcement of Adobe GenStudio, we are revolutionizing the entire content supply chain by simplifying the creation-to-activation process with generative AI capabilities and intelligent automation. Marketers and creative teams will now be able to create and modify commercially safe content to increase the scale and speed at which experiences are delivered…

…Shantanu and David already talked about the Adobe GenStudio, and we’re really excited about that. This is a unique opportunity, as you said, for enterprises to really create personalized content and drive efficiencies as well through automation and efficiency. And when you look at the entire chain of what enterprises go through from content creation, production workflow and then activation through DX through all the apps we have on our platform, we have the unique opportunity to do that. We already have deployed it within Adobe for our own Photoshop campaign, and we’re working with a number of agencies and customers to do that. So this is a big net new opportunity for us with Adobe GenStudio…

…And if I could actually just add one quick thing on the GenStudio work that Anil’s team has been doing, we’ve actually been using that within the Digital Media business already to release some of the campaigns that we’ve released this quarter. So it’s one of these things that it’s great to see the impact it’s having on our business and that becomes a selling point for other businesses, too.

Inferencing costs for generative AI are expensive, but Adobe’s management is still confident of producing really strong margins for FY2023

[Question] We’ve been told generative AI is really expensive to run. The inference and training costs are really high. 

[Answer] Our customers have generated over 2 billion images. And I know it’s not lost on people, all this was done while we’re delivering strong margins. But when we take a step back and think about these technologies, we have investments from a COGS standpoint, inferencing, content; from an R&D standpoint, training, creating foundation models. And David alluded to it in his prepared comments, the image model for Firefly family of models is out, but we’re going to bring other media types to market as well so we’re making substantive investments. When I go back to the framing of my prepared comments, we really have a fundamental operating philosophy that’s been alive at the company for a long time: growth and profitability. We’re going to prioritize, we’re going to innovate and we’re going to execute with rigor…

…As we think about going — the profile going forward, what I’ll come back to is when we initially set fiscal 2023 targets, implicit in those targets was a 44.5% operating margin. If you think about how we just guided Q4… implicit in that guide is an operating margin of around 45.5%.

So as you think about us leading this industry, leading the inflection that’s unfolding in front of us, that mid-40s number, we think, is the right ballpark to think about the margin structure of the company as we continue to drive this technology and leadership. 

Adobe’s management thinks about generative AI’s impact on the company’s growth through two lenses: (1) acquiring new users, and (2) growing the spend of existing customers; for growing the spend of existing customers, Adobe has recently increased the pricing of its products

Yes, Shantanu said that we look at the business implications of this through those two lenses: new user adoption, first and foremost; and then sort of opportunity to continue to grow the existing book of business. On the new user side, we’ve said this for years: our focus continues to be on proliferation. We believe that there — we have a massive number of users in front of us. We continue to have our primary focus being net user adds and subscribers. And so the goal here in proliferation is to get the right value to the right audience at the right price…

…The second thing is going to be on the book of business. And here, we’re — basically, the pricing changes, just as a reminder, they have a rolling impact. 

Adobe’s management took a differentiated approach with Firefly when building the company’s generative AI capabilities, with a focus on using licensed content for training where Adobe has the rights to use the content 

So from the very beginning of Firefly, we took a very different approach to how we were doing generative. We started by looking at and working off the Adobe Stock base, which are contents that are licensed and very clearly we have the rights to use. And we looked at other repositories of content where they didn’t have any restrictions on usage, and we’ve pulled that in. So everything that we’ve trained on has gone through some form of moderation and has been cleared by our own legal teams for use in training. And what that means is that the content that we generate is, by definition, content that isn’t then stepping on anyone else’s brand and/or leveraging content that wasn’t intended to be used in this way. So that’s the foundation of what we’ve done.

Adobe is sharing the economic spoils with the creators of the content it has been training its generative AI models on

We’ve been working with our Stock contributors. We’ve announced, and in fact, yesterday, we had our first payout of contributions to contributors that have been participating and adding stock for the AI training. And we’re able to leverage that base very effectively so that if we see that we need additional training content, we can put a call to action, call for content, out to them, and they’re able to bring content to Adobe in a fully licensed way. So for example, earlier this quarter, we decided that we needed 1 million new images of crowd scenes. And so we put a call to action out. We were able to gather that content in. But it’s fully licensed and fully moderated in terms of what comes in. So as a result, all of the content we generate is safe for commercial use.

Adobe’s management is seeing that enterprise customers place a lot of importance on working with AI-generated content that is commercially safe

The second thing is that because of that, we’re able to go to market and also indemnify customers in terms of how they’re actually leveraging that content and using it for content that’s being generated. And so enterprise customers find that to be very important as we bring that in not just in the context of Firefly stand-alone but we integrated into our Creative Cloud applications and Express applications as well. 

Adobe’s management has been very focused on generating fair (in population diversity, for example) and safe content in generative AI and they think this is a good business decision

We’ve been very focused on fair generation. So we look intentionally for diversity of people that are generated, and we’re looking to make sure that the content we generate doesn’t create or cause any harm. And all of those things are really good business decisions and differentiate us from others. 

One of the ways Adobe’s management thinks generative AI could be useful in PDFs is for companies to be able to have conversations with their own company-wide knowledge base that is stored in PDFs – Adobe is already enabling this through APIs

Some of the things that people really want to know is how can I have a conversational interface with the PDF that I have, not just the PDF that I have opened right now but the PDF that are all across my folder, then across my entire enterprise knowledge management system, and then across the entire universe. So much like we are doing in Creative, where you can start to upload your images to get — train your own models within an enterprise, well, it is often [ hard-pressed ]. The number of customers who want to talk to us now that we’ve sort of designed this to be commercially safe and say, “Hey, how do we create our own model,” whether you’re a Coke or whether you’re a Nike, think of them as having that. I think in the document space, the same interest will happen, which is we have all our knowledge within an enterprise associated with PDFs, “Adobe, help me understand how your AI can start to deliver services like that.” So I think that’s the way you should also look at the PDF opportunity that exists, just more people taking advantage of the trillions of PDFs that are out there in the world and being able to do things…

… So part of what we are also doing with PDFs is the fact that you can have all of this now accessible through APIs. It’s not just the context of the PDF, the semantic understanding of that to do specific workflows, we’re starting to enable all of that as well. 

When it comes to generative AI products, Adobe’s goal for enterprises and partners is to provide (1) API access, (2) ability to train their own models, and (3) core workflows that gel well with Adobe’s existing products; management is thinking about extending the same metering concepts as Adobe’s generative credits to API calls too

Our goal right now, for enterprises and third-parties that we work with, is to provide a few things. The first is this ability, obviously, to have API access to everything that we are building in, so that they can build it into their workflows and their automation stack. The second thing is to give them the ability to extend or train their own models as well. So if — as we mentioned earlier, our core model, foundation model is a very clean model. It generates great content and you can rely on it commercially. We want our customers and partners to be able to extend that model with content that is relevant to them so that Firefly is able to generate content in their brand or in their style. So we’ll give them the ability to train their own model as well. And then last, but certainly not least, we’ll give them some core workflows that will work with our existing products, whether it’s Express or whether it’s Creative Cloud or GenStudio as well, so that they can then integrate everything they’re doing onto our core platform.

And then from a monetization perspective, you can imagine the metering concepts that we have for generative credits extending to API calls as well. And of course, those will all be custom negotiated deals with partners and enterprises.

Adobe is its own biggest user of the AI products it has developed for customers – management thinks this is a big change for Adobe because the extent of usage internally of its AI products is huge, and it has helped improve the quality of the company’s AI products

So I think the pace of innovation internally of what we have done is actually truly amazing. I mean relative to a lot of the companies that are out there and the fact that we’ve gone from talking about this to very, very quickly, making it commercially available, I don’t want to take for granted the amount of work that went into that. I think internally, it is really galvanized because we are our own biggest user of these technologies. What we are doing associated with the campaigns and the GenStudio that we are using, as David alluded to it, our Photoshop Everyone Can Campaign or the Acrobat’s Got It campaign or how we will be further delivering campaigns for Express as well as for Firefly, all of this is built on this technology. And we use Express every day, much like we use Acrobat every day. So I think it’s really enabled us to say are we really embracing all of this technology within the company. And that’s been a big change because I think the Creative products, we’ve certainly had phenomenal usage within the company, but the extent to which the 30,000 employees can now use our combined offering, that is very, very different internally

DocuSign (NASDAQ: DOCU)

DocuSign has a new AI-powered feature named Liveness Detection for ID verification, which has reduced the time needed for document signings by 60%

Liveness Detection technology leverages AI-powered biometric checks to prevent identity spoofing, which results in more accurate verification without the signee being present. ID Verification is already helping our customers. Our data shows that it has reduced time to sign by about 60%.

DocuSign is already monetising AI features, both directly and indirectly

Today, we’re already monetizing AI directly through our CLM+ product and indirectly through its use in our products such as search. 

DocuSign has launched AI Labs, an initiative to build products in closer collaboration with customers

Our next step on that journey is with AI Labs. With AI Labs, we are co-innovating with our customers. We provide a sandbox where customers can share a select subset of agreements and try new features we’re testing. Our customers get early access to developing technology, and we receive early feedback that we will incorporate into our products. By working with our customers in the development phase, we’re further reinforcing the trusted position we’ve earned over the last 20 years.

DocuSign’s management is excited about how AI – especially generative AI – can help the company across the entire agreement workflow

We think AI will impact practically all of our products at every step of the agreement workflow. So I don’t know that there’s just one call-out. But maybe to offer a couple that I’m most interested in, I certainly think that the broader, should we say, agreement analytics category is poised to be completely revamped with generative AI.

DocuSign has been an early investor in AI but had been held back by fundamental technology until the introduction of generative AI

We were an early investor in that category. We saw that coming together with CLM 4 or 5 years ago and made a couple of strategic investments and have been a leader in that space, but have been held back by fundamental technology. And I think now with generative AI, we can do a substantially better job more seamlessly, lighter weight with less professional services. And so I’m very excited to think about how it transforms the CLM category and enables us to deliver more intelligent agreements. I think you mentioned IDV [ID Verification]. I agree 100%. Fundamentally, that entire category is AI-enabled. The upload and ingestion of your ID, recognition of it, and then that Liveness Detection where we’re detecting who you are and that you are present and matching that to the ID, that would simply not be possible without today’s AI technology, and it really just dramatically reshapes the ability to trade off risk and convenience. So I think that’s a good one.

MongoDB (NASDAQ: MDB)

There are 3 important things to focus on when migrating off a relational database, and MongoDB’s management thinks that generative AI can help with one of them (the rewriting of the application code)

So with regards to Gen AI, I mean, we do see opportunities. Essentially, when you migrate off using Relational Migrator, there’s really 3 things you have to focus on. One is mapping the schema from the old relational database to the MongoDB platform, moving the data appropriately, and then also rewriting some, if not all, of the application code. Historically, that last component has been the most manually intensive part of the migration. Obviously, with the advance of code-generation tools, there are opportunities to automate the rewriting of the application code. I think we’re still in the very early days. You’ll see us continue to add new functionality to Relational Migrator to help again reduce the switching costs of doing so. And that’s obviously an area that we’re going to focus on.

MongoDB introduced Atlas Vector Search, its vector database which allows developers to build AI applications, and it is seeing significant interest; management hopes to bring Atlas Vector Search to general availability (GA) sometime next year, but some customers are already deploying it in production

We also announced Atlas Vector Search, which enables developers to store, index and query vector embeddings, instead of having to bolt on vector search functionality separately, adding yet another point solution and creating a more fragmented developer experience. Developers can aggregate and process the vectorized data they need to build AI applications while also using MongoDB to aggregate and process data and metadata. We are seeing significant interest in our vector search offering from large and sophisticated enterprise customers even though it’s only — still only in preview. As one example, a large global management consulting firm is using Atlas Vector Search for an internal research application that allows consultants to semantically search over 1.5 million expert interview transcripts…

…Obviously, Vector is still in public preview. So we hope to have a GA sometime next year, but we’re really excited about the early and high interest from enterprises. And obviously, some customers are already deploying it in production, even though it’s a public preview product.
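
For readers curious about what “storing, indexing and querying vector embeddings” next to ordinary data looks like in practice, here is a minimal sketch in Python. It assumes MongoDB Atlas’s $vectorSearch aggregation stage and a pre-created vector index; the cluster URI, collection, index and field names are all illustrative, not taken from the article:

```python
# A minimal sketch of semantic search over documents in MongoDB Atlas.
# Assumes a vector index named "embedding_index" exists on the "embedding"
# field; all names here are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
transcripts = client["research"]["transcripts"]

# Each document keeps its raw text, metadata and embedding side by side,
# so there is no separate bolt-on vector store to keep in sync.
transcripts.insert_one({
    "title": "Expert interview transcript",
    "text": "...",
    "embedding": [0.12, -0.08, 0.33],  # vector from any embedding model
})

# Semantic search: find the documents whose stored embeddings sit closest
# to the embedding of the user's query.
query_vector = [0.10, -0.05, 0.30]
results = transcripts.aggregate([{
    "$vectorSearch": {
        "index": "embedding_index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 5,
    }
}])
for doc in results:
    print(doc["title"])
```

The design point management is making is that the metadata, the raw data and the vector index all live on one platform, so a developer queries them through one interface instead of stitching a separate vector database to MongoDB.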

MongoDB’s management believes that AI will lead developers to write more software, and this software will be exceptionally demanding and will thus require high-performance databases

Over time, AI functionality will make developers more productive through the use of code generation and code assist tools that enable them to build more applications faster. Developers will also be able to enrich applications with compelling AI experiences by enabling integration with either proprietary or open source large language models to deliver more impact. Now instead of data being used only by data scientists who drive insights, data can be used by developers to build smarter applications that truly transform a business. These AI applications will be exceptionally demanding, requiring a truly modern operational data platform like MongoDB.

MongoDB’s management believes MongoDB has a bright future in the world of AI because (1) the company’s document database is highly versatile, (2) AI applications need a high-performance, scalable database, and (3) AI applications have the same requirements for transactional guarantees, security, privacy etc as other applications

In fact, we believe MongoDB has an even stronger competitive advantage in the world of AI. First, the document model’s inherent flexibility and versatility render it a natural fit for AI applications. Developers can easily manage and process various data types all in one place. Second, AI applications require high performance, parallel computations and the ability to scale data processing on an ever-growing base of data. MongoDB supports this with features like sharding and auto-scaling. Lastly, it is important to remember AI applications have the same demands as any other type of application: transactional guarantees, security and privacy requirements, text search, in-app analytics and more. Our developer data platform gives developers a unified solution to build smarter AI applications.

AI startups as well as industrial equipment suppliers are using MongoDB for their AI needs 

We are seeing these applications developed across a wide variety of customer types and use cases. For example, Observe.ai is an AI start-up that leverages a 40-billion-parameter LLM to provide customers with intelligence and coaching that maximize performance of their frontline support and sales teams. Observe.ai processes and runs models on millions of support touch points daily to generate insights for their customers. Most of this rich, unstructured data is stored in MongoDB. Observe.ai chose to build on MongoDB because we enable them to quickly innovate, scale to handle large and unpredictable workloads, and meet the security requirements of their largest enterprise customers. On the other end of the spectrum is one of the leading industrial equipment suppliers in North America. This company relies on Atlas and Atlas Device Sync to deploy AI models at the edge, on their field teams’ mobile devices, to better manage and predict inventory in areas with poor physical network connectivity. They chose MongoDB because of our ability to efficiently handle large quantities of distributed data and to seamlessly integrate between the network edge and their back-end systems.

MongoDB’s management sees customers saying that they prefer being able to have one platform handle all their data use-cases (AI included) rather than stitching point solutions together

People want to use one compelling, unified developer experience to address a wide variety of use cases, of which AI is just one. And we’re definitely hearing from customers that being able to do that on one platform versus bolting on a bunch of point solutions is by far the preferable approach. And so we’re excited about the opportunity there.

MongoDB is working with Google on a number of AI projects

On the other thing on partners, I do want to say that we’re seeing a lot of work and activity with our partner channel on the AI front as well. We’re working with Google in the AI start-up program, and there’s a lot of excitement. Google had their Next conference this week. We’re also working with Google to help train Codey, their code-generation tool, to help people accelerate the development of AI and other applications. And we’re seeing a lot of interest in our own AI Innovators program. We’ve had lots of customers apply for that program. So we’re super excited about the interest that we’re generating.

MongoDB’s management thinks there’s a lot of hype around AI in the short term, but also thinks that AI is going to have a huge impact in the long-term, with nearly every application having some AI functionality embedded within over the next 3-5 years

I firmly believe that we, as an industry, tend to overestimate the impact of a new technology in the short term and underestimate the impact in the long term. So as you may know, there’s a lot of hype in the market right now in the industry around AI, and some of the early-stage companies in the space have valuations through the roof. In some cases, almost — it’s hard to see how people can make money because the risk reward doesn’t seem to be sized appropriately. So there’s a lot of hype in the space. But I do think that AI will have a big impact for the industry and for us long term. I believe that almost every application, both new and existing, will have some AI functionality embedded into the application over the next 3 to 5 years.

MongoDB’s management thinks that vector search (the key distinguishing feature of vector databases) is just a feature and not a product, and it will eventually be built into every database as a feature

Vector Search is really a reverse index. So it’s like an index that’s built into all databases. I believe, over time, Vector Search functionality will be built into all databases or data platforms in the future. There are some point products that are just focused solely on Vector Search. But essentially, it’s a point product that still needs to be used with other technologies like MongoDB to store the metadata, the data to be able to process and analyze all that information. So developers have spoken loudly that having a unified and elegant developer experience is a key differentiator. It removes friction in how they work. It’s much easier to build and innovate on one platform versus learning and supporting multiple technologies. And so my strong belief is that, ultimately, Vector Search will be embedded in many platforms and our differentiation will be a — like it always has been a very compelling and elegant developer experience

MongoDB’s management thinks that having vector search as a feature in a database does not help companies to save costs, but instead, improves the overall developer experience

Question: I know that we’re talking about the developers and how they — they’re voting here because they want the data in a unified platform, a unified database that preserves all that metadata, right? But I would think there’s probably also a benefit to having it all in a single platform as well just because you’re lowering the TCO [total cost of ownership] for your customers as well, right? 

Answer: Vectors are really a mathematical representation of different types of data, so there is not a ton of data, unlike application search, where there are profound benefits to storing everything on one platform versus having an operational database and a search database and some glue to keep the data in sync. That’s not as much the case with Vector because you’re talking about storing essentially an elegant index. And so it’s more about the user experience and the development workflow that really matters. And what we believe is that offering the same taxonomy in the same way they know how to use MongoDB to also be able to enable Vector Search functionality is a much more compelling differentiation than a developer having to bolt on a separate vector solution and having to provision, configure and manage that solution along with all the other things they have to do.

MongoDB’s management believes developers will become more important in organisations than data scientists because generative AI will position AI in front of software

Some of the use cases are really interesting, but the fact is that we’re really well positioned because what generative AI does is really instantiate AI in front of — in software, which means developers play a bigger role rather than data scientists, and that’s where you’ll really see the business impact. And I think that impact will be large over the next 3 to 5 years.

Okta (NASDAQ: OKTA)

Okta has been using AI for years and management believes that AI will be transformative for the identity market

AI is a paradigm shift in technology that creates transformative opportunities for identity, from stronger security and faster application development to better user experiences and more productive employees. Okta has been utilizing AI for years with machine learning models for spotting attack patterns and defending customers against threats, and we’ll have more exciting AI news to share at Oktane.

Okta’s management believes that every company must have an AI strategy, which will lead to more identities to be protected; a great example is how OpenAI is using Okta; Okta’s relationship with OpenAI started a few years ago and OpenAI is now a big customer, accounting for a significant chunk of the US$100m in TCV (total contract value) Okta had with its top 25 transactions in the quarter

Just like how every company has to be a technology company, I believe every company must have an AI strategy. More companies will be founded on AI, more applications will be developed with AI and more identities will need to be protected with a modern identity solution like Okta. A great example of this is how Okta’s Customer Identity Cloud is being utilized for the massive number of daily log-ins and authentications by OpenAI, which expanded its partnership with Okta again in Q2…

So OpenAI is super interesting. So they’re — OpenAI is a Customer Identity Cloud customer, so when you log in to ChatGPT, you log in through Okta. And it’s interesting because a developer inside of OpenAI 3 years ago picked our Customer Identity Cloud because it had a great developer experience, and from the website, started using it. And this Chat — and at the time, it was the log-in for their APIs and then ChatGPT took off. And now, as you mentioned, we’ve had really pretty sizable transactions with them over the last couple of quarters. And so it’s a great testament to our strategy on Customer Identity, having something that appeals to developers.

And you saw they did something pretty interesting — and so this is really a B2C app, right, of ChatGPT but they — now they recently launched their enterprise offering, and they want to connect ChatGPT to enterprises. So this is — Okta is really good at this, too, because our Customer Identity Cloud connects our customers to consumers, but also connects our customers to workforces. So then you have to start supporting things like Single Sign-On and SAML and OpenID and authorization. And so OpenAI just continues to get the benefits of being able to focus on what they want to focus on, which is obviously their models in the LLMs and the capabilities, and we can focus on the identity plumbing that wires it together.

So the transaction was — it was one of the top — I mentioned the top 25 transactions. The total TCV of all these transactions this quarter was $100 million. It was one of those top 25 transactions, but I don’t — I haven’t done the math on the TCV for how much of the $100 million it was. But it was one of our — it was on the larger side this quarter.

Okta’s management thinks that identity is a key building block in a number of digital trends, including AI

It’s always a good reminder that identity is a key building block for Zero Trust security, digital transformation, cloud adoption projects and now AI. These trends will continue in any macroeconomic environment as organizations look for ways to become more efficient while strengthening their security posture.

Salesforce (NYSE: CRM)

Salesforce is driving an AI transformation to become the #1 AI CRM (customer relationship management)

And last quarter, we told you we’re now driving our AI transformation. We’re pioneering AI for both our customers and ourselves, leading the industry through this incredible new innovation cycle, and I couldn’t be happier with Srini and David and the entire product and technology team for the incredible velocity of AI products that were released to customers this quarter and the huge impact that they’re making in the market, showing how Salesforce is transforming from being not only the #1 CRM, but also the #1 AI CRM, and I just express my sincere gratitude to our entire [ TNP ] team.

Salesforce’s management will continue to invest in AI

We’re in a new AI era, a new innovation cycle that we will continue to invest into as we have over the last decade. As a result, we expect nonlinear quarterly margins in the back half of this year, driven by investment timing, specifically in AI-focused R&D.

Salesforce’s management believes the world is at the dawn of an AI revolution that will spark a new tech buying cycle and investment cycle

AI, data, CRM, trust, let me tell you, we are at the dawn of an AI revolution. And as I’ve said, it’s a new innovation cycle which is sparking a new tech buying cycle over the coming years. It’s also a new tech investment cycle…

…And when we talk about growth, I think it’s going to start with AI. I think that AI is about to really ignite a buying revolution. I think we’ve already started to see that with our customers and even some of these new companies like OpenAI. And we certainly see that in our customers’ base as well. 

Salesforce has been investing in many AI startups through its $500 million generative AI fund

We’ve been involved in the earliest rounds of many of the top AI start-ups. Many of you have seen that, we are in there very early…

… Now through our $500 million generative AI fund, we’re seeing the development of ethical AI with amazing companies like Anthropic, [ Cohere ], Hugging Face and some others,

Salesforce has been working on AI early on

But I’ll tell you, this company has pioneered AI, and not just in predictive, a lot of you have followed the development and growth of Einstein. But also, you’ve seen that we’ve published some of the first papers on prompt engineering in the beginnings of generative AI, and we took our deep learning roots, and we really demonstrated the potential for generative AI and now to see so many of these companies become so successful.

Every CEO Salesforce’s leadership has met thinks that AI is essential to improving their businesses

So every CEO I’ve met with this year across every industry believes that AI is essential to improving both their top and bottom line, but especially their productivity. AI is just augmenting what we can do every single day…

…I think many of our customers and ultimately, all of them believe they can grow their businesses by becoming more connected to their customers than ever before through AI and at the same time, reduce cost, increase productivity, drive efficiency and exceed customer expectations through AI. 

All management teams in Salesforce are using Einstein AI to improve their decision-making

Every single management team that we have here at Salesforce every week, we’re using our Einstein AI to do exactly the same thing. We go back, we’re trying to augment ourselves using Einstein. So what we’ll say is, and we’ve been doing this now and super impressive, we’ll say, okay, Brian, what do you think our number is and we’ll say, okay, that’s very nice, Brian. But Einstein, what do you really think the number is? And then Einstein will say, I think Brian is sandbagging and then the meeting continues. 

Salesforce’s management thinks that every company will undergo an AI transformation with the customer at the centre, and this is why Salesforce is well positioned for the future

The reality is every company will undergo an AI transformation with the customer at the center, because every AI transformation begins and ends with the customer, and that’s why Salesforce is really well positioned with the future.

Salesforce has been investing a lot in Einstein AI, and Einstein is democratising generative AI for users of Salesforce’s products; Salesforce’s management thinks that the real value Salesforce brings to the world is the ability to help users utilise AI in a low code or no code way 

And with this incredible technology, Einstein, that we’ve invested so much in, grown, and integrated into our core technology base, we’re democratizing generative AI, making it very easy for our customers to implement in every job, every business, in every industry. And I will just say that in the last few months, we’ve injected a new layer of generative AI assistance across all of the Customer 360. And you can see it with our salespeople who are now using our Sales Cloud GPT, which has been incredible, what we’ve released this quarter to all of our customers and here inside Salesforce. And then when we see that, they all say to themselves, you know what, in this new world, everyone can now be an Einstein.

But democratizing generative AI at scale for the biggest brands in the world requires more than — that’s just these large language models and deep learning algorithms, and we all know that because a lot of our customers kind of think and they have tried and they go and they pull something off a Hugging Face, it is an amazing company. We just invested in their new round and grab a model and put some data in it and nothing happens. And then they don’t understand and they call us and say, “Hey, what’s happening here? I thought that this AI was so amazing and it’s like, well, it takes a lot to actually get this intelligence to occur. And that’s what I think that’s the value that Salesforce is bringing is that we’re really able to help our customers achieve this kind of technological superiority right out of the box just using our products in a low code, no code way. It’s really just democratization of generative AI at scale. And that is really what we’re trying to achieve that at the heart of every one of these AI transformations becomes our intelligent, integrated and incredible sales force platform, and we’re going to show all of that at Dreamforce

Salesforce is seeing strong customer momentum on Einstein generative AI (a customer – PenFed – used Einstein-powered chatbots to significantly improve their customer service)

We’re also seeing strong customer momentum on Einstein generative AI. PenFed is a great example of how AI plus data plus CRM plus Trust is driving growth for our customers. PenFed is one of the largest credit unions in the U.S., growing at a rate of the next 9 credit unions combined. They’re already using Financial Services Cloud, Experience Cloud and MuleSoft, and our Einstein-powered chatbots handling 40,000 customer service sessions per month. In fact, today, PenFed resolves 20% of their cases on first contact with Einstein-powered chatbots resulting in a 223% increase in chatbot activity in the past year with incredible ROI. In Q2, PenFed expanded with Data Cloud to unify all the customer data from its nearly 3 million members and increase their use of Einstein to roll out generative AI assistant for every single one of their service agents.

Salesforce’s management thinks that customers who want to achieve success with AI needs to have their data in order

But what you can see with Data Cloud is that customers must get their data together if they want to achieve success with AI. This is the critical first step for every single customer. And we’re going to see that this AI revolution is really a data revolution. 

Salesforce takes the issue of trust very seriously in its AI work; Salesforce has built a unique trust layer within Einstein that allows customers to maintain data privacy, security, and more

Everything Einstein does has also delivered with trust and especially ethics at the center, and I especially want to call out the incredible work of our office of ethical and humane use, pioneering the use of ethics and technology. If you didn’t read their incredible article in HBR this quarter. It was awesome. And they are doing incredible work really saying that it’s not just about AI, it’s not just about data, but it’s also about trust and ethics. And that’s why we developed this Einstein trust layer. This is completely unique in the industry. It enables our customers to maintain their data privacy, security, residency and compliance goals.

Salesforce has seen customers from diverse industries (such as Heathrow Airport and Schneider Electric) find success using Salesforce’s AI tools

Heathrow is a great example of the transformative power of AI, data, CRM and trust, and the power of a single source of truth. They have 70 million passengers who pass through their terminals annually – I’m sure many of you have been one of those passengers, I have as well. Heathrow is operating at a tremendous scale, managing the entire airport experience with the Service Cloud, Marketing Cloud, Commerce Cloud, but now Heathrow, they’ve added Data Cloud, also giving them a single source of truth for every customer interaction and setting them up to pioneer the AI revolution. And with Einstein, Heathrow’s service agents now have AI-assisted generative replies to service inquiries, case deflection, writing case summaries, all with the relevant data and business context coming from Data Cloud…

…Schneider Electric has been using Customer 360 for over a decade, enhancing customer engagement, service and efficiency. With Einstein, Schneider has refined demand generation, reduced close times by 30%. And through Salesforce Flow, they’ve automated order fulfillment. And with Service Cloud, they’re handling over 8 million support interactions annually, much of it done on our self-service offering. In Q2, Schneider selected Marketing Cloud to further personalize the customer experience.

Salesforce’s management thinks the company is only near the beginning of the AI evolution and there are four major steps on how the evolution will happen

And let me just say, we’re at the beginning of quite a ballgame here and we’re really looking at the evolution of artificial intelligence in a broad way, and you’re really going to see it take place over 4 major zones.

And the first major zone is what’s played out in the last decade, which has been predictive. That’s been amazing. That’s why Salesforce will deliver about [ 1 trillion ] transactions on Einstein this week. It’s incredible. 

These are mostly predictive transactions, but we’re moving rapidly into the second zone that we all know is generative AI and these GPT products, which we’ve now released to our customers. We’re very excited about the speed of our engineering organization and technology organization, our product organization and their ability to deliver customer value with generative AI. We have tremendous AI expertise led by an incredible AI research team. And this idea that we’re kind of now in a generative zone means that’s zone #2.

But as you’re going to see at Dreamforce, zone #3 is opening up with autonomous and with agent-based systems as well. This will be another level of growth and another level of innovation that we haven’t really seen unfold yet from a lot of companies, and that’s an area that we are excited to do a lot of innovation and growth and to help our customers in all those areas.

And then we’re eventually going to move into [ AGI ] and that will be the fourth area. And I think as we move through these 4 zones, CRM will become more important to our customers than ever before. Because you’re going to be able to get more automation, more intelligence, more productivity, more capabilities, more augmentation of your employees, as I mentioned.

Salesforce can use AI to help its customers in areas such as call summaries, account overviews, responding to its customers’ customers, and more

And you’re right, we’re going to see a wide variety of capability is exactly like you said, whether it’s the call summaries and account overviews and deal insights and inside summaries and in-product assistance or mobile work briefings. I mean, when I look at things like service, when we see the amount of case deflection we can do and productivity enhancements with our service teams not just in replies and answers, but also in summaries and summarization. We’ve seen how that works with generative and how important that is in knowledge generation and auto-responding conversations and then we’re going to have the ability for our customers to — with our product.

Salesforce has its own AI models, but it runs an open system – it’s allowing customers to choose any models they wish

We have an open system. We’re not dictating that they have to use any one of these AI systems. We have an ecosystem. Of course, we have our own models and our own technology that we have given to our customers, but we’re also investing in all of these companies, and we plan to be able to offer them as opportunities for those customers as well, and they’ll be able to deliver all kinds of things. And you’ll see that, whether it’s going to end up being contract digitization and cost generation or survey generators or all kinds of campaign assistance.

Slack is going to be an important component of Salesforce’s AI-related work; management sees Slack as an easy-to-use interface for Salesforce’s AI systems

Slack has become incredible for these AI companies; every AI company that we’ve met with is a Slack company. All of them make their agents available for Slack first. We saw that, for example, with Anthropic, where Claude appeared first, and [ Claude 2 ] first, in Slack.

And Anthropic, as a company, uses Slack internally, and they take their technology and develop news digests every day and newsletters, and they do incredible things with Slack. Slack is just a treasure trove of information for artificial intelligence, and you’ll see us deliver all kinds of new capabilities in Slack along these lines.

And we’re working, as I’ve mentioned, to get Slack to wake up and become more aware, and also for Slack to be able to do all of the things that I just mentioned. One of the most exciting things I think you’re going to see at Dreamforce is Slack very much as a vision for the front end of all of our core products. We’re going to show you an incredible new capability that we call Slack Sales Elevate, which brings our core Sales Cloud system right inside Slack.

That’s going to be amazing, and we’re going to also see how we’re going to release and deliver all of our core services in Salesforce through Slack. This is very important for our company: to deliver Slack very much as a tremendous, easy-to-use interface on the core Salesforce, but also on all these AI systems. So all of that is that next generation of artificial intelligence capability, and I’m really excited to show all of that to you at Dreamforce, as well as Data Cloud.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, and Salesforce. Holdings are subject to change at any time.

When Genius Failed (temporarily)*

Not even a business and investing genius can save us from short-term pain.

The late Henry Singleton was a bona fide polymathic genius. He had a PhD in electrical engineering and could play chess just below the grandmaster level. In the realm of business, Warren Buffett once said that Singleton “has the best operating and capital deployment record in American business… if one took the 100 top business school graduates and made a composite of their triumphs, their record would not be as good.”

Singleton co-founded Teledyne in 1960 and stepped down as chairman in 1990. Teledyne started life as an electronics company and through numerous acquisitions engineered by Singleton, morphed into an industrials and insurance conglomerate. According to The Outsiders, a book on eight idiosyncratic CEOs who generated tremendous long-term returns for their shareholders, Teledyne produced a 20.4% annual return from 1963 to 1990, far ahead of the S&P 500’s 8.0% return. Distant Force, a hard-to-obtain memoir on Singleton, mentioned that a Teledyne shareholder who invested in 1966 “was rewarded with an annual return of 17.9 percent over 25 years, or a return of 53 times his invested capital.” In contrast, the S&P 500’s return was just 6.7 times in the same time frame. 
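As a side note, the relationship between a total return multiple and an annualised return is simple arithmetic. Here is a minimal Python sketch that back-checks the figures above (the function name is my own, purely for illustration):

```python
def annualised_return(multiple: float, years: float) -> float:
    """Convert a total return multiple over a period into a compound annual return."""
    return multiple ** (1 / years) - 1

# Distant Force's figures for a 1966 Teledyne investor, over 25 years:
print(f"{annualised_return(53, 25):.1%}")   # ~17.2% a year from a 53x multiple
print(f"{annualised_return(6.7, 25):.1%}")  # ~7.9% a year from the S&P 500's 6.7x
```

A 53x multiple over 25 years works out to roughly 17.2% a year, close to the memoir’s quoted 17.9%; the small gap is likely just rounding in the underlying figures.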

Beyond the excellent long-term results, I also found another noteworthy aspect of Singleton’s record: It is likely that shareholders who invested in Teledyne in 1963 or 1966 would subsequently have thought, for many years, that Singleton’s genius had failed them. I’m unable to find precise historical stock price data for Teledyne during Singleton’s tenure. But based on what I could gather from Distant Force, Teledyne’s stock price sank by more than 80% from 1967 to 1974. That’s a huge and demoralising decline for shareholders after holding on for seven years, and was significantly worse than the 11% fall in the S&P 500 in that period. But even an investor who bought Teledyne shares in 1967 would still have earned an annualised return of 12% by 1990, outstripping the S&P 500’s comparable annualised gain of 10%. And of course, an investor who bought Teledyne in 1963 or 1966 would have earned an even better return, as mentioned earlier. 
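The arithmetic of drawdowns shows just how remarkable that recovery was. A quick sketch, using the figures above (reading the holding period as the 23 years from 1967 to 1990):

```python
def recovery_multiple(drawdown: float) -> float:
    """Gain needed just to climb back to the pre-drawdown peak."""
    return 1 / (1 - drawdown)

print(f"{recovery_multiple(0.80):.0f}x")  # 5x: the gain needed after an 80% fall

# Yet a 1967 buyer who held on to 1990 still compounded at ~12% a year:
print(f"{1.12 ** 23:.1f}x")  # ~13.5x over the 23-year holding period
```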

Just like how Buffett’s Berkshire Hathaway had seen a stomach-churning short-term decline in its stock price en route to superb long-term gains driven by outstanding business growth, shareholders of Teledyne also had to contend with the same. I don’t have historical financial data on Teledyne from primary sources. But for the 1963-1989 time frame, based on data from Distant Force, it appears that the compound annual growth rates (CAGRs) for the conglomerate’s revenue, net income, and earnings per share were 19.8%, 25.3%, and 20.5%, respectively; the self-same CAGRs for the 1966-1989 time frame were 12.1%, 14.3%, and 16.0%. These numbers roughly match Teledyne’s returns cited by The Outsiders and Distant Force, once again demonstrating a crucial trait about the stock market I’ve mentioned in many earlier articles in this blog (see here and here for example): What ultimately drives a stock’s price over the long run is its business performance.

Not every long-term winner in the stock market will bring its shareholders through an agonising fall mid-way. A notable example is the Canada-based Constellation Software, which is well-known in the investment community for being a serial acquirer of vertical market software businesses. The company’s stock price has risen by nearly 15,000% from its May 2006 IPO to the end of June 2023, but it has never seen a peak-to-trough decline of more than 30%. This said, it’s common to see companies suffer significant drawdowns in their stock prices while on their way to producing superb long-term returns. An unfortunate reality confronting investors who are focused on the long-term business destinations of the companies they’re invested in is that while the end point has the potential to be incredibly well-rewarding, the journey can also be blisteringly painful.

*The title of this section is a pun on one of my favourite books on finance, titled When Genius Failed. In the book, author Roger Lowenstein detailed how the hedge fund, Long-Term Capital Management (LTCM), produced breathtaking returns in a few short years only to then give it all back in the blink of an eye. $1 invested in LTCM at its inception in February 1994 would have turned into $4 by April 1998, before collapsing to just $0.30 by September of the same year; the fund had to be rescued via a bail-out orchestrated by the Federal Reserve Bank of New York. Within LTCM’s ranks were some of the sharpest minds in finance, including the Nobel laureate economists Robert Merton and Myron Scholes. Warren Buffett once said that LTCM “probably have as high an average IQ as any 16 people working together in one business in the country…[there was] an incredible amount of intellect in that room.” LTCM’s main trading strategy was arbitrage – taking advantage of price differentials between similar financial securities. The LTCM team believed that the price differentials between similar instruments would eventually converge, and they set up complex trades involving derivatives to take advantage of that convergence. Because of the minute nature of the price differentials, LTCM had to take on enormous leverage in order to make substantial profits from its arbitrage trading activities. According to Roger Lowenstein’s account, leverage ratios of 20-to-1 to 30-to-1 were common. At its peak, LTCM was levered 100-to-1 – in other words, the hedge fund was borrowing $100 for every dollar of capital it had. Compounding the problem, LTCM’s partners, after enjoying startling success in the fund’s early days, started making directional bets in the financial markets, a different – and arguably riskier – activity from their initial focus on arbitrage. The story of LTCM’s downfall is a reminder of how hubris and leverage can combine into a toxic cocktail of financial destruction.
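To make the arithmetic of LTCM’s rise, fall, and leverage concrete, here is a rough sketch (the 4.2-year span I use for the rise is my own approximation of February 1994 to April 1998):

```python
# LTCM's rise: $1 grew to $4 from Feb 1994 to Apr 1998 (~4.2 years)
rise_cagr = 4 ** (1 / 4.2) - 1
print(f"{rise_cagr:.0%}")  # ~39% a year on the way up

# The fall: $4 collapsed to $0.30 by Sep 1998
print(f"{1 - 0.30 / 4:.1%}")  # a ~92.5% loss from the peak

# Why 100-to-1 leverage is lethal: with $100 of assets per $1 of equity,
# even a 1% decline in asset values is enough to wipe out the equity
assets, equity = 100.0, 1.0
print(f"{equity / assets:.0%}")  # the asset decline that zeroes the fund: 1%
```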


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

Lessons From Two Polar Opposite Companies

The ultimate goal of management should be to maximise shareholder value. This means returning as much cash (discounted to the present) as possible to shareholders over time. 

Finding the right management team that can do this is key to good long-term returns.

Constellation Software

One of the best examples of a management team that is great at maximising shareholder value is that of Constellation Software. 

Headed by Mark Leonard, the team behind Constellation Software has consistently found ways to grow free cash flow per share for shareholders by using the cash the company generates to acquire businesses on the cheap. Constellation’s secret is that it buys companies with low organic growth but at really cheap valuations. Although growth is low, the investments pay off very quickly because of the low valuations at which they were acquired. 

The consistent use of available cash for new investments means that Constellation’s dividend payouts have been lumpy and relatively small. But this strategy should pay off over time and enable Constellation’s shareholders to receive a much bigger dividend stream in the future. 

Not only are Leonard and his team good allocators of capital and excellent operators, they are also careful with spending shareholders’ money. In his 2007 shareholders’ letter, Leonard wrote:

“I recently flew to the UK for business using an economy ticket. For those of you who have seen me (I’m 6’5”, and tip the non-metric scale at 280 lbs.) you know that this is a bit of a hardship. I can personally afford to fly business class, and I could probably justify having Constellation buy me a business class ticket, but I nearly always fly economy. I do this because there are several hundred Constellation employees flying every week, and we expect them to fly economy when they are spending Constellation’s money. The implication that I hope you are drawing, is that the standard we use when we spend our shareholders’ money is even more stringent than that which we use when we are spending our own.”

This attitude towards safeguarding shareholders’ money is exactly what Constellation’s shareholders love. This stewardship is also part of the reason why Constellation has been such a big success in the stock market. The company’s stock price is up by more than 14,000% since its May 2006 IPO.

Singapore Press Holdings

On the flip side, there are companies that have management teams that do not strive to maximise shareholder value. Some hoard cash, or use the cash a company generates for pet projects that end up wasting shareholders’ money. And then, there are some management teams that have other priorities that are more important than maximising shareholder value.

Singapore Press Holdings (SPH), for example, was a company that I think did not do enough to maximise shareholder value. SPH, which is based in Singapore but delisted from the country’s stock market in May 2022, published Singapore’s most widely read newspapers, including The Straits Times. The company also owned the online news portal straitstimes.com, as well as other local media assets such as radio channels and magazines. In addition, SPH owned real estate, such as the print and news centres used for its media business, and had investments in SPH REIT and other real estate.

In 2021, SPH spun off its entire media arm, including its print and news centres, to a new non-profit entity. Unlike in normal spin-offs or sales, SPH shareholders did not receive any shares in the new entity, nor did SPH receive any cash. Instead, SPH donated its whole media segment to the new entity for just S$1. To rub salt into shareholders’ wounds, SPH also donated S$80 million in cash, S$20 million in SPH REIT units, and another S$10 million in SPH shares to the new entity. 

After the spin-off, SPH’s net asset value dropped by a whopping S$238 million. The restructuring clearly was not designed to maximise shareholder value.
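Putting rough numbers on the giveaway (attributing the remainder of the NAV decline to the carrying value of the donated media assets is my own inference):

```python
# Direct donations SPH made alongside the media spin-off (S$ millions)
cash, reit_units, sph_shares = 80, 20, 10
direct_donations = cash + reit_units + sph_shares   # 110

# Reported fall in SPH's net asset value (S$ millions)
nav_drop = 238

# Remainder: the implied carrying value of the media assets given away for S$1
implied_media_value = nav_drop - direct_donations   # 128

print(direct_donations, implied_media_value)  # 110 128
```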

Management said that SPH had to give away its media segment as selling it off or winding up the media business was not a feasible option given the “critical function the media plays in providing quality news and information to the public.”

In other words, management was torn between the interests of the country the company operates in and the interests of its shareholders. Ultimately, shareholders’ hard-earned money was squandered in the process. This was possibly one of the more brazen mishandlings of shareholder money I’ve witnessed in the last decade.

Bottom line

As minority shareholders in public companies, we often have little to no say in how things are run within a company. Our votes during shareholder meetings are overshadowed by those of other major shareholders, who may also have conflicting interests. As such, we rely on the honesty and integrity of management to prioritise minority shareholders’ interests. 

Unfortunately, conflicts of interest do occasionally occur. As an investor, you may want to consider investing only in companies whose management fervently protects shareholders’ interests, such as in the example set by Mark Leonard.

On the other hand, we should avoid situations where conflicts of interest may encourage the misuse of funds or even promote dishonest behaviour.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.