All articles

What We’re Reading (Week Ending 12 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We regularly share a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 12 January 2025:

1. The art of outlasting: What we can learn from timeproof Japanese businesses – Eric Markowitz

Japan is home to an extraordinary number of shinise, or long-established businesses. A 2008 study found that Japan had over 21,000 companies older than 100 years, including more than 3,000 that had crossed the 200-year mark. These firms are not just historical artifacts — they are vibrant examples of how to endure and thrive in a rapidly changing world. Their strategies — balancing tradition with adaptability, patience with practicality — are a masterclass in long-term thinking that today’s entrepreneurs and executives would be wise to study…

…What ties these stories together is an approach to business that’s almost rebellious in its patience. While the modern world glorifies disruption and speed, Japan’s ancient companies remind us that longevity is often about playing the long game. It’s about building something so solid, so aligned with its environment, that it can weather any storm. But let’s not romanticize this too much. Strip away the poetry of water metaphors and ancient traditions, and you’ll find ruthless pragmatism at the core of these businesses’ survival.

When Japan’s post-war construction boom faded, Kongo Gumi didn’t just stick to temples — they pivoted hard into office buildings and apartments while maintaining their temple maintenance business as a hedge. During the lean years of the 1990s recession, Hōshi Ryokan cut costs to the bone while refusing to lay off staff, with family members taking deep pay cuts to keep their centuries-old workforce intact. Okaya transformed from selling samurai swords to becoming a global steel trader, making calculated bets on new technologies and markets while keeping their supply chain relationships rock solid.

These companies didn’t just drift through history — they clawed their way through wars, depressions, and cultural upheavals, making brutal choices about what to preserve and what to sacrifice. Their longevity wasn’t achieved through Zen-like detachment, but through gritted teeth and white-knuckled adaptability.

2. Notes on China – Dwarkesh Patel

I got quite mixed messages about the state of public opinion in China. This is to be expected in a society where you can’t establish common knowledge. One person told me that the new generation is quite nationalist, unlike the older reform generation which personally experienced the catastrophes of Mao and the tangible benefits of liberalization. He made the rather insightful point that this tilt in Chinese public opinion increasingly gives the lie to the American talking point, “We’re against the CCP, not the Chinese people.” In fact, he went on to say that the current regime is way more liberal than what would result from an election in China.

Another person told me that these Chinese nationalists were only a vocal minority, similar to the wokes in America circa 2020. While they make up only about 10% of the population, they aggressively shout down others on Weibo (China’s Twitter equivalent). Most people find them annoying but feel uncomfortable confronting them directly. This matches what a student who graduated from a top university there told me – the vast majority of his classmates are simply apolitical. And in our own interactions with locals, we saw little evidence of widespread nationalism. In fact, when my Chinese-speaking trip mate would mention he was from the UK to taxi drivers, they would often respond enthusiastically: “Oh wonderful, we love the UK!”…

…We chatted up quite a lot of young people on nightlife streets. I was struck by how many young people expressed feeling stressed or overwhelmed. We met a musician in Chengdu who was writing songs about youth anxiety. We chatted up some modeling school students – even they complained about the intense pressure they felt. We met a guy who had studied in Australia but returned to China during COVID. He explained that many of his friends with prestigious degrees are moving away from Shanghai and Beijing – Yes, the pay there can be twice as high as in second or third tier cities. But the competitiveness is insane. And in order to actually land the high skilled positions, they have to work truly insane hours (9-9-6 is not a myth). He said that many of his friends were opting for less ambitious, lower-paying careers in smaller cities, where the rent is lower and the pressure is manageable…

…I’m still puzzled by how China can have both a demographic collapse and massive youth unemployment. You’d think with fewer young people being born, the ones who are around would be in high demand. One explanation I heard while there is that there are plenty of menial jobs available, but today’s educated youth – who’ve gone through high school and college – just won’t take the low-skilled positions their parents and grandparents did. Meanwhile, there’s a real shortage of the high-skilled jobs that would actually match their education and aspirations. It’s a mismatch between the jobs available and the jobs young people feel qualified for and willing to do…

…The biggest surprise from talking to Chinese VCs and people at AI labs was how capital constrained they felt. Moonshot AI, one of China’s leading AI labs, raised $1 billion at a $3 billion valuation. Meanwhile, just xAI’s new cluster alone will cost $3-4 billion.

The tech ecosystem feels quite shell shocked from the 2021 crackdown. One VC half-jokingly asked if I could help him get his money out of China. If you keep your money in China, you’re basically stuck choosing between terrible options. You can either accept a measly 2% yield from state banks, or throw it into China’s perpetually struggling stock market. This helps explain why valuations for Chinese companies are chronically low – the exit opportunities just suck. Even if you build (or invest in) something great, there’s no guarantee the company will be able to raise the next round. And even if you do raise again and succeed, the government might randomly cancel your IPO. And even if you somehow make it to the public markets, Chinese equities have been performing terribly anyways. It’s a good reminder of how easy it is to completely wreck an innovation ecosystem that depends on risk-taking investors.

3. Is AI progress slowing down? – Arvind Narayanan and Sayash Kapoor

To be clear, there is no reason to doubt the reports saying that many AI labs have conducted larger training runs and yet not released the resulting models. But it is less clear what to conclude from it. Some possible reasons why bigger models haven’t been released include:

  • Technical difficulties, such as convergence failures or complications in achieving fault tolerance in multi-datacenter training runs.
  • The model was not much better than GPT-4 class models, and so would be too underwhelming to release.
  • The model was not much better than GPT-4 class models, and so the developer has been spending a long time trying to eke out better performance through fine tuning.

To summarize, it’s possible that model scaling has indeed reached its limit, but it’s also possible that these hiccups are temporary and eventually one of the companies will find ways to overcome them, such as by fixing any technical difficulties and/or finding new data sources…

…Industry leaders don’t have a good track record of predicting AI developments. A good example is the overoptimism about self-driving cars for most of the last decade. (Autonomous driving is finally real, though Level 5 — full automation — doesn’t exist yet.) As an aside, in order to better understand the track record of insider predictions, it would be interesting to conduct a systematic analysis of all predictions about AI made in the last 10 years by prominent industry insiders.

There are some reasons why we might want to give more weight to insiders’ claims, but also important reasons to give less weight to them. Let’s analyze these one by one. It is true that industry insiders have proprietary information (such as the performance of as-yet-unreleased models) that might make their claims about the future more accurate. But given how many AI companies are close to the state of the art, including some that openly release model weights and share scientific insights, datasets, and other artifacts, we’re talking about an advantage of at most a few months, which is minor in the context of, say, 3-year forecasts.

Besides, we tend to overestimate how much additional information companies have on the inside — whether in terms of capability or (especially) in terms of safety. Insiders warned for a long time that “if only you knew what we know…” but when whistleblowers finally came forward, it turned out that they were mostly relying on the same kind of speculation that everyone else does.

Another potential reason to give more weight to insiders is their technical expertise. We don’t think this is a strong reason: there is just as much AI expertise in academia as in industry. More importantly, deep technical expertise isn’t that important to support the kind of crude trend extrapolation that goes into AI forecasts. Nor is technical expertise enough — business and social factors play at least as big a role in determining the course of AI. In the case of self-driving cars, one such factor is the extent to which societies tolerate public roads being used for experimentation. In the case of large AI models, we’ve argued before that the most important factor is whether scaling will make business sense, not whether it is technically feasible…

…As an example, Sutskever had an incentive to talk up scaling when he was at OpenAI and the company needed to raise money. But now that he heads the startup Safe Superintelligence, he needs to convince investors that it can compete with OpenAI, Anthropic, Google, and others, despite having access to much less capital. Perhaps that is why he is now talking about running out of data for pre-training, as if it were some epiphany and not an endlessly repeated point.

To reiterate, we don’t know if model scaling has ended or not. But the industry’s sudden about-face has been so brazen that it should leave no doubt that insiders don’t have any kind of crystal ball and are making much the same guesses as everyone else, and are further biased by being in a bubble and readily consuming the hype they sell to the world…

…Inference scaling is useful for problems that have clear correct answers, such as coding or mathematical problem solving. In such tasks, at least one of two related things tends to be true. First, symbolic reasoning can improve accuracy. This is something LLMs are bad at due to their statistical nature, but can overcome by using output tokens for reasoning, much like a person using pen and paper to work through a math problem. Second, it is easier to verify correct solutions than to generate them (sometimes aided by external verifiers, such as unit tests for coding or proof checkers for mathematical theorem proving).

In contrast, for tasks such as writing or language translation, it is hard to see how inference scaling can make a big difference, especially if the limitations are due to the training data. For example, if a model works poorly in translating to a low-resource language because it isn’t aware of idiomatic phrases in that language, the model can’t reason its way out of this.

The early evidence we have so far, while spotty, is consistent with this intuition. Focusing on OpenAI o1, it improves compared to state-of-the-art language models such as GPT-4o on coding, math, cybersecurity, planning in toy worlds, and various exams. Improvements in exam performance seem to strongly correlate with the importance of reasoning for answering questions, as opposed to knowledge or creativity: big improvements for math, physics and LSATs, smaller improvements for subjects like biology and econometrics, and negligible improvement for English.

Tasks where o1 doesn’t seem to lead to an improvement include writing, certain cybersecurity tasks (which we explain below), avoiding toxicity, and an interesting set of tasks at which thinking is known to make humans worse…

…We think there are two reasons why agents don’t seem to benefit from reasoning models. First, such models require different prompting styles than regular models, and current agentic systems are optimized for prompting regular models. Second, as far as we know, reasoning models so far have not been trained using reinforcement learning in a setting where they receive feedback from the environment — be it code execution, shell interaction, or web search. In other words, their tool use ability is no better than the underlying model before learning to reason…

…The furious debate about whether there is a capability slowdown is ironic, because the link between capability increases and the real-world usefulness of AI is extremely weak. The development of AI-based applications lags far behind the increase of AI capabilities, so even existing AI capabilities remain greatly underutilized. One reason is the capability-reliability gap — even when a certain capability exists, it may not work reliably enough that you can take the human out of the loop and actually automate the task (imagine a food delivery app that only works 80% of the time). And the methods for improving reliability are often application-dependent and distinct from methods for improving capability. That said, reasoning models also seem to exhibit reliability improvements, which is exciting.

Here are a couple of analogies that help illustrate why it might take a decade or more to build products that fully take advantage of even current AI capabilities. The technology behind the internet and the web mostly solidified in the mid-90s. But it took 1-2 more decades to realize the potential of web apps. Or consider this thought-provoking essay that argues that we need to build GUIs for large language models, which will allow interacting with them with far higher bandwidth than through text. From this perspective, the current state of AI-based products is analogous to PCs before the GUI.

4. Waymo still doing better than humans at preventing injuries and property damage – Andrew J. Hawkins

The study is the product of a collaboration between Waymo and insurer Swiss Re, which analyzed liability claims related to collisions from 25.3 million fully autonomous miles driven by Waymo in four cities: Phoenix, San Francisco, Los Angeles, and Austin. They then compared those miles to human driver baselines, which are based on Swiss Re’s data from over 500,000 claims and over 200 billion miles traveled.

They found that Waymo’s vehicles were safer than human-driven ones, with an 88 percent reduction in property damage claims and a 92 percent reduction in bodily injury claims. Across 25.3 million miles, Waymo was involved in nine property damage claims and two bodily injury claims. The average human driving a similar distance would be expected to have 78 property damage and 26 bodily injury claims, the company says.
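The percentage reductions follow directly from the claim counts reported above. A quick arithmetic check (our own sketch, not from the study itself):

```python
# Sanity-check the reported claim reductions over 25.3 million miles.
waymo_property_claims, waymo_injury_claims = 9, 2    # Waymo's actual claims
human_property_claims, human_injury_claims = 78, 26  # expected for human drivers

property_reduction = (1 - waymo_property_claims / human_property_claims) * 100
injury_reduction = (1 - waymo_injury_claims / human_injury_claims) * 100

print(f"Property damage: ~{property_reduction:.0f}% reduction")  # ~88%
print(f"Bodily injury: ~{injury_reduction:.0f}% reduction")      # ~92%
```

Both figures round to the percentages Waymo reports.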

Waymo’s vehicles also performed better when compared to new vehicles equipped with all the latest safety tech, including automatic emergency braking, lane-keep assist, and blind spot detection. When compared to this group, Waymo’s autonomous driving system showed an 86 percent reduction in property damage claims and a 90 percent reduction in bodily injury claims.

5. SITALWeek #454 – Brad Slingerlend

I think we are approaching the point where we can start to estimate the value of AI for developers and the companies/consumers who are going to buy the next wave of innovative applications. I think the salient question for AI (and, frankly, humanity!) is: How much AI reasoning can you get for a human-equivalent salary? In other words, for a certain salary, how much compute power will it take to match or outperform a human (assuming the AI can collaborate with other humans/AIs using the same methods and tools a human would)…

… LLMs are shifting from a pure token-in/token-out model to a test-time scaling model, which may offer us better inroads for estimating costs. Essentially, they are thinking harder before spitting out a reply; thus, rather than just predicting the next words in a response using a probability model (see You Auto-Complete Me), they are doing some deep thinking to arrive at more accurate, useful answers. This is a major leap in capability that comes with a major leap in cost. OpenAI raised prices for their o1 model to $200/mo (Pro subscription) from $20 (Plus subscription). For developers, use of o1’s advanced reasoning API comes at 3-4x the cost of their “general purpose” GPT-4o. If o1 were priced at a typical Western office worker wage of $40/hr, the reasoning of the model would equate to around 5 hours of work per month. We also don’t know if the $200/mo price point is profitable for OpenAI or if they are just relying on Microsoft to further subsidize their business model (which brings us back to the principal-agent problem I started this section off with). So, all of my hand waving here seems to imply you can get a decent amount of human-equivalent reasoning for an amount of money in the realm of human labor cost. If true, after a few more years of advancements in semiconductors and AI models, we should have markedly affordable “human reasoning as a service”, an explosion in demand, and a wide range of outcomes for how much human supervision of AI will be required (it may be that human jobs stay relatively flat, but each human is 2x productive, then 4x, etc.).
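The subscription-to-wage comparison in the paragraph above is simple division; as a sketch (the $200/mo Pro price and $40/hr wage are the figures used in the text, and the "human-equivalent hours" framing is only a rough heuristic):

```python
# How many hours of human-equivalent "reasoning" does the subscription buy?
pro_subscription_per_month = 200  # USD/month, OpenAI Pro (per the text)
office_worker_wage = 40           # USD/hour, assumed typical Western office wage

human_equivalent_hours = pro_subscription_per_month / office_worker_wage
print(human_equivalent_hours)  # 5.0 -- about 5 hours of "work" per month
```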

Following this logic, at current AI reasoning costs, companies would need to lay off one human for every AI human equivalent they hire and would probably lose more skill/knowledge than they gain. In other words, based on my attempts to guess the cost of replacing human reasoning, today’s AI offerings aren’t likely compelling enough. In a couple years, however, maybe you will be able to lay off one human and hire a handful of AIs, which, by collaborating with each other and humans, may yield superior results. Even today, extremely high-value tasks, such as in-depth research or stock market predictions, may be able to take advantage of the high-cost test-time scaling AI models. And, if any of this math is in the realm of reason, you can easily see that AI may not require such high-value-add applications to be cost effective in the near to medium future. The proof will come within the next couple of years as today’s entrepreneurs develop the next generation of apps leveraging LLMs and overtaking human capabilities: If these apps are at price points that outcompete human employees, a significant wave of change could come much faster to society. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Waymo) and Microsoft. Holdings are subject to change at any time.

Company Notes Series (#4): engcon

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first three editions in the series can be found here, here, and here. Please give us your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes for engcon

Data as of 31 December 2023

Background

  • Year founded: 1990
  • Listed in Stockholm Stock Exchange (Sweden) since 17 June 2022
  • Headquarters: Strömsund, Sweden

Business

  • engcon manufactures tiltrotator systems that turn excavators into tool carriers (see Figure 1). The hydraulic tools provided by the company include detachable grippers, stone and sorting grabs, combi grabs, and more. See engcon’s YouTube video for more.
Figure 1
  • engcon’s tiltrotator solutions are developed, manufactured and subsequently fitted on new or existing excavators. Dealers serve as a link between excavator manufacturers (OEMs, or original equipment manufacturers), tiltrotator manufacturers, and end-customers. End-customers are contractors, companies that own excavators, and excavator rental companies. engcon has partnerships with OEMs that increase the reach of the company’s products and prepare excavators for faster and easier installations of tiltrotators; the partnerships also provide valuable insight into which technologies OEMs are developing for the future, and engcon contributes knowledge of end-customer requirements.
  • engcon’s tiltrotator solutions are focused on excavators in the weight class of 2 tonnes to 33 tonnes.
  • The production of engcon’s tiltrotator solutions takes place at the company’s production sites in Strömsund, Sweden and Niepruszewo, Poland. engcon’s tiltrotator solutions consist of various components designed by the company. Some of the components are also manufactured at engcon’s aforementioned production sites, but most are purchased from suppliers in Sweden and Northern Europe.
  • engcon had sales in 16 markets across the globe in 2022 and its sales split by geographic region in 2022 and 9M 2023 is shown in Figure 2 below. The years in which engcon entered its various markets are:
    • Sweden: 1990
    • Finland and Norway: 1995
    • Denmark and Germany: 2003
    • UK: 2004
    • France: 2014
    • Netherlands: 2016
    • USA: 2017
    • Japan: 2018
    • South Korea and Australia: 2020
    • Canada, Belgium, Ireland, and Austria: 2021-2022
Figure 2
  • The majority of engcon’s sales take place through a global network of dealers. Sales also take place through collaboration with OEM dealer networks. A limited amount of products, mainly buckets and tools, are sold through engcon’s website in Sweden, Finland and Denmark. 
  • No single customer accounted for >10% of engcon’s sales in 2022, so there’s no customer concentration. But there may be supplier concentration for engcon: the company’s 10 largest suppliers in 2021 accounted for 58% of its total purchases of raw materials and components.
  • A tiltrotator had an average price (including engcon and competitors) of SEK 176,000 (around US$19,000) in 2021. Dealers typically earn 30% of the price of a tiltrotator.
  • engcon released its 3rd-gen tiltrotator solution in May 2022. The 3rd-gen system is equipped with technology that has never been used on tiltrotators and that takes a clear step towards the electrified, connected and autonomous excavators of the future. The 3rd-gen’s load-sensing technology leads to reduced fuel consumption, improved precision, less wear and tear, and lower maintenance costs. The reduced energy need simplifies the use of alternative fuels for excavators, such as electricity and hybrid solutions. With help from a new sensor technology, the newly developed control system can precisely calculate the tilt and rotation of the tiltrotator, which means improved user-friendliness and greater potential for autonomous operations. Furthermore, the newly developed control system enables a more efficient remote connection, thereby improving remote support as well as the ability to remotely configure equipment.

Market opportunity

Newly manufactured excavator market for engcon

  • Globally, 665,000 excavators were sold in 2021. Of these, 181,775 belonged to the 2-33 tonne weight class (engcon’s focus) and were sold in the Nordics, Europe, Americas, and Asia/Oceania; these regions are engcon’s key geographical markets as shown in Figure 2, and are named the Focus Markets by the company. In the same year, 12,934 tiltrotators for newly manufactured excavators and 1,750 tiltrotators for existing excavators were sold. The value of the tiltrotators sold was SEK 2.6 billion (around US$285 million).
  • The number of excavators sold in the Focus Markets compounded at 6% per year for 2016-2019. COVID affected the market in 2020, but ultimately, the number of excavators sold in the Focus Markets still compounded at 2% per year for 2016-2021. The historical growth in the excavator market for each of engcon’s Focus Markets:
    • Nordic: 7,206 excavators sold in 2021, CAGR (compound annual growth rate) of 3% for 2016-2019, CAGR of 1% for 2016-2021
    • Europe: 76,097 excavators sold in 2021, CAGR of 6% for 2016-2019, CAGR of 2% for 2016-2021
    • Americas: 62,972 excavators sold in 2021, CAGR of 10% for 2016-2019, CAGR of 4% for 2016-2021
    • Asia/Oceania: 35,481 excavators sold in 2021, CAGR of 2% for 2016-2019, CAGR of -1% for 2016-2021
  • The number of tiltrotators sold in the Focus Markets had a CAGR of 11% for 2016-2021, including a 15% decline in 2020 because of COVID. The value of tiltrotators sold in the Focus Markets had a CAGR of 15% for 2016-2021.
  • According to PwC, the value of the tiltrotator market is expected to compound at 19% from 2021 to 2026, driven by: (1) greater demand for productivity increases; (2) population growth and urbanisation; (3) lack of labour; (4) sustainability requirements; (5) excavators transitioning to becoming multi-purpose tool carriers and more autonomous; and (6) digitalisation and electrification of the construction market.
  • According to PwC: (1) Excavators equipped with tiltrotators are able to replace 2.2 other construction machines on average; (2) a tiltrotator can increase the productivity of an excavator by 25%; (3) the use of a tiltrotator can save 6,000 litres of diesel annually, thus reducing 16,200 kg of CO2 emissions per year; (4) excavators with tiltrotators have a better safety profile as operators can exchange tools from within the cabin.
  • The penetration rate of tiltrotators in newly manufactured excavators was 2% globally in 2021, 85% in the Nordics (92% in Sweden), and 7% in the Focus Markets. The penetration rate is closely connected to the maturity of the market, which can be divided into 3 phases: development, acceleration, and maturity. In the development phase, the penetration rate increases from 0% to 20%-25%. In the acceleration phase, the penetration rate has passed 20% and rises towards 60%. The tipping point between the development phase and the acceleration phase is where the tiltrotator becomes an established market standard. Authorities and clients, such as major construction and civil engineering companies, place requirements on excavators to be equipped with a tiltrotator for efficiency and safety reasons. Once the tipping point has been reached, sales of tiltrotators to both new excavators and the aftermarket tend to gain momentum.
  • The market for tiltrotator manufacturers has 5 major operators (see Figure 3) that account for 95% of sales. engcon is the largest, with a market share of 45%. Tiltrotator manufacturers can be divided into 4 groups: global manufacturers, local manufacturers, other operators whose core operations are not tiltrotators, and excavator manufacturers (OEMs) with in-house manufactured tiltrotators. The 5 largest tiltrotator manufacturers are all global manufacturers, 4 of which are Swedish. All 5 collaborate with OEMs and their product portfolios include quick couplers, tools, and other advanced attachments for excavators. engcon’s market share has increased steadily, from 42% in 2019 to 43% in 2020 to its current 45%.
Figure 3
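As a quick arithmetic check of PwC’s fuel and emissions figures above (our own sketch, not part of the original notes): the two totals are consistent if one assumes a standard diesel emission factor of roughly 2.7 kg of CO2 per litre, which is our assumption and is not stated in the notes.

```python
# PwC figure: a tiltrotator saves ~6,000 litres of diesel per year.
litres_saved_per_year = 6_000
co2_kg_per_litre = 2.7  # assumed emission factor, kg CO2 per litre of diesel

co2_avoided_kg = litres_saved_per_year * co2_kg_per_litre
print(round(co2_avoided_kg))  # 16200, matching the 16,200 kg quoted
```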

Existing excavator market for engcon

  • The number of newly manufactured excavators in engcon’s Focus Markets that will not be equipped with tiltrotators over 2022-2026 is expected to be 960,000. This provides a large pool of retrofitting potential for engcon.

Management and major shareholders

  • engcon has Class A and Class B shares. Class A shares carry 10 votes per share while Class B shares have 1 vote per share. The Class B shares are publicly listed. At end-2022, engcon had a total sharecount of 151.788 million (35.34 million Class A shares, 116.44 million Class B shares).
  • Stig Engstrom, 62, is the founder of engcon. He handed over the CEO role to Orjan Westerlund in 2003, and has been on the board of engcon since. Stig Engstrom controlled 29.04 million Class A shares and 24.74 million Class B shares at end-2022, which amounted to 35.4% of engcon’s total share count, but 67.1% of the total votes.
  • Stig Engstrom’s ex-wife, Monica Engstrom, has been on engcon’s board since 2004. Monica Engstrom controlled 6.31 million Class A shares and 42.21 million Class B shares at end-2022, which amounted to 32.0% of engcon’s total share count, but 22.4% of the total votes.
  • engcon’s CEO is Krister Blomgren, 58, who has been in the role since 2011. Blomgren controlled 1.259 million engcon Class B shares as of end-2022, which is 0.8% of the total share count. 
  • Other members of engcon’s management team are shown in Table 1 below (some of them have long tenures, which is good):
Table 1
  • Remuneration of Stig Engstrom and Krister Blomgren for 2019-2022 is shown in Table 2 below. Not much detail is given on how they are compensated beyond the total amounts. The big jump in compensation for Blomgren in 2022 bears watching, but it is only a tiny percentage of engcon’s profit and cash flow during the year.
Table 2

Financials

  • engcon’s revenue compounded at 16% per year from 2012 to 2022, and its EBIT margin doubled from 11% to 22% in that period. See Figure 4.
Figure 4
Table 3
  • From Table 3 above, engcon’s revenue CAGR for 2019 to 12 months ended 30 Sep 2023 is 16.7%. Net income CAGR is 25.6%, and FCF CAGR is 44.8%. Average net income margin is 15.9%, and average FCF margin is 14.0%.
  • engcon saw a large pull-forward of orders in 2021 Q4 and 2022 Q1, mainly in the Nordics and Europe, due to price increases and uncertainty concerning delivery times, combined with an uncertain business environment and long lead times. So engcon expects 2023’s overall revenue growth to be just 8% (2023 Q1 growth was 55%, 2023 Q2 was -5%, and 2023 Q3 was -6%). Operating income also fell sharply, down 12% in 2023 Q2 and 51% in 2023 Q3.
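The CAGR figures quoted throughout these notes follow the standard compound-growth formula. A minimal sketch (the 4.4x figure in the comment is our own illustration derived from the stated 16% revenue CAGR over 2012-2022, not a number from the notes):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values over a period."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustration: a 16% CAGR over the 10 years from 2012 to 2022 implies
# revenue grew roughly 4.4x overall, since 1.16 ** 10 is about 4.41.
print(f"{cagr(100, 441, 10):.1%}")  # 16.0%
```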

Valuation

  • Stock price on 31 December 2023: SEK 93.30
  • Trailing EPS = SEK 2.33; trailing P/E = 40
  • Trailing FCF per share = SEK 2.80; trailing P/FCF = 33
  • For a company that is very likely going to post a further year-on-year decline in net income and FCF in 2023 Q4, those valuations look high.
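The multiples above follow directly from the stated per-share figures; as a quick check:

```python
# Trailing multiples at the 31 December 2023 share price.
price = 93.30          # SEK per share
trailing_eps = 2.33    # SEK per share
trailing_fcf = 2.80    # SEK per share

pe = price / trailing_eps
pfcf = price / trailing_fcf
print(f"P/E = {pe:.0f}, P/FCF = {pfcf:.0f}")  # P/E = 40, P/FCF = 33
```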

Risks

  • engcon protects its business via patents, of which the most important relates to EC-Oil, which is a quick coupler system that allows for the replacement of hydraulic tools from the excavator’s cabin without the mounting of hoses and electrical cables. The patent, which has a maximum validity up to and including 2024, is not assessed to be business-critical, but it still helps to distinguish engcon’s tiltrotator systems and make it more difficult for competitors to copy. When the patent for EC-Oil expires, it may be difficult for engcon to have a distinctive product offering. 
  • The sale of excavators globally has been stronger than what I expected before researching engcon. But the overall construction market – including the sale of excavators – is probably sensitive to recessions. So future recessions are a risk.
  • There’s the possibility that someone comes up with a superior tiltrotator or similar solution to what engcon has.
  • In the shorter term, engcon has clearly been over-earning in 2021 and 2022, and is now suffering the hangover in 2023. Will the hangover last a long time? That’s a possibility, despite tiltrotators being a superior solution. 
  • In June 2022, Rototilt Group filed a lawsuit against engcon that alleged that the company had infringed upon a patent. The adjusted damages claimed amounted to approximately SEK 200 million. The alleged infringement relates to sensor technology in the Q-safe locking system. In May 2023, the Swedish Patent and Market Court announced its verdict regarding Rototilt’s lawsuit against engcon. The court determined that no infringement had taken place and therefore dismissed Rototilt’s action. At the same hearing, engcon claimed that Rototilt’s patent should be declared invalid. However, the court determined that the patent was valid. Following appeals, both parties were granted leave to appeal by the Swedish Patent and Market Court. A ruling in the higher court is expected in spring 2024 at the earliest.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 January 2025)

Here are the articles for the week ending 05 January 2025:

1. Mike Alkin – Talking Uranium (Transcript here) – Bill Brewster and Mike Alkin

Alkin: So coming to this market, I did that. I spent a good almost couple of years doing supply/demand on my own. There’s 430 reactors around the world. And understanding the country where they operate, the attitude towards nuclear, understanding the math involved. Often as investors, you look for heuristics. How many reactors are there? How many pounds per reactor would there be? You’re looking for rules of thumb. As you start peeling the onion back, I realize that rules of thumb don’t apply here because the amount of uranium needed for the reactor fleet around the world is not always the same. It depends upon enrichment capacity. We won’t go down that rabbit hole, but there’s a whole other segment you need to learn.

As I was doing that, I would go to these conferences and I would talk to nuclear fuel buyers, people who buy this stuff. It was hard for me at first to really understand what I was dealing with because as somebody at that time having well over 20 years of experience as a hedge fund investor, I talked to people in all industries that were on all sides of the equation. But the people buying it typically were curious as to what we were thinking when we were questioning them. If we were talking to a buyer at a company that was buying a product, they would say “What are you as an investor hearing? What are you hearing from the other side? What are my competitors saying? What are you hearing about inventories?” They were inquisitive. That was not this cohort. As I started speaking to nuclear fuel buyers, I was met with an enormous wall put in front of me telling me, “I’m an outsider, I’m not a nuclear engineer, I don’t know what I’m doing, I should basically stay away and they’ve got it.”

I thought it was that attitude that just said to me, “Something’s not right here because the numbers I’m coming up with, whether I’m looking at inventories or the amount of the cost of the supply, or the actual demand” – for context, at the time the price of uranium was $17, $18, $19 a pound. That was what it was trading for in the market. As I did the analysis, I realized that the average cost was somewhere in the mid-$50s. I’m not the sharpest tool in the shed but I know that if something costs you mid-$50s to make, you can’t sell it for $17 for very long. So it was then that I had to peel back the onion saying, “Why are they producing it at that price?” Then you start to understand that the uranium market is one driven mostly by long term contracts. Well north of 80% on average will trade in a long-term window with contracts that cover 5, 7, 10, 12, 15 years depending on the contract. But that’s where most of the pounds trade. After the Fukushima event, a lot of these uranium producers, when the spot market had declined precipitously, were still selling into much higher prices. When I was talking to fuel buyers at these nuclear conferences, they were telling me that the price of uranium was $17 and $18, it was going to $10, it was going to $5. There was all this uranium out there.

That’s not what my math was showing me. What my math was showing me was that the long term contracts that had been signed before Fukushima melted down in 2011 were going to start to expire, and rather rapidly. Uranium producers could not sell $17, $18, $20 uranium when it cost them 2.5 times that. At some point, production would have to start to shut down.

So you ask, “Do you think you’re crazy?” Yes, because as I’m talking to people who are obviously very sharp – they’re nuclear engineers – but it’s understanding, as you realize, as an investor, you have to understand incentives and you have to understand market structure. Charlie Munger would always say, “Show me the incentive, I’ll show you the outcome.” It was as I was starting to go and talk to these folks and realizing a couple of things. Number one is, they had no interest in what I was learning on my journey. Even though I’m not a nuclear engineer, I’m still somebody who’s a market participant. I’m still somebody that while I don’t speak their language, sitting at a dinner table or a lunch table or at a bar having a beer with them, I certainly could hold my own in supply/demand conversation. And as I would talk about what I was learning and uncovering, I was shot down at every step. I thought, “Wow, that’s interesting because I’m seeing a recency bias. What is now will always be.” So they were kind of latched onto that.

Then as I started peeling that, I’m thinking, “Why is this?” I’ve been doing this a very long time. Over the years, I’ve been wrong many times. I’ve been right more often than not. But when you’re wrong, you try and understand where you’ve been wrong. I was thinking, “What is it? Why are they so uninterested in hearing what an outsider’s view is?” As I started to explore that more, you start to understand the makeup and the cost structure of a nuclear reactor, which I had known, but what really started to come into clear vision for me was the fuel. Uranium is just one part of the fuel cycle that goes in. You have uranium, they convert uranium from a powder into a gas. It then gets enriched, it then gets fabricated into pellets. That takes 18 to 24 months to do this stuff. There’s many different stages of the fuel cycle. As I was starting to think about what are the costs of that, all those stages are probably around 20% to 25%. What’s the cost of the uranium? That depends on the price. But it could be mid-single digits, high-single digits, somewhere around that. As you start talking to them about that, you realize it’s not a meaningful cost.

For comparative purposes, if I’m running a natural gas power plant or a coal power plant, my feedstock, the natural gas and the coal are 80% to 90% of the cost of operating it. Here, the uranium is single digits cost of operating it. The vision that started to come to me was uninterested market participants. They’re in the market very infrequently. Why are they uninterested? Because the cost is de minimis. Not to say it’s meaningless, but it’s de minimis. Then as I started to explore and ask questions, “Why are you not as concerned about this?” I was obviously met with a wall.

But what started to come to me was – and I asked flat out at a particular dinner at a World Nuclear Conference – I asked one, actually there were four fuel buyers at a dinner, I said, “If you all had a really enterprising fuel buyer that did the supply/demand work and said, “I think consensus is wrong. Here we are, $17, $18, $20 a pound. We should be buying uranium because the forecasts going out of the future are for deficits to be forming.” Let me ask you a question. Do you all, if the price were to go parabolic and you had all these great cost savings for your plant, do you participate in that in any way, shape or form? Are you rewarded financially? Are you rewarded with a promotion?” The answer was I got laughed at. “What are you talking about? We’re paid to secure fuel.” These were buyers. As you come to a market as an investor, you think buyers are traders – they’re commercial creatures. These aren’t. These are really smart nuclear engineers that happen to buy a product that happens to not be a major cost component. There’s infrequent price discovery on their part and so it’s a lesson in understanding incentives and market structure…

Alkin: One of the things you see now is you have expert networks who provide hedge funds and mutual funds experts to speak to in any industry. If you’re a hedge fund wanting to get up to speed right now on the nuclear power industry, you’re going to say, “Get me three nuclear fuel buyers. I’d like to speak to them about uranium.” They’re going to get on the phone and they’re going to speak to them. For years – though I’m sure they’ve been doing this – they can get on the phone and speak to three fuel buyers and they say, “Yeah, there’s plenty of uranium out there.” Those are the same folks who, when the price was $17, were telling me that, versus here you’re seeing floors and ceilings at $125 and $135. They are the gift that keeps on giving. Yet the way the structure of the research process is, they’re going to expert networks. They find these people, and if you don’t understand how the sausage is made, you’re going to be misled. They’re not purposely misleading you. It’s just what their own beliefs are. For me, that’s a beautiful thing. I’ve been doing this a long time now, almost 30 years as a professional investor, and I’ve never seen a cohort of people who are so uninterested in hearing the other side of the story. So far I’ve seen prices move up 4x against them and they still have the same attitude.

Brewster: To your point, it doesn’t sound like they’re very incentivized to care.

Alkin: There’s very little to no incentive to care, other than maybe you would think pride? I don’t know. But it doesn’t matter. It’s just not a thing. We actually chuckle because when we go to these conferences, you talk to them in a hallway or in a bar, it’s as though you’re an adversary. It’s very bizarre. They don’t have an incentive. It doesn’t matter what they pay. So that’s the bizarre thing.

2. Chip Cities Rise in Japan’s Fields of Dreams – Gearoid Reidy

In Chitose, a city of 100,000 in the northernmost main island of Hokkaido, billboards seek recruits for the Self-Defense Forces, which saw a 50% shortfall last year. When I arrived on a fully booked plane from Tokyo packed with salarymen in cheap suits and expensive watches, it was easy to see where the competition was coming from: a half-dozen towering cranes jutting into the sky, a jarring contrast against the surrounding countryside…

…Those cranes are building the first fab for Rapidus Corp., a public-private venture that aims to skip Japan to the head of the chip production queue. Founded just two years ago, it hopes to produce cutting-edge, 2-nanometer chips by 2027, in cooperation with IBM Corp. It’s fraught with risks, and the government’s record in promoting industry is spotty. But this is just the latest and most ambitious example of a series of bets on chips, with Prime Minister Shigeru Ishiba recently pledging an extra ¥10 trillion ($66 billion) on top of ¥3.9 trillion invested since 2021. Near the other end of the Japanese archipelago, 1,500 kilometers (930 miles) to the southwest, is another. In Kumamoto, on the island of Kyushu, mass production is soon set to begin at a $7 billion semiconductor plant.

Here, Taiwan Semiconductor Manufacturing Co., drawn by government subsidies and the region’s supply chain, opened its first Japanese plant in February. A second is in the works, with authorities lobbying for a third. It’s triggered an influx of Taiwanese workers into a city where until recently almost everyone was Japanese…

…As many as 6,000 laborers are employed to build Rapidus. But talk is of the arrival of permanent workers once test production begins. That’ll bring at least 1,000 high-earning jobs, along with their supply chains. On my visit, ASML Holding NV, the Dutch maker of chipmaking tools, had just opened offices, with 50 staff expected. Every second building seems to be being torn down and rebuilt…

…The scale of the ambition creates the risk of spectacular failure, one many in Japan’s media fully expect. Skepticism is warranted, considering previous government-led efforts, from DRAM maker Elpida Memory Inc., sold to Micron Technology Inc. after its 2012 bankruptcy, to troubled Japan Display Inc.

The economy was already doing well even before talk of Rapidus, Mayor Ryuichi Yokota told me, describing the fab as a “Big Bang” that has the city scrambling. Yet at night, when the construction crews leave, the silence is deafening. I couldn’t feel the billions I expected to find flowing, just a cold wind that would soon begin to turn to snow…

…The risk from disaster is unpredictable; but what if these experiments simply don’t work out? Japan has spent billions on subsidies to bring a foreign company to Kumamoto. And when it comes to Rapidus, the risks are immense. Even if the company can find the talent it needs (the country is expected to have a shortfall of 40,000 engineers), the technology succeeds and yields are acceptable, it still has to outcompete rivals — including TSMC — to attract customers with an unproven product.

Chitose mayor Yokota shrugged off these concerns. “I’m convinced it will succeed,” he said, resolute that researchers currently studying with IBM in the US will return, like Meiji-era scholars, with secrets Japan can use to rebuild.

3. Before Berkshire: Warren Buffett’s Tab Card Triumph – Kingswell and Alice Schroeder

He decided that he would come in and invest in this company — Mid-Continent Tab Card Co. — but, interestingly, he did not take Wayne and John’s word for it. The numbers they gave him were really enticing, but again he went through and he acted like a horse handicapper.

Here’s another point of departure from what almost anybody else would do. Everybody that I know — or knew as an analyst — would have created a model for this company and would have projected out its earnings and would have looked at its return on investment in the future. Warren didn’t do that. In fact, in going through hundreds of his files, I’ve never seen anything that resembled a model.

What he did is he did what you would do with a horse. He figured out the one or two factors that could make the horse succeed or fail — and, in this case, it was sales growth and making the cost advantage continue to work. Then, he took all of the historical data, quarter by quarter for every single plant, he got the similar information as best he could from every competitor they had, and he filled pages with little hen scratches of all this information and he studied that information.

And, then, he made a yes/no decision. He looked at it: They were getting 36% margins [and] they were growing over 70% a year on a million of sales. Those were the historic numbers. He looked at them in great detail — just like a horse handicapper studying the tip sheet — and then he said to himself, “I want a 15% return on $2 million of sales.” And then he said, “Yeah, I can get that.” And he came in as an investor.

So what he did is he incorporated his whole earnings model and compounding discounted cash flow into that one sentence. “I want 15% on $2 million of sales.”

Why 15%? Because Warren is not greedy. He always wants a mere 15% day one return on an investment and then it compounds from there. That’s all he has ever wanted. He’s happy with that. It’s a very simple thing. There’s nothing fancy about it…

…The $2 million of sales was pretty simple, too. It had $1 million [and] it was growing 70%. There was a big margin of safety built into these numbers. It had a 36% profit margin and he said, “I’ll take half that.”

He ended up putting $60,000 of his personal non-partnership money into this company, which was about 20% of his net worth at the time. He got 16% of the company’s stock, plus some subordinated notes.
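The arithmetic behind that one-sentence hurdle can be sketched roughly as follows. This is my own illustration using the figures quoted above, not Schroeder's workings:

```python
import math

# Figures from the text: ~$1m of sales growing 70% a year at a 36% margin,
# and Buffett's hurdle of "15% on $2 million of sales".
current_sales = 1_000_000
target_sales = 2_000_000
historic_margin = 0.36
required_margin = 0.15

required_earnings = target_sales * required_margin
print(f"Earnings needed at the hurdle: ${required_earnings:,.0f}")  # $300,000

# At 70% growth, sales double in a little over a year.
years_to_double = math.log(target_sales / current_sales) / math.log(1 + 0.70)
print(f"Years for sales to double at 70% growth: {years_to_double:.1f}")  # ~1.3

# The margin of safety: the hurdle asks for well under half the margin
# the business was already earning.
print(f"Hurdle vs historic margin: {required_margin / historic_margin:.0%}")  # 42%
```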

4. China’s Bond Yields Scream the ‘D’ Word – Lingling Wei

Over the past week, just as Chinese leaders tried to get the public—and markets—excited with another round of stimulus talk, China’s 10-year sovereign yield kept falling to fresh lows. Now, the yield is around 1.7%, a full percentage-point plunge from a little over a year ago. The return on the 30-year government bond has also dropped below 2%.

The sovereign-debt yield still has a ways to go before falling to zero, but the speed of the drop is astonishing. The lower the yield falls, the deeper the market is signaling economic stress.

…In reality, Beijing is sticking to the formula of boosting demand through investment. The official thinking is, investment creates jobs, which would in turn create demand. That means more roads will be built, factories will be expanded and debts will continue to rise. Already, residents in some cities are complaining about the inconvenience from old roads being dug up as authorities search for ways to invest.

One big irony is the source of bond buying—the force pushing down the yields.

State-owned banks, insurance firms and funds, the very institutions Beijing is counting on to support the economy, are the major purchasers of government bonds. These institutions would rather park their money in the safety of bonds than finance business projects or otherwise put it to work.

“What’s good to invest in these days when demand is so low?” a Chinese banker told me, referring to weak business and consumer spending.

5. An Interview with Gregory Allen About the State of China Chip Export Controls – Ben Thompson and Gregory Allen

Here’s the question though. China doesn’t generally seem to be operating, and for good reason under the circumstances, under a real stringent return on invested capital calculation. I mean the 7nm chips that are being produced, we know with I think a pretty high degree of certainty, the yields are terrible.

GA: The yields are dreadful.

But they’re doing it anyway just because it needs to be done and this sort of ties into another thing. You referenced Dylan Patel and SemiAnalysis, who have been pretty strident critics of the enforcement of chip controls. But I think a good point he has made is that China, unlike the US, is not necessarily constrained in power or in the ability to build a ton of data centers, and so there’s a bit where they could just sort of — it’s not great, but they could just be way less efficient and accomplish similar things. Is there a bit where these export controls are fashioned with Western/US constraints and concerns about how you go about building this stuff that might make them less impactful in the long run?

GA: Yeah, the export controls have not achieved their wildest dreams. There was a faction in the Biden administration that says, “Bwahaha, we found the secret weapon, and China’s AI dreams are gone” — that theory is just dead. Where we are now is at more of a cost imposition strategy. “We are going to make this as expensive and complicated as possible for you to do it, we’re going to try and slow you down, we’re going to try and increase your costs, and that is the race that we’re going to run”.

I mean, if you think about it, we’re switching from a mode in which the US AI ecosystem and the Chinese AI ecosystem were largely fused such that if we’re running a race, you can imagine there’s US people giving China Gatorade and those new Nike shoes that make you run faster. Now we’re moving to a moment where we’re trying to trip them in the race, that’s the change in mindset that we’ve experienced. It’s not working in its most extreme form, but there is real cost imposition: it takes the form of the fact that SMIC has to operate at these dreadful yields and the economics are terrible, and the fact that when they’re building all of these data centers, they’re having to use lousy chips, they’re having to buy more of them, and they’re having to deal with the higher energy costs of all of that.

It’s true that China does have just this extraordinary willingness to spend, but the point is we’re in this race, we’re in this competition, and it gives us an edge, not an infinite edge, but a meaningful edge.

This is a field, maybe you don’t have an answer to this, but there are some that argue that actually the better approach to some of these chips is a much more expensive, a much more high speed memory approach that has much lower latency using SRAM instead of High Bandwidth Memory. Is there a possibility that we actually pushed China down a different route towards developing these chips that maybe ends up being better because we thought HBM was the right way?

GA: I think that’s probably not what’s going to happen. It’s definitely worth saying that that could happen, a version of that kind of happened with YMTC and their NAND memory. There were multiple different approaches they could have taken technologically. All the Western and US allied Asian firms picked one way because it was obviously the best economics, and they held all the intellectual property, they held all the patents and so YMTC basically said, “Okay, we’re going to go down this other road and because we’re so heavily subsidized, it doesn’t really matter that it’s going to be more expensive”, and they did ultimately figure out how to get it to work.

I think what you’re describing, the SRAM in massive quantities thing verges on the neuromorphic architecture, and it’s not that that’s impossible, and it’s not that that’s never going to happen, but it’s clearly not the right step for China right now. I think they have a path to domestic HBM production and that’s so much easier for them to chase than a SRAM revolution. I think traditionally they would just wait for somebody else to try and figure out and demonstrate that it’s possible and then they would throw infinite resources at it…

...For all of these chip controls, all this stuff that you’ve covered and written about, does any of it matter, if you add it all up, in comparison to that point that they don’t have EUV?

GA: EUV is the highest return on investment export control that we have had and are likely to have. It’s definitely the case that some of the other stuff hurts. If you talk about SMIC, for example, increasing their yields on their 7nm line and expanding the capacity of their 7nm line, they actually are bottlenecked by US equipment, a lot of US metrology equipment, etc. But if you want to talk about why they can’t—

But they do have the equipment, they just need to figure out how to duplicate it. The challenge with EUV is they don’t even have one, so duplicating it is that much harder.

GA: Yes exactly, it’s a lot harder to reverse engineer something that you don’t have a copy of, it really helps to have a copy of it. So I would say the EUV thing really matters, but there’s areas where China is facing headwinds that aren’t part of the EUV story.

So just to take one example, in DRAM, Micron still doesn’t use EUV in their production of DRAM, and they’re a globally competitive firm. So CXMT, the Chinese domestic champion of DRAM, the reason why they’re not currently globally competitive is not the absence of EUV, but I do think you could make a story that it is the absence of all this other stuff that we’ve been refusing to sell…

You’re not necessarily like a geopolitical analyst, but the thing that scares me about all this, I think I’ve asked you this every time, it still scares me, is we’re talking and saying the administration needs to do better at enforcing these laws that guarantee a power imbalance in the long run, which is usually very destabilizing. China might think, if we’re going to have a fundamental power imbalance, then how about we take Taiwan off the board because that will screw everyone? Now we’re equal again. Do you worry about this? You’re a strong advocate for doing this better.

GA: So. Number one is, I don’t know that I ever agree that the balance of power is the stable universe. In 1994, the Taiwanese defense budget was half of that of the Chinese defense budget, now the Chinese defense budget is infinity times that of the Taiwanese defense budget. And by contrast, in 1997, I think there was a single U.S aircraft carrier battle group that was more than capable of defeating the entire Chinese Navy and the entire Chinese Air Force, that was a massive power imbalance and it was a very stable relationship. And by the way, it was a relationship in which a lot of people got rich and had productive free trade and all these kinds of happy relationships. So the idea that power parity is the path to peace here, don’t know that I necessarily agree with that, I don’t think the historical record really bears that out.

Now, you could argue if we’re going to make bold moves and try and seize a decisive advantage, could those bold moves be destabilizing? Yeah, I definitely think so.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and TSMC. Holdings are subject to change at any time.

Where Are US Stocks Headed In 2025?

Beware of bad forecasts (and the most honest forecast you can find)

We’re at the start of a new year in 2025, and there has been a deluge of forecasts in recent weeks for where the US stock market will end the year. 

For me, the most important thing to know about the forecasts is just how often they have been right. Unfortunately, the collective forecasting track record of even the USA’s biggest banks and investment firms has been poor.

Morgan Housel once studied their annual forecasts for the S&P 500 – a widely followed index for the US stock market – from 2000 to 2014. He found that a simple assumption of the S&P 500 going up by 9% a year (the 9% figure was chosen because it represented the index’s long-term annualised return) was more accurate than the forecasts provided by the banks and investment firms; the former was off by an average of 14.1 percentage points per year while the latter was off by 14.7 percentage points per year.

When thinking about the future return of stocks, Housel once wrote that it can be boiled down simply to the “dividend yield + earnings growth +/- change in the earnings multiple (valuations).” I agree, it really is that simple. The dividend yield and earnings growth can be estimated with a reasonable level of accuracy. What’s tricky here is the change in the earnings multiple. Housel explained:

“Earnings multiples reflect people’s feelings about the future. And there’s just no way to know what people are going to think about the future in the future. How could you?”

To compound the problem, over short periods of time, such as a year, it’s the change in the earnings multiple that has an overwhelming impact on how stock prices move. In Housel’s dataset when he was looking at market forecasts, 2002 was a year with one of the largest declines for the S&P 500 – it fell by more than 20%. According to data from economist and Nobel Laureate Robert Shiller, the S&P 500’s earnings actually grew by 12% in 2002. It was the decline in the index’s earnings multiple by 30% from 46 to 33 that led to the sharp drop in its price during the year. The forecasters were predicting that the S&P 500 would increase by a mid-teens percentage in price in 2002, which was close to the index’s earnings growth for the year – I believe what the forecasters failed to anticipate was the sharp drop in the earnings multiple. 
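As a rough check of Housel's decomposition, the 2002 figures cited above can be combined like this (a sketch that ignores dividends for simplicity):

```python
# S&P 500 in 2002, per the text: earnings grew ~12% while the
# earnings multiple fell from 46 to 33.
earnings_growth = 0.12
multiple_start, multiple_end = 46, 33

# Price return is (approximately) earnings growth compounded
# with the change in the earnings multiple.
multiple_change = multiple_end / multiple_start - 1
price_return = (1 + earnings_growth) * (1 + multiple_change) - 1
print(f"Change in earnings multiple: {multiple_change:.0%}")  # -28%
print(f"Implied price return:        {price_return:.0%}")     # -20%
```

The computed multiple change of about -28% is consistent with the roughly 30% decline described above, and the implied price return of about -20% matches the index's fall of more than 20% that year.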

If you really need a forecast for where the US stock market will end up in 2025, check out the table below. It shows where the S&P 500 will be given various assumptions for its earnings growth and its earnings multiple. For reference, the index ended the year at a price level of 5,882 with a price-to-earnings (P/E) ratio of 28. If the S&P 500’s earnings fell by 20% in 2025 and the P/E ratio shrank to 5, we’d be looking at a price level of 840 and a disastrous 86% price decline; if earnings growth was 20%, and the P/E ratio expanded to 40, we’d be looking at a price level of 10,083, and a handsome gain of 71%. 

The table contains a wide range of outcomes. But it’s possible for the S&P 500’s actual performance in 2025 to exceed the boundaries of the table. It’s hard to say where the limits are when it comes to the feelings of market participants. Nonetheless, of all the forecasts you’ve seen and are going to see about the US stock market for 2025, I’m confident the table in this article will be the most honest forecast you can find.
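The corner cases of the scenario table can be reproduced with a short sketch (the starting figures are from the text; the function name is my own):

```python
start_price, start_pe = 5882, 28    # S&P 500 level and P/E at end-2024
start_eps = start_price / start_pe  # ~210

def scenario_price(earnings_growth: float, end_pe: float) -> float:
    """Year-end index level for given earnings-growth and P/E assumptions."""
    return start_eps * (1 + earnings_growth) * end_pe

bear = scenario_price(-0.20, 5)   # earnings -20%, P/E shrinks to 5
bull = scenario_price(0.20, 40)   # earnings +20%, P/E expands to 40
print(f"Bear case: {bear:,.0f} ({bear / start_price - 1:.0%})")  # 840, -86%
print(f"Bull case: {bull:,.0f} ({bull / start_price - 1:.0%})")  # 10,083, 71%
```

Sweeping both assumptions over a grid of values generates the full table described above.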


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time. 

What We’re Reading (Week Ending 29 December 2024)

Here are the articles for the week ending 29 December 2024:

1. Quantum Computers Cross Critical Error Threshold – Ben Brubaker

In the 1990s, researchers worked out the theoretical foundations for a way to overcome these errors, called quantum error correction. The key idea was to coax a cluster of physical qubits to work together as a single high-quality “logical qubit.” The computer would then use many such logical qubits to perform calculations. They’d make that perfect machine by transmuting many faulty components into fewer reliable ones…

…This computational alchemy has its limits. If the physical qubits are too failure-prone, error correction is counterproductive — adding more physical qubits will make the logical qubits worse, not better. But if the error rate goes below a specific threshold, the balance tips: The more physical qubits you add, the more resilient each logical qubit becomes.
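
A toy model makes this tipping point concrete. The formula below is a common back-of-envelope approximation (not from the article): the logical error rate scales roughly like (p/p_th) raised to the power (d+1)/2 for a distance-d code, so the same code family either suppresses or amplifies errors depending on which side of the threshold p_th the physical error rate p sits.

```python
def toy_logical_error(p_phys, d, p_threshold=0.01):
    """Toy model of the error-correction threshold (a standard approximation,
    not the article's own formula): logical error rate scales roughly like
    (p_phys / p_threshold) ** ((d + 1) // 2) for a distance-d code.
    Below threshold the ratio is < 1, so larger codes suppress errors;
    above threshold the ratio is > 1, and larger codes make things worse."""
    return (p_phys / p_threshold) ** ((d + 1) // 2)

below = [toy_logical_error(0.005, d) for d in (3, 5, 7)]  # shrinks with d
above = [toy_logical_error(0.020, d) for d in (3, 5, 7)]  # grows with d
print(below, above)
```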

Now, in a paper published today in Nature, Newman and his colleagues at Google Quantum AI have finally crossed the threshold. They transformed a group of physical qubits into a single logical qubit, then showed that as they added more physical qubits to the group, the logical qubit's error rate dropped sharply…

…At first, many researchers thought quantum error correction would be impossible. They were proved wrong in the mid-1990s, when researchers devised simple examples of quantum error-correcting codes. But that only changed the prognosis from hopeless to daunting.

When researchers worked out the details, they realized they’d have to get the error rate for every operation on physical qubits below 0.01% — only one in 10,000 could go wrong. And that would just get them to the threshold. They would actually need to go well beyond that — otherwise, the logical qubits’ error rates would decrease excruciatingly slowly as more physical qubits were added, and error correction would never work in practice…

…That variation, called the surface code, is based on two overlapping grids of physical qubits. The ones in the first grid are “data” qubits. These collectively encode a single logical qubit. Those in the second are “measurement” qubits. These allow researchers to snoop for errors indirectly, without disturbing the computation.

This is a lot of qubits. But the surface code has other advantages. Its error-checking scheme is much simpler than those of competing quantum codes. It also only involves interactions between neighboring qubits — the feature that Preskill found so appealing.

In the years that followed, Kitaev, Preskill and a handful of colleagues fleshed out the details of the surface code. In 2006, two researchers showed that an optimized version of the code had an error threshold around 1%, 100 times higher than the thresholds of earlier quantum codes. These error rates were still out of reach for the rudimentary qubits of the mid-2000s, but they no longer seemed so unattainable…

…Fowler, Martinis and two other researchers wrote a 50-page paper that outlined a practical implementation of the surface code. They estimated that with enough clever engineering, they'd eventually be able to reduce the error rates of their physical qubits to 0.1%, far below the surface-code threshold. Then in principle they could scale up the size of the grid to reduce the error rate of the logical qubits to an arbitrarily low level. It was a blueprint for a full-scale quantum computer…

…When you put the theory of quantum computing into practice, the first step is perhaps the most consequential: What hardware do you use? Many different physical systems can serve as qubits, and each has different strengths and weaknesses. Martinis and his colleagues specialized in so-called superconducting qubits, which are tiny electrical circuits made of superconducting metal on silicon chips. A single chip can host many qubits arranged in a grid — precisely the layout the surface code demands.

The Google Quantum AI team spent years improving their qubit design and fabrication procedures, scaling up from a handful of qubits to dozens, and honing their ability to manipulate many qubits at once. In 2021, they were finally ready to try error correction with the surface code for the first time. They knew they could build individual physical qubits with error rates below the surface-code threshold. But they had to see if those qubits could work together to make a logical qubit that was better than the sum of its parts. Specifically, they needed to show that as they scaled up the code — by using a larger patch of the physical-qubit grid to encode the logical qubit — the error rate would get lower.

They started with the smallest possible surface code, called a “distance-3” code, which uses a 3-by-3 grid of physical qubits to encode one logical qubit (plus another eight qubits for measurement, for a total of 17). Then they took one step up, to a distance-5 surface code, which has 49 total qubits. (Only odd code distances are useful.)
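
The qubit counts in this passage follow a single formula: a distance-d surface code uses d² data qubits plus d² − 1 measurement qubits, for 2d² − 1 physical qubits in total. A quick check against the article's numbers:

```python
def surface_code_qubits(d):
    """Total physical qubits in a distance-d surface code:
    d*d data qubits plus (d*d - 1) measurement qubits."""
    return d * d + (d * d - 1)  # = 2*d*d - 1

for d in (3, 5, 7):
    print(d, surface_code_qubits(d))  # 3 -> 17, 5 -> 49, 7 -> 97
```

This also explains why a distance-7 code needed 97 qubits, more than the 72-qubit chip could supply.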

In a 2023 paper, the team reported that the error rate of the distance-5 code was ever so slightly lower than that of the distance-3 code. It was an encouraging result, but inconclusive — they couldn't declare victory just yet…

…At the beginning of 2024, they had a brand-new 72-qubit chip, code-named Willow, to test out. They spent a few weeks setting up all the equipment needed to measure and manipulate qubits…

…Then a graph popped up on the screen. The error rate for the distance-5 code wasn’t marginally lower than that of the distance-3 code. It was down by 40%. Over the following months, the team improved that number to 50%: One step up in code distance cut the logical qubit’s error rate in half…

…The team also wanted to see what would happen when they continued to scale up. But a distance-7 code would need 97 total qubits, more than the total number on their chip. In August, a new batch of 105-qubit Willow chips came out…

…When the group returned the following morning, they saw that going from a distance-5 to a distance-7 code had once again cut the logical qubit’s error rate in half. This kind of exponential scaling — where the error rate drops by the same factor with each step up in code distance — is precisely what the theory predicts. It was an unambiguous sign that they’d reduced the physical qubits’ error rates well below the surface-code threshold…
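
A back-of-envelope extrapolation of that halving behavior (the 0.3% starting rate below is invented for illustration; only the factor-of-2 suppression per distance step comes from the passage):

```python
def projected_error_rate(d, rate_at_d3, suppression=2.0):
    """Projected logical error rate at odd code distance d, assuming each
    step up in distance (d -> d + 2) divides the error rate by `suppression`,
    as reported for the Willow experiments. Illustrative numbers only."""
    steps = (d - 3) // 2
    return rate_at_d3 / suppression ** steps

for d in (3, 5, 7, 9):
    print(d, projected_error_rate(d, 0.003))
```

This geometric decay is what "exponential scaling" means here: each extra step in code distance buys the same multiplicative improvement.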

…At the same time, researchers recognize that they still have a long way to go. The Google Quantum AI team only demonstrated error correction using a single logical qubit. Adding interactions between multiple logical qubits will introduce new experimental challenges.

Then there’s the matter of scaling up. To get the error rates low enough to do useful quantum computations, researchers will need to further improve their physical qubits. They’ll also need to make logical qubits out of something much larger than a distance-7 code. Finally, they’ll need to combine thousands of these logical qubits — more than a million physical qubits.

2. History: Kodak & Fujifilm – Find Value

Ultimately, Kodak couldn’t adapt to the changing world and filed for bankruptcy in 2012.

In the game for over 100 years, Kodak survived two World Wars and the Great Depression and helped humans photograph the moon and Mars. Like Coca-Cola and McDonald’s, it used to be one of the most recognized brands in the world…

…Faced with a sharp decline in sales from its cash cow product, Fujifilm acted swiftly and changed its business through innovation and external growth. Under Shigetaka Komori (President in 2000), Fujifilm quickly carried out massive reforms. In 2004, Komori came up with a six-year plan called VISION75.

The management restructured its film business by downscaling the production lines and closing redundant facilities. In the meantime, the R&D departments moved to a newly built facility to unify the research efforts and promote better communication and innovation culture among engineers.

Realizing that the digital camera business, with its low margins, would not replace the lucrative film business, Fujifilm performed a massive diversification based on capabilities and innovation.

Even before launching the VISION75 plan, Komori had taken stock of the company's technologies and compared them with the demands of the international market, after which the R&D team came up with a chart listing all existing in-house technologies that could match future markets.

For instance, Fujifilm was able to predict the boom of LCD screens and invested heavily in this market. Leveraging its photo film technology, it created FUJITAC, a variety of high-performance films essential for making LCD panels for TVs, computers, and smartphones. Today, FUJITAC owns 70% of the market for protective LCD polarizer films.

Fujifilm also targeted unexpected markets like cosmetics. The rationale behind cosmetics comes from 70 years of experience in gelatin, the chief ingredient of photo film which is derived from collagen. Human skin is 70% collagen. Fujifilm also possessed deep knowledge in oxidation, a process connected both to the aging of human skin and to the fading of photos over time.

When promising technologies didn't exist internally, Fujifilm proceeded by mergers and acquisitions. Based on technological synergies, it acquired Toyama Chemical in 2008 to enter the drug business. Delving further into the healthcare segment, Fujifilm also bought a radio-pharmaceutical company now called Fujifilm RI Pharma. It also reinforced its position in existing joint ventures such as Fuji Xerox, which became a consolidated subsidiary in 2001 after Fujifilm purchased an additional 25% share in the partnership.

Fast forward to 2010, nine years after the peak of film sales, and Fujifilm was a new company. In 2000, 60% of sales and 70% of profits came from the film ecosystem; by 2010, the "Imaging segment" accounted for less than 16% of sales. Fujifilm managed to emerge victorious through a restructuring and diversification strategy…

…Unlike Fujifilm which recognized early on that photography was a doomed business and tackled new markets with a completely different portfolio, Kodak made multiple wrong moves and persisted in the decaying film industry.

It was not that Kodak didn't want to change; it tried hard, but it went about it the wrong way. Kodak's management didn't fully recognize that the rise of digital imaging would have dire consequences for the future of photo printing. It tried to replicate the film print business model in the digital world. In 2004, Facebook launched, and people were simply not going to print pictures anymore.

Interestingly, Kodak understood the impact of digitalization and predicted that pictures would be shared online. They acquired a photo-sharing website called Ofoto in 2001. Unfortunately, the company used Ofoto to get people to print digital pictures. They failed to realize that online photo sharing was the new business, not just a way to expand printing sales…

…While Fujifilm invested heavily in the pharmaceutical and healthcare sector to reduce its exposure to the challenging photo industry, Kodak sold its highly profitable Healthcare Imaging branch in 2007 to put more resources into its losing consumer camera division.

3. One Bed, Two Dreams: Building Silicon Valley Bank in China with Ken Wilcox (Transcript here) – Bernard Leong and Ken Wilcox

Wilcox: In the US, banks sometimes fail. When I started my career 40 years ago in banking, we had 18,000 banks. Today we have about 5,000. What happened to all of them? Where did 13,000 banks go? Some of them got acquired, but many of them failed. When a bank makes too many bad loans, the Federal Reserve causes it to fail and it disappears. In China, banks don’t fail. First of all, banks are fundamentally owned by the government and when they make too many bad loans, they don’t typically fail. Usually the government, the regulators, come and somebody gets arrested and the government re-capitalizes the bank. It’s often very quiet – it’s not even necessarily announced to the world – and the bank keeps on going. What does that mean? That means that Chinese banks can take more risk than US banks can. In the US, we had almost no competitors because everybody thought “Lending to technology companies is way too risky, so we’ll just let Silicon Valley Bank do it. None of the rest of us will try.” In China, many, many, many banks want to copy us and do the same thing, because they’re not worried about what happens if we lose too much money. So that’s another big difference there…

…Wilcox: After I’d been there for several months, it occurred to me one day that my main conversation partner, the guy who is the Chairman, who was from Shanghai Pudong Development Bank, it occurred to me that he actually wears three hats. The only hat I wear is banker / businessman. But he had a banker / businessman hat, and he had a party hat, and he had a government hat. Then I started to wonder, when I’m talking with him, which hat is he wearing? It took me a long time before I figured out he doesn’t even think he has three hats. He thinks they’re all the same hat, so he’s not even thinking about it the same way I was. So I think that’s quite confusing. 

It's also confusing when a US company comes to China and finds out that it's going to get a Party Committee in its organization. They get very confused because they don't know what a Party Committee is. If you ask people in government or in the party, "What's a Party Committee? We're going to have one, but I don't understand what it is," it's hard for them to explain. You get multiple definitions and then you don't know what is actually going to happen. Some people will tell me, "When you get a Party Committee, it'll be so good because all the employees in your organization who are members of the party will have a place to gather once a month and discuss things." Then somebody else says, "When you get a Party Committee, it'll be so much easier because the Party Committee will help you put on social events for the employees, all of the employees." But then somebody else told me, "No, when you get a Party Committee, it'll be like another board, but a secret board. You won't know who's on it and they will influence what the real board does – or what I would call the real board." Then other people told me, "Don't pay any attention. That's all silliness. There is no such thing as a Party Committee." So it's very, very confusing…

…Wilcox: I’ll give you the best example and that is that I believe based on the years I spent in China, that ultimately the main reason they wanted us in China – and they actually were very determined to get us to come to China. I remember that early on, a couple of years before my wife and I moved to China, I had a series of meetings with a very high-level government official who’s also got a lot of status in the party. He was saying to me, “Ken, we really want you to bring your bank to China. Your bank is more important than any bank we’ve ever met. You’re more important than – he explicitly said this – he says, You’re more important than Morgan Stanley and more important than Goldman Sachs. And by the way Ken, you’re one of the smartest Americans we’ve met.” So you think to yourself, “Well this is an exaggeration, but it does feel nice.” He obviously is going to help me get established in China. But what I didn’t realize is that the main reason they wanted us in China was so that they could study our business model and figure out how to copy it over time. That was something I wasn’t expecting, but I should have if I were less naive. If I were better prepared, I would have realized that was the intention. So the original title, the working title I had for my book, which I had to change because the publisher didn’t like it, my original title was, “One Bed, Two Dreams”, because that’s a phrase that most Chinese are familiar with. It explains why it didn’t work well, because my dream was working with all these Chinese technology companies and helping them do business with the rest of the world, and their dream was learning our business model.

The result was that when they gave us our license, they also told us that we would not be able to use Chinese currency for three years. That made it almost impossible to do business for the first three years. The people that said these things were both members of the government and members of the party. So I don’t know which one was talking. But they said, “We understand that you won’t be able to do much business for the first three years because the companies that you want to work with all want renminbi, they don’t want US dollars. But you can still be a good citizen. You can do what we would do, and that is we here in China help each other. So you can be helpful and prove that you care about China by teaching other banks your business model during the three years when you can’t really do much business. We’ll give you subsidy to help support you during the three years when you can’t earn much money because you can’t really do any business.” Then at the end of the three years when they gave us permission to use renminbi, they said to us, “We are so happy that you came to China and we really admire your business model and we admire it so much that we’re starting a bank of our own using your business model. Would you mind staying a little longer and being an advisor to this new bank that’s going to use your business model?” It felt like they were stealing my intellectual property but I’m not sure they thought of it that way…

…Wilcox: General Motors when it went over to China in 1985, the Chinese really didn’t have an auto industry. They wanted General Motors there not because they wanted General Motors to make a lot of money. It was because they wanted to learn about automobile manufacturing and because it took so long to build up the knowledge base, General Motors was welcome for about 30 years. But now General Motors is slowly losing market share and it’s probably going to withdraw from China. Then what will happen is China has made so much progress partially because they’re hardworking and smart, partially because they had General Motors there to learn from them, and then once General Motors retracts and goes back to the US, the auto industry in China will begin exporting and competing globally. I think actually the Chinese have done such a good job of first of all, learning from foreign automakers, but then on top of that, taking it further that the foreign automakers are in huge trouble. I think China’s automobile industry will dominate in the future. 

4. Weekend thoughts: crypto, mania, and reflexivity follow up – Andrew Walker

When I first saw the "BTC yield" metric, I thought it was pretty crazy. MSTR is trading for approaching 3x the value of their bitcoin; if they issue stock and use all the stock to buy bitcoin, of course it's going to cause their bitcoin holdings per share to go up…. and even more so if they issue debt and use that to buy bitcoin and then judge themselves on a per share basis! Taken to its extreme, if you thought BTC yield was truly the be-all, end-all of value creation, and the higher the BTC yield the better, then any company following a pure BTC yield strategy should lever themselves up to the maximum amount possible, no matter the terms, and use all of the proceeds to buy BTC. Obviously no one does that because it would be insanity and eventually banks would stop lending, but I illustrate that only to show that purely maximizing BTC yield is clearly not value maximizing….

But, if you look at the fine print, BTC yield is even crazier than simply suggesting increasing BTC per share is the only value creation metric that matters. If you really look at the MSTR BTC yield table above or read their disclosures, you’ll notice that the BTC yield assumes that all of their convertible debt converts…

…So, go back to MSTR’s BTC yield table; they have a set of 2029 converts that convert at $672.40/share. Those are far, far out of the money (MSTR’s stock trades for ~$400/share as I write this)…. yet MSTR’s BTC yield assumes those converts are in the money / will convert for their BTC yield.

That is an insane assumption that casually assumes MSTR's shares almost double. And, again, by taking this assumption to its extreme, we can see how wild it is. Like all things, convert debt involves different trade-offs; for example, you could get a higher strike price by taking on a higher interest rate (i.e. if your strike price is ~$670 at a 0% interest rate, you could probably push it up to $770 by taking on a 3% interest rate, or $870 by taking on a 6% interest rate). MSTR has issued all of its convert deals at 0% interest rates, which is a great pitch ("we're borrowing for free, we don't have to pay a carry to buy BTC, etc.")…. but if BTC yield is all that matters, MSTR could start issuing convertible debt with really high interest rates, which would jack the strike price of the convert up, thus decreasing dilution and increasing the BTC yield…
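
A hypothetical sketch of why this matters (all figures invented, not MSTR's actual numbers): under the "assume the converts convert" convention, the convert principal divided by the strike price is added to the share count, so a higher strike mechanically shrinks the assumed dilution and flatters BTC per share, and therefore BTC yield.

```python
def btc_per_diluted_share(btc_holdings, basic_shares, convert_principal, strike):
    """BTC per share under the 'assume the converts convert' convention:
    convert_principal / strike new shares are added to the share count,
    whether or not the converts are actually in the money.
    All inputs here are hypothetical, for illustration only."""
    assumed_shares = basic_shares + convert_principal / strike
    return btc_holdings / assumed_shares

# Same BTC, same shares, same $3B of converts -- only the strike differs.
low_strike  = btc_per_diluted_share(450_000, 250_000_000, 3e9, 670)  # 0% coupon
high_strike = btc_per_diluted_share(450_000, 250_000_000, 3e9, 870)  # 6% coupon
print(high_strike > low_strike)  # higher strike shows "better" BTC per share
```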

…MSTR fans would say “but raising converts with interest doesn’t make sense; it’s no longer free money / now it has a carry cost.” And I understand that argument…. but convertible debt isn’t free money either, and I just do this to highlight how insane BTC yield is as a be all / end all metric!…

…The BTC yield that all of these companies present assumes that their convert debt converts, and that is a big / crazy assumption…. but it's interesting to think about what will happen in five years. There is, of course, a world where BTC goes to $250k (or higher) and all of these stocks moon. In that world, the converts will be well in the money, and all of this worry will sound silly…. but there is also a world where BTC stalls out or drops over the next few years, and that world is really interesting. All of these companies are raising converts with 5-7 year maturities, so if BTC doesn't moon and the converts aren't in the money, you're going to have all of the BTC standard companies facing a maturity wall at the same time. What happens then? I doubt they can roll the converts at anything close to the same terms (remember, cheap converts require high volatility, and if the stocks have stalled out for five years vol is going to be a lot lower), so they'll either need to sell a ton of equity to pay down the debt (which will be tough; there probably won't be much enthusiasm for the stock, and I'm not sure the market would be able to absorb the hypothetical amount of stock they'd need to issue without some enthusiasm)…. or you'll have a wave of BTC standard companies all looking to sell down some of their bitcoin to pay off converts at the exact same time.

5. Satya Nadella | BG2 (Transcript here)- Bill Gurley, Brad Gerstner, and Satya Nadella

Gerstner: Shifting maybe to enterprise AI, Satya. The Microsoft AI business has already been reported to be about $10 billion. You've said that it's all inference and that you're not actually renting raw GPUs to others to train on, because your inference demand is so high. As we think about this, there's a lot of skepticism out there in the world as to whether or not major workloads are moving. If you think about the key revenue products that people are using today and how it's driving that inference revenue for you today, and how that may be similar or different from Amazon or Google, I'd be interested in that.

Nadella: The way for us this thing has played out is, you got to remember most of our training stuff with OpenAI is sort of more investment logic. It’s not in our quarterly results – it’s more in the other income, based on our investment.

Gerstner: Other income or loss right?

Nadella: That is right. That’s how it shows up. So most of the revenue or all the revenue is pretty much our API business or in fact, to your point, ChatGPT’s inference costs are there, so that’s a different piece. The fact is the big-hit apps of this era are ChatGPT, Co-Pilot, GitHub Co-Pilot, and the APIs of OpenAI and Azure OpenAI. In some sense, if you had to list out the 10 most hit apps, these would probably be in the four or five of them and so therefore that’s the biggest driver.

The advantage we have had, and OpenAI has had, is we’ve had two years of runway pretty much uncontested. To your point, Bill made the point about everybody’s awake and it might be. I don’t think there will be ever again maybe a two-year lead like this, who knows? It’s all you say that and somebody else drops some sample and suddenly blows the world away. But that said, I think it’s unlikely that that type of lead could be established with some foundation model. But we had that advantage, that was the great advantage we’ve had with OpenAI. OpenAI was able to really build out this escape velocity with ChatGPT.

But on the API side, the biggest thing that we were able to gain was.. Take Shopify or Stripe or Spotify. These were not customers of Azure, they were all customers of GCP or they were customers of AWS. So suddenly we got access to many, many more logos, who are all “digital natives” who are using Azure in some shape or fashion and so on. So that’s sort of one. When it comes to the traditional enterprise, I think it’s scaling. Literally it is people are playing with Co-Pilot on one end and then are building agents on the other end using Foundry. But these things are design wins and project wins and they’re slow, but they’re starting to scale. Again, the fact that we’ve had two years of runway on it, I think…

I like that business a lot more, and that’s one of the reasons why the adverse selection problems here would have been lots of tech startups all looking for their H100 allocations in small batches. Having watched what happened to Sun Microsystems in the dotcom, I always worry about that. You just can’t chase everybody building models. In fact, even the investor side, I think the sentiment is changing, which is now people are wanting to be more capital-light and build on top of other people’s models and so on and so forth. If that’s the case, everybody who was looking for H100 will not want to look for it more. So that’s what we’ve been selective on.

Gerstner: You’re saying for the others that training of those models and those model clusters was a much bigger part of their AI revenue versus yours? 

Nadella: I don’t know. This is where I’m speaking for other people’s results. It’s just I go back and say, “What are the other big-hit apps?” I don’t know what they are. What models do they run? Where do they run them? When I look at the DAU numbers of any of these AI products, there is ChatGPT, and then there is – even Gemini, I’m very surprised at the Gemini numbers, obviously I think it’ll grow because of all the inherent distribution. But it’s kind of interesting to say that they’re not that many. In fact, we talk a lot more about AI scale, but there is not that many hit apps. There is ChatGPT, Github Co-Pilot, there’s Co-Pilot, and there’s Gemini. I think those are the four I would say, in a DAU, is there anything else that comes to your mind?…

…Gurley: Satya, on the enterprise side, obviously the coding space is off to the races and you guys are doing well and there's a lot of venture-backed players there. On some of the productivity apps, I have a question about the Co-Pilot approach, and I guess Marc Benioff's been obnoxiously critical on this front, calling it Clippy 2 or whatever. Do you worry that someone might think through first-principles AI from the ground up, and that some of the infrastructure, say in an Excel spreadsheet, isn't necessary if you did an AI-first product? The same thing by the way could be said about the CRM, right? There's a bunch of fields and tasks that may be able to be obfuscated for the user.

Nadella: It's a very, very, very important question. The SaaS applications or biz apps, let me just speak of our own Dynamics thing. The approach at least we're taking is, I think the notion that business applications exist, that's probably where they'll all collapse in the agent era. Because if you think about it, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents, and these agents are going to be multi-repo CRUD. They're not going to discriminate between what the back-end is, they're going to update multiple databases, and all the logic will be in the AI tier so to speak. Once the AI tier becomes the place where all the logic is, then people will start replacing the backends right? In fact it's interesting, as we speak, I think we are seeing pretty high rates of wins on Dynamics backends and the agent use, and we are going to go pretty aggressively and try and collapse it all, whether it's in customer service, whether it is in… 

By the way, the other fascinating thing that’s increasing is just not CRM, but even what we call finance and operations, because people want more AI-native biz app. That means the biz app, the logic tier, can be orchestrated by AI and AI agents. So in other words, Co-Pilot to agent to my business application should be very seamless.

Now in the same way, you could even say, "Why do I need Excel?" Interestingly enough, one of the most exciting things for me is Excel with Python, is like GitHub with Co-Pilot. So what we've done is, when you have Excel – by the way this would be fun for you guys – which is you should just bring up Excel, bring up Co-Pilot, and start playing with it. Because it's no longer like – it is like having a data analyst, so it's no longer just making sense of the numbers that you have. It will do the plan for you. It will literally – like how GitHub Co-Pilot Workspace creates the plan and then it executes the plan – this is like a data analyst who is using Excel as a sort of row/column visualization to do analysis scratch pad. So it kind of tools you. So the Co-Pilot is using Excel as a tool with all of its action space because it can generate and it has python interpreter. That is in fact a great way to reconceptualize Excel. At some point you could say, "I'll generate all of Excel" and that is also true. After all, there's a code interpreter, so therefore you can generate anything.

So yes, I think there will be disruption. The way we are approaching, at least our M365 stuff is, one is build Co-Pilot as that organizing layer UI for AI, get all agents, including our own agents – you can say Excel is an agent to my Co-Pilot, Word is an agent, it’s kind of a specialized canvases, which is I’m doing a legal document, let me take it into Pages and then to Word and then have the Co-Pilot go with it, go into Excel and have the Co-Pilot go with it. That’s sort of a new way to think about the work in workflow…

…Gurley: Satya, there's been a lot of talk about model scaling, and obviously there was talk historically about 10x-ing the cluster size that you might do, over and over again, not once and then twice. X.AI is still making noise about going in that direction. There was a podcast recently where they flipped everything on its head and said, "If we're not doing that anymore, it's way better because we can just move on to inference which is getting cheaper and you won't have to spend all this capex." I'm curious, those are two views of the same coin. But what's your view on LLM model scaling and training cost, and where we're headed in the future?

Nadella: I’m a big believer in scaling laws I’ll first say. In fact, if anything, the bet we placed in 2019 was on scaling laws and I stay on that. In other words, don’t bet against scaling laws. But at the same time, let’s also be grounded on a couple of different things.

One is that these exponentials on scaling laws will become harder, just because as the clusters become larger, the distributed computing problem of doing large-scale training becomes harder. That’s one side of it. But I would still say – and I’ll let the OpenAI folks speak for what they’re doing – pre-training I think is not over; it continues. But the exciting thing, which again OpenAI and Sam have talked about, is what they’ve done with o1. This chain of thought with autograding is just fantastic. In fact, basically, it is test-time compute, or inference-time compute, as another scaling law. You have pre-training, and then you have effectively this test-time sampling that creates the tokens that can go back into pre-training, creating even more powerful models that then run on your inference. So that’s, I think, a fantastic way to increase model capability.

The good news of test-time or inference-time compute is that there are two separate things in running those o1 models. Sampling is like training, when you’re using it to generate tokens for your pre-training. But also, when customers are using o1, they’re using more of your meters, so you are getting paid for it. Therefore, there is more of an economic model, so I like it. In fact, that’s where I said I have a good structural position, with 60-plus data centers all over the world.

Gurley: It’s a different hardware architecture for one of those scaling versus the other, for the pre-training versus…

Nadella: Exactly. I think the best way to think about it is, it’s a ratio. Going back to Brad’s thing about ROIC, this is where I think you have to really establish a stable state. In fact, whenever I’ve talked to Jensen, I think he’s got it right, which is you want to buy some every year. Think about it, when you depreciate something over 6 years, the best way is what we have always done, which is you buy a little every year and you age it, you age it, you age it. You use the leading node for training and then the next year it goes into inference, and that’s sort of the stable state I think we will get into across the fleet for both utilization and the ROIC and then the demand meets supply.

Basically, to your point about everybody asking, “Have the exponentials stopped?” – one of the other things is that economic reality will also act as a stop. At some point everybody will look and say, “What’s the economically rational thing to do?” Even if I double capability every year, I may not be able to sell that inventory. And the other problem is the winner’s curse: the other folks don’t even have to wait for you to publish a paper – they just have to look at your capability and do a distillation. It’s like piracy. You can sign all kinds of terms of use, but it’s impossible to control distillation. That’s one. The second thing is, you don’t even have to distill – you can just reverse engineer that capability and do it in a more compute-efficient way. So given all this, I think there will be a governor on how much people will chase. Right now everybody wants to be first. That’s great, but at some point economic reality will set in on everyone, and the network effects are at the app layer, so why would I want to spend a lot on some model capability when the network effects are all at the app layer?…

…Gurley: Does your answer to Brad’s question about balancing GPU ROI also answer the question of why you’ve outsourced some of the infrastructure to CoreWeave in that partnership you have?

Nadella: That we did because we all got caught by the hit called ChatGPT. It was impossible – there’s no supply chain planning I could have done. None of us knew what was going to happen. What happened in November of ’22 was just a bolt from the blue, so we had to catch up. We said, “We’re not going to worry too much about inefficiency.” That’s why – whether it’s CoreWeave or many others – we bought all over the place. That was a one-time thing, and now it’s all catching up. It was just about trying to catch up with demand.

Gerstner: Are you still supply-constrained, Satya?

Nadella: Power, yes. I am not chip supply-constrained. We were definitely constrained in ’24. What we have told the Street is that that’s why we are optimistic about the first half of ’25, which is the rest of our fiscal year, and after that I think we’ll be in better shape going into ’26 and so on. We have good line of sight.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

6 Things I’m Certain Will Happen In The Financial Markets In 2025

There are so many things that can happen, but here are six things that I’m certain will happen in the financial markets in 2025.

There are no guarantees in the world of investing… or are there? Here are six things I’m certain will happen in 2025.

1. There will be something huge to worry about in the financial markets.

Peter Lynch, the legendary manager of the Fidelity Magellan Fund, earned a 29% annual return during his 13-year tenure from 1977 to 1990. He once said:

“There is always something to worry about. Avoid weekend thinking and ignore the latest dire predictions of the newscasters. Sell a stock because the company’s fundamentals deteriorate, not because the sky is falling.”

Imagine a year in which all the following happened: (1) The US enters a recession; (2) the US goes to war in the Middle East; and (3) the price of oil doubles in three months. Scary? Well, there’s no need to imagine: They all happened in 1990. And what about the S&P 500? It has increased by more than 1,600% from the start of 1990 to today, even without counting dividends.

There will always be things to worry about. But that doesn’t mean we shouldn’t invest.

2. Individual stocks will be volatile.

From 1997 to today, the maximum peak-to-trough decline in each year for Amazon.com’s stock price ranged from 12.6% to 83.0%. In other words, Amazon’s stock price has suffered a double-digit fall every single year for 27 years. Meanwhile, the same stock has climbed by an astonishing 233,924% (from US$0.098 to more than US$229) over the same period.
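A quick back-of-the-envelope check of those figures, using the article’s split-adjusted prices (approximations, not exact market data):

```python
# Rough check of the Amazon figures above, using the article's
# split-adjusted prices (approximations, not exact market data).
start_price = 0.098   # US$, 1997
end_price = 229.0     # US$, roughly 27 years later
years = 27

total_gain_pct = (end_price / start_price - 1) * 100
cagr_pct = ((end_price / start_price) ** (1 / years) - 1) * 100

print(f"Total gain: {total_gain_pct:,.0f}%")    # in the vicinity of 233,000%
print(f"Annualised: {cagr_pct:.1f}% per year")  # roughly 33% a year
```

The punchline: a compounding rate of roughly 33% a year produced that six-figure percentage gain, and every single one of those 27 years still included a double-digit drawdown along the way.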

If you’re investing in individual stocks, be prepared for a wild ride. Volatility is a feature of the stock market – it’s not a sign that things are broken. 

3. US-China relations will either remain status quo, intensify, or blow over.

“Seriously!?” I can hear your thoughts. But I’m stating the obvious for a good reason: We should not let our views on geopolitical events dictate our investment actions. Don’t just take my word for it. Warren Buffett himself said so. In his 1994 Berkshire Hathaway shareholders’ letter, Buffett wrote (emphases are mine):

“We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen. 

Thirty years ago, no one could have foreseen the huge expansion of the Vietnam War, wage and price controls, two oil shocks, the resignation of a president, the dissolution of the Soviet Union, a one-day drop in the Dow of 508 points, or treasury bill yields fluctuating between 2.8% and 17.4%.

But, surprise – none of these blockbuster events made the slightest dent in Ben Graham’s investment principles. Nor did they render unsound the negotiated purchases of fine businesses at sensible prices. 

Imagine the cost to us, then, if we had let a fear of unknowns cause us to defer or alter the deployment of capital. Indeed, we have usually made our best purchases when apprehensions about some macro event were at a peak. Fear is the foe of the faddist, but the friend of the fundamentalist.

A different set of major shocks is sure to occur in the next 30 years. We will neither try to predict these nor to profit from them. If we can identify businesses similar to those we have purchased in the past, external surprises will have little effect on our long-term results.”

From 1994 to the third quarter of 2024, Berkshire Hathaway’s book value per share, a proxy for the company’s intrinsic value – albeit a flawed measure – grew by 13.5% annually. Buffett’s disciplined focus on long-term business fundamentals – while ignoring the distractions of political and economic forecasts – has worked out just fine.

4. Interest rates will move in one of three ways: Sideways, up, or down.

“Again, Captain Obvious!?” Please bear with me. There is a good reason why I’m stating the obvious again.

Much ado has been made about what central banks have been doing, and would do, with their respective economies’ benchmark interest rates. This is because of the theoretical link between interest rates and stock prices.

Stocks and other asset classes (bonds, cash, real estate etc.) are constantly competing for capital. In theory, when interest rates are high, the valuation of stocks should be low, since the alternative to stocks – bonds – is providing a good return. On the other hand, when interest rates are low, the valuation of stocks should be high, since the alternative – again, bonds – is providing a poor return.
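The theory in the paragraph above can be captured with a simple dividend-discount (Gordon growth) valuation, where the discount rate tracks prevailing interest rates. The numbers here are made up purely for illustration:

```python
# Gordon growth model: price = next year's dividend / (r - g).
# Illustrative numbers only; r is a discount rate that, in theory,
# rises and falls with prevailing interest rates.
def fair_value(dividend: float, discount_rate: float, growth_rate: float) -> float:
    assert discount_rate > growth_rate, "model requires r > g"
    return dividend / (discount_rate - growth_rate)

dividend, growth = 2.0, 0.03
for r in (0.05, 0.07, 0.10):  # rising interest rates -> higher discount rate
    print(f"discount rate {r:.0%}: fair value = {fair_value(dividend, r, growth):.0f}")
```

In this stylised model, moving the discount rate from 5% to 10% cuts the fair value from 100 to about 29 – a large effect on paper. Reality, as the historical record shows, is far messier.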

But what does reality say? Here is some important historical data on the actual relationship between interest rates and stocks in the US:

  • Rising interest rates have been met with rising valuations. According to data from economist and Nobel Laureate Robert Shiller, the US 10-year Treasury yield was 2.3% at the start of 1950. By September 1981, it had risen to 15.3%, the highest rate recorded in Shiller’s dataset. In that same period, the S&P 500’s price-to-earnings (P/E) ratio moved from 7 to 8. In other words, the P/E ratio for the S&P 500 increased slightly despite the huge jump in interest rates. It’s worth noting too that the S&P 500’s P/E ratio of 7 at the start of 1950 was not a result of earnings that were temporarily inflated. Yes, there is some cherry-picking with the dates. For example, if I had chosen January 1946 as the starting point, when the US 10-year Treasury yield was 2.2% and the P/E ratio for the S&P 500 was 19, then it would be a case of valuations falling alongside rising interest rates. But this goes to show that while interest rates have a role to play in the movement of stock prices, they are far from the only thing that matters.
  • Stocks have climbed in rising interest rate environments. In a September 2022 piece, Ben Carlson, Director of Institutional Asset Management at Ritholtz Wealth Management, showed that the S&P 500 climbed by 21% annually from 1954 to 1964 even when the yield on 3-month Treasury bills (a good proxy for the Fed Funds rate, which is the key interest rate set by the Federal Reserve) surged from around 1.2% to 4.4% in the same period. In the 1960s, the yield on the 3-month Treasury bill doubled from just over 4% to 8%, but US stocks still rose by 7.7% per year. And then in the 1970s, rates climbed from 8% to 12% and the S&P 500 still produced an annual return of nearly 6%.
  • Stocks have done poorly in both high and low interest rate environments, and have also done well in both high and low interest rate environments. Carlson published an article in February 2023 that looked at how the US stock market performed in different interest rate regimes. It turns out there’s no clear link between the two. In the 1950s, the 3-month Treasury bill (which is effectively a risk-free investment, since it’s a US government bond with one of the shortest maturities around) had a low average yield of 2.0%; US stocks returned 19.5% annually back then, a phenomenal gain. In the 2000s, US stocks fell by 1.0% per year when the average yield on the 3-month Treasury bill was 2.7%. Meanwhile, a blockbuster 17.3% annualised return in US stocks in the 1980s was accompanied by a high average yield of 8.8% for the 3-month Treasury bill. In the 1970s, the 3-month Treasury bill yielded a high average of 6.3% while US stocks returned just 5.9% per year. 
  • A cut in interest rates by the Federal Reserve is not guaranteed to be a good or bad event for stocks. Josh Brown, CEO of Ritholtz Wealth Management, shared fantastic data in an August 2024 article on how US stocks have performed in the past when the Federal Reserve lowered interest rates. His data, in the form of a chart, goes back to 1957, and I reproduced it in tabular format in Table 1; it shows how US stocks did in the 12 months following a rate cut, as well as whether a recession occurred in the same window. I also split the data in Table 1 according to whether a recession occurred shortly after a rate cut, since eight of the 21 rate-cut cycles from the Federal Reserve since 1957 took place without an impending recession. Table 2 shows the same data as Table 1 but only for rate cuts followed by a recession; Table 3 is for rate cuts without a recession. What the data show is that US stocks have historically done well, on average, in the 12 months following a rate cut. The overall record, seen in Table 1, is an average 12-month forward return of 9%. When a recession happened shortly after a rate cut, the average 12-month forward return was 8%; when a recession did not happen shortly after a rate cut, the average 12-month forward return was 12%. A recession is not necessarily bad for stocks: as Table 2 shows, US stocks have historically delivered an average return of 8% over the 12 months following rate cuts that came with impending recessions. Nor is it guaranteed that stocks will produce good returns in the 12 months after a rate cut even if a recession does not occur, as can be seen from the August 1976 episode in Table 3.
Table 1; Source: Josh Brown
Table 2; Source: Josh Brown
Table 3; Source: Josh Brown

It turns out that the actual relationship between interest rates and stocks is not as clear-cut as theory suggests. There’s an important lesson here, in that one-factor analysis in finance – “if A happens, then B will occur” – should be largely avoided because clear-cut relationships are rarely seen.  

I also think that time spent watching central banks’ interest rate decisions would be better spent studying business fundamentals. The quality of a company’s business and the growth opportunities it has matter far more to its stock price over the long run than interest rates do.

Sears is a case in point. In the 1980s, the US-based company was the dominant retailer in the country. Morgan Housel wrote in a blog post, Common Plots of Economic History:

“Sears was the largest retailer in the world, housed in the tallest building in the world, employing one of the largest workforces.

“No one has to tell you you’ve come to the right place. The look of merchandising authority is complete and unmistakable,” The New York Times wrote of Sears in 1983.

Sears was so good at retailing that in the 1970s and ‘80s it ventured into other areas, like finance. It owned Allstate Insurance, Discover credit card, the Dean Witter brokerage for your stocks and Coldwell Banker brokerage for your house.”

US long-term interest rates fell dramatically from around 15% in the early-to-mid 1980s to around 3% in 2018. But Sears filed for bankruptcy in October 2018, leaving its shareholders with an empty bag. In his blog post mentioned earlier, Housel also wrote:

“Growing income inequality pushed consumers to either bargain or luxury goods, leaving Sears in the shrinking middle. Competition from Wal-Mart and Target – younger and hungrier – took off.

By the late 2000s Sears was a shell of its former self. “YES, WE ARE OPEN” a sign outside my local Sears read – a reminder to customers who had all but written it off.” 

If you’re investing for the long run, there are far more important things to watch than interest rates.

5. There will be investors who are itching to make wholesale changes to their investment portfolios for 2025.

Ofer Azar is a behavioural economist. He once studied more than 300 penalty kicks in professional football (or soccer) games. The goalkeepers who jumped left made a save 14.2% of the time while those who jumped right had a 12.6% success rate. Those who stayed in the centre of the goal saved a penalty 33.3% of the time.

Interestingly, only 6% of the keepers whom Azar studied chose to stay put in the centre. Azar concluded that the keepers’ moves highlight the action bias in us, where we think doing something is better than doing nothing. 

The bias can manifest in investing too, where we develop the urge to do something to our portfolios, especially during periods of volatility. We should guard against the action bias. This is because doing nothing to our portfolios is often better than doing something. I have two great examples. 

First is a paper published by finance professors Brad Barber and Terry Odean in 2000. They analysed the trading records of more than 66,000 US households over a five-year period from 1991 to 1996. They found that the most frequent traders generated the lowest returns – and the difference was stark. The average household earned 16.4% per year for the timeframe under study, but the active traders made only 11.4% per year.
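To see how stark that gap is, here is a sketch of how the study’s average annual returns compound over time (illustrative only; the study itself covered the 1991 to 1996 window):

```python
# Compounding the Barber-Odean average annual returns: 16.4% for the
# average household vs 11.4% for the most active traders.
avg_return, trader_return = 0.164, 0.114

def grow(rate: float, years: int, start: float = 10_000) -> float:
    """Terminal value of `start` compounded at `rate` for `years` years."""
    return start * (1 + rate) ** years

for years in (5, 20):
    a, t = grow(avg_return, years), grow(trader_return, years)
    print(f"{years:>2} years: ${a:,.0f} vs ${t:,.0f} "
          f"({a / t - 1:.0%} more for the average household)")
```

Over the study’s five-year horizon the average household ends up about 25% ahead; stretch the same annual gap over 20 years and the active trader is left with less than half as much.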

Second, finance professor Jeremy Siegel discovered something fascinating in the mid-2000s. In an interview with Wharton, Siegel said:

“If you bought the original S&P 500 stocks, and held them until today—simple buy and hold, reinvesting dividends—you outpaced the S&P 500 index itself, which adds about 20 new stocks every year and has added almost 1,000 new stocks since its inception in 1957.”

Doing nothing beats doing something. 

6. There are nearly 8.2 billion individuals in the world today, and the vast majority of us will wake up every morning wanting to improve the world and our own lot in life.

This motivation is ultimately what fuels the global economy and financial markets. There are miscreants who appear occasionally to mess things up, but we should have faith in the collective positivity of humankind. We should have faith in us. The idiots’ mess will be temporary.

Mother Nature threw us a major global health threat in 2020 with COVID-19. But we – mankind – managed to build a vaccine against the disease in record time; Moderna even managed to design its vaccine in just two days. This is a great example of the ingenuity of humanity at work.

To me, investing in stocks is the same as having the long-term view that we humans are always striving, collectively, to improve the world.      

A final word

This article is a little cheeky, because it describes incredibly obvious things, such as “interest rates will move in one of three ways: sideways, up, or down.” But I wrote it in the way I did for a good reason. A lot of seemingly important things in finance – things with outcomes that financial market participants obsess over and try to predict – actually turn out to be mostly inconsequential for long-term investors. Keep this in mind when you read the next “X Things That Will Happen To Stocks in 2025” article.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Amazon. Holdings are subject to change at any time. 

What We’re Reading (Week Ending 22 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 22 December 2024:

1. Meet Willow, our state-of-the-art quantum chip – Hartmut Neven

Errors are one of the greatest challenges in quantum computing, since qubits, the units of computation in quantum computers, have a tendency to rapidly exchange information with their environment, making it difficult to protect the information needed to complete a computation. Typically the more qubits you use, the more errors will occur, and the system becomes classical.

Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes…

…This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits…

…There are other scientific “firsts” involved in this result as well. For example, it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system — crucial for any useful computation, because if you can’t correct errors fast enough, they ruin your computation before it’s done. And it’s a “beyond breakeven” demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakable sign that error correction is improving the system overall.

As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built…

…As a measure of Willow’s performance, we used the random circuit sampling (RCS) benchmark. Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today…

…Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10²⁵, or 10 septillion, years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch…

…Willow was fabricated in our new, state-of-the-art fabrication facility in Santa Barbara — one of only a few facilities in the world built from the ground up for this purpose. System engineering is key when designing and fabricating quantum chips: All components of a chip, such as single and two-qubit gates, qubit reset, and readout, have to be simultaneously well engineered and integrated. If any component lags or if two components don’t function well together, it drags down system performance…

…The next challenge for the field is to demonstrate a first “useful, beyond-classical” computation on today’s quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.

2. X (previously Twitter) thread on quantum computing and Google’s Willow – Jeffrey Scholz

Like a regular computer, a quantum computer keeps bits in groups. So a 64-bit quantum computer would have a vector of 64 2D vectors serving as its “word.”

Here is where the speedup happens: in a regular computer, each of the 64 bits knows nothing about the value of any of the other 63 bits.

If we want one bit to affect another bit, we have to explicitly combine them with a logic gate.

However, in a quantum computer, each of the 64 qubits can “talk to each other” via quantum entanglement.

Running a quantum circuit means you plug in a quantum vector, run it through a bunch of matrix multiplications, then collapse the output.

The final vector will be the correct answer. Technically, quantum computers can give wrong answers, but if you run the computation multiple times, you will get the correct answer on average…

…The current problem with quantum computers is that as the circuit gets bigger, they become less correct on average. All of the “talking to each other” creates so much noise that the system stops working.

Once your probability of being correct drops below a certain threshold your quantum computer becomes useless. This is a major blocker for current quantum compute.

Let’s look at a specific (oversimplified but helpful) example. Suppose you shine a laser beam into an ice cube.

Actually simulating what the laser will do when it exits the ice cube is very hard because some quantum phenomena are involved.

To actually compute what the laser will do, you have to explicitly compute quantum entanglement, which is slow for classical computers but “built in” to a quantum computer.

However, you can *estimate* the distribution of how the laser will scatter without a quantum computer, so you can have at least a rough idea if your answer might be correct…

…By analogy, this is what Google was doing. The computation Google ran was a “pseudo-random quantum circuit” (think of a pseudorandom ice cube), but we know a quantum circuit is just matrix multiplications (on crack). Therefore, it is a bunch of random matrix multiplications with an output that looks right.

Google’s actual breakthrough was that the output of the circuit “looks correct” – which sounds underwhelming, and compared to the headlines, it definitely is. The academic breakthrough is that Google was able to use a larger circuit and notice an apparent *increase* in accuracy when modeling how a laser shines through an ice cube. That is noteworthy.

You can definitely tell if a computation has failed, and it seemed to be failing less as the circuit got bigger…

…However, note that the problem is “rigged” in favor of quantum computers. The benchmark is explicitly modeling a quantum phenomenon, so *of course* we get a speedup.

In other words, Google created a random distribution on the output that “seems correct.” Why does it “seem correct”? Well, because by design, the computation cannot be run on a classical computer. But if we can’t run it on a classical computer, how do we know the quantum computer is actually giving the right answer? The answer is we don’t, and this is a serious gap…

…Quantum computing is kind of at the stage right now where some smart teenager wired a few logic gates together in a random fashion and said “hey look, my circuit made a random output and didn’t explode!” Compared to previous attempts, it is an improvement. But he is still a long way from training an LLM.
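The thread’s mental model – a state is a vector, gates are matrix multiplications, and measurement collapses the state, so you run the circuit many times and look at the statistics – can be sketched in a few lines. This is a toy two-qubit example of my own, not Google’s setup: a Hadamard gate followed by a CNOT entangles the qubits into a Bell state, after which the two measured bits always agree.

```python
import numpy as np

# Gates are just matrices; the state is a vector of amplitudes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
CNOT = np.array([[1, 0, 0, 0],                 # flips the second bit
                 [0, 1, 0, 0],                 # when the first bit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state    # -> (|00> + |11>) / sqrt(2)

# "Collapse the output": sample basis states with Born-rule probabilities.
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
samples = rng.choice(4, size=1000, p=probs)     # 1,000 repeated runs

counts = {f"{s:02b}": int((samples == s).sum()) for s in range(4)}
print(counts)  # only '00' and '11' occur: the entangled bits always agree
```

The “talking to each other” the thread describes is visible here: measuring one qubit tells you the other, even though each individual run’s outcome is random.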

3. Volatility: A Double-Edged Sword for Long-Term Equity Investors – Daniel Crowley

The ability to measure risk in a portfolio has long been a puzzle for the financial world. When Harry Markowitz introduced Modern Portfolio Theory in 1952, he revolutionized how institutions approached risk and return. His use of standard deviation as a proxy for volatility offered a clean, mathematical way to quantify the unpredictability of markets. It gave investors a seemingly precise tool to compare assets and assess portfolio risk. Over time, this approach became gospel, with concepts like beta and the Sharpe ratio reinforcing volatility as the core measure of risk.

But here’s the problem: volatility tells only part of the story. Financial markets don’t follow the neat patterns of a normal distribution, which is what these models assume. Extreme events occur far more often than traditional models predict. We’ve seen this play out time and again—from the collapse of Long-Term Capital Management to the Great Financial Crisis. The models couldn’t account for the market’s tendency to behave irrationally and with far greater extremes than the math suggested. That’s why I’ve come to view volatility not as risk itself but as a signal, an invitation to investigate further…

…Volatility is often misunderstood because it treats upward and downward price movements as equal. A stock with erratic upward swings may have high volatility but poses little risk if the business fundamentals are sound. Conversely, a stock that steadily declines might appear “safe” on paper but can quietly destroy wealth.

The market’s reliance on volatility as a measure of risk often misses these nuances.

This misunderstanding creates a divide among investors. On one side are those who cling to volatility as the ultimate arbiter of risk, building models that rely on neat equations and assumptions about market behavior. On the other are those who dismiss it entirely, treating volatility as irrelevant noise.

My view lies somewhere in the middle. Volatility is neither good nor bad—it’s just a clue. It’s a signal to dig deeper and assess whether the market’s movements are justified by changes in a business’s intrinsic value.

What I’ve come to appreciate about volatility is its ability to surface opportunity. Markets are emotional, driven by fear, greed, and short-term thinking. Prices frequently diverge from reality, creating moments where high-quality businesses are available at steep discounts. When markets panic, as they did during the COVID-19 pandemic or the Great Financial Crisis, those who can stay calm and look beyond the noise can identify extraordinary opportunities.

Volatility, far from being a risk, is often the price of admission for outsized returns.

4. The AI nuclear renaissance – SMRs role – Rihard Jarc

The global nuclear power market is about 10% of global electricity (about $350-$400B annually) and around 32% of zero-carbon electricity generation.

As of 2023, nuclear energy accounted for about 18.6% of total electricity generation in the United States. The International Energy Agency (IEA) highlights that global nuclear power output must more than double by 2050 to meet net-zero emission targets. Most of the U.S.’s nuclear power plants are over 50 years old and nearing the end of their operational lives. While their lifespans have been extended to support the grid, they will need to be replaced in the coming decades…

…The introduction of ChatGPT and the AI boom of the last two years have only accelerated this, as AI workloads and AI chips consume much more energy than traditional data center workloads. This nuclear energy expert gives a good example:

“If you provide a simple search in Google, you consume 0.3 Wh of electricity. If you do the same with ChatGPT or Alexa or Gemini, any AI that we can imagine, this 0.3 Wh transforms into 2.9 Wh, so it means 10X the consumption.”…

…Driven by artificial intelligence (AI), cloud computing, and digital transformation, U.S. data centers consumed an estimated 150 TWh of electricity in 2023, equivalent to around 3% of the nation’s power demand. According to Goldman Sachs estimates, global data center demand hovered at 340 TWh in 2023, which is about 1.3% of worldwide electricity use. U.S. data center power use is expected to roughly triple between 2023 and 2030 and will require about 47 gigawatts of new generation capacity…
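As a back-of-the-envelope check on the figures above, the per-query numbers can be annualized. Note that the daily query volume below is an illustrative assumption of ours, not a figure from the article:

```python
# Per-query energy figures are from the quoted expert; the daily query
# volume is an illustrative assumption, not from the article.
SEARCH_WH = 0.3          # Wh per traditional search query
AI_WH = 2.9              # Wh per AI-assisted query
QUERIES_PER_DAY = 8.5e9  # assumed daily query volume (illustrative)

energy_ratio = AI_WH / SEARCH_WH  # ~10x, as the quote says

def annual_twh(wh_per_query: float, queries_per_day: float) -> float:
    """Convert per-query watt-hours into annual terawatt-hours."""
    wh_per_year = wh_per_query * queries_per_day * 365
    return wh_per_year / 1e12  # 1 TWh = 1e12 Wh

search_twh = annual_twh(SEARCH_WH, QUERIES_PER_DAY)  # ~0.9 TWh/yr
ai_twh = annual_twh(AI_WH, QUERIES_PER_DAY)          # ~9.0 TWh/yr
print(f"{energy_ratio:.1f}x per query; {search_twh:.1f} vs {ai_twh:.1f} TWh/yr")
```

Even under this assumed query volume, shifting all searches to AI adds only single-digit TWh per year against the ~150 TWh US data centers already consume; the bigger driver is AI training and inference workloads beyond search.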

…Nuclear energy has become very attractive because companies want to be carbon-neutral and have stable power. An additional benefit of nuclear power is that it can provide more stable long-term contracts that are less sensitive to inflation and supply chain problems…

…Interest in nuclear energy, particularly Small Modular Reactors (SMRs), is growing as they have been heralded as a solution to streamline nuclear power production, offering flexibility, lower upfront costs, and modular deployment. The simplest way to picture an SMR is as a smaller version of a traditional nuclear reactor. One of their most significant benefits is that they are modular: they are designed to be built in factories, not on-site. Factory construction makes them easier to assemble and control, from quality checks to a more predictable supply chain and workforce. Once assembled, the modules are shipped to the site of the nuclear plant, where they are stacked together to form the whole plant. In terms of energy output, traditional nuclear plants produce between 1,000-1,600 megawatts electric (MWe) per reactor, while SMRs are around 50-300 MWe per module. Some SMRs are also said to be safer due to passive safety features, which rely on natural processes like convection to prevent meltdowns in emergencies. But they also come with cons. The primary one is that they are much smaller than traditional nuclear plants, so they lack the cost benefits of economies of scale. Because of that, producing the same amount of energy is more expensive than at a traditional nuclear plant…

…Over 25 countries, according to the International Atomic Energy Agency (IAEA), are investing in SMRs. In March, Wood Mackenzie estimated the pipeline of SMR projects was worth more than $176 billion and that SMRs could account for as much as 30% of the global nuclear fleet by 2050…

…We can look at the example of NuScale, which has its own Pressurised Water Reactor design. Its levelized cost of electricity ranges from $89-135/MWh, while traditional nuclear plants are in the $110-160/MWh range. However, the most common alternatives for data centers are cheaper: gas costs $45-70/MWh, and solar plus storage costs $30-60/MWh…
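A quick way to compare the levelized-cost ranges quoted above is to look at their midpoints. This is a rough sketch using only the article's figures; midpoints are a crude summary of wide ranges:

```python
# Levelized cost of electricity ranges from the article, in $/MWh.
lcoe = {
    "NuScale SMR": (89, 135),
    "Traditional nuclear": (110, 160),
    "Gas": (45, 70),
    "Solar + storage": (30, 60),
}

# Midpoint of each range as a crude single-number summary.
mid = {name: sum(rng) / 2 for name, rng in lcoe.items()}

cheapest = min(mid, key=mid.get)
premium = mid["NuScale SMR"] / mid[cheapest]
for name, m in mid.items():
    print(f"{name:20s} ~${m:.0f}/MWh")
print(f"SMR midpoint is ~{premium:.1f}x the cheapest option ({cheapest})")
```

On midpoints, the SMR comes in below traditional nuclear but still at roughly 2.5x the cost of solar plus storage, which is the economic gap SMR vendors need to close.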

…State-backed projects in countries like China and Russia have made more progress, leveraging integrated supply chains, controlled costs, and assured revenue streams. But even for them, the cost of building these reactors has far exceeded initial estimates…

…We must also face reality: only 2 SMRs are operational right now, one in Russia and the other in China.

Another important topic when assessing nuclear energy is nuclear waste and its storage. Most SMR designs produce a similar amount of nuclear waste per unit of output as traditional nuclear plants, so the problem of storing nuclear waste remains.

5. How to invest without relying on target prices – Chin Hui Leong

The US stock market is soaring to new heights. But what does that mean for your stock returns in 2025? I would like to give you a definite answer but if I did so, I would be lying to you. In fact, you should view anyone who gives you target prices with suspicion.

Here’s the hard truth: No one can predict where the market is headed in the short term. Yet, the allure of target prices persists…

…The answer lies in the inherent difficulty in predicting the future of rapidly evolving technologies.

The best example is Amazon.com. In mid-2010, when I first invested in the company, it had just reported US$24.5 billion in annual revenue, primarily from its online retail business. Here is the twist: it was impossible to know what the business would look like a decade later…

…Fast forward to 2023, and AWS had become a financial cash cow with nearly US$90 billion in annual revenue and an impressive US$24.6 billion in operating income. In other words, AWS, an insignificant division back in 2009, had generated more operating income in 2023 than the entire company’s revenue in 2009…

…I like to go back to the reason why valuation is used in the first place: to reduce your investment risk. The way I see it, valuation is one of the many ways you can employ to manage risk. But valuation risk is not the only risk in investing.

A weak, shrinking business can pose risks that no amount of stock valuation can solve. Hence, starting with high-quality businesses is my preferred approach.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Amazon. Holdings are subject to change at any time.

Company Notes Series (#3): Golden Throat Holdings Group Company

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first two editions in the series can be found here and here. Please give us your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes for Golden Throat Holdings

Data as of 16 January 2023

History of Golden Throat Holdings and current management/major shareholders

  • Current HQ: Guangxi Zhuang, China
  • IPO date: July 2015, on Hong Kong Stock Exchange
  • Golden Throat Holdings’ history dates back to 1956 when Liuzhou No.2 Sweet Factory (柳州市糖果二廠), the predecessor of Golden Throat Company (also known as Guangxi Golden Throat), was established. Golden Throat Company today manufactures and sells lozenges and other pharmaceutical and food products.
  • Golden Throat Holdings’ flagship product is Golden Throat Lozenges (OTC), which was launched in 1994. Wang Yao Fa contributed to the creation of the formula for the Golden Throat Lozenges (OTC) product and his portrait was historically used by Golden Throat Holdings on the product packaging; the portrait was changed to Jiang Peizhen in 2015.
  • Golden Throat Company (the main operating entity in China of Golden Throat Holdings) was established in Liuzhou, Guangxi Zhuang, China, on 18 September 1998 by Jiang Peizhen as the original controlling shareholder. She has been involved with Golden Throat Holdings for over 60 years, since 1956.
  • Jiang and her son, Zeng Yong, control 69.79% of Golden Throat’s shares (516.0137 million shares) as of 30 June 2022. At the 11 January 2023 share price of HK$1.98, their stake is worth HK$1.02 billion.
  • Jiang, 76, is currently chairman and non-executive director of Golden Throat Holdings, while Zeng, 48, is an executive director and vice chairman of the board. Zeng has been involved with Golden Throat Holdings since 1995. Both Jiang and Zeng have been in their respective roles since February 2015.

Golden Throat Holdings’ business

  • Revenue in 2021 was RMB 820.5 million, of which 99.6% was from Mainland China.
  • The company reports its revenue across three product categories: Golden Throat Lozenges (OTC), Golden Throat Lozenge Series Products, and other products.
  • Golden Throat Lozenge (OTC): A type of lozenge mainly designed to relieve symptoms of sore and dry throat and hoarse voice caused by acute pharyngitis. Golden Throat Lozenges (OTC) was approved as an over-the-counter medicine by the National Medical Products Administration (NMPA), China’s equivalent of the US FDA. As such, Golden Throat Lozenges (OTC) can be purchased by the public in pharmacies without the prescription of a qualified medical professional. As of 31 December 2021, Golden Throat Lozenges (OTC) were exported to the United States, Canada, Russia, the European Union, Australia, Southeast Asia, the Middle East, Mexico, Africa, and Mongolia, a newly explored export country in 2019. For the year ended 31 December 2021, Golden Throat Lozenges (OTC) accounted for 90.1% of Golden Throat Holdings’ total revenue.
  • Golden Throat Lozenge Series Products: Includes seven products comprising Dule Lozenges (都樂含片), sugar-free Dule Lozenges, and five other sugar-free flavours of this series, namely orange (香橙), fructus momordicae (羅漢果), chrysanthemum (桑菊), American ginseng (西洋參) and hawthorn (山楂). A major difference between Golden Throat Lozenges (OTC) and Golden Throat Lozenge Series Products is that the former is approved as over-the-counter medicine, whereas the latter is approved as food products. The sugar-free series of Golden Throat Lozenge Series Products was launched in 2013, which supplements the company’s original sales channel and provides consumers with more diversified choices. As of 31 December 2021, Golden Throat Lozenge Series Products were exported to 17 countries and regions, and accounted for 8.7% of Golden Throat Holdings’ total revenue in 2021.
  • Other products: Accounted for approximately 1.2% of Golden Throat Holdings’ total revenue in 2021. Includes: (1) Yinxingye Tablet ( 銀杏葉片), which is designed to facilitate blood circulation, remove blood stasis and dredge energy channels and was approved as a prescription medicine by the NMPA; (2) a new product, Golden Throat Intestinal Series (金嗓子腸寶), which is a prebiotic, i.e. nutrition for probiotics; and (3) Golden Throat Compound Probiotic Lozenges, which was launched in June 2022 and was developed by Golden Throat Holdings and the scientific research team of “Food Microbial Function Development” of Beijing Agricultural College. Golden Throat Compound Probiotic Lozenges addresses the lack of self-developed probiotics in China. Golden Throat Holdings has developed six kinds of proprietary probiotic bacteria in three new flavors and the company is committed to using “Chinese bacteria” to improve the physique of Chinese citizens. Golden Throat Compound Probiotics adopts the internationally leading three-layer embedding technology, 360-degree thermal radiation freeze drying technology, and an automatic ingredient fermentation and cultivation system.
  • Golden Throat Holdings has established an extensive and structured sales and distribution network throughout China for its (i) over-the-counter medicines, (ii) food products, and (iii) prescription medicines. As of 31 December 2021 and 30 June 2022, substantially all of the company’s revenue was generated from sales to distributors. In 2021, there was only one customer that accounted for more than 10% of Golden Throat Holdings’ revenue (11.7%); there was no such customer in 2020.
  • Golden Throat Holdings has a well-established brand in China: 
    • In October 2021, in the 2021 ranking of China nonprescription medicines enterprises and product brands, Golden Throat Lozenges (OTC) was recognised as No. 1 amongst Chinese traditional medicines (Throat) by the China Nonprescription Medicines Association.
    • Golden Throat Holdings was ranked 43rd amongst the nonprescription manufacturing enterprises in the 2021 ranking of China non-prescription medicines enterprises and product brands.
    • Golden Throat Holdings was listed in the Top 500 Chinese Brands at the 14th China Brand Festival in August 2020.
    • In August 2020, Golden Throat Holdings claimed the title of “2019 China Traditional Medicines Pharmaceutical Industry Top 100 Enterprise” at the China Pharmaceutical Industry Top 100 Annual Assembly.
    • In 2019, Golden Throat was awarded the Best Brand Value Award at the China Financial Market Awards 2019, and won the Huapu Award at the 13th China Brand Festival in August.
    • In 2017, the Golden Throat (金嗓子) brand was selected as a world famous brand by the China America Branding Strategy Forum and also ranked amongst the listed companies on the Forbes China Up-and-Comers List.

Golden Throat Holdings’ market and future expansion

  • According to a 2015 Euromonitor Report, retail sales value of lozenges in China increased 10.4% per year from RMB 2.09 billion in 2009 to RMB 3.42 billion in 2014, and was expected to increase to RMB 5.46 billion in 2019, at a CAGR of 9.7%. Lozenges accounted for 72% of the total throat remedies market in China in 2014; the throat remedies market primarily includes over-the-counter medicines and medicated confectionery (which are food).
  • In 2021, the plants and office buildings of a new medicine production and research and development base for Golden Throat Holdings, located at Luowei Industrial Concentration Area, Liuzhou, Guangxi Zhuang Autonomous Region, were completed, along with the commissioning of production lines and trial production. Golden Throat Holdings completed the overall relocation in the second half of 2021. The new production base covers a usable area of about 60,000 square metres, including research and development centres, production plants, warehouses and administrative office buildings. In the company’s words: “The fully automated production line in the production plant will improve the efficiency of the production process. A brand-new modern production enterprise will be formed with the new production and research and development base, new factories, new workflow and new production lines, which will completely upgrade the management platform and manufacturing platform of the factories, comprehensively improving the manufacturing quality and technology content of the products, enhancing the comprehensive competitiveness of the Company, and will lay a solid foundation for expanding and strengthening the Company.” The new production base increased Golden Throat’s production capacity for its main products by 57% to 198.5 million boxes of Golden Throat Lozenges. See video of the new production base: https://news.gxtv.cn/article/detail_567c4b49e6924346917643b221fe9555.html
  • Also in 2021, Golden Throat Holdings selected a 48 mu (~32,000 square metres) piece of land in the south of the new drug production and R&D base as the site for the second phase of the new Golden Throat Base, which is expected to have a usable area of approximately 50,000 square metres after completion. The second phase will house a food production plant and a food research and development centre. After completion, a high-tech R&D team, smart manufacturing and smart sales will be introduced to develop more comprehensive health products. The second phase of the Golden Throat new base will form the core of Golden Throat Doctor Workstation, the Golden Throat Professor Workstation, the Golden Throat Research Institute, the Golden Throat Gastrointestinal Research Institute, and the Golden Throat Heart and Brain Research Institute. It will also facilitate the development of new products such as genetic medicines, traditional Chinese medicine prescriptions, specialty medical devices, and specialty health foods. As of 30 June 2022, the second phase of the Golden Throat new base is in the initial stage of construction.
  • The Golden Throat WeChat Mini Program Mall was launched in early 2020. “We will continue to expand online sales channel in 2022, and we believe there would be breakthroughs in our online business in the future.”

Golden Throat’s sales volumes and pricing of products

  • There was a change in packaging configuration in August 2013, so numbers for 2012 and 2013 are not like-for-like comparisons with numbers in later years.
  • Golden Throat Holdings has managed to raise the prices for its Golden Throat Lozenges (OTC) products over time, while keeping gross margin steady, keeping sales volume steady (although less steady than gross margin), and increasing revenue → signs of pricing power for the Golden Throat Lozenges (OTC) product
  • Golden Throat Holdings has managed to raise the prices for its Golden Throat Lozenge Series Products over time, while increasing gross margin, increasing sales volume, and increasing revenue → signs of pricing power for Golden Throat Lozenge Series Products
  • Golden Throat Holdings’ sales volume was hurt in 2020 because of COVID, but the company still maintained or increased its product prices.
  • Golden Throat’s sales volume for Golden Throat Lozenge (OTC) products did not increase much over time because the volume was already near the company’s capacity – prior to the expansion mentioned in Point 3, Golden Throat’s annual production capacity was ~126 million boxes of the Golden Throat Lozenge (OTC) product.

Golden Throat financial performance

Annual numbers

  • Revenue has grown over time but had some ups and downs – same with net profit
  • Was always generating positive operating cash flow and free cash flow (with the exception of 2017), although there’s no clear growth trend in cash flows.
  • Balance sheet was always in a strong net-cash position
  • No history of dilution (IPO happened in 2015 – immediately after the IPO, there were around 726.36 million shares)
  • There was a dividend paid in every year since the company’s IPO, and it has increased over time; the dividend also looks fairly sustainable

Half-yearly numbers

  • Revenue growth in H1 2022 was affected by the resurgence of COVID in China, and so was net income
  • But cash flows have improved tremendously and balance sheet remains rock-solid
  • Worth noting that Golden Throat’s borrowings are all on fixed rates, so there’s no danger of rising interest rates negatively affecting the company’s profit and/or cash flow

Management’s integrity and kindness

  • There are related party transactions (RPTs), but they are minimal. In 2021, Golden Throat Holdings incurred RMB 9.576 million in expenses to procure raw ingredients (such as liquid isomalt, isomalt AG, syrup, and probiotics) from a related entity, Changbao; in 2020, the amount was RMB 4.388 million. These amounts make up only a single-digit percentage of total net profit (and even much smaller percentage of total revenue) in their respective years.
  • The remuneration of Jiang Peizhen and Zeng Yong has generally grown faster than Golden Throat Holdings’ revenue, net income, and FCF over the years, especially after the company’s IPO. But their remuneration only makes up a single-digit percentage of Golden Throat Holdings’ net income (see table below).
  • Golden Throat Holdings ended 2021 with 937 full-time employees, of which 100 are disabled persons. In August 2020, Golden Throat Holdings provided electric vehicles for employees commuting to work. The EVs are produced by Liuzhou SGMW (柳州上汽通用五菱) and Golden Throat Holdings ordered over 700 of them from SGMW. Management thinks the EVs “would not only solve the transportation problem of employees with long commuting distance, but also effectively stimulate domestic demand and help economic growth and recovery.”

Valuation

  • Valuation numbers based on 11 January 2023 share price of HK$1.98
  • Trailing PE (price-to-earnings) of 7.8, trailing PFCF (price-to-free cash flow) of 7.7
  • Net-cash per share of HK$0.88
  • Trailing PE net of cash of 5.0, trailing PFCF ratio net of cash of 4.9
  • Trailing dividend yield of a massive 9.1%
  • Management wanted to acquire the company in August 2021 at HK$2.80 per share, together with Affirma (an emerging-markets private equity firm owned and operated by the former senior leadership team of Standard Chartered Private Equity; it managed over US$3.5 billion in assets at the time of the announcement). I think this price could be seen as a floor on the value of Golden Throat Holdings. Golden Throat’s trailing earnings per share and free cash flow per share were RMB 0.30 (~HK$0.36) and RMB 0.18 (~HK$0.21), respectively, based on the company’s financials for the first half of 2021, meaning the acquisition price valued the company at a trailing PE and trailing PFCF ratio of just 7.8 and 13.1. Net of cash, the PE and PFCF ratios would be 5.3 and 8.8
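As a sanity check, the offer multiples above can be recomputed from the rounded per-share figures in the notes. Because the inputs are rounded, the free-cash-flow multiples come out slightly different from the stated 13.1 and 8.8 (which were presumably computed on unrounded numbers):

```python
# Inputs are the rounded per-share figures from the notes above.
offer_price = 2.80    # HK$ per share (August 2021 offer)
eps = 0.36            # trailing EPS, ~HK$ (rounded)
fcf_ps = 0.21         # trailing FCF per share, ~HK$ (rounded)
net_cash_ps = 0.88    # net cash per share, HK$ (from the notes)

pe = offer_price / eps                               # ~7.8, matches
pfcf = offer_price / fcf_ps                          # ~13.3 (stated: 13.1)
pe_ex_cash = (offer_price - net_cash_ps) / eps       # ~5.3, matches
pfcf_ex_cash = (offer_price - net_cash_ps) / fcf_ps  # ~9.1 (stated: 8.8)
print(round(pe, 1), round(pfcf, 1), round(pe_ex_cash, 1), round(pfcf_ex_cash, 1))
```

The ex-cash multiples simply strip the net cash per share out of the price before dividing, which is why they are meaningfully lower than the headline multiples.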

Final thoughts (as of 16 January 2023)

  • Very cheap valuation right now
  • Possibility of much higher revenue in 2023 (compared to 2022 and 2021) as China has reopened and Chinese citizens depend on the Golden Throat Lozenge (OTC) product to soothe their ailments from COVID or otherwise; 2022’s overall numbers may be lower than in 2021 as China was in lockdown mode for most of 2022 and only opened up late in the year.
  • Selling prices for Golden Throat Lozenge (OTC) products on Tmall are currently easily more than RMB 10 per box, and more commonly around RMB 12-14 per box (see screenshots below, taken on 16 Jan 2023 from the Tmall app – sidenote: Tmall has a better reputation than Taobao). The unit sale price to distributors reported by the company in H1 2022 was just RMB 7.0 per box; I think it’s reasonable to expect the unit sale price to distributors for 2023 – as well as overall volume – to be materially higher than in 2022 and 2021, thereby boosting profit and cash flow margins for Golden Throat Holdings.
  • Golden Throat Holdings had expanded production capacity in 2021, and is building a new plant right now.
  • Golden Throat Holdings has also received strong government support for the production of its products. See the following English translations of a Mandarin article from the Guangxi government website:
    • “On January 4, Wei Guanghui, a member of the party group and deputy director of the Food and Drug Administration of the Autonomous Region, led a team to Guangxi Liangmianzhen Yikang Pharmaceutical Co., Ltd. and Guangxi Golden Throat Pharmaceutical Co., Ltd. to provide on-site service guidance for the production of Golden Throat Lozenges, and to pay close attention to ensuring the supply of drugs for COVID-19 prevention and control.”
    • “Golden Throat Lozenges were selected into the ‘Catalogue of Drugs for Novel Coronavirus Infection (First Edition)’ issued by the Beijing Municipal Health Commission. In order to meet the clinical needs of the general public, the company has expanded capacity and is producing at full capacity, and the Food and Drug Administration of the Autonomous Region has followed up on the whole process.”
    • “The working time for Golden Throat Lozenges has been extended from the original 8 hours to 12 hours a day, and daily production has increased from 7.37 million tablets to 9.21 million tablets, which strongly supports the anti-epidemic needs of people across the country.”
  • For now, I see Golden Throat Holdings as a deep-value stock, but it could also change into a growth stock if its plans for new products such as genetic medicines, traditional Chinese medicine prescriptions, specialty medical devices, and specialty health foods succeed.
  • One risk to the company’s future business prospects is if its Golden Throat Lozenge (OTC) product price gets controlled by the government. According to the IPO prospectus, “there had been no fixed or maximum prices promulgated by any authorities in China on Golden Throat Lozenges (OTC).” There’s been no update on the matter that I could find in subsequent annual reports.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 15 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 15 December 2024:

1. SpaceX: Rocket Ship – Matt Reustle and Luke Ward

Luke

So if we take the CapEx part of that first, NASA estimated that the cost to develop the Falcon 9 from scratch would be about $4 billion. But SpaceX ended up doing it for about a tenth of that price. So to begin with, that’s an order of magnitude improvement in the level of investment required.

SpaceX publishes its launch prices on its website: $70 million for a Falcon 9 flight, which is already 20 times cheaper per kilogram into orbit than the Space Shuttle was. But the real kicker, as you point out, is the operating leverage that comes from having partial reusability…

…Starship is designed to be fully and rapidly reusable. So unlike Falcon 9, which is only partially reusable, it should be able to fly multiple times every day. It’s going to have a payload capacity of about 100 tons to orbit at the beginning, probably rising to closer to 200 tons to orbit over time.

And Musk has suggested that a variable cost of around $10 million per launch is the ballpark figure which they’d be aiming for at scale in a steady state, ambitiously maybe even falling to $2 million—a figure which has been touted. If you believe those kinds of performance levels are feasible, that gets the cost down to around $10 per kilogram. That’s over 100 times cheaper than the Falcon 9 we’re talking about at the moment. And that would have a dramatic effect on what’s economically feasible for humanity to do in space…
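The cost-per-kilogram arithmetic behind these figures can be sketched as follows. The launch prices and payload masses are the approximate numbers quoted in the conversation; the ~$10/kg case corresponds to the ambitious $2 million, 200-ton scenario:

```python
# All figures are the approximate numbers quoted in the conversation.
def cost_per_kg(launch_cost_usd: float, payload_tons: float) -> float:
    """Launch cost divided by payload mass, with metric tons converted to kg."""
    return launch_cost_usd / (payload_tons * 1000)

falcon9 = cost_per_kg(70e6, 20)          # ~$3,500/kg to LEO
starship_early = cost_per_kg(10e6, 100)  # ~$100/kg at $10M and 100 t
starship_mature = cost_per_kg(2e6, 200)  # ~$10/kg at $2M and 200 t

print(falcon9, starship_early, starship_mature)
print(f"Mature Starship vs Falcon 9: ~{falcon9 / starship_mature:.0f}x cheaper")
```

At the ambitious end, the implied ~350x improvement over Falcon 9 is what supports the "over 100 times cheaper" claim; even the early $10M/100-ton case is a ~35x improvement.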

…Matt

Satellites in Low Earth Orbit—there is quite a bit of history in terms of that being the obvious space use case, that having an existing economy. I think Starlink is an extension of that. Different, absolutely, but an extension of what was going on.

Are there brand new industries being unlocked, or obvious things with line of sight that open up from a space economy perspective, that you see either today or in the near future? You could extend that out however far you think is reasonable.

Luke

A lot of these options which SpaceX has to develop, brand new markets that don’t exist already, are a function ultimately of the cost curve. Take semiconductor manufacturing on Earth; at the moment, we spend billions of dollars per fab to recreate the conditions which are readily accessible in space for free, if you can get there.

And so there’s some point on the cost curve intersecting between the cost of building a fab and the cost of launching a fab or the equipment of a fab into orbit and operating there instead. Same can be said of pharmaceutical research. The crystallization structures which are able to happen in space are different from the ones which are able to happen under the influence of gravity.

So if you think about pricing on pharmaceuticals, extending patent lives, etc., if you can move the manufacturing or the research lab for cutting-edge pharmaceuticals into space, you could make high-value, low-volume products. Something which would really make sense to do and doesn’t require a huge technological innovation to happen.

The list can go on and on—artificial organs, for example, being able to manufacture perfectly spherical lenses. There’s lots and lots of things which could be made.

Maybe the way to think about that is that space-based manufacturing could be the next large market for this if the costs can continue to come down. Starship having the volume of an A380 or a 747—think of the equivalent size of factory that represents. And if that can be launched every single day and recovered every single day for $10 per kilogram, that could be a really compelling way to do quite a lot of manufacturing.

Incidentally, that’s something that Jeff Bezos really focuses on in his vision for space as opposed to Mars per se, is where we can move a lot of the heavy-polluting industry off the planet. And why don’t we turn Earth into this perfect nature reserve, and all these polluting aspects of manufacturing can go into orbit, which again is very compelling.

Probably needs a lot more innovation to deliver energy from orbit, but I’d say it’s maybe an inevitability if the cost gets to a low enough point. You think how much solar energy is available without the atmospheric attenuation, for example, 24/7. There’s lots of compelling reasons why, if it’s cheap enough, at some point a lot of these things probably should happen, not just could happen.

Matt

The solar energy point, great example of something that is an entirely different dynamic in space than on Earth. What would the other things be? Just out of curiosity, when you mentioned semiconductors or pharmaceuticals, is it just purely gravity? Are there other things that are happening in space or not happening in space that happen on Earth that would drive that difference?

Luke

There’s the vacuum conditions—so there isn’t an atmosphere—so the level of impurities which you need to get rid of for a vapor deposition machine, for example. You don’t have the same kind of challenges there of having to have this deep vacuum.

Then, arguably, in space, because you don’t have gravity, you could construct much larger structures there rather than construct them on the ground and then launch them.

So again, that volume constraint which we were talking about earlier, in terms of how big your payload is—if you’re able to get enough stuff up there and assemble it in space, as we did with the International Space Station, things can be much, much larger given the payload bay of Starship than they could with the Space Shuttle.

Matt

When you think about low Earth orbit versus geosynchronous orbit versus something like Mars—which I think was the original vision with Elon and SpaceX—how much does that change the economics when you extend out?

Is it orders of magnitude where it’s an exponential cost curve to go further out? Even just if we focus on the launch and use a satellite for an example, before we get into all the manufacturing dynamics, is there any way to contextualize that from a cost perspective?

Luke

The really good news here is that gravitational force decreases with the square of distance. So the biggest challenge is getting off the surface and into orbit. Once you’re there, from an energy point of view, it’s a lot easier to go anywhere else in the solar system.

So if you were to take Falcon 9 again as the example, for the same price, it can place 20 tons into low Earth orbit, or it can place 4 tons into Martian orbit. That’s despite the latter being hundreds of thousands of times further away. Now, this feeds into what I think is probably the biggest misconception about SpaceX and its Mars ambitions.
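The Falcon 9 example works out as follows; the payload figures are the ones Luke quotes, and the $70 million price is the one cited earlier in the conversation:

```python
# Same rocket, same price, different destinations (figures as quoted).
launch_cost = 70e6  # USD per Falcon 9 launch

leo_per_kg = launch_cost / (20 * 1000)   # 20 t to low Earth orbit
mars_per_kg = launch_cost / (4 * 1000)   # 4 t to Martian orbit

# Only ~5x more per kilogram despite the vastly greater distance --
# the energy cost is dominated by the climb out of Earth's gravity well.
print(leo_per_kg, mars_per_kg, mars_per_kg / leo_per_kg)
```

That 5x ratio is the quantitative version of Luke's point: cutting the cost of the first leap into Earth orbit is also what cuts the cost of going anywhere else.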

I’d say for most people, the idea of a commercial entity pursuing exploration is naive at best. But I’d argue that long-term investors should be absolutely ecstatic about SpaceX having this mission as a forcing function. Firstly, it’s the key to getting the best people in the world to come and work for the organization and allow it to innovate in a manner and speed that others simply can’t match. That’s a huge competitive advantage.

Secondly, the way to get more cargo to Mars is actually about figuring out how to get more cargo into orbit around Earth, because that’s where the cost is all concentrated. It’s all in that first initial leap off the surface of our planet. So rather than framing Starship as a system that makes it possible to get to other planets, think about it instead being a system that could make it enormously more profitable to operate a business in Earth orbit and unlock brand new commercial use cases there as well…

…Luke

When we talk to SpaceX, they’re still very much focused on the here and now in the next couple of years. They have ambitions for things which they could do, but the focus is very much on the core business: serving the core customers, serving Starlink, getting Starship to launch status. We’ll deal with the next things next.

They’ve got so many things which they could be doing at the moment. When we come to this, a lot of this is us hypothesizing about how that could evolve beyond the information which they’ve given us. The trend which you’ve seen of them being vertical integrators could be quite informative. It might be that they end up being the ones who commercialize a lot of these other services.

Rather than having a customer paying them for it at substantial scale, it would make more sense for them to do it. Could you start seeing some of these aspects? If they get into space-based manufacturing, for example, could that be priced on a value-added basis rather than a subscription basis or a volume basis? Certainly seems possible. If you start running data centers in space because it’s easier to power or cool them, etc., could you start offering data storage and machine learning alongside Starlink connectivity?

The further you look out, the wackier it can get, but it’s also potentially financially plausible. You maybe have to take a bit of inspiration from science fiction here, but it’s quite a common trope in these movies—the Weyland-Yutani Corporation from the Alien films, or the Resources Development Administration from the Avatar films—where one mega-corporation dominates access to space early on and ends up controlling the entire extrasolar economy because of the advantages it had at that really early stage…

…Luke

Human spaceflight at the moment has definitely been the preserve of the rich and famous, but at scale it becomes cheaper and cheaper. And if we are talking about launching, Starship could be used as much for sending cargo and people to other points on the planet as to other points in space. And so one option the government is looking into is this notion of rocket cargo delivery. Starship would be able to deliver 200,000 kg anywhere on the planet within 40 minutes.

What does that do for a rapid reaction force, and what does that do for next-day delivery? At some stage, it’s going to be feasible to put a lot of astronauts or paying passengers on something like that, and it will be a quicker and potentially more efficient way to do long-distance travel. These things really could get quite wild, but they could be plausible at some stage. Again, that’s not the reason to invest in the company today; that’s not the basis of what they’re doing, and it’s easy for people to get excited about these things.

But come back in 10 years, I’d be disappointed if you or I weren’t able to go into space at some point in our lifetime for the cost of a premium economy ticket or something like that.

2. Japan vs Big Tech – Daye Deng

Put simply, US big tech has grown so dominant that it’s singlehandedly blowing a hole in the trade balance of a nation as large as Japan…

…In 2023, Japan recorded JPY 5.5 trillion in so-called digital trade deficit. The Ministry of Economy, Trade and Industry (METI) projects this to grow to JPY 8 trillion by 2030, at which point it could surpass Japan’s annual bill for crude oil imports.

Japan’s total goods and services trade deficit in 2023 was JPY 6 trillion, with the digital deficit accounting for JPY 5.5 trillion…

…Japan has been in a structural deficit for goods trade over the past two decades. This may come as a surprise to those who have held onto the old idea that Japan is an export powerhouse.

There are several reasons for the shift:

  • Japanese firms have moved production overseas. This isn’t entirely negative since Japanese firms (and their profits) continue to grow, but it has contributed to a widening trade deficit.
  • Japan’s loss of global competitiveness in certain industries, like chips and appliances, to rivals such as South Korea.
  • Rising cost of imports driven by energy shocks, rising overseas inflation, and weak yen.

The third point deserves elaboration. Japan’s reliance on imported energy has long been a critical structural weakness. For example, following the 2011 Fukushima nuclear disaster, Japan significantly reduced domestic nuclear energy production and increased its reliance on imported LNG, which became a major contributor to the trade deficit.

A similar pattern emerged post-Covid. Global oil and commodity prices surged. This was compounded by high rates of overseas inflation on general imports. On top of that, a historically weak yen made imports even more expensive…

…Since 2014, the Japanese government has been disclosing the digital deficit, which has grown 2.6-fold from 2014 to JPY 5.5 trillion in 2023. This is a net figure derived from JPY 9.2 trillion paid for digital services and JPY 3.7 trillion received from abroad…
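The net figure can be verified directly from the gross flows quoted above; the same numbers also let us back out the implied 2014 starting level (a quick arithmetic sketch of my own, not a figure from the article):

```python
# Figures quoted above, in JPY trillions (2023).
paid = 9.2       # paid abroad for digital services
received = 3.7   # received from abroad for digital services
net_deficit = paid - received
print(round(net_deficit, 1))        # 5.5, matching the reported digital deficit

# The deficit reportedly grew 2.6-fold from 2014, implying the 2014 level:
print(round(net_deficit / 2.6, 1))  # ~2.1 trillion yen in 2014
```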

…The picture is quite clear: on the services side, Japan is taking its hard-earned surplus from tourism and spending it all on paying for digital services.

How will this play out? While I’m personally bullish on the Japanese tourism industry, it still has natural growth constraints. However, there is no ceiling on how much Japan can continue to spend on digital services. In fact, digital services spend could accelerate given:

  • Japan is already playing catch-up in the digital realm, and is behind other major countries in many key digital metrics.
  • AI is poised to make Japan’s digital dependency crisis even worse, in a world where firms like Nvidia and those that are able to scale AI services (e.g. hyperscalers) dominate AI economics.

Without an AI champion of its own, Japan has few options if it wants to avoid being left behind in the new digital paradigm…

…Based on our discussion so far, does it surprise you that the Japanese yen has been weak?

“According to an analysis by Mizuho Research & Technologies, if the digital deficit doubles from the 2023 level by the end of March 2026, it will add another 5 to 6 yen of depreciation in the Japanese currency’s value against the dollar.”

– Nikkei Asian Review

Or let me put it another way — would you feel bullish about the currency of a country that relies on tourism as its primary growing surplus, while ultimately funneling all those earnings (and more) into paying for essential energy imports and ever-increasing digital spend on big tech?…

…In recent years we’ve seen how hard Japan has been trying to reclaim its position in the semiconductor industry. But does it only care about hardware and not its digital sovereignty? Will Japan continue to sit back and let US tech giants profit endlessly, or will it finally confront its position as a digital colony?

3. Guyana and the mystery of the largest ranch in the Americas – Swen Lorenz

Many mistakenly believe that Guyana is located in Africa – when it’s actually nestled right next to Venezuela…

…In 2015, ExxonMobil discovered oil off the coast of Guyana.

The discovery changed the course of the country. Long one of the poorest nations of the Western hemisphere, Guyana has since become the world’s fastest growing economy.

Since 2015, its GDP per capita has more than quintupled. In 2022 and 2023, its economy grew by 67% and 33%, respectively. Another stunner of a year is forecast for 2024, with 34% GDP growth.

The former British colony benefits from a large amount of oil wealth spread around a relatively small population of 800,000 people. Per head, there is twice as much oil as in Saudi Arabia. To put things in perspective, Guyana’s landmass is nearly as big as the UK, but it only has 1.2% of the UK’s population…

…Just a week ago, ExxonMobil reported that it had reached 500m barrels of oil produced in Guyana since output began in 2019. The goal is to lift production to 1.3m barrels per day by 2027, up from currently 650,000 barrels. In comparison, the UK’s North Sea produces just 1m barrels per day…

…Supporters of the country’s energy projects claim that they will bring untold riches to the population. Indeed, Guyana recently started to hand out cheques to its citizens, including the Guyanese diaspora of 400,000 people, who the government encourages to come back as it needs more labour to support the strong economic growth.

4. Capital, Compute & AI Scaling – Patrick O’Shaughnessy, Chetan Puttagunta, and Modest Proposal

Modest

Everyone knows the Mag 7 represent a larger percent of the S&P 500 today. But beyond that, I think thematically AI has permeated far broader into industrials, into utilities and really makes up, I would argue, somewhere between 40 and 45% of the market cap as a direct play on this. And if you even abstract to the rest of the world, you start bringing in ASML, you bring in TSMC, you bring in the entire Japanese chip sector. And so if you look at the cumulative market cap that is a direct play on artificial intelligence right now, it’s enormous…

… I think at the micro level this is a really powerful shift if we move from pre-training to inference time and there are a couple big ramifications.

One, it better aligns revenue generation and expenditures. I think that is a really, really beneficial outcome for the industry at large. In the pre-training world, you were going to spend $20, $30, $40 billion on CapEx, train the model over 9 to 12 months, do post-training, then roll it out, then hope to generate revenue off of that in inference. In a test-time compute scaling world, you are now aligning your expenditures with the underlying usage of the model. So just from pure efficiency and scalability on the financial side, this is much, much better for the hyperscalers.

I think a second big implication—again, we have to say we don’t know that pre-training scaling is going to stop—but if you do see this shift towards inference time, I think you need to start to think about how you re-architect the network design. Do you need million-chip superclusters in locations with low-cost energy and land, or do you need smaller, lower-latency, more efficient inference-time data centers scattered throughout the country? And as you re-architect the network, what are the implications for power utilization and grid design?

A lot of the, I would say, narratives that have underpinned huge swaths of the investment world I think have to be rethought and I would say today because this is a relatively new phenomenon, I don’t believe that the public markets have started to grapple with what that potential new architecture looks like and how that may impact some of the underlying spend…

Chetan

But at the moment, at this plateauing time, we’re starting to see these small teams catch up to the frontier. And what I mean by frontier is where the state-of-the-art models, especially around text, are performing. We’re seeing these small teams of quite literally two to five people jumping to the frontier with spend that is not one order, but multiple orders of magnitude less than what these large labs were spending to get there.

I think part of what’s happened is the incredible proliferation of open-source models. Specifically, what Meta’s been doing with LLaMA has been an extraordinary force here. LLaMA 3.1 comes in three flavors, 405 billion, 70 billion, 8 billion. And then LLaMA 3.2 comes in 1 billion, 3 billion, 11 billion, and 90 billion.

And you can take these models, download them, put them on a local machine, you can put them in a cloud, you can put them on a server, and you can use these models to distill, fine-tune, train on top of, modify, et cetera, et cetera, and catch up to the frontier with pretty interesting algorithmic techniques.

And because you don’t need massive amounts of compute, or you don’t need massive amounts of data, you could be particularly clever and innovative about a specific vertical space, or a specific technique, or a particular use case to jump to the frontier very, very quickly…

…Chetan

The force of Llama today has been two things, and I think this has been very beneficial to Meta. One is that the transformer architecture Llama is using is a fairly standard architecture, but it has its own nuances.

And if the entire developer ecosystem that’s building on top of Llama starts to assume that the Llama 3 transformer architecture is the foundational and standard way of doing things, it standardizes the entire stack towards this Llama way of thinking, all the way from how the hardware vendors will support your training runs to the hyperscalers and on and on. And so standardizing on Llama itself is becoming more and more prevalent.

And so if you were to start a new model company, what ends up happening is starting with Llama today is not only great because Llama is open source, it’s also extraordinarily efficient because the entire ecosystem is standardizing on that architecture…

…Modest

So I think the interesting part for OpenAI was that they just raised their recent round, and there was some fairly public commentary around what the investment case was. You’re right, a lot of it was oriented around the idea that they had escape velocity on the consumer side, that ChatGPT was now the cognitive reference, and that over time they would be able to aggregate an enormous consumer demand side and charge appropriately for it—and that it was much less a play on the enterprise API and application building.

And that’s super interesting if you actually play out what we’ve talked about when you look at their financials, if you take out training runs, if you take out the need for this massive upfront expenditure, this actually becomes a wildly profitable company quite quickly in their projections. And so in a sense it could be better.

Now then the question becomes what’s the defensibility of a company that is no longer step function advancing on the frontier?…

…Chetan

These products are truly, as a software investor, absolutely amazing.

They require a total rethinking from first principles on how these things are architected. You need unified data layers, you need new infrastructure, you need new UI and all this kind of stuff. And it’s clear that the startups are significantly advantaged against incumbent software vendors. And it’s not that the incumbent software vendors are standing still, it’s just that innovator’s dilemma in enterprise software is playing out much more aggressively in front of our eyes today than it is in consumer.

I think in consumer, the consumer players recognize it, are moving on it, and are doing stuff about it. Whereas in enterprise, even if you recognize it, even if you have the desire to do something, the solutions are just not built in a way that is responsive to dramatic re-architecture. Now could we see this happening? Could a giant SaaS company just pause selling for two years and completely re-architect its application stack?

Sure, but I just don’t see that happening. And so if you look at any sort of analysis of what’s happening in AI software spend, it’s something like 8x year-over-year growth between 2023 and 2024 on pure spend. It’s gone from a couple of hundred million dollars to well over a billion in just a year’s time…

…Modest

If you listen to AWS, one of the fascinating things they say is they call AWS a logistics business.

I don’t think anyone externally would sort of look at cloud computing and say, oh yeah, that’s a logistics business. But their point is essentially what they have to do is they have to forecast demand and they have to build supply on a multi-year basis to accommodate it.

And over 20 years they’ve gotten extraordinarily good at that. What has happened in the last two years—and I talked about this last time—is that you have had an enormous surge in demand hitting inelastic supply, because you can’t build data center capacity in three weeks. And so if you get back to a more predictable cadence of demand, they can look at it and say, okay, we know now where the revenue generation is coming from.

It’s coming from test time, it’s coming from Chetan and his companies rolling out. Now we know how to align supply with that. Now it’s back to a logistics business. Now it’s not grab every mothballed nuclear site in the country and try to bring it online.

And so instead of this land grab, I think you get a more reasonable, sensible, methodical rollout of it maybe. And I actually would guess that if this path is right, that inference overtakes training much faster than we thought and gets much bigger than we may have suspected.

But I think the path there in the network design is going to look very different and it’s going to have very big ramifications for the people who were building the network, who were powering the network, who were sending the optical signals through the network. And all of that, I think, has not really started to come up in the probability-weighted distributions of a huge chunk of the public market.

And look, I think most people overly fixate on NVIDIA because they are sort of the poster child of this, but there are a lot of people downstream from NVIDIA that will probably suffer more because they have inferior businesses. NVIDIA is a wonderful business doing wonderful things. They just happen to have seen the largest surge in surplus. I think that there are ramifications far, far beyond who is making the bleeding edge GPU, even though I do think there will be questions about, okay, does this new paradigm of test time compute allow for customization at the chip level much more than it would have if we were only scaling on pre-train…

…Modest

If you think about a training exercise, you’re trying to utilize the chips at the highest possible rate for a long period of time. So you’re trying to put 50,000 or 100,000 chips in a single location and utilize them at the highest rate possible for nine months. What’s left behind is a hundred-thousand-chip cluster that, if you were to repurpose it for inference, is arguably not the most efficient build, because inference is peaky and bursty and not consistent.

And so this is what I’m talking about that I just think from first principles you are going to rethink how you want to build your infrastructure to service a much more inference focused world than a training focused world. And Jensen has talked about the beauty of NVIDIA is that you leave behind this in place infrastructure that can then be utilized.

And in a sunk cost world you say, sure, of course if I’m forced to build a million chip supercluster in order to train a $50 billion model, I might as well sweat the asset when I’m done. But from first principles it seems clear you would never build a 350,000 chip cluster with 2 1/2 gigawatts of power in order to service the type of request that Chetan’s talking about.
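The cluster figures Modest cites imply a striking power budget per chip. This is my own arithmetic on his numbers, with cooling, networking, and other facility overhead all counted in the total:

```python
# Rough power-per-chip implied by the cluster figures above.
chips = 350_000
power_watts = 2.5e9  # 2.5 gigawatts for the whole site
per_chip = power_watts / chips
print(round(per_chip))  # ~7143 W per chip, all facility overhead included
```

At roughly 7 kW per accelerator all-in, it becomes clearer why siting such a cluster is as much a power-grid problem as a computing problem.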

And so if you end up with much more edge computing with low latency and high efficiency, what does that mean for optical networking? What does that mean for the grid? What does that mean for the need for on site power versus the ability to draw from the local utility?…

…Chetan

There’s a semiconductor company called Cerebras, and they recently announced that for inference on Llama 3.1 405B, their hardware can generate 900-plus tokens per second, which is a dramatic, order-of-magnitude increase—I think it’s something like 70 or 75 times faster than GPUs for inference, as an example. And so as we move to the inference world, at the semiconductor layer, the networking layer, et cetera, there are tons of opportunities for startups to really differentiate themselves…

…Modest

On a less dramatic view, the way I think about this: there’s AlphaGo, which famously made that move no one had ever seen—I think it’s move 37—that everybody was super confused about, and it ended up winning. And another example I love is Noam Brown, because I like poker. He talked about how his poker bot—it was playing high-stakes no-limit—continually over-bet with dramatically larger sizes than pros had ever seen before.

And he thought the bot was making a mistake. And ultimately it destabilized the pros so much—think about that: a computer destabilized humans in their approach—that they have now, to some extent, taken over-betting into their own game.

And so those are two examples where—if we think about pre-training being bounded by the data set we’ve given it, absent synthetic data generation capabilities—algorithms did something outside the bounds of human knowledge. And that’s what’s always been confusing to me about the idea that LLMs on their own could get to superintelligence: functionally, they’re bounded by the amount of data we give them up front.

5. Will China Take Over the Global Auto Industry? – Brad Setser

China has, according to the New York Times, the capacity to produce over 40 million internal combustion engine (ICE) cars a year.

Goldman Sachs thinks China will also have the capacity to produce around 20 million electric vehicles by the end of 2024…

…China’s internal market is around 25 million cars, and not really growing—so rising domestic EV sales progressively free up internal combustion engine capacity for export. Domestic demand for traditional cars is likely to be well under 10 million cars next year given the enormous shift toward EVs now underway inside China…

…Historically, the autos market has been largely regional (setting aside trade in luxury cars, where volumes are smaller). Most cars sold in China were made in China, most cars sold in Europe are produced in Europe, most cars sold in North America are produced in North America, and so on. The U.S. did import a few million cars, on net, from Asia, and China imported a million or so luxury cars from Europe, but those were the exceptions rather than the rule.

That could change, absent hefty restrictions on Chinese auto imports (like the 100 percent tariff the U.S. now levies on EVs imported from China).

The global market—with massive overcapacity in China’s internal combustion engine (ICE) sector, massive capacity expansion in China’s EV sector, effectively unlimited credit for Chinese manufacturing firms from China’s state banks, and a Chinese yuan that is weaker against the dollar than it was back in 2008—is pushing for global auto manufacturing to become more like global electronics manufacturing, with a concentration of global production in a single region and, for that matter, a single country…

…Overcapacity in China’s automotive sector is not, in fact, all that new.

China’s traditional automotive sector was dominated by the joint ventures (“JVs”) formed by the large foreign firms and their (typically state-owned) Chinese partners. Chinese auto demand took off after the global financial crisis, and global firms responded by massively expanding their Chinese production capacity—as only the German luxury marques were interested in paying the 25 percent tariff and supplying the Chinese market from abroad.

But demand growth eventually slowed, and by 2018, the Wall Street Journal was reporting that the Chinese market was oversupplied…

…China’s EV industry—like EV industries in the U.S. and Europe—initially received substantial state backing. Chinese EV manufacturers benefitted from downstream subsidies that built out China’s battery and battery chemical industry, as well as access to the world’s cheapest steel.
EV firms benefitted from cheap state financing—both equity injections from a myriad of state-backed funds and loans from state banks who (still) have to meet lending quotas.

Moreover, China was quite explicitly protectionist in the application of its “consumer” EV subsidies.

Only EVs that were on state lists of qualifying vehicles were eligible for the subsidy, and the subsidy was only provided to cars that were made in China…

…And initially, only cars that were made in China with a battery made in China by a Chinese firm qualified for the lists…

…The only exception to the basic rule that qualifying for the list required using a battery made in China by a Chinese firm only confirmed the broad pattern of discrimination: Chinese-owned Volvo was allowed to use a Korean battery in one of its early EVs.

State support has not disappeared in any way as China’s EV industry took off. Looking at direct cash subsidies from the central government to the manufacturers misses the myriad of ways China, Inc. helps out firms producing in China…

…Nio received a significant ($1.9 billion) equity investment from the City of Hefei and the Province of Anhui, helping to offset ongoing losses. That equity injection was on top of state support for a factory in Hefei, which The New York Times reports was effectively a gift from the local government.

“‘The local government provided the land and the building’, said Ji Huaqiang, Nio’s vice president for manufacturing. ‘Nio does not own the factory or the land — it is renting, but the factory was custom built for Nio’”

That kind of support explains how Nio managed to build out its EV capacity even when its existing factories weren’t really being used that much:

“Nio’s two factories give it the capacity to assemble 600,000 cars a year, even though its annual rate of sales this autumn [2023] is only about 200,000 cars. Nio is nonetheless already building a third plant.”…

…What’s even more striking is that the investments that built out China’s EV capacity came in a market that was already saturated with modern auto production capacity. That kind of investment wouldn’t have taken place without state guidance and support—support that was intended both to develop an indigenous Chinese industry (see Made in China 2025) and to support a green transition that would reduce Chinese dependence on imported fossil energy. It was the result of policy driven by the central government and backed financially by all levels of government. It also worked: China is now the world leader in EVs and batteries…

…If the world’s global firms can only compete with Chinese firms by using Chinese batteries and Chinese parts, that will hollow out much of the automotive industries of Europe and North America—a European brand on a Chinese-made car with a Chinese battery and drive train won’t sustain the current European auto supply chain or current European employment in the auto industry.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML, Meta, and TSMC. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2024 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q3 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2024’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management introduced multiple generative AI models in the Firefly family in 2024 and now has a generative video model; Adobe’s generative AI models are designed to be safe for commercial usage; the Firefly models are integrated across Adobe’s software products, which brings value to creative professionals across the world; Firefly has powered 16 billion generations (12 billion in 2024 Q2) since its launch in March 2023, and each month in 2024 Q3 set a new record in generations; the new Firefly video model is in limited beta, but has already gathered massive customer interest (the model has driven a 70% increase in Premiere Pro beta users since its introduction) and will be generally available in early 2025; recent improvements to the Firefly models include 4x faster image generation; enterprises such as Tapestry and Pepsi are using Firefly Services to scale content production; Firefly is the foundation of Adobe’s AI-related innovation; management is using Firefly to drive top-of-funnel user-acquisition for Adobe

2024 was also a transformative year of product innovation, where we delivered foundational technology platforms. We introduced multiple generative AI models in the Adobe Firefly family, including imaging, vector design and, most recently, video. Adobe now has a comprehensive set of generative AI models designed to be commercially safe for creative content, offering unprecedented levels of output quality and user control in our applications…

…The deep integration of Firefly across our flagship applications in Creative Cloud, Document Cloud, and Experience Cloud is driving record customer adoption and usage. Firefly-powered generations across our tools surpassed 16 billion, with every month this past quarter setting a new record…

…We have made major strides with our generative AI models with the introduction of Firefly Image Model 3 enhancements to our vector models, richer design models, and the all-new Firefly Video Model. These models are incredibly powerful on their own and their deep integration into our tools like Lightroom, Photoshop, Premiere, InDesign and Express have brought incredible value to millions of creative professionals around the world…

…The launch of the Firefly Video Model and its unique integration in Premiere Pro and limited public beta garnered massive customer interest, and we look forward to making it more broadly available in early 2025. This feature drove a 70% increase in the number of Premiere Pro beta users since it was introduced at MAX. Enhancements to Firefly image, vector, and design models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Premiere Pro and Adobe Express…

…Firefly Services adoption continued to ramp as enterprises such as Pepsi and Tapestry use it to scale content production, given the robust APIs and ease of creating custom models that are designed to be commercially safe…

…This year, we introduced Firefly Services. That’s been — that’s off to a great start. We have a lot of customers that are using that. A couple we talked about on the call include Tapestry. They’re using it for scaled content production. Pepsi, for their Gatorade brand, is enabling their customers to personalize any merchandise that they’re buying in particular, starting with Gatorade bottles. And these have been very, very productive for them, and we are seeing this leveraged by a host of other companies for everything from localization at scale to personalization at scale to user engagement or just raw content production at scale as well…

…You’re exactly right in terms of Firefly being a platform and a foundation that we’re leveraging across many different products. As we talked about, everything from Express and Lightroom and even Acrobat on mobile for a broad base, but then also in our core Creative products, Photoshop, Illustrator, Premiere. And as we’ve alluded to a number of times on this call, with the introduction of video, even a stand-alone offer for Firefly that we think will be more valuable from a tiering perspective there. And then into Firefly Services through APIs in connection to GenStudio. So we are looking at leveraging the power of this AI foundation in all the activities…

…We see that when we invest in mobile and web, we are getting some very positive signals in terms of user adoption and user conversion rate. So we’re using Firefly very actively to do that.

Adobe’s management has combined content and data in Adobe GenStudio to integrate content creation with marketing, leading to an end-to-end content supply chain solution; the Adobe GenStudio portfolio has a new addition in Adobe GenStudio for Performance Marketing, which has seen strong customer demand since becoming generally available recently; management is expanding the go-to-market teams to sell GenStudio solutions that cut across the Digital Media and Digital Experience segments and early success has been found, with management expecting acceleration in this pipeline throughout FY2025 and beyond

We set the stage to drive an AI content revolution by bringing content and data together in Adobe GenStudio, integrating high-velocity creative expression with enterprise activation. The release of Adobe GenStudio for Performance Marketing integrates Creative Cloud, Express, and Experience Cloud and extends our end-to-end content supply chain solution, empowering freelancers, agencies, and enterprises to accelerate the delivery of content, advertising and marketing campaigns…

…We have brought our Creative and Experience Clouds together through the introduction of Firefly Services and GenStudio, addressing the growing need for scaled content production in enterprises…

… GenStudio enables agencies and enterprises to unlock new levels of creativity and efficiency across content creation and production, workflow and planning, asset management, delivery and activation and reporting and insights. 

Adobe GenStudio for Performance Marketing is a great addition to the GenStudio portfolio, offering an integrated application to create paid social ads, display ads, banners, and marketing e-mails by leveraging preapproved on-brand content. It brings together creative teams that define the foundational requirements of a brand, including guidelines around brand voice, channels, and images with marketing teams that need to deliver numerous content variations with speed and agility. We are seeing strong customer demand for Adobe GenStudio for Performance Marketing since its general availability at MAX…

… We’re expanding our enterprise go-to-market teams to sell these integrated solutions that cut across Digital Media and Digital Experience globally under the new GenStudio umbrella. We have seen early success for this strategy that included Express and Firefly Services in Q4. As we enable our worldwide field organization in Q1, we anticipate acceleration of this pipeline throughout the rest of the year and beyond.

Adobe’s management introduced AI Assistant in Acrobat and Reader in FY2024; users of AI Assistant completed their document tasks 4x faster on average; AI Assistant is now available across desktop, web, and mobile; management introduced specialised AI for specific document types and tasks in 2024 Q3 (FY2024 Q4); management saw AI Assistant conversations double sequentially in 2024 Q3; AI Assistant is off to an incredibly strong start and management sees it continuing to accelerate; AI Assistant allows users to have conversations with multiple documents, some of which are not even PDFs, and it turns Acrobat into a general-purpose productivity platform; the rollout of AI Assistant in more languages and documents gives Acrobat’s growth more durability

We took a major step forward in FY ’24 with the introduction of AI Assistant in Acrobat and Reader. AI Assistant and other AI features like Liquid Mode and Firefly are accelerating productivity through faster insights, smarter document editing and integrated image generation. A recent productivity study found that users leveraging AI Assistant completed their document-related tasks 4x faster on average. AI Assistant is now available in Acrobat across desktop, web, and mobile and integrated into our Edge, Chrome, and Microsoft Teams extensions. In Q4, we continued to extend its value with specialized AI for contracts and scanned documents, support for additional languages, and the ability to analyze larger documents…

… We saw AI Assistant conversations double quarter-over-quarter, driving deeper customer value…

… AI Assistant for Acrobat is off to an incredibly strong start and we see it continuing to accelerate…

…One of the big things that I think has been unlocked this year is moving not just to looking at a PDF that you happen to be viewing, but being able to look at and have a conversation with multiple documents, some of which don’t even have to be PDFs. So that transition gives us the ability to really take Acrobat and make it more of a general-purpose productivity platform…

…The thing I’ll add to that is the durability of that, to your point, in languages, as we roll that out in languages, as we roll it out across multiple documents and as we roll it out in enterprises and B2B specifically. So again, significant headroom in terms of the innovation agenda of how Acrobat can be made even more meaningful as a knowledge tool within the enterprise.  

Adobe’s management will soon introduce a new higher-priced Firefly offering that includes the video models; management thinks the higher-priced Firefly offering will help to increase ARPU (average revenue per user); management sees video generation as a high-value activity, which gives Adobe the ability to introduce higher subscription tiers that come with video generation; management sees consumption of AI services adding to Adobe’s ARR (annual recurring revenue) in 2 ways in FY2025, namely, (1) pure consumption-based pricing, and (2) consumption leading to a higher pricing-tier; management has learnt from pricing experiments for AI services and found that the right model for Adobe is a combination of access to features and usage-limits

We will soon introduce a new higher-priced Firefly offering that includes our video models as a comprehensive AI solution for creative professionals. This will allow us to monetize new users, provide additional value to existing customers, and increase ARPU…

…Video generation is a much higher-value activity than image generation. And as a result, it gives us the ability to start to tier Creative Cloud more actively there…

…You’re going to see “consumption” add to ARR in 2 or maybe 3 ways, more so in ’25 than in ’24. The first, and David alluded to this, is the video offering, which will have pure consumption pricing associated with it. I think the second is in GenStudio for enterprises, and what they are seeing with respect to Firefly Services, which, again, I think David touched on, in terms of how much momentum we are seeing in that business. So that is, in effect, a consumption business as it relates to the enterprise, so I think that will also continue to increase. And then I think you’ll see us with perhaps a more premium-priced offering. So the intention is that consumption is what’s driving the increased ARR, but it may be as a result of a tier in the pricing rather than a consumption model where people actually have to monitor it. So it’s just another way, much like AI Assistant, of monetizing it, but it’s not like we’re going to be tracking every single generation for the user; it will just be at a different tier…

… What we’ve done over the last year, there’s been a bit of experimentation, obviously, in the core Creative applications. We’ve done the generative credits model. What we saw with Acrobat was this idea of a separate package and a separate SKU that created a tier through which people were able to access the feature. And as we learn from all of these, we think, as Shantanu had mentioned earlier, that the right tiering model for us is going to be a combination of access to certain features and usage limits. So the higher the tier, the more features you get and the more usage you get.

The Adobe Experience Platform (AEP) AI Assistant helps marketers automate tasks and generate new audiences and journeys

Adobe Experience Platform AI Assistant empowers marketers to automate tasks and generate new audiences and journeys. Adobe Experience Manager generates variations, provides dynamic and personalized content creation natively through AEM, enabling customers to deliver more compelling and engaging experiences on their websites.

Adobe’s management thinks there are 3 foundational differences in the company’s AI models and what the rest are doing, namely, (1) commercially safe models, (2) incredible control of the models, and (3) the integration of the models into products

The foundational difference between what we do and what everyone else does in the market really comes down to 3 things: one is commercially safe, the way we train the models; two is the incredible control we bake into the model; and three is the integration that we make with these models into our products, increasingly, of course, in our CC flagship applications but also in Express and Lightroom and these kinds of applications, and in Anil’s DX products as well. So that set of things is a critical part of the foundation and a durable differentiator for us as we go forward.

Adobe’s management is seeing that users are onboarded to products faster when using generative AI capabilities; management is seeing that users who use generative AI features have higher retention rates

We are seeing in the core Creative business, when people try something like Photoshop, the onboarding experience is faster to success because of the use of generative AI and generative capabilities. So you’ll start to see us continuing to drive more proliferation of those capabilities earlier in the user journeys, and that has been proven very productive. But we’ve also noticed that, while we’ve always had good retention rates, the more people use generative AI, the longer they retain as well.

MongoDB (NASDAQ: MDB)

MongoDB’s management is seeing a lot of large customers want to run workloads, even AI workloads, in on-premise format

We definitely see lots of large customers who are very, very committed to running workloads on-prem. We even see some customers who want to run AI workloads on-prem…

… I think you have some customers who are very committed to running a big part of the estate on-prem. So by definition, then if they’re going to build an AI workload, it has to be run on-prem, which means that they also need access to GPUs, and they’re doing that. And then other customers are leveraging basically renting GPUs from the cloud providers and building their own AI workloads.    

MongoDB’s initiative to accelerate legacy app modernisation with AI (Relational Migrator) has seen a 50% reduction in the cost to modernise in its early days; customer interest in this initiative is exceeding management’s expectations; management expects modernisation projects to include large services engagements and MongoDB is increasing its professional services delivery capabilities; management is building new tools to accelerate future modernisation efforts; management has growing confidence that the modernisation motion will be a significant growth driver for MongoDB in the long term; there are a confluence of events, including the emergence of generative AI to significantly reduce the time needed for migration of databases, that make the modernisation opportunity attractive for MongoDB; the buildout of MongoDB’s professional services capabilities will impact the company’s gross margin

We are optimistic about the opportunity to accelerate legacy app modernization using AI and are investing more in this area. As you recall, we ran a few successful pilots earlier in this year, demonstrating that AI tooling combined with professional services and our relational migrator product, can significantly reduce the time, cost and risk of migrating legacy applications on to MongoDB. While it’s early days, we have observed a more than 50% reduction in the cost to modernize. On the back of these strong early results, additional customer interest is exceeding our expectations. 

Large enterprises in every industry and geography are experiencing acute pain from their legacy infrastructure and are eager for more agile performance and cost-effective solutions. Not only are our customers excited to engage with us, they also want to focus on some of the most important applications in their enterprise, further demonstrating the level of interest and the size of the long-term opportunity.

As relational applications encompass a wide variety of database types, programming languages, versions and other customer-specific variables, we expect modernization projects to continue to include meaningful services engagements in the short and medium term. Consequently, we are increasing our professional services delivery capabilities, both directly and through partners. In the long run, we expect to automate and simplify large parts of the modernization process. To that end, we are leveraging the learnings from early service engagements to develop new tools to accelerate future modernization efforts. Although it’s early days and scaling our legacy app modernization capabilities will take time, we have increased conviction that this motion will significantly add to our growth in the long term…

…The reason we’re so excited about the opportunity to go after legacy applications is that there seems to be a confluence of events happening. One is that the increasing cost and tax of supporting and managing these legacy apps just keep going up. Second, for many customers who are in regulated industries, the regulators are calling the fact that they’re running on these legacy apps a systemic risk, so they can no longer kick the can down the road. Third, also because they can no longer kick the can down the road, some vendors are going end of life, so they have to make a decision to migrate those applications to a more modern tech stack. Fourth, because GenAI is so predicated on data, and to build a competitive advantage you need to leverage your proprietary data, people want to access that data and be able to do so easily. And so that’s another reason for them to want to modernize…

…we always could help them very easily move the data and map the schema from a relational schema to a document schema. The hardest part was essentially rewriting the application. Now with the advent of GenAI, you can significantly reduce the time. One, you can use GenAI to analyze the existing code. Two, you can use GenAI to reverse-engineer tests to test what the code does. And then three, you can use GenAI to build new code and then use these tests to ensure that the new code produces the same results as the old code. And so all that time and effort is suddenly cut in a meaningful way…
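
The three-step flow described above can be sketched as a simple equivalence-testing harness. This is an illustration of the idea, not MongoDB's Relational Migrator tooling: the function names (`legacy_total`, `modern_total`, `check_equivalence`) and the test cases are hypothetical stand-ins for what an LLM would analyze and generate.

```python
# Sketch of the equivalence-testing step: run the legacy routine and the
# GenAI-generated rewrite on the same generated test cases and compare results.

def legacy_total(orders):
    # Stand-in for a routine ported from a legacy relational app:
    # sums line totals, applying a 10% discount on orders over 100.
    total = 0.0
    for qty, price in orders:
        total += qty * price
    return total * 0.9 if total > 100 else total

def modern_total(orders):
    # Stand-in for the LLM-generated rewrite; it must behave identically.
    subtotal = sum(qty * price for qty, price in orders)
    return subtotal * 0.9 if subtotal > 100 else subtotal

def check_equivalence(old_fn, new_fn, cases, tol=1e-9):
    """Return the test cases where the two implementations disagree."""
    return [c for c in cases if abs(old_fn(c) - new_fn(c)) > tol]

# In the described workflow these cases would be reverse-engineered by GenAI;
# here they are hand-written for illustration.
cases = [[(1, 5.0)], [(10, 20.0)], [(3, 40.0), (2, 1.5)], []]
assert check_equivalence(legacy_total, modern_total, cases) == []
```

The point of the pattern is that the generated tests, not a human review, carry the burden of proving the rewrite matches legacy behavior.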

…We’re really building out that capacity in order to meet the demand that we’re seeing relative to the opportunity. We’re calling it out in particular because it has a gross margin impact, as that’s where it will typically show up.

MongoDB’s management thinks that the company’s database is uniquely suited for the query-rich and complex data structures commonly found in AI applications; AI-powered recommendation systems have to consider complex data structures, beyond a customer’s purchase history; MongoDB’s database unifies source data, metadata, operational data and vector data in one platform, providing a better developer experience; management thinks MongoDB is well-positioned for AI agents because AI agents that perform tasks need to interact with complex data structures, and MongoDB’s database is well-suited for this

MongoDB is uniquely equipped to query rich and complex data structures typical of AI applications. The ability of a database to query rich and complex data structures is crucial because AI applications often rely on highly detailed, interrelated and nuanced data to make accurate predictions and decisions. For example, a recommendation system doesn’t just analyze a single customer’s purchase but also considers their browsing history, peer group behavior and product categories, requiring a database that can query across these complex data structures. In addition, MongoDB’s architecture unifies source data, metadata, operational data and vector data in one platform, obviating the need for multiple database systems and complex back-end architectures. This enables a more compelling developer experience than any other alternative…

…When you think about agents, there’s a job, a project and then a task. Right now, the agents that are being rolled out are really focused on tasks, like, say, something from Sierra or some other companies that are rolling out agents. But you’re right, what they need to be able to do is deal with rich and complex data structures.

Now why is this important in AI? AI models don’t just look at isolated data points; they need to understand relationships, hierarchies and patterns within the data. They need to be able to essentially get real-time insights. For example, if you have a chatbot where a customer is trying to get an update on the order they placed 5 minutes ago because they may not have gotten any confirmation, your chatbot needs to be able to deal with real-time information. You need to be able to handle very advanced use cases: to do things like fraud detection, or to understand behaviors in the supply chain, you need to understand intricate data relationships. All these things are consistent with what MongoDB offers. And so we believe that, at the end of the day, we are well positioned to handle this.

And the other thing that I would say is that we’ve embedded, in a very natural way, search and vector search. So we’re not just an OLTP [online transaction processing] database. We do text search and vector search, and that’s all one experience. No other platform offers that, and we think we have a real advantage.
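
The "one experience" point above can be made concrete: in MongoDB Atlas, full-text search (`$search`) and vector search (`$vectorSearch`) both run as ordinary aggregation pipeline stages against the same collection. The sketch below builds two such pipelines; the index and field names (`default`, `vector_index`, `embedding`, `description`) and the query vector are hypothetical, and actually running them requires an Atlas cluster with those indexes defined.

```python
# Two pipelines against the same hypothetical collection: one text, one vector.

text_pipeline = [
    {"$search": {
        "index": "default",                      # Atlas Search index (assumed name)
        "text": {"query": "late order status", "path": "description"},
    }},
    {"$limit": 5},
]

vector_pipeline = [
    {"$vectorSearch": {
        "index": "vector_index",                 # Atlas Vector Search index (assumed name)
        "path": "embedding",                     # field holding the stored vectors
        "queryVector": [0.12, -0.07, 0.33],      # embedding of the user's question (toy values)
        "numCandidates": 100,                    # ANN candidates considered before ranking
        "limit": 5,
    }},
]

# With pymongo, either pipeline would run as: db.orders.aggregate(pipeline)
```

Because both are plain aggregation stages, results can be filtered, joined, and projected with the same operators as any other query, which is the consolidation argument the quote is making.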

In the AI market, MongoDB’s management is seeing most customers still being in the experimental stage, but the number of AI apps in production is increasing; MongoDB has thousands of AI apps on its platform, but only a small number have achieved enterprise-scale; there’s one AI app on MongoDB’s platform that has grown 10x since the start of 2024 and is a 7-figure workload today; management believes that as AI technology matures, there will be more AI apps that attain product-market fit but it’s difficult to predict when this will happen; management remains confident that MongoDB will capture its share of successful AI applications, as MongoDB is popular with developers building sophisticated AI apps; there are no compelling AI models for smartphones at the moment because phones do not have sufficient computing power

From what we see in the AI market today, most customers are still in the experimental stage as they work to understand the effectiveness of the underlying tech stack and build early proof-of-concept applications. However, we are seeing an increasing number of AI apps in production. Today, we have thousands of AI apps on our platform.  What we don’t yet see is many of these apps actually achieving meaningful product-market fit and therefore, significant traction. In fact, as you take a step back and look at the entire universe of AI apps, a very small percentage of them have achieved the type of scale that we commonly see with enterprise-specific applications. We do have some AI apps that are growing quickly, including one that is already a 7-figure workload that has grown 10x since the beginning of the year.

Similar to prior platform shifts as the usefulness of AI tech improves and becomes more cost-effective we will see the emergence of many more AI apps that do nail product market fit, but it’s difficult to predict when that will happen more broadly. We remain confident that we will capture our fair share of these successful AI applications as we see that our platform is popular with developers building more sophisticated AI use cases…

…Today, we don’t have a very compelling model designed for our phones, right? Because today, the phones don’t have the computing horsepower to run complex models. So you don’t see a ton of very, very successful consumer apps besides, say, ChatGPT or Claude.

MongoDB’s management is building enterprise-grade Atlas Vector Search functionality into the company’s platform so that MongoDB will be in an even better position to win AI opportunities; management is bringing vector search into MongoDB’s community and EA (Enterprise Advance, which is the company’s non-Atlas business) offerings

We continue investing in our product capabilities, including enterprise-grade Atlas Vector Search functionality, to build on this momentum and even better position MongoDB to capture the AI opportunity. In addition, as previously announced, we are bringing search and vector search to our community and EA offerings, leveraging our run-anywhere competitive advantage in the world of AI…

…We are investing in what we call our EA business. First, we’re starting by investing with Search and Vector Search in the community product. That does a couple of things for us. One, whenever anyone starts with MongoDB with the open source product, they get all the benefits of that complete and highly integrated platform. Two, those capabilities will then migrate to EA. So EA for us is an investment strategy.

MongoDB’s management is expanding the MongoDB AI Applications Program (MAAP); the MAAP has signed on new partners, including with Meta; management expects more of the MAAP workloads to happen on Atlas initially

We are expanding our MongoDB AI Applications program, or MAAP, which helps enterprise customers build and bring AI applications into production by providing them with reference architectures, integrations with leading tech providers and coordinated services and support. Last week, we announced a new cohort of partners, including McKinsey, Confluent, Capgemini and Instructure, as well as a collaboration with Meta to enable developers to build AI-enriched applications on MongoDB using Llama…

…[Question] On the MAAP program, are most of those workloads going to wind up in Atlas? Or will that be a healthy combination of EA and Atlas?

[Answer] I think it’s, again, early days. I would say — I would probably say more on the side of Atlas than EA in the early days. I think once we introduce Search and Vector Search into the EA product, you’ll see more of that on-prem. Obviously, people can use MongoDB for AI workloads using other technologies as well in conjunction with MongoDB for on-prem AI use cases. But I would say you’re probably going to see that happen first in Atlas.

Tealbook consolidated from Postgres, PG Vector, and Elastic Search to MongoDB; Tealbook has seen cost efficiencies and increased scalability with Atlas Vector Search for its application that uses generative AI to collect, verify and enrich supplier data across various sources

Tealbook, a supplier intelligence platform, migrated from Postgres, PG Vector and Elastic Search to MongoDB to eliminate technical debt and consolidate their tech stack. The company experienced workload isolation and scalability issues in PG Vector, and were concerned with search index inconsistencies, all of which were resolved with the migration to MongoDB. With Atlas Vector Search and dedicated Search Nodes, Tealbook has realized improved cost efficiency and increased scalability for its supplier data platform, an application that uses GenAI to collect, verify and enrich supplier data across various sources.

MongoDB’s partnerships with all 3 major cloud providers – AWS, Azure, and GCP – for AI workloads are going well; management expects the cloud providers to bundle their own AI-focused database offerings with their other AI offerings, but management also thinks the cloud providers realise that MongoDB has a better offering and it’s better to partner with the company

With AWS, as you said, they just had their re:Invent last week. It remains very, very strong. We closed a ton of deals this past quarter, some of them very, very large deals. We’re doing integrations to some of the new products like Q and Bedrock, and the engagement in the field has been really strong.

On Azure, as I’ve shared in the past, we got off to a little bit of a slower start. But in the words of the person who runs their partner leadership, the Azure-MongoDB relationship has never been stronger. We closed a large number of deals, we’re part of what’s called the Azure Native ISV Service program, and we have a bunch of deep integrations with Azure, including Fabric, Power BI, Visual Studio, Semantic Kernel and Azure OpenAI Studio. And we’re also one of Azure’s largest marketplace partners.

And with GCP, we’ve actually seen some uptick in terms of co-sales this past quarter. GCP made some comp changes that were favorable to working with MongoDB, we saw some results in the field, and we’re focused on closing a handful of large deals with GCP in Q4. So in general, I would say things are going quite well.

And then in terms of, I guess, the implication of your question, whether the hyperscalers are potentially bundling things along with their AI offerings: candidly, since day 1, the hyperscalers have been bundling their database offerings with every offering that they have. And that’s been their predominant strategy. And I think we’ve executed well against that strategy because databases are not a by-the-way decision. It’s an important decision. And I think the hyperscalers are seeing our performance and realize it’s better to partner with us. And as I said, customers understand the importance of the data layer, especially for AI applications. And so the partnership across all 3 hyperscalers is strong.

A new MongoDB AI-related capability called Atlas Search Nodes is seeing very high demand; Atlas Search is being used by one of the world’s largest banks to provide a Google-like Search experience on payments data for customers; an AI-powered accounting software provider is using Atlas Search to allow end-users to perform ad-hoc analysis

On search, we introduced a new capability called Atlas Search Nodes, where you can asymmetrically scale your search nodes. If you have a search-intensive use case, you don’t have to scale all your nodes, which can become quite expensive. And we’ve seen that this kind of groundbreaking capability has been really well received. The demand is quite high, because customers like that they can tune the configuration to the unique needs of their search requirements.

One of the world’s largest banks is using Atlas Search to provide a Google-like search experience on payments data for massive corporate customers. It’s a customer-facing application, so performance and scalability are critical. A leading provider of AI-powered accounting software uses Atlas Search to power its invoice analytics feature, which allows end users on finance teams to perform ad hoc analysis and easily find past-due invoices and invoices that contain errors.

Vector Search is only in its first full year of being generally available; uptake of Vector Search has been very high; MongoDB released a feature on Atlas Vector Search in 2024 Q3 that reduces memory requirements by up to 96% and this helps Atlas Vector Search support larger vector workloads at a better price-performance ratio; a multinational news organisation used Vector Search to create a generative AI tool to help producers and journalists sift through vast quantities of information; a security firm is using Vector Search for AI fraud; a global media company replaced Elastic Search with Vector Search for a user-recommendation engine

On Vector Search, it’s been our first full year since going generally available, and product uptake has been very, very high. In Q3, we released quantization for Atlas Vector Search, which reduces memory requirements by up to 96%, allowing us to support larger vector workloads with vastly improved price performance.

For example, a multinational news organization created a GenAI-powered tool designed to help producers and journalists efficiently search, summarize and verify information from vast and varied data sources. A leading security firm is using Atlas Vector Search for AI fraud detection, and a leading global media company replaced Elasticsearch with a hybrid search and vector search use case for a user recommendation engine built to suggest articles to end users.
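
The memory arithmetic behind the "up to 96%" quantization figure above can be sketched with binary quantization: keeping one sign bit per dimension instead of a 32-bit float is a 32x reduction, about 96.9%. This is an illustration of the general idea, not Atlas's implementation; the vector size and values are arbitrary.

```python
# Back-of-the-envelope sketch: float32 storage vs. 1-bit binary quantization.

def binary_quantize(vector):
    """Pack the sign bit of each dimension into a bytes object."""
    bits = 0
    for i, x in enumerate(vector):
        if x >= 0:
            bits |= 1 << i
    nbytes = (len(vector) + 7) // 8   # round up to whole bytes
    return bits.to_bytes(nbytes, "little")

dims = 1024
full_bytes = dims * 4                          # float32: 4 bytes per dimension -> 4096
quantized = binary_quantize([1.0, -2.0] * (dims // 2))
reduction = 1 - len(quantized) / full_bytes    # 1 - 128/4096 = 0.96875
print(full_bytes, len(quantized), reduction)
```

Quantized vectors are typically used for the fast candidate-selection pass, with full-precision vectors retained for rescoring, which is why the saving applies to working memory rather than all storage.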

MongoDB’s management thinks the industry is still in the very early days of shifting towards AI applications

I do think we’re in the very, very early days. They’re still learning and experimenting… I think as people get more sophisticated with AI, as the AI technology matures and becomes more and more useful, you’ll start seeing these applications take off. I kind of chuckle that today, I see more senior leaders bragging about the chips they are using than about the apps they’re building. So it just tells you that we’re still in the very, very early days of this big platform shift.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue again had incredibly strong growth in 2024 Q3, driven by demand for the Hopper GPU computing platform; Nvidia’s H200 sales achieved the fastest ramp in the company’s history

Another record was achieved in Data Center. Revenue of $30.8 billion, up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company’s history.

Nvidia’s H200 product has 2x faster inference speed, and 50% lower total cost of ownership (TCO)

The H200 delivers up to 2x faster inference performance and up to 50% improved TCO. 

Cloud service providers (CSPs) were half of Nvidia’s Data Centre revenue in 2024 Q3, and up more than 2x year-on-year; CSPs are installing tens of thousands of GPUs to meet rising demand for AI training and inference; Nvidia Cloud Instances with H200s are now available, or soon-to-be-available, in the major CSPs

Cloud service providers were approximately half of our Data Center sales with revenue increasing more than 2x year-on-year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave and Microsoft Azure with Google Cloud and OCI coming soon.

North America, India, and Asia Pacific regions are ramping up Nvidia Cloud Instances and sovereign clouds; management is seeing an increase in momentum of sovereign AI initiatives; India’s CSPs are building data centers containing tens of thousands of GPUs and increasing GPU deployments by 10x in 2024 compared to a year ago; Softbank is building Japan’s most powerful AI supercomputer with Nvidia’s hardware 

Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year-on-year as North America, India, and Asia Pacific regions ramped NVIDIA Cloud instances and sovereign cloud build-outs…

…Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India’s leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x…

…In Japan, SoftBank is building the nation’s most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA.

Nvidia’s revenue from consumer internet companies more than doubled year-on-year in 2024 Q3

Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. 

Nvidia’s management sees Nvidia as the largest inference platform in the world; Nvidia’s management is seeing inference really starting to scale up for the company; models that are trained on previous generations of Nvidia chips run inference really well on those chips; management thinks that as Blackwell proliferates in the AI industry, it will leave behind a large installed base of infrastructure for inference; management’s dream is that plenty of AI inference happens across the world; management thinks that inference is hard because it needs high accuracy, high throughput, and low latency

NVIDIA’s Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements…

…We’re seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind it a large installed base of extraordinary infrastructure for inference. And so we’re seeing inference demand go up…

… Our hope and dream is that someday, the world does a ton of inference. And that’s when AI has really succeeded: when every single company is doing inference inside their companies for the marketing department and forecasting department and supply chain group and their legal department and engineering, of course, and coding of course. And so we hope that every company is doing inference 24/7…

…Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible, but you also need the latency to be low. And computers that are high-throughput as well as low-latency are incredibly hard to build.
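Huang’s tension between throughput and latency can be sketched with a toy batching model. All numbers below are invented for illustration, not real GPU measurements; the mechanism is simply that batching amortises a fixed per-step overhead (raising throughput) while making every user wait for the bigger step (raising latency).

```python
# Toy model of the inference throughput/latency trade-off described above.
# The overhead and per-sequence costs are assumed values, not measurements.

def step_time_ms(batch_size, fixed_overhead_ms=20.0, per_seq_ms=1.5):
    """Time for one decoding step: a fixed overhead amortised across
    the batch, plus a per-sequence compute cost."""
    return fixed_overhead_ms + per_seq_ms * batch_size

def tokens_per_second(batch_size):
    """Throughput: each step emits one token per sequence in the batch."""
    return batch_size / step_time_ms(batch_size) * 1000.0

def per_token_latency_ms(batch_size):
    """Latency seen by any single user: the full step time."""
    return step_time_ms(batch_size)

for b in (1, 8, 64):
    print(f"batch={b}: {tokens_per_second(b):.1f} tok/s, "
          f"{per_token_latency_ms(b):.1f} ms/token")
```

In this toy model, growing the batch from 1 to 64 raises throughput by roughly an order of magnitude while multiplying per-token latency several times over, which is exactly the trade-off that makes serving both cheap and responsive so hard.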

Nvidia’s management has driven a 5x improvement in Hopper inference throughput in 1 year via advancements in the company’s software; Hopper’s inference performance is set to increase by a further 2.4x shortly because of NIM (Nvidia Inference Microservices)

Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in 1 year and cut time to first token by 5x. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x. 

Nvidia’s Blackwell family of chips is now in full production; Nvidia shipped 13,000 Blackwell samples to customers in 2024 Q3; the Blackwell family comes with a wide variety of customisable configurations; management sees all Nvidia customers wanting to be first to market with the Blackwell family; management sees staggering demand for Blackwell, with Oracle announcing the world’s first zetta-scale cluster with more than 131,000 Blackwell GPUs, and Microsoft being the first CSP to offer private-preview Blackwell instances; Blackwell is dominating GPU benchmarks; Blackwell performs 2.2x better than Hopper and is also 4x cheaper; Blackwell with NVLink Switch delivered up to a 30x improvement in inference speed; Nvidia’s management expects the company’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; Blackwell production is full steam ahead and Nvidia will deliver more Blackwells in 2024 Q4 than expected; demand for Blackwell exceeds supply

Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full stack, full infrastructure, AI data center scale system with customizable configurations needed to address a diverse and growing AI market from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches, and NVLink and from liquid cooled to air cooled. 

Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers. We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world’s first zetta-scale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.

Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Only 64 Blackwell GPUs are required to run the GPT-3 benchmark, compared to 256 H100s, a 4x reduction in cost. NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling, throughput and response time that is excellent for running new reasoning inference applications like OpenAI’s o1 model…

…As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s…

… Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated…

…It is the case that demand exceeds our supply. And that’s expected as we’re in the beginnings of this generative AI revolution as we all know…

…In terms of how much Blackwell total systems will ship this quarter, which is measured in billions, the ramp is incredible…

…[Question] Do you think it’s a fair assumption to think NVIDIA could recover to kind of mid-70s gross margin in the back half of calendar ’25?

[Answer] Yes, I think it is a reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.  

Nvidia’s management is seeing that hundreds of AI-native companies are already delivering AI services and thousands of AI-native startups are building new services

Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor and the Bridge are seeing great success while thousands of AI-native start-ups are building new services. 

Nvidia’s management is seeing large enterprises build copilots and AI agents with Nvidia AI; management sees the potential for billions of AI agents being deployed in the years ahead; Accenture has an internal AI agent use case that reduces steps in marketing campaigns by 25%-35%

Industry leaders are using NVIDIA AI to build Copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP and ServiceNow are racing to accelerate development of these applications with the potential for billions of agents to be deployed in the coming years…

… Accenture with over 770,000 employees, is leveraging NVIDIA-powered agentic AI applications internally, including 1 case that cuts manual steps in marketing campaigns by 25% to 35%.

Nearly 1,000 companies are using NIM (Nvidia Inference Microservices); management expects the Nvidia AI Enterprise platform’s revenue in 2024 to be double that from 2023; Nvidia’s software, service, and support revenue now has an annualised revenue run rate of $1.5 billion and management expects the run rate to end 2024 at more than $2 billion

Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full-year revenue to increase over 2x from last year and our pipeline continues to build. Overall, our software, service and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.

Nvidia’s management is seeing an acceleration in industrial AI and robotics; Foxconn is using Nvidia Omniverse to improve the performance of its factories, and Foxconn’s management expects a reduction of over 30% in annual kilowatt hour usage in Foxconn’s Mexico facility

Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world, like NVIDIA NeMo for enterprise AI agents. We built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics…

…Foxconn, the world’s largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

Nvidia saw sequential growth in Data Center revenue in China because of export of compliant Hopper products; management expects the Chinese market to be very competitive

Our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries…

…We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.

Nvidia’s networking revenue declined sequentially, but there was sequential growth in Infiniband and Ethernet switches, Smart NICs (network interface controllers), and BlueField DPUs; management expects sequential growth in networking revenue in 2024 Q4; management is seeing CSPs adopting Infiniband for Hopper clusters; Nvidia’s Spectrum-X Ethernet for AI revenue was up 3x year-on-year in 2024 Q3; xAI used Spectrum-X for its 100,000 Hopper GPU cluster and achieved zero application latency degradation and maintained 95% data throughput, compared to 60% for Ethernet

Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs and BlueField DPUs. Though networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters.

NVIDIA Spectrum-X Ethernet for AI revenue increased over 3x year-on-year. And our pipeline continues to build with multiple CSPs and consumer Internet companies planning large cluster deployments. Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI’s Colossus 100,000 Hopper supercomputer experienced 0 application latency degradation and maintained 95% data throughput versus 60% for traditional Ethernet…

…Our ability to sell our networking with many of our systems that we are doing in data center is continuing to grow and do quite well. So this quarter is just a slight dip down and we’re going to be right back up in terms of growing. We’re getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems we are providing them to.

Nvidia has begun shipping new GeForce RTX AI PCs

We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft’s Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation, and coding.

Nvidia’s Automotive revenue had strong growth year-on-year and sequentially in 2024 Q3, driven by self-driving brands of Nvidia Orin; Volvo’s electric SUV will be powered by Nvidia Orin

Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving brands of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.

Nvidia’s management thinks pre-training scaling of foundation AI models is intact, but it’s not enough; another way of scaling has emerged, which is inference-time scaling; management thinks that the new ways of scaling has resulted in great demand for Nvidia’s chips, but for now, most of Nvidia’s chips are used in pre-training 

Foundation model pretraining scaling is intact and it’s continuing. As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough, that we’ve now discovered 2 other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning from human feedback, but now we have reinforcement learning from AI feedback and all forms of synthetically generated data that assist in post-training scaling. And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI’s o1, which does inference-time scaling, what’s called test-time scaling. The longer it thinks, the better and higher-quality answer it produces. And it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect and so on and so forth…

… we now have 3 ways of scaling and we’re seeing all 3 ways of scaling. And as a result of that, the demand for our infrastructure is really great. You see now that at the tail end of the last generation of foundation models were at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pretraining scaling, post-training scaling, and then now very importantly, inference time scaling…

…[Question] Today, how much of the compute goes into each of these buckets? How much for the pretraining? How much for the reinforcement? And how much into inference today?

[Answer] Today, it’s vastly in pretraining a foundation model because, as you know, post-training, the new technologies are just coming online. And whatever you could do in pretraining and post-training, you would try to do so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori. And so you’ll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all 3 are scaling is actually very sensible based on where we are. And in the area of foundation models, now we have multimodal foundation models, and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we’re going to be scaling pretraining, post-training as well as inference-time scaling, which is the reason why I think we’re going to need more and more compute.

Nvidia’s management thinks the company generates the greatest possible revenue for its customers because its products have much better performance per watt

Most data centers are now 100 megawatts to several hundred megawatts, and we’re planning on gigawatt data centers, it doesn’t really matter how large the data centers are. The power is limited. And when you’re in the power-limited data center, the best — the highest performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual road map reduces cost. But on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues. 
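The power-limited argument reduces to simple arithmetic: when the power budget is fixed, performance per watt alone determines total throughput, and hence how much a facility can sell. A minimal sketch, with made-up figures:

```python
# Sketch of the power-limited data centre argument above. The power
# budget and perf-per-watt figures are illustrative assumptions only.

def datacenter_tokens_per_s(power_budget_mw, tokens_per_s_per_watt):
    """Total throughput of a power-limited facility: the entire budget
    is spent on compute, so throughput = watts * perf-per-watt."""
    return power_budget_mw * 1_000_000 * tokens_per_s_per_watt

budget_mw = 100   # a "100 megawatt" class facility, per the quote
older_gpu = 0.5   # assumed tokens/s per watt for an older accelerator
newer_gpu = 1.0   # assumed tokens/s per watt, 2x better perf/watt

# With the same fixed power budget, 2x perf/watt means 2x the sellable
# output, which is the revenue argument management is making.
print(datacenter_tokens_per_s(budget_mw, older_gpu))
print(datacenter_tokens_per_s(budget_mw, newer_gpu))
```

The point of the sketch is that in a power-limited facility, chip price matters less than perf/watt, because perf/watt is the only lever on total output.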

Nvidia’s management sees Hopper demand continuing through 2025

Hopper demand will continue through next year, surely the first several quarters of the next year. 

Nvidia’s management sees 2 fundamental shifts in computing happening today: (1) the movement from code that runs on CPUs to neural networks that run on GPUs and (2) the production of AI from data centres; the fundamental shifts will drive a $1 trillion modernisation of data centres globally

We are really at the beginnings of 2 fundamental shifts in computing that are really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that’s happening is $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.

On the other hand, secondarily, I guess, is that on top of these systems are going to be — we’re going to be creating a new type of capability called AI. And when we say generative AI, we’re essentially saying that these data centers are really AI factories. They’re generating something. Just like we generate electricity, we’re now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we’re going to see this new type of system come online, and I call it an AI factory because that’s really as close to what it is. It’s unlike a data center of the past.

Nvidia’s management does not see any digestion happening for GPUs until the world’s data centre infrastructure is modernised

[Question] My main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase? Or is it just too premature to discuss that because you’re just at the start of Blackwell?

[Answer] I believe that there will be no digestion until we modernize $1 trillion with the data centers.

Okta (NASDAQ: OKTA)

Okta AI is really starting to help newer Okta products

Second thing is that we have Okta AI, which we talked a lot about a couple of years ago, and we continue to work on that. And it’s really starting to help these new products like identity threat protection with Okta AI. The model inside of identity threat protection and how that works is AI is a big part of the product functionality. 

Okta’s management sees the need for authentication for AI agents and has a product called Auth for Gen AI; management thinks authentication of AI agents could be a new area of growth for Okta; management sees the pricing for Auth for Gen AI as driven by a fee per monthly active machine

Some really interesting new areas: we have something we talked about at Oktane called Auth for Gen AI, which is basically an authentication platform for agents. Everyone is very excited about agents, as they should be. I mean, we used to call them bots, right? 4, 5 years ago, they were called bots. Now they’re called agents, like what’s the big deal? How different is it? Well, you can interact with them in natural language and they can do a lot more with these models. So now it’s like bots are real in real time. But the problem is all of these bots and all of these platforms to build bots, they have the equivalent of the monitor sticky notes with passwords on them; they have the equivalent of that inside the bot. So there’s no protocol for single sign-on for bots. They have like stored passwords in the bot. And if that bot gets hacked, guess what? You signed up for that bot and it has access to your calendar and has access to your travel booking and it has access to your company e-mail and your company data; that’s gone because the hacker is going to get all those passwords out there. So Auth for Gen AI automates that and makes sure you can have a secure protocol to build a bot around. And so that’s a really interesting area. It’s very new. We just announced it and all these agent frameworks and so forth are new…

… Auth for Gen AI, it’s basically like — think about it as a form of machine authentication. So every time — we have this feature called machine-to-machine, which does a similar thing today, and you pay basically by the monthly active machine.
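To make the "monthly active machine" pricing concrete, here is a hypothetical sketch of how such metering could work for a machine-to-machine flow. None of the names below are Okta's actual API; they are invented for illustration, and real token issuance would be an OAuth-style client-credentials grant rather than the stand-in string used here.

```python
# Hypothetical sketch of monthly-active-machine metering for a
# machine-to-machine auth flow. All class and method names are
# invented for illustration; this is not Okta's API.
from collections import defaultdict

class M2MAuthMeter:
    def __init__(self):
        # month -> set of client IDs that requested at least one token
        self._active = defaultdict(set)

    def issue_token(self, client_id, month):
        """Record a token grant. A machine counts once per month, no
        matter how many tokens it requests in that month."""
        self._active[month].add(client_id)
        return f"token-for-{client_id}"  # stand-in for a real signed JWT

    def monthly_active_machines(self, month):
        """The billable quantity: distinct machines active this month."""
        return len(self._active[month])

meter = M2MAuthMeter()
meter.issue_token("billing-bot", "2024-11")
meter.issue_token("billing-bot", "2024-11")  # repeat grants don't double-count
meter.issue_token("crawler-7", "2024-11")
print(meter.monthly_active_machines("2024-11"))  # → 2
```

The design choice worth noting is that billing by distinct active machines (a set per month) rather than by token volume keeps costs predictable for chatty agents that refresh tokens frequently.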

Salesforce (NYSE: CRM)

Salesforce’s management thinks Salesforce is at the edge of the rise of digital labour, which are autonomous AI agents; management thinks the TAM (total addressable market) for digital labour is much larger than the data management market that Salesforce was previously in; management thinks Salesforce is the largest supplier of digital labour right from the get-go; Salesforce’s AgentForce service went into production in 2024 Q3 and Salesforce has already delivered 200 AgentForce deals with more to come; management has never seen anything like AgentForce; management sees AgentForce as the next evolution of Salesforce; management thinks AgentForce will help companies scale productivity independent of workforce growth; management sees AgentForce AI agents manifesting as robots that will supplement human labour; management sees AgentForce, together with robots, as a driving force for future global economic growth even with a stagnant labour force; AgentForce is already delivering tangible value to customers; Salesforce’s customers recently built 10,000 AI agents with AgentForce in 3 days, and thousands more AI agents have been built since then; large enterprises across various industries are building AI agents with AgentForce; management sees AgentForce unlocking a whole new level of operational efficiency; management will be delivering AgentForce 2.0 in December this year

We’re really at the edge of a revolutionary transformation. This is really the rise of digital labor. For the last 25 years at Salesforce, we’ve been helping companies to manage and share their information…

…But now we’ve really created a whole new market, a new TAM, a TAM that is so much bigger and so much more exciting than the data management market that it’s hard to get our head completely around. This is the market for digital labor. And Salesforce has become, right out of the gate here, the largest supplier of digital labor and this is just the beginning. And it’s all powered by these autonomous AI agents…

…With Salesforce Agentforce, we’re not just imagining this future. We’re already delivering it. And as you know, in the last week of the quarter, Agentforce went into production. We delivered 200 deals, and our pipeline is incredible for future transactions. We can talk about that with you on the call, but we’ve never seen anything like it. We don’t know how to characterize it. This is really a moment where productivity is no longer tied to workforce growth, but to this intelligent technology that can be scaled without limits. And Agentforce represents this next evolution of Salesforce. Salesforce is now a platform where AI agents work alongside humans in a digital workforce that amplifies and augments human capabilities and delivers with unrivaled speed…

…On top of the agentic layer, we’ll soon see a robotic layer as well, where these agents will manifest into robots…

…These agents are not tools. They are becoming collaborators. They’re working 24/7 to analyze data, make decisions, take action, and we can all start to picture this enterprise managing millions of customer interactions daily with Agentforce seamlessly resolving issues, processing transactions, anticipating customer needs, freeing up humans to focus on strategic initiatives and building meaningful relationships. And this is going to evolve into customers that we have, whether it’s a large hospital or a large hotel, where not only are the agents working 24/7, but robots are also working side-by-side with humans, robots as manifestations of agents. This is all happening before our eyes, and it isn’t just some far-off future. It’s happening right now…

…For decades, economic growth depended on expanding the human workforce. It was all about getting more labor. But with the labor force stagnating globally, Agentforce is unlocking a new path forward. It’s a new level of growth for the world and for our GDP, and businesses no longer need to choose between scale and efficiency; with agents, they can achieve both…

…Our customers are already experiencing this transformation. Agentforce is deflecting service cases and resolving issues, processing and qualifying leads, helping close more deals, and creating and optimizing marketing campaigns, all at an unprecedented scale, 24/7…

…What was remarkable was the huge thirst that our customers had for this and how they built more than 10,000 agents in 3 days. And I think you know that we then unleashed a world tour of that program, and we have now built thousands and thousands more agents in these world tours all over the world…

…So companies like FedEx, [indiscernible], Accenture, Ace Hardware, IBM, RBC Wealth Management and many more are now building their digital labor forces on the Salesforce platform with Agentforce. The largest and most important companies in the world, across all geographies and across all industries, are now building and delivering agents…

…While these legacy chatbots have handled these basic tasks like password resets and other basic mundane things, Agentforce is really unlocking an entirely new level of digital intelligence and operational efficiency at this incredible scale…

…I want to invite all of you to join us for the launch of Agentforce 2.0. It is incredible what you are going to see; the advancements in the technology, in accuracy, and in the ability to deliver additional value are already amazing. And we hope that you’re going to join us in San Francisco. This is going to happen on December 17, when you’ll see Agentforce 2.0 for the first time.

Salesforce is customer-zero for AgentForce and the service is live on Salesforce’s help-website; AgentForce is handling 60 million sessions and 2 millions support cases annually on the help-website; the introduction of AgentForce in Salesforce’s help-website has allowed management to rebalance headcount into growth-areas; users of Salesforce’s help-website will experience very high levels of accuracy because AgentForce is grounded with the huge repository of internal and customer data that Salesforce has; management sees Salesforce’s data as a huge competitive advantage for AgentForce; AgentForce can today quickly deliver personalised insights to users of Salesforce’s help-website and hand off users to support engineers for further help; management thinks AgentForce will deflect between a quarter and half of annual case volume; Salesforce is also using AgentForce internally to engage prospects and hand off prospects to SDR (sales development representative) team

We pride ourselves on being customer zero for all of our products, and Agentforce is no exception. We’re excited to share that Agentforce is now live on help.salesforce.com…

… Our help portal, help.salesforce.com, is now live. This portal is our primary support mechanism for our customers. It lets them authenticate in; the agent then becomes grounded in their data. And that Help portal is already handling 60 million sessions and more than 2 million support cases every year. Now that is 100% on Agentforce…

…From a human resource point of view, we can really start to look at how we are going to rebalance our headcount out of areas that are now fully automated and into areas that are critical for us to grow, like distribution…

…Now when you use help.salesforce.com, especially as authenticated users, as I mentioned, you’re going to see this incredible level of accuracy and responsiveness, and you’re going to see remarkably few hallucinations, whether for solving simple queries or navigating complex service issues, because Agentforce is not just grounded in our Salesforce data and metadata, including the repository of 740,000 documents in 17 languages; it’s also grounded in each customer’s data, their purchases, returns. It’s that 200 to 300 petabytes of Salesforce data that we have that gives us this kind of, I would say, almost unfair advantage with Agentforce, because our agents are going to be more accurate and the least hallucination-prone of any, because they have access to this incredible capability. And Agentforce can instantly reason over these vast amounts of data, deliver precise, personalized [indiscernible] with citations in seconds, and Agentforce can seamlessly hand off to support engineers, delivering them a complete summary and recommendation as well. And you can all try this today. This isn’t some fantasy-land future idea; this is today’s reality…

…We expect that our own transformation with Agentforce on help.salesforce.com and in many other areas of our company is going to deflect between a quarter and half of annual case volume, and in optimistic cases, probably much, much more than that…

…We’re also deploying Agentforce to engage our prospects on salesforce.com, answering their questions 24/7 as well as handing them off to our SDR team. You can see it for yourself and test it out on our home page. We’ll use our new Agentforce SDR agent to further automate top-of-funnel activities: gathering lead data, providing education, qualifying prospects, and booking meetings.

Salesforce’s management thinks AgentForce is much better than Microsoft’s AI Copilots

I just want to compare and contrast that against other companies who say they are doing enterprise AI. You can look at even Microsoft. We all know about Copilot; it’s been out, it’s been touted now for a couple of years. We’ve heard about Copilot. We’ve seen the demo. In many ways, it’s just repackaged ChatGPT. You can really see the difference where Salesforce now can operate its company on our platform. And I don’t think you’re going to find that on Microsoft’s website, are you?

Vivint is using AgentForce for customer support and for technician scheduling, payment requests, and more; Adecco is using AgentForce to improve the handling of job applicants (Adecco receives 300 million job applications annually); Wiley is resolving cases 40% faster with AgentForce; Heathrow Airport is using AgentForce to respond to thousands of travelers instantly, accurately, and simultaneously; SharkNinja is using AgentForce for personalised 24/7 customer support in 28 geographies; Accenture is using AgentForce to improve deal quality and boost bid coverage by 75%

One of them is the smart home security provider, Vivint. They’ve struggled with a high volume of support calls and a high churn rate for service reps. It’s a common story. But now, using Agentforce, Vivint has created a digital support staff to autonomously provide support through their app and their website, troubleshooting a broad variety of issues across all their customer touch points. And in addition, Vivint is planning to utilize Agentforce to further automate technician scheduling, payment requests, proactive issue resolution, and the use of device telemetry, because Agentforce works across the entire Salesforce product line, including Slack…

…Another great customer example, and it’s already incredible the work they’ve already done to get this running and going in their company, is Adecco, the world’s leading provider of talent solutions, handling 300 million job applications annually. But historically, they have just not been able to go through or respond in a timely way, of course, to the vast majority of applications that they’re getting. But now Agentforce is going to operate at incredible scale, sorting through the millions of resumes 24/7, matching candidates to opportunities, proactively prequalifying them for recruiters. And in addition, Agentforce can also assess candidates, helping them to refine their resumes, giving them a better chance of qualifying for a role…

…Wiley, an early adopter, is resolving cases over 40% faster with Agentforce than their previous chatbot. Heathrow Airport, one of the busiest airports in the world, will be able to respond to thousands of travelers’ inquiries instantly, accurately and simultaneously. SharkNinja, a new logo in the quarter, chose Agentforce and Commerce Cloud to deliver 24/7 personalized support for customers across 28 international markets while unifying its service operations…

…Accenture chose Agentforce to streamline sales operations and enhance bid management for its 52,000 global sellers. By integrating sales coach and custom AI agents, Agentforce is improving deal quality and targeting a 75% boost in bid coverage. 

College Possible is using Agentforce to build virtual college counsellors as there’s a shortage of labour (for example, California has just 1 counsellor for every 500 students); College Possible built its virtual counsellors with Agentforce in under a week – basically like flipping a switch – because it has been accumulating all its data in Salesforce for years

Another powerful example is a nonprofit, College Possible. College Possible matches eligible students with counselors to help them navigate and become ready for college. And in California, for example, the statewide average stands at slightly over 1 counselor for every 500 students. It just isn’t enough. Where are we going to get all that labor…

…We’re going to get it from Agentforce. This means the vast majority of students are not getting the help they need, and now they are going to get the help they need.

College Possible created a virtual counselor built on Agentforce in under a week. They already had all the data. They have the metadata, they already knew the students. They already had all of the capabilities built into their whole Salesforce application. It was just a flip of a switch…

… But why? It’s because of all of the work and the data and the capability that College Possible has put into Salesforce over the years and years that they had it. It’s not the week that it took to get them to turn it on. They have done a lot of work.

Salesforce’s management’s initiative to have all of the company’s apps be rewritten into a single core platform is called More Core; the More Core initiative also involves Salesforce’s Data Cloud, which is important for AI to work; Salesforce is now layering the AI agent layer on top of More Core, and management sees this combination as a complete AI system for enterprises that also differentiates Salesforce’s Agentforce product

Over the last few years, we’ve really aggressively invested in integrating all of our apps on a single core platform with shared services for security, workflow, user interfaces and more. We’ve been rewriting all of our acquisitions into that common area. We’re really looking at how do we take all of our applications and all of our acquisitions, everything, and deliver it into one consistent platform; we call that More Core internally inside Salesforce. And when you look at that More Core initiative, I don’t think there’s anyone who delivers this comprehensive platform, sales, service, marketing, commerce, analytics, Slack, all of it as one piece of code. And now deeply integrated in that 1 piece of code is also our data cloud. That is a key part of our strategy, which continues to have this phenomenal momentum as well to help customers unify and federate with zero-copy data access across all their data and metadata, which is crucial for AI to work.

And now that third layer is really opening up for us, which is this agentic layer. We have built this agentic layer that takes advantage of all the investments in Salesforce for our customers and made it in our platform. It’s really these 3 layers. And it is these 3 layers that form a complete AI system for enterprises and really uniquely differentiate Salesforce, uniquely differentiate Agentforce, from every other AI platform, that this is one piece of code. This isn’t like 3 systems. It’s not a bunch of different apps all running independently. This is all one piece of code. That’s why it works so well, by the way, because it is 1 platform.

Salesforce’s management thinks jobs and roles within Salesforce will change because of AI, especially AI agents

The transformation is not without challenges. Jobs are going to evolve, roles are going to shift and businesses will need to adapt. And listen, at Salesforce, jobs are going to evolve and roles will shift and businesses will need to adapt as well. We’re all going to need to rebalance our workforce as agents take on more of the work.

Salesforce’s management is hearing that a large customer of Salesforce is targeting 25% more efficiency with AI

This morning, I was on the phone with one of our large customers, and they were telling me how they’re targeting inside their company, 25% more efficiency with artificial intelligence.

Salesforce signed more than 2,000 AI deals in 2024 Q3 (FY2025 Q3), and the number of AI deals that are over $1 million more than tripled year-on-year; 75% of Salesforce’s Agentforce deals, and 9 of Salesforce’s top 10 deals, in 2024 Q3 involved Salesforce’s global partners; more than 80,000 system integrators have completed Agentforce training; hundreds of ISVs (independent software vendors) and partners are building and selling AI agents; Salesforce has a new Agentforce partner network that allows customers to deploy customised AI agents using trusted 3rd-party extensions from Salesforce App Exchange; Salesforce’s partnership with AWS Marketplace is progressing well as transactions doubled sequentially in 2024 Q3, with 10 deals exceeding $1 million

In Q3, the number of wins greater than $1 million with AI more than tripled year-over-year, and we signed more than 2,000 AI deals, including the more than 200 Agentforce wins that Marc shared…

…We’re also seeing amazing Agentforce energy across the ecosystem with our global partners involved in 75% of our Q3 Agentforce deals and 9 of our top 10 wins in the quarter. Over 80,000 system integrators have completed Agentforce training and hundreds of ISVs and technology partners are building and selling agents…

… We continue to unlock customer spend through new channels, including the Agentforce partner network that launched at Dreamforce, which allows customers to customize and deploy specialized agents using trusted third-party extensions from Salesforce App Exchange. And AWS Marketplace continues to be a growth driver. Our Q3 transactions doubled quarter-over-quarter with 10 deals exceeding $1 million. 

Veeva Systems (NYSE: VEEV)

Veeva Vault CRM has a number of new innovations coming, including two AI capabilities that will be available in late-2025 at no additional charge; one of the AI capabilities leverages Apple Intelligence; with Vault CRM’s CRM Bot AI application, Vault CRM will be hooked into customers’ own large language models, and Veeva will not incur compute costs

We just had our European Commercial Summit in Madrid where we announced a number of new innovations coming in Vault CRM, including two new AI capabilities – CRM Bot and Voice Control. CRM Bot is a GenAI assistant in Vault CRM. Voice Control is a voice interface for Vault CRM, leveraging Apple Intelligence. Both are included in Vault CRM for no additional charge and are planned for availability in late 2025…

…For the CRM Bot, that’s where we will hook our CRM system into the customers’ own large language model that they’re running. And that’s where we will not charge for, and we will not incur compute cost…

Veeva has a new AI application, MLR Bot, for Vault PromoMats within Commercial Cloud; MLR Bot helps perform checks on content with a Veeva-hosted large language model (LLM); MLR Bot will be available in late-2025 and will be charged separately; management has been thinking about MLR Bot for some time; management is seeing a lot of excitement over MLR Bot; management is still working through the details of the monetisation of MLR Bot; MLR Bot’s LLM will be from one of the big tech providers but it will be Veeva who’s the one paying for the compute 

We also announced MLR Bot, an AI application in Vault PromoMats to perform content quality and compliance checks with a Veeva-hosted large language model. Planned for availability in late 2025, MLR Bot will require a separate license…

… So I was at our Europe Summit event where we announced MLR Bot, something we’ve been thinking about and evaluating for some time…

…So there’s a lot of excitement. This is a really core process for life sciences companies. So a lot of excitement there…

…In terms of sizing and the monetization, we’re still working through the details on that, but there’s a ton of excitement from our existing customers. We look forward to getting some early customers started on that as we go into next year…

…MLR Bot, we will charge for, and that’s where we will host and run a large language model. Not our own large language model, right? We’ll use one from the big tech providers, but we will be paying for the compute power for that, and so we’ll be charging for that.

CRM Bot, Voice Control, and MLR Bot are part of Veeva’s management’s overall AI strategy to provide AI applications with tangible value; another part of the AI strategy involves opening up data for customers to power all forms of AI; management’s current thinking is to charge for AI applications if Veeva is responsible for paying compute costs

These innovations are part of our overall AI strategy to deliver specific AI applications that provide tangible value and enable customers and partners with the AI Partner Program, as well as the Vault Direct Data API, for the data needed to power all forms of AI…

… So where we have to use significant compute power, we will most likely charge. And where we don’t, we most likely won’t.

Wix (NASDAQ: WIX)

More than 50% of new Wix users are using the company’s AI-powered onboarding process which was launched nearly a year ago; users who onboard using Wix’s AI process are 50% more likely to start selling on Wix and are more likely to become paid subscribers; the AI-powered onboarding process is leading to a 13% uplift in conversion rate for the most recent Self-Creator cohort; the AI website builder is free but it helps with conversions to paid subscribers

Almost one year ago, we launched our AI website builder, which is now available in 20 languages and has been a game changer in our user onboarding strategy. Today, more than 50% of new users are choosing to create their online presence through our AI-powered onboarding process. The tool is resonating particularly well with small businesses and entrepreneurs, as paid subscriptions originating from this AI-powered onboarding are 50% more likely to have a business vertical attached and significantly more likely to start selling on Wix, by streamlining the website building process while offering a powerful and tailored commerce-enablement solution…

…Cash in our most recent self-created cohort showed a 13% uplift in conversion rate from our AI onboarding tool…

…[Question] A lot of the commentary seems that today, AI Website Builder is helping on conversion. I wanted to ask about specifically, is there an opportunity to directly monetize the AI products within the kind of core website design funnel?

[Answer] So I think that the way we monetize, of course, during the buildup phase of the website, is by making it easier. And when our customers are happy with their websites, of course, we convert better. So I don’t think there is any better way to monetize than that, right? The more users finish the website, the better the website, the higher the conversion and the higher the monetization. 

Wix now has 29 AI assistants to support users

Earlier this year, we spoke about our plan to embed AI assistants across our platform and we’re continuing to push that initiative forward. We now have a total of 29 assistants, spanning a wide range of use cases, to support users and to service customers throughout their online journeys.

Wix has a number of AI products that are launching in the next few months that are unlike anything in the market and they will be the first AI products that Wix will be monetising directly

We have a number of AI products coming in the next few months that are unlike anything in the market today. These products will transform the way merchants manage their businesses, redefine how users interact with their customers and enhance the content creation experience. Importantly, these will also be the first AI products we plan to monetize directly. We are on the edge of unforeseen innovation, and I’m looking forward to the positive impact it will have on our users.

Zoom Communications (NASDAQ: ZM)

Zoom’s management has a new vision for Zoom, the AI-first Work Platform for Human Connection

In early October, we hosted Zoomtopia, our annual customer and innovation event, and it was an amazing opportunity to showcase all that we have been working on for our customers. We had a record-breaking virtual attendance, and unveiled our new vision, AI-first Work Platform for Human Connection. This update marks an exciting milestone as we extend our strength as a unified communication and collaboration platform into becoming an AI-first work platform. Our goal is to empower customers to navigate today’s work challenges, streamline information, prioritize tasks and make smarter use of time.

Management has released AI Companion 2.0, which is an agentic AI technology; AI Companion 2.0 is able to see a broader window of context and gather information from internal and external sources; Zoom AI Companion monthly active users grew 59% sequentially in 2024 Q3; Zoom has over 4 million accounts that have enabled AI Companion; management thinks customers really like Zoom AI Companion; customer feedback for AI Companion has been extremely positive; management does not intend to charge customers for AI Companion

At Zoomtopia, we took meaningful steps towards that vision with the release of AI Companion 2.0…

…This release builds upon the awesome quality of Zoom AI Companion 1.0 across features like Meeting Summary, Meeting Query and Smart Compose, and brings it together in a way that evolves beyond task-specific AI towards agentic AI. This major update allows the AI Companion to see a broader window of context, synthesize the information from internal and external sources, and orchestrate action across the platform. AI Companion 2.0 raises the bar for AI and demonstrates to customers that we understand their needs…

…We saw progress towards our AI-first vision with Zoom AI Companion monthly active users growing 59% quarter-over-quarter…

…At Zoomtopia, we mentioned that there are over 4 million accounts that have already enabled AI Companion. Given the quality, ease of use and no additional cost, customers really like Zoom AI Companion…

…Feedback from our customers at Zoomtopia on Zoom AI Companion 2.0 was extremely positive because, first of all, they look at our innovation, the speed, right? And a lot of features built into AI Companion 2.0, again, at no additional cost, right? At the same time, Enterprise customers also want to have some flexibility. That’s why we also introduced customized AI Companion and also AI Companion Studio. And that will be available in the first half of next year, and also we can monetize…

…We are not going to charge the customer for AI Companion, at no additional cost

Zscaler is using Zoom AI Companion to improve productivity across the whole company; large enterprises such as HSBC and Exxon Mobil are also using Zoom AI Companion

Praniti Lakhwara, CIO of Zscaler, provided a great example of how Zoom AI Companion helped democratize AI and enhance productivity across the organization, without sacrificing security and privacy. And it wasn’t just Zscaler. The RealReal, HSBC, ExxonMobil and Lake Flato Architects shared similar stories about Zoom’s secure, easy-to-use solutions, helping them thrive in the age of AI and flexible work.

Zoom’s management recently introduced a road map of AI products that expands Zoom’s market opportunity; Custom AI Companion add-on, including paid add-ons for healthcare and education, will be released in 2025 H1; management built the monetisable parts of AI Companion after gathering customer feedback 

Building on our vision for democratizing AI, we introduced a road map of TAM-expanding AI products that create additional business value through customization, personalization and alignment to specific industries or use cases. 

Custom AI Companion add-on, which will be released in the first half of next year, aims to meet our customers where they are in their AI journey by plugging into knowledge bases, integrating with third-party apps and personalizing experiences like custom AI avatars and AI coaching. Additionally, we announced that we’ll also have Custom AI Companion paid add-ons for health care and education available as early as the first quarter of next year…

…The reason why we introduced the Customized AI Companion or AI Companion Studio is because, a few quarters ago, we talked to many Enterprise customers, and they shared with us feedback, right? So they like AI Companion. Also, they want to make sure, hey, some customers, they already build their own AI large language model; how to [ federate ] that into our federated AI approach? And some customers, they have very large content, like a knowledge base; how to connect with that? Some customers, they have other business systems, right, like ServiceNow, Atlassian and Workday, a lot of Box and HubSpot; how to connect those data sources, right? And also, even from an employee perspective, right, they want to have a customized avatar, like an AI personal coach as well. So meaning those customers, they have customized requirements. To support those customer requirements, we need to make sure we have AI infrastructure and technology ready, right? That’s the reason why we introduced the Customized AI Companion. The goal is really working together with customers to tailor it for each Enterprise customer. That’s the reason why it’s not free.

I think the feedback from Zoomtopia is very positive because, again, those features are not built by just several product managers and engineers thinking, let’s build that. We already solicited feedback from our Enterprise customers before, so those features, I think, can truly satisfy their needs.

Zoom’s management thinks that Zoom is very well-positioned because it is providing AI-powered tools to customers at no additional cost, unlike other competitors

Given our strength on the quality plus at no additional cost, Zoom is much better positioned. In particular, customers look at all the vendors when they try to consolidate, and look at, again, the AI cost is not small, right? You look at some of the competitors: per user per month, $30, right? And look at Zoom: better quality at no additional cost. That’s the reason why, when it comes to total cost of ownership, customers look at Zoom as, I think, much better positioned…

…Again, almost every business, they subscribe to multiple software services. If each software service vendors they are going to charge the customer with AI, guess what, every business is — they have to spend more. That’s the reason why they trust Zoom, and I think we are much better positioned.

Zoom’s management is seeing some customers find new budgets to invest in AI, whereas some customers are reallocating budgets from other areas towards AI

Every company, I think, now they are all thinking about where they should allocate the budget, right? Where should they get more money or funds, right, to support AI? I think every company is different. Some customers, they have a new budget. Some customers, they consolidated into a few vendors. And some customers, they just want to say, hey, maybe actually save the money from other areas and shift the budget towards embracing AI.

Zoom’s management thinks Zoom will need to continue investing in AI, but they are not worried about the costs because the AI features will be monetised

Look at AI, right? So we have to invest more, right? And I think a few areas, right? One is, look at our Zoom Workplace platform, right? We have to [ invent ] more talent, deploy more GPUs and also use more of the cloud, basically GPUs, as we keep improving the AI quality and innovating on AI features. That’s for Workplace. And at the same time, we are going to introduce the customized AI Companion, also AI Studio, next year. Not only do we offer the free service for AI Companion, but those Enterprise customizations certainly can help us in terms of monetization. At the same time, we leverage the technology we build for Workplace and apply that to the Contact Center, like Zoom Virtual Agent, right, and also some other Contact Center features. We can share the same AI infrastructure, and also a lot of technology components can be shared with Zoom Contact Center.

While AI Companion is free, the Contact Center is different, right? We also can monetize. Essentially, we build the same common AI infrastructure architecture, and in Workplace, the Customized AI Companion we can monetize. Contact Center, also, we can monetize. I think more and more, like today, you see we keep investing more and more, and soon, we can also monetize more as well. That’s why I think we do not worry about the cost in the long run at all, I mean, the AI investment, because the monetization coming in certainly can help us more. So, so far, we feel very comfortable.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google and GCP), Amazon (parent of AWS), Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Veeva Systems, Wix, and Zoom Video Communications. Holdings are subject to change at any time.