What We’re Reading (Week Ending 02 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 02 February 2025:

1. DeepSeek: The View from China – Jordan Schneider, Irene Zhang, Angela Shen, and Yiwen

In this newsletter, we share a translation of insights from a January 26 closed-door session hosted by Shixiang 拾象, a VC spun out from Sequoia China. Attended by dozens of AI researchers, investors, and industry insiders, the event captures how the Chinese AI community is processing the DeepSeek shock…

…The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant…

…In the short-term, everyone will be driven to think about how to make AI more efficient. In the long-run, questions about computing power will remain. Demand for compute remains strong and no company has enough…

…Why did DeepSeek catch up so fast?

Reasoning models require high-quality data and training. For LLMs or multimodal AI, it’s difficult to catch up with a closed source model from scratch. The architecture of pure reasoning models hasn’t changed much, so it’s easier to catch up in reasoning.

One reason R1 caught up quickly was that the task was not particularly difficult. Reinforcement learning only made the model choices more accurate. R1 did not break through the efficiency of Consensus 32, spending 32 times the efficiency, which is equivalent to moving from deep processing to parallelization, which is not pushing the boundaries of intelligence, just making it easier….

…AI is similar to a step function, where the compute requirements for followers have decreased by a factor of 10. Followers have historically had lower compute costs, but explorers still need to train many models. The exploration of new algorithms and architectures will not stop. Behind the step function, there are significant investments by many people, meaning compute investments will continue to advance. Many resources will also be allocated to products. Apart from reasoning, there are other directions that are compute-intensive. While the vast amount of compute resources spent by explorers may not be visible, without such investment, the next “step” might not occur. Additionally, many are dissatisfied with current architectures and RL methods, and progress will continue.

When exploring directions, performance achieved with 10,000 GPUs may not always be significantly better than that of 1,000 GPUs, but there is a threshold somewhere. It’s unlikely that meaningful results can be achieved with only 100 GPUs because the iteration time for each solution would be too long…

…The question of why OpenAI and Anthropic did not do work in DeepSeek’s direction is a question of company-specific focus. OpenAI and Anthropic might have felt that investing their compute towards other areas was more valuable.

One hypothesis for why DeepSeek was successful is that unlike Big Tech firms, DeepSeek did not work on multi-modality and focused exclusively on language. Big Tech firms’ model capabilities aren’t weak, but they have to maintain a low profile and cannot release too often. Currently, multimodality is not very critical, as intelligence primarily comes from language, and multimodality does not contribute significantly to improving intelligence…

…2025 will, first and foremost, see interest in new architectures beyond Transformers. Some initial exploration is already underway, aiming to reduce costs while pushing the boundaries of intelligence. Secondly, the potential of reinforcement learning (RL) has yet to be tapped into completely. On the product side, there is significant interest in agents, though they have yet to see widespread application…

…It is reported that Meta is still in the process of reproducing DeepSeek, but so far, this has not significantly impacted their infrastructure or long-term roadmap. In the long run, beyond exploring the boundaries of the technology, cost efficiency must also be considered. Lowering costs will let us have more fun…

…From the developer’s perspective, models like Claude-3.5-Sonnet have been specifically trained for tool use, making them highly suitable for agent development. In contrast, models like DeepSeek have not yet focused on this area, but the potential for growth with DeepSeek is immense…

…Currently, reinforcement learning (RL) solves problems with standard answers but has not achieved breakthroughs beyond what AlphaZero accomplished. In fact, it is often simpler. Distillation addresses problems with standard answers, and RL methods work effectively when training with such answers. This explains why distillation and RL have made rapid progress in recent years.

Humanity’s demand for intelligence is vastly underestimated. Many critical problems, such as cancer and SpaceX’s heat shield materials, remain unsolved. Existing AI primarily automates tasks, but there are numerous unsolved challenges ahead. Looking forward, the potential for explosive growth is immense, and the advancement of intelligence cannot stop…

…Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.

The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization…

…Open source controls the margins of the whole market. If open source can do 95% of what closed source can do and closed source is too expensive, then open source can be used completely. If the capabilities of open source and closed source do not differ greatly, then this presents a big challenge for closed source…

…AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.

2. Explaining International Valuations –  Daniel Rasmussen

Perhaps the single greatest divergence in equity markets has been the continued outperformance of US versus international equities—and thus the widening of the valuation gap between the US and the rest of the world…

…By far the most significant difference, explaining about half the valuation gap, is the domicile of listing. US-listed stocks are substantially more expensive than internationally listed stocks for no reason other than the place of listing.

It’s particularly interesting that the regression shows that having a higher percentage of sales in the US results in cheaper valuations. A key driver of this is that several of the US tech giants most responsible for high US equity valuations have a relatively low percentage of sales in the US (Alphabet, Microsoft, and Tesla at around 50%; Apple, Netflix, Meta, and NVIDIA at around 40%). The big question, then, is why half the valuation gap is explained simply by being listed on US exchanges. Even large internationally listed companies with >40% of their revenue coming from the US, like Toyota, Mitsubishi, Roche or Deutsche Telekom (which owns T-Mobile), trade at steep value multiples relative to US peers.
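
To make the shape of that regression concrete, here is a minimal sketch with synthetic data (our own illustration, not Rasmussen’s actual model): a valuation multiple regressed on a couple of fundamentals plus a dummy for US listing domicile.

```python
# Minimal sketch (synthetic data, our own illustration) of the kind of cross-sectional
# regression described above: a valuation multiple regressed on fundamentals plus a
# dummy for US listing domicile. Variable names and coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500
us_listed = rng.integers(0, 2, n)        # 1 = listed on a US exchange
margin = rng.normal(0.12, 0.04, n)       # net margin (fundamental)
growth = rng.normal(0.06, 0.05, n)       # revenue growth (fundamental)
# Synthetic P/E: fundamentals matter, but a listing premium is deliberately baked in.
pe = 10 + 40 * margin + 30 * growth + 6 * us_listed + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), margin, growth, us_listed])
beta, *_ = np.linalg.lstsq(X, pe, rcond=None)
print(beta)   # the last coefficient recovers the "pure" listing-domicile premium (~6)
```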

Were a larger percentage of the valuation gap explained by fundamentals, we’d expect such a gap to persist. But given that the valuation gap is primarily explained simply by the location of listing, we think there’s a strong reason to expect a convergence—and therefore to favor international over US-listed stocks, despite their terrible relative performance over the past decade.

3. The Most Impressive Prediction of All Time – Jeffrey Emanuel

My candidate for the most impressive prediction of all time came from a person who is practically unknown in the West except for a relatively small group of historians and people interested in niche subjects. The person I’m thinking of is named Pyotr Durnovo, and he was an Imperial Russian government official who lived from 1842 to 1915.

We will discuss more about him later and how his life experience may have prepared him to be able to make such an impressive prediction, but the short version of it is that he initially studied to be in the Navy and served there for around a decade, and then became the Director of Police for the Ministry of Internal Affairs for the entire Russian Empire under Tsar Alexander III. Later, he served as the Minister of the Interior under Tsar Nicholas II (the one who was ultimately executed with his family by the Bolsheviks in 1918, in the aftermath of the Russian Revolution).

So what is this prediction he made, anyway, and why is it so impressive? Well, in 1914, six months prior to the outbreak of World War 1, Durnovo wrote a truly remarkable ~7,600-word memorandum for Tsar Nicholas II and his top 2 or 3 ministers, which we know was given to them, since it was found in Nicholas’ papers and later published in 1922 by communist historians after the revolution. If they had only read it carefully and taken its warnings more seriously, the world we live in today might look very different!…

…For one, it predicted an imminent war on the horizon, which he ultimately blamed on the collision course between England and Germany, which were the two greatest industrial powers at the time. This was certainly not some earth-shattering or special prediction; a lot of people predicted some kind of big conflict, and it was often said that “war was in the air” at the time…

…It’s how he analyzed the situation, and then used that reasoning to predict the exact groupings of countries that would participate in the conflict and on which side, and how the situation would evolve from there, that is so impressive…

…His predictions about alliances and national behaviors were almost unbelievably specific and ran counter to the conventional wisdom of the time:

  • He predicted that Italy would not side with Germany despite being part of the Triple Alliance, and would instead join the opposing side if victory seemed likely, seeking territory from both Austria and Turkey. This is exactly what happened; Italy joined the Allies in 1915 after negotiating for territorial concessions.
  • He predicted that Romania would remain neutral until it was clear which side would win, then join the victorious side to claim territory. This also came true— Romania entered the war in 1916 on the Allied side after significant Russian successes.
  • Most surprisingly, he predicted that Bulgaria would side against Serbia and by extension against Russia, despite Russia being Bulgaria’s historic liberator from Ottoman rule— a prediction that seemed almost unthinkable to most observers at the time. This came true exactly as he foresaw, with Bulgaria joining the Central Powers in 1915.
  • He correctly predicted that Serbia and Montenegro would side against Austria, while Greece would likely remain neutral until the outcome was more or less predetermined.
  • He predicted unrest among Muslims in the Caucasus and Turkestan (which occurred).
  • He predicted the possibility of Afghanistan moving against Russia (which happened in 1919).
  • He predicted serious complications in Poland (the Polish-Soviet War of 1919-1921).
  • He predicted an uprising in Finland if Sweden joined Germany (Finland did declare independence in 1917).

…If all of that weren’t already so ridiculous to get right, he went way beyond all that to realize that, regardless of who won, the war would lead to “social revolution” in both the defeated AND victorious countries, starting with the losing side and then spreading to the winners. This was perhaps his most extraordinary prediction, as it came true in spectacular fashion:

  • Russia, despite being on the winning side, experienced the Bolshevik Revolution in 1917; we will go into much more detail about these predictions below.
  • Germany, after losing the war, experienced the German Revolution of 1918-1919; Durnovo predicted that unrest and revolution would be specifically tied to economic factors and class interests rather than purely political ones: he outlined how German workers would turn against the agricultural interests that had dominated pre-war German policy once defeat cut off their export markets and industrial employment, and this exact dynamic played out in the German Revolution of 1918-1919.

Now, you might object here that “Well, it’s not that crazy to believe there might be a revolution in a country which suffered massive losses in a catastrophic war; lots of people might have predicted that.” But the thing is, Durnovo went so far beyond merely predicting that there would be a Russian Revolution. He basically predicted every contour of the Revolution, the driving forces behind it, how it impacted different segments of Russian society, and how it would all unfold, step by step!…

…So how was Durnovo able to accomplish this incredible feat of prediction? Obviously, he was a genius of the first order, which is perhaps not so surprising given that he was a close relative of the famous Tolstoy family. But raw IQ is certainly not enough, nor is being well informed and knowledgeable. What kind of man could see so clearly what virtually everyone else missed? He was a complex character whose very contradictions likely enabled his extraordinary insights; he was, at the same time:

  • A conservative police chief who often expressed liberal thoughts in private
  • A supposed reactionary who opposed anti-Semitic measures and defended Jews
  • A cynical operator who nevertheless would help others when he could
  • A man capable of both strict officialdom and surprising gentleness
  • A high official who preferred informal interactions (his subordinates would warn visitors not to address him as “Your Excellency”)

These contradictions suggest someone who wasn’t bound by conventional ideological frameworks or social expectations— a crucial trait for seeing beyond accepted wisdom. He also had a wide range of professional experience that prepared him to see things in a multi-faceted, sophisticated way, as by 1915, he had done the following:

  • Naval officer (9 years of far-sea cruises)
  • Military legal training
  • Assistant Prosecutor in various parts of Russia
  • Director of Police Department for 10 years
  • Assistant Minister of Interior under multiple ministers
  • Minister of Interior
  • Member of State Council

This combination of experiences was extraordinary and atypical to say the least:

  • His naval and legal background gave him insight into the military, maritime trade, and the Russian legal system.
  • His prosecutorial work exposed him to conditions across Russia, not just in the big cities.
  • His police work gave him unparalleled insight into social discontent and the strategies and thinking of professional revolutionaries like Lenin, Stalin, and Trotsky.
  • His ministerial positions showed him the workings (and limitations) of state power.

He also occupied a unique position as both an insider and an outsider: 

  • He was from old nobility but not wealthy or particularly influential
  • He reached high office but was temporarily dismissed in disgrace (a sordid story in which Durnovo had his secret police officers search the private letters of a foreign ambassador— inside an embassy building no less— so they could steal love letters sent by Durnovo’s mistress to the ambassador; when the ambassador complained to Tsar Alexander III, he was furious, ordering his minister to “remove this swine within twenty-four hours.”)
  • He was a conservative who often disagreed with other conservatives
  • He understood both state power and its limitations

This dual perspective may have freed him from the groupthink that afflicted both conservative and liberal circles.

4. USA, Inc – Michael Batnick

Consider this face blower of a stat from Goldman: “Since 1992, earnings growth in the US has outpaced earnings in non-US developed economies by an annual average of 2.4 percentage points.”

Most of the world is barely earning more than they were prior to the pandemic. The U.S. looks like an unstoppable freight train…

…The one-sided performance has driven valuations between us and the rest of the world to record levels. We’ve all seen a version of these charts before…

…BUT! These charts aren’t comparing apples with apples. Goldman notes that only 1% of the U.K. market is in technology companies. Another example they cite is that energy is 5% of S&P 500 earnings, 19% of UK, and just 1% of Japan. We’re not comparing apples with apples.

They did a great job adjusting for differences in sector weights…

…The U.S. still trades at a premium to the rest of the world ex-India, but not as much as the prior chart would have you believe. Before any adjustments, the Eurozone trades at a 39% discount to the U.S. And after the adjustments, that falls to 23%.
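
A stylized version of the sector-weight adjustment being described, using invented numbers purely to show the mechanics of re-weighting one region’s sector multiples to another region’s sector mix:

```python
# Stylized sketch (invented numbers) of a sector-neutral valuation comparison:
# re-weight each region's sector P/Es to a common set of sector weights before
# comparing headline multiples. Figures are illustrative only, not Goldman's data.
us_pe = {"tech": 28, "energy": 12, "financials": 15, "other": 20}
ez_pe = {"tech": 22, "energy": 9, "financials": 8, "other": 13}

us_weights = {"tech": 0.35, "energy": 0.04, "financials": 0.13, "other": 0.48}
ez_weights = {"tech": 0.08, "energy": 0.06, "financials": 0.20, "other": 0.66}

def weighted_pe(pes, weights):
    return sum(pes[s] * weights[s] for s in pes)

headline_gap = 1 - weighted_pe(ez_pe, ez_weights) / weighted_pe(us_pe, us_weights)
# Re-price the Eurozone as if it had US sector weights:
adjusted_gap = 1 - weighted_pe(ez_pe, us_weights) / weighted_pe(us_pe, us_weights)
print(f"headline discount: {headline_gap:.0%}, sector-adjusted: {adjusted_gap:.0%}")
```

With these made-up inputs the headline discount of roughly 43% shrinks to roughly 30% once sector mix is held constant, which is the same direction of effect as the 39% to 23% move Batnick cites.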

5. DeepSeek FAQ – Ben Thompson

Let’s work backwards: what was the V2 model, and why was it important?

The DeepSeek-V2 model introduced two important breakthroughs: DeepSeekMoE and DeepSeekMLA. The “MoE” in DeepSeekMoE refers to “mixture of experts”. Some models, like GPT-3.5, activate the entire model during both training and inference; it turns out, however, that not every part of the model is necessary for the topic at hand. MoE splits the model into multiple “experts” and only activates the ones that are necessary; GPT-4 was a MoE model that was believed to have 16 experts with approximately 110 billion parameters each.

DeepSeekMoE, as implemented in V2, introduced important innovations on this concept, including differentiating between more finely-grained specialized experts, and shared experts with more generalized capabilities. Critically, DeepSeekMoE also introduced new approaches to load-balancing and routing during training; traditionally MoE increased communications overhead in training in exchange for efficient inference, but DeepSeek’s approach made training more efficient as well.
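
For readers who want a feel for the routing idea, here is a toy sketch (our own illustration, not DeepSeek’s code) of a mixture-of-experts forward pass with top-k routing plus an always-on shared expert; load-balancing losses and training logic are omitted.

```python
# Toy mixture-of-experts forward pass (numpy): a router picks the top-k experts
# per token, plus a shared expert that always runs. A bare illustration of the
# routing idea only; no load-balancing loss, no training logic.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
shared_expert = rng.normal(size=(d, d)) / np.sqrt(d)
router = rng.normal(size=(d, n_experts)) / np.sqrt(d)

def moe_forward(x):
    logits = x @ router                      # routing scores, one per expert
    top = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    weights = np.exp(logits[top]); weights /= weights.sum()
    out = shared_expert.T @ x                # shared expert is always active
    for w, i in zip(weights, top):
        out += w * (experts[i].T @ x)        # only k of n_experts are computed
    return out

token = rng.normal(size=d)
print(moe_forward(token).shape)              # (16,)
```

The point to notice is that only a small fraction of the experts (and hence parameters) participate in any one token’s forward pass, which is why an MoE model can be huge yet cheap to run per token.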

DeepSeekMLA was an even bigger breakthrough. One of the biggest limitations on inference is the sheer amount of memory required: you both need to load the model into memory and also load the entire context window. Context windows are particularly expensive in terms of memory, as every token requires both a key and corresponding value; DeepSeekMLA, or multi-head latent attention, makes it possible to compress the key-value store, dramatically decreasing memory usage during inference.
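
Some rough arithmetic shows why the key-value cache dominates inference memory; the dimensions below are assumptions for illustration, not DeepSeek’s actual configuration.

```python
# Back-of-the-envelope KV-cache memory for a long context window.
# All dimensions here are assumed for illustration, not DeepSeek's configuration.
layers = 60
kv_heads = 128
head_dim = 128
bytes_per_value = 2          # fp16/bf16
context_tokens = 128_000

# Each token stores a key and a value per head per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
total_gb = kv_bytes_per_token * context_tokens / 1e9
print(f"{kv_bytes_per_token/1e6:.1f} MB per token, ~{total_gb:.0f} GB for the full window")
# Compressing keys and values into a small latent (the MLA idea) cuts this per-token
# footprint by a large factor, which is why inference memory use drops so sharply.
```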

I’m not sure I understood any of that.

The key implications of these breakthroughs — and the part you need to understand — only became apparent with V3, which added a new approach to load balancing (further reducing communications overhead) and multi-token prediction in training (further densifying each training step, again reducing overhead): V3 was shockingly cheap to train. DeepSeek claimed the model training took 2,788 thousand H800 GPU hours, which, at a cost of $2/GPU hour, comes out to a mere $5.576 million.

That seems impossibly low.

DeepSeek is clear that these costs are only for the final training run, and exclude all other expenses; from the V3 paper:

Lastly, we emphasize again the economical training costs of DeepSeek-V3, summarized in Table 1, achieved through our optimized co-design of algorithms, frameworks, and hardware. During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs. Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours. Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M. Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.
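
The arithmetic quoted there is easy to verify:

```python
# Reproducing the cost arithmetic quoted from the V3 paper.
pretraining = 2_664_000        # GPU hours
context_extension = 119_000
post_training = 5_000
total_gpu_hours = pretraining + context_extension + post_training
cost = total_gpu_hours * 2     # $2 per H800 GPU hour (the paper's rental assumption)
print(total_gpu_hours, cost)   # 2,788,000 GPU hours -> $5,576,000
```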

So no, you can’t replicate DeepSeek the company for $5.576 million.

I still don’t believe that number.

Actually, the burden of proof is on the doubters, at least once you understand the V3 architecture. Remember that bit about DeepSeekMoE: V3 has 671 billion parameters, but only 37 billion parameters in the active expert are computed per token; this equates to 333.3 billion FLOPs of compute per token. Here I should mention another DeepSeek innovation: while parameters were stored with BF16 or FP32 precision, they were reduced to FP8 precision for calculations; 2048 H800 GPUs have a capacity of 3.97 exaflops, i.e. 3.97 billion billion FLOPS. The training set, meanwhile, consisted of 14.8 trillion tokens; once you do all of the math it becomes apparent that 2.8 million H800 hours is sufficient for training V3. Again, this was just the final run, not the total cost, but it’s a plausible number.
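
Here is a back-of-the-envelope version of that “do all of the math” step, using the figures cited in the paragraph; the utilization rate is our own assumption, not a disclosed number.

```python
# Back-of-the-envelope check of whether ~2.8M H800 hours is plausible for V3's final run.
# Uses the figures cited above; the utilization rate is the only assumption added here.
flops_per_token = 333.3e9          # compute per token with 37B active parameters (figure above)
tokens = 14.8e12                   # training set size
cluster_flops = 3.97e18            # cited capacity of 2,048 H800s
gpus = 2048

total_flops = flops_per_token * tokens                       # ~4.9e24 FLOPs
ideal_gpu_hours = total_flops / cluster_flops * gpus / 3600  # at 100% utilization
utilization = 0.27                                           # assumed hardware utilization
estimated_gpu_hours = ideal_gpu_hours / utilization
print(f"{ideal_gpu_hours/1e6:.2f}M GPU hours ideal, "
      f"~{estimated_gpu_hours/1e6:.1f}M at {utilization:.0%} utilization")
# ~2.6M GPU hours at a realistic utilization, in line with the reported 2.788M.
```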

Scale AI CEO Alexandr Wang said they have 50,000 H100s.

I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs; they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.

Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.

Meanwhile, DeepSeek also makes their models available for inference: that requires a whole bunch of GPUs above-and-beyond whatever was used for training…

Is this why all of the Big Tech stock prices are down?

In the long run, model commoditization and cheaper inference — which DeepSeek has also demonstrated — is great for Big Tech. A world where Microsoft gets to provide inference to its customers for a fraction of the cost means that Microsoft has to spend less on data centers and GPUs, or, just as likely, sees dramatically higher usage given that inference is so much cheaper. Another big winner is Amazon: AWS has by-and-large failed to make their own quality model, but that doesn’t matter if there are very high quality open source models that they can serve at far lower costs than expected.

Apple is also a big winner. Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that. Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU (neural processing unit) have access to a shared pool of memory; this means that Apple’s high-end hardware actually has the best consumer chip for inference (Nvidia gaming GPUs max out at 32GB of VRAM, while Apple’s chips go up to 192 GB of RAM).

Meta, meanwhile, is the biggest winner of all. I already laid out last fall how every aspect of Meta’s business benefits from AI; a big barrier to realizing that vision is the cost of inference, which means that dramatically cheaper inference — and dramatically cheaper training, given the need for Meta to stay on the cutting edge — makes that vision much more achievable.

Google, meanwhile, is probably in worse shape: a world of decreased hardware requirements lessens the relative advantage they have from TPUs. More importantly, a world of zero-cost inference increases the viability and likelihood of products that displace search; granted, Google gets lower costs as well, but any change from the status quo is probably a net negative…

...How did DeepSeek make R1?

DeepSeek actually made two models: R1 and R1-Zero. I actually think that R1-Zero is the bigger deal…

…R1-Zero, however, drops the HF part — it’s just reinforcement learning. DeepSeek gave the model a set of math, code, and logic questions, and set two reward functions: one for the right answer, and one for the right format that utilized a thinking process. Moreover, the technique was a simple one: instead of trying to evaluate step-by-step (process supervision), or doing a search of all possible answers (a la AlphaGo), DeepSeek encouraged the model to try several different answers at a time and then graded them according to the two reward functions.
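
A heavily simplified sketch of that grading loop (our own illustration; the model call is a placeholder and the reward functions are toy versions):

```python
# Heavily simplified sketch of the R1-Zero-style grading loop described above:
# sample several candidate answers, score each with two reward functions
# (correct answer, correct <think>/<answer> format), and use those scores as the
# training signal. The sampling function is a placeholder, not DeepSeek's code.
import re, random

def format_reward(text):
    # Reward the required structure: a thinking block followed by an answer block.
    return 1.0 if re.search(r"<think>.*</think>\s*<answer>.*</answer>", text, re.S) else 0.0

def answer_reward(text, ground_truth):
    m = re.search(r"<answer>(.*?)</answer>", text, re.S)
    return 1.0 if m and m.group(1).strip() == ground_truth else 0.0

def sample_answers(question, n=4):
    # Placeholder for sampling n completions from the policy model.
    return [f"<think>working on {question}</think><answer>{random.choice(['4', '5'])}</answer>"
            for _ in range(n)]

question, truth = "2 + 2", "4"
candidates = sample_answers(question)
scores = [format_reward(c) + answer_reward(c, truth) for c in candidates]
# In RL training, higher-scored samples are reinforced relative to the group average.
print(list(zip(scores, candidates)))
```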

What emerged is a model that developed reasoning and chains-of-thought on its own…

…Here again it seems plausible that DeepSeek benefited from distillation, particularly in terms of training R1. That, though, is itself an important takeaway: we have a situation where AI models are teaching AI models, and where AI models are teaching themselves.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms, Microsoft, Netflix, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 January 2025:

1. Thoughts On A Month With Devin – Hamel Husain, Isaac Flath, and Johno Whitaker

Unlike typical AI assistants, Devin operates through Slack and spins up its own computing environment. When you chat with Devin, you’re talking to an AI that has access to a full computing environment – complete with a web browser, code editor, and shell. It can install dependencies, read documentation, and even preview web applications it creates…

…The experience is designed to feel like chatting with a colleague. You describe what you want, and Devin starts working. Through Slack, you can watch it think through problems, ask for credentials when needed, and share links to completed work. Behind the scenes, it’s running in a Docker container, which gives it the isolation it needs to safely experiment while protecting your systems. Devin also provides a web interface, which allows you to access its environment and watch it work with IDEs, web browsers, and more in real time…

…Our first task was straightforward but real: pull data from a Notion database into Google Sheets. Devin tackled this with surprising competence. It navigated to the Notion API documentation, understood what it needed, and guided me through setting up the necessary credentials in Google Cloud Console. Rather than just dumping API instructions, it walked me through each menu and button click needed – saving what would typically be tedious documentation sleuthing. The whole process took about an hour (but only a few minutes of human interaction). At the end, Devin shared a link to a perfectly formatted Google Sheet containing our data.

The code it produced was a bit verbose, but it worked. This felt like a glimpse into the future – an AI that could handle the “glue code” tasks that consume so much developer time. Johno had similar success using Devin to create a planet tracker for debunking claims about historical positions of Jupiter and Saturn. What made this particularly impressive was that he managed this entirely through his phone, with Devin handling all the heavy lifting of setting up the environment and writing the code…

…Over the course of a month, we systematically documented our attempts across these categories:

  1. Creating new projects from scratch
  2. Performing research tasks
  3. Analyzing & Modifying existing projects

The results were sobering. Out of 20 tasks, we had 14 failures, 3 successes (including our 2 initial ones), and 3 inconclusive results. Even more telling was that we couldn’t discern any pattern to predict which tasks would work. Tasks that seemed similar to our early successes would fail in unexpected ways…

…Working with Devin showed what autonomous AI development aspires to be. The UX is polished – chatting through Slack, watching it work asynchronously, seeing it set up environments and handle dependencies. When it worked, it was impressive.

But that’s the problem – it rarely worked. Out of 20 tasks we attempted, we saw 14 failures, 3 inconclusive results, and just 3 successes. More concerning was our inability to predict which tasks would succeed. Even tasks similar to our early wins would fail in complex, time-consuming ways…

…This reflects a pattern we’ve observed repeatedly in AI tooling. Social media excitement and company valuations have minimal relationship to real-world utility. We’ve found the most reliable signal comes from detailed stories of users shipping products and services. For now, we’re sticking with tools that let us drive the development process while providing AI assistance along the way.

2. Transcript: The Hidden History of Eurodollars, Part 1: Cold War Origins – Joe Weisenthal, Tracy Alloway, Lev Menand, and Josh Younger

Tracy (01:30):
It can be admittedly confusing. So why don’t we just define it right away. So eurodollars are dollar-denominated bank deposits held at foreign banks or overseas branches of US banks. And you can think of them as basically offshore dollars that sit outside the US banking system and kind of away from the Federal Reserve. They’re basically a very special form of money. You could call them shadow money.

Joe (01:57):
And it’s totally gigantic. So it’s almost $10 trillion. And I just find it so interesting, right? Because when I think of dollars, they’re either coming from, you know, the government spends dollars into existence or US bank credit. US banks [have a] license to de facto create dollars or deposits at will. And yet, eurodollars are kind of this weird thing, I guess because they’re not that.

Tracy (02:21):
Yeah, they’re not either of those. And eurodollars didn’t just spring up fully formed out of thin air. They were the result of a series of decisions all aimed at solving particular problems…

…Josh (04:27):
So eurodollars are among the most important financial instruments in the world and they are really the backbone of the global dollar system. But they come from very humble beginnings, very idiosyncratic start. And really it all started in Yugoslavia…

…So in 1945 in November, there’s a communist revolution and the US is miffed in a bunch of ways, but one of them is that the old government owes them money. And so the question is, how are they going to get it? And a few months later, Tito asked for his gold back because the Yugoslavian government had $70 million worth of gold in New York. And the Secretary of State, who was George Marshall of the Marshall Plan, he realizes he’s got a bargaining chip, which is the gold. It’s in New York and they don’t get it back until they settle their claims.

Now, even people within the State Department were kind of skeptical of this, the Yugoslavian government is obviously furious. And so are the Russians who, at this point, you know, Tito and Stalin have a falling out eventually a few years later. But at this point, they’re quite closely aligned…

…The Russians get the sense that the US is willing to use gold as a bargaining chip. They’d previously actually been building up dollar balances in New York. There’s this kind of a misnomer about the post-war period. There’s this sense that the Russians are extracting all their resources from the US, but they’re actually building up reserves of dollars because the thought is ‘We’re probably going to need to trade with these people. We have a trading company based in the US and they need resources.’ And so they’re building up foreign currency deposits and gold, but in 1947, they realize it’s not going to go well, potentially. And they pull all the gold out. They actually just called banks in New York and they say ‘We want our gold back.’ A massive reversal of the policy.

And the question is, where’s it going to go? And so they need dollars because the US dollar is the currency of foreign exchange. If they want to trade with the West, they have to trade in dollars. They need gold because gold is the basis for the monetary system. And so the question is, where can they put gold and dollars in a safe place that’s still on the right side of what was then already known as the iron curtain?

And so it turns out Paris is the ticket. They’ve actually been secretly stockpiling cash and gold in Paris. They put it in briefcases. They would fly people to Paris and put it in the consulate offices. They would just build up piles of cash and gold. And in particular, there’s a bank — BCEN — I won’t try to do it in French. And BCEN is owned by, or run by, a notorious communist sympathizer, who has a very good relationship with the Politburo. And so this is a friendly bank. And so they take on deposit the Soviet money and BCEN’s moniker in the Telex system they used to communicate was “Eurobank.”

And so, eurodollars were initially, in the late forties, just deposits issued by Eurobank, BCEN, generally for the Soviets, although also for the Chinese. And slowly this starts to percolate. There’s another communist-owned bank in London. There’s one in Brussels, which the CIA just describes as run by ‘someone with few scruples,’ I think is the way they put it. And so there’s some friendlies across Europe who are willing to take their money and the eurodollar market begins this way, which is preemptive sanctions evasion, basically…

…And so the first use case of eurodollars is sanctions evasion. The second use is to facilitate cross-Iron Curtain trade, although that’s a pretty small business. And so the third, and much larger business, is cross-border interest rate arbitrage. And that sounds really technical, but what it’s really doing is using foreign exchange markets and derivative markets to source dollars that the UK in particular needs in this post-war environment.

So imagine a eurodollar bank, a euro bank, takes in a eurodollar deposit, which means it gets a dollar in cash — let’s think of a physical bill, that’s an asset. It issues a eurodollar liability. And then, what is it going to do next? Because it needs to do some sort of investing. And what it does is it exchanges that dollar asset for sterling cash, and it invests that sterling cash in some short-term sterling investment — short bills or something like that. And after it does that, it says ‘I want to hedge my foreign exchange risk, because now I have a dollar liability and a sterling asset. So I’m going to use the foreign exchange forward market to agree to sell that sterling back for dollars at some point in the future at a fixed price that we agree on today.’

So that’s the bank’s position. Who’s on the other side of that trade? Let’s say a corporation, a manufacturing entity, they make radios, and that radio production process requires inputs. Those inputs are imported. And so that radio production company needs dollars with which to buy the raw materials that it uses to make the radio that it then sells for dollars in foreign markets. And so, they get those dollars from the eurobank, in exchange for the sterling they have on hand, they go buy all the parts, but they want to make sure that they know how much they’re going to receive in local currency at the end of the production process. When they sell that radio abroad, they don’t want the value of the dollar to go down. So they sell those dollars forward in exchange for sterling. And so they’ve entered into a derivative agreement, which is the opposite of the one that the euro bank has or the euro banking system.

And so then they put together the radio, they sell it abroad, they receive dollar proceeds, they turn those into sterling, which is what they pay their employees in, that’s what they pay for their land and equipment in. And that exchange rate was the one they agreed upon in advance through the foreign exchange forward contract. And so, basically what’s happening is the euro banks are pulling in dollars from abroad, distributing them through the foreign exchange market that’s trading onshore to those that need dollars today, and then providing hedges to those that will receive dollars in the future. And in the case of the euro bank, the dollars they’ll owe in the future, potentially, to their eurodollar deposit holder.
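
To put rough numbers on the hedged position Josh describes, here is a stylized calculation; the exchange rate, forward rate, and bill yield below are invented for illustration, not historical figures.

```python
# Stylized version of the euro bank's hedged position described above (invented rates).
# The bank takes a $100 eurodollar deposit, converts it to sterling, invests in a UK
# bill, and sells the sterling proceeds forward for dollars, locking in a dollar return.
deposit_usd = 100.0
spot = 0.36            # pounds per dollar (illustrative)
uk_bill_rate = 0.05    # annual sterling yield (illustrative)
forward = 0.355        # forward pounds per dollar agreed today (illustrative)

sterling_invested = deposit_usd * spot
sterling_in_a_year = sterling_invested * (1 + uk_bill_rate)
dollars_back = sterling_in_a_year / forward      # delivered under the forward contract
print(f"hedged dollar return: {dollars_back / deposit_usd - 1:.2%}")
# The radio maker is on the other side: it buys dollars spot for inputs and sells its
# future dollar revenue forward for sterling, so both sides' currency risk is hedged.
```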

Lev (18:32):
Think about this from the perspective of the City of London coming out of the war and those bankers and the world that they grew up in, which is a world that we’ve completely forgotten, but was the world of sterling dominance before the First World War and the role that the empire played in financing global trade.

What we’re looking at in the 1950s is a group of London-based financial institutions trying to figure out a way to continue their dominance in a global economy that runs on dollars now and not on sterling. And so, the eurodollars are sort of worth the risk to the City of London, and to some extent to UK financial regulators like the Bank of England, because they need to fix their business model for a dollar world, and they want to get in on the dollar world…

…Josh (20:43):
And so this cross-border interest rate arbitrage is really just the way markets distribute the currency according to who needs it and provide the hedges that facilitate the functioning of British corporations as well. It’s what we’d call now like a use case, right? This is like a real underlying use case that doesn’t involve the Soviet Union for dollar deposits issued by non-US banks, which is, you can’t emphasize enough how fundamentally strange that is because if I tried to make dollars by writing it on piece of paper, I don’t think I’d get very far. But at the time, that’s essentially what these banks are doing.

And in particular London is a more, let’s say, reputable locale, particularly banks that are not known to be communist sympathizers. There’s a little bit of a funny thing about being a communist bank, but we won’t get into that specifically, but these are blue chip banks in London issuing dollar deposits. And that means you can use them for things and you can feel more comfortable…

…Lev (26:54):
Although, just let’s size this a little bit, right? It was a billion dollars in, say, 1960, which is maybe the equivalent of $50 billion today…

…So we have way more to go in terms of the growth of this market subsequent to 1960. It’s still pretty nascent in 1960…

…Josh (31:08):
So the question at this point is, it’s a nascent market, it’s half a Tether, and it’s unclear whether or not it’s become a big major global actor. We know it eventually becomes that, but at the time, that’s super unclear, but it becomes eventually and soon the solution to a big problem. So eurodollars are the solution to big problem because, in the background of all of this buildup, there’s massive trouble brewing and the whole global edifice of the dollar system is starting to crack.

And the question is, you know, how are we going to save it? Or should we?

3. Emergent Layers, Chapter 1: Scarcity, Abstraction & Abundance – Alex Danco

One foundational principle of the tech world is that as it builds upwards and outwards into the rest of the world, it’s doing so by building on top of these abundant resources and progressively leveraging them. We can think about the world that we know and understand today — with its constraints, and business models and maturing industries that are generally understood by all — as forming a layer, which we’ll call layer i. In time, as certain elements become abstracted and subsequently abundant, others emerge as newly scarce, or in play for new reasons and in new business models. The critical skill for understanding how this works (which is worth practicing!) is being able to work one’s way up and down between stack layers so as to understand when an abundant and scalable element has blossomed at layer i of a stack, and its scarce, non-scalable counterpart has emerged at a new layer — which we’ll call layer i+1…

…Microsoft

The original scarce resource at layer i = PC hardware. In the early days of PCs, manufacturers could compete along many axes of performance — memory, speed, functionality, and so forth — while being sufficiently differentiated from one another. But it was very hard to standardize common functions and applications that people could run across any computer, making it difficult for these use cases to grow rapidly — until Bill Gates and Paul Allen realized, Hey, there isn’t a software industry yet but there’s gonna be, so we should start it. Microsoft abstracted away the capabilities of a computer into software, so now anyone else could write their own software on top of Microsoft’s software without having to worry about the underlying machinery. PCs became an abundantly available commodity, and Microsoft became dominant and mega-profitable. A new scarce resource emerged at layer i+1: the ability to connect these PCs and get them to talk to one another…

…Facebook

Scarce resource at layer i = connections between humans using the internet. The internet was awash in people and content, but authentic human interaction was still relatively scarce and difficult. As such, all of the attempts at connecting people to content and advertising and services were feature-stuffed, spammy, bloated and bad. The critical step forward that Facebook accomplished was abstracting away the “reciprocal friendship” into a functioning social graph. And we’ve seen what’s happened since: Facebook, and social connectivity in general, has exploded and become a newly abundant resource. Facebook became dominant and mega-profitable…

…One critical aspect of this layering is that at each higher level of abstraction, the lever with which one can create value and extract profit becomes successively longer. You can see this by looking at market cap per employee of these dominant companies:

Intel: 106k employees, 55B revenue, 149B mkt cap

Microsoft: 120k employees, 93B revenue, 429B mkt cap

Google / Alphabet: 60k employees, 75B revenue, 510B mkt cap

Facebook: 13k employees, 6B revenue, 320B mkt cap…
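
The per-employee arithmetic Danco is pointing at, using the figures as quoted above:

```python
# Market cap per employee, using the figures quoted in the excerpt above.
companies = {
    "Intel":     (106_000, 149e9),
    "Microsoft": (120_000, 429e9),
    "Alphabet":  (60_000,  510e9),
    "Facebook":  (13_000,  320e9),
}
for name, (employees, mkt_cap) in companies.items():
    print(f"{name}: ~${mkt_cap / employees / 1e6:.1f}M of market cap per employee")
# Intel ~$1.4M, Microsoft ~$3.6M, Alphabet ~$8.5M, Facebook ~$24.6M: each higher
# layer of abstraction supports far more value per employee.
```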

…A non-obvious but critical point to appreciate here is that for the first n movers mobilizing around a scarce element, the arrival and eventual dominance of the last mover will be seen as a Black Swan event of sorts. By abstracting away the scarce resource instead of organizing around its scarcity, these companies become the first to be fully playing in the sandbox at level i+1, as opposed to the non-scalable scarcity-governed sandbox at level i…

…The last decade saw plenty of startups go after the transportation market, and I’m sure all of them described themselves as “scalable” in their investor decks. Meanwhile, the whole valley was busy passing on Uber because it was initially just a better way to do a black car service, and few people understood the true scalable potential in abstracting away the driver-rider trust required for UberX. The take home lesson here should be taken to heart: when the first n companies go after an issue, no matter what language they use in their pitch, their business models typically don’t truly venture beyond the constraints at layer i that anybody can see and understand. They’re easier to work through, make more sense to “rational investors”, and require fewer non-linear leaps of thinking to understand. As such, when the last mover emerges at level i+1, they’re a Black Swan event: few people foresaw their opportunity, their impact is enormous, and everybody rationalizes what happened after the fact…

…At level i+1 of the stack, the newly valuable resource is that which emerges as scarce out of the transition from scarcity to abstraction to abundance at layer i.

4. The Default Position: LevFin’s Latest Game Just Got Shut Down…Sort Of – JunkBondInvestor

Serta was no small player. We’re talking about the company behind Serta and Beautyrest—the beds you see in every department store in America. But by 2020, they were in serious trouble: drowning in debt, with sales tanking.

That’s when a group of savvy lenders saw their opportunity. Already holding a chunk of Serta’s debt, they approached with what would become lawyers’ new favorite playbook.

The deal? A group holding 51% of their term loans would provide new money, but only if they got to exchange their old loans for new “super-senior” debt that jumps to the front of the line. The other 49%? They didn’t even get a phone call.

Here’s a sobering fact: non-participating lenders saw their position so deeply subordinated that their recovery prospects plummeted. The new super-senior debt was worth nearly full value, while the excluded lenders saw their position crater.

But here’s where they screwed up.

Their loan agreement only allowed “open market purchases.” Serta’s lawyers tried arguing that their private backroom deal counted as “open market” because… well, just because.

The Fifth Circuit wasn’t having any of it. They said what everyone was thinking: A private deal with hand-picked lenders isn’t an “open market” any more than a private club is a public park…

…On the exact same day—I’m not making this up—a New York court looked at pretty much the identical deal from Mitel Networks and said “Sure, go right ahead.”…

…Mitel pulled the exact same move as Serta. They were drowning in debt, so they cut a deal with friendly lenders to jump them to the front of the line. New super-priority debt paper. Everyone else got pushed to the back.

So what made this different from Serta?

Three words. That’s it. Instead of requiring “open market purchases,” Mitel’s agreement just said they could “purchase by way of assignment.” No mention of open markets anywhere.

The New York court basically said: “Look, if you didn’t want the company doing private deals, you should have said so in the contract.” Those excluded lenders who were screaming about their “sacred rights”? The court told them their rights weren’t so sacred after all.

Here’s the brutal truth—the same transaction either flies or dies based entirely on a few words in your documents. If that doesn’t scare the hell out of every lender out there, it should.

5. Tyler Cowen – The #1 Bottleneck to AI progress Is Humans – Dwarkesh Patel and Tyler Cowen

Dwarkesh Patel 00:00:11
Why won’t we have explosive economic growth, 20% plus, because of AI?

Tyler Cowen 00:00:17
It’s very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can’t use AI very well.

Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I’m guessing education is 6 to 7%. The nonprofit sector, I’m not sure the number, but you add it all up, that’s half of the economy right there.

How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you’ll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.

Dwarkesh Patel 00:01:04
The mechanism behind cost disease is that there’s a limited amount of laborers, and if there’s one high productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?

Tyler Cowen 00:01:25
Cost disease is more general than that. Let’s say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.

Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.
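
A toy illustration of that “other constraints become binding” point; the production function and factor names are invented, chosen only to show how little output moves when a single complementary input explodes.

```python
# Toy illustration (invented production function) of Cowen's argument: if output needs
# several complementary inputs, multiplying one of them (intelligence) by 1,000x moves
# output only modestly, because the other factors become the binding constraints.
def output(intelligence, energy, land, trust, regulation_capacity):
    # Cobb-Douglas with equal weights: every factor is complementary.
    return (intelligence * energy * land * trust * regulation_capacity) ** (1 / 5)

baseline = output(1, 1, 1, 1, 1)
ai_boom = output(1000, 1, 1, 1, 1)
print(f"output multiplier from 1000x intelligence alone: {ai_boom / baseline:.1f}x")  # ~4.0x
```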

Dwarkesh Patel 00:01:57
If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you’d have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?

Tyler Cowen 00:02:11
He and I would agree, I hope. I think I would tell him, “Hey, it’s going to take a long time.” And he’d say, “Hmm, I don’t see it happening yet. I think it’s going to take a long time.” And we’d shake hands and walk off into the sunset. And then I’d eat some of his rice or wheat or whatever, and that would be awesome.

Dwarkesh Patel 00:02:29
But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don’t just eat it away, you could agree with that, right?

Tyler Cowen 00:02:38
I don’t know what the word “could” means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you’d call prediction markets are not forecasting super rapid growth anytime soon…

…Dwarkesh Patel 00:03:13
In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order of magnitude increase in the population, you plug that number in in his model, you get explosive economic growth.

Tyler Cowen 00:03:26
I don’t agree.

Dwarkesh Patel 00:03:27
Why not buy the models?

Tyler Cowen 00:03:28
His model is far too much a one-factor model, right? Population. I don’t think it’s very predictive. We’ve had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.

So it’s really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it’s unclear what’s happened, but it’s also fragile. There’s the perspective of the economist, but also that of the anthropologist, the sociologist.

They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there’s any simple lever you can push on, intelligence or not, that’s going to give you breakaway economic growth.

Dwarkesh Patel 00:04:11
What you just said, where you’re bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you’re going to be bottlenecked by the restaurants…

…Here’s a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.

We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.

Dwarkesh Patel 00:04:53
So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you have 164,000 geniuses. That’s why you have to do semiconductors in Taiwan, because that’s where they’re putting their nominal amount of geniuses. We’re putting ours in finance and tech.

If you look at that framework, we have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn’t going to grow any faster?

Tyler Cowen 00:05:32
I didn’t agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they’re amazingly low. They’re pretty insignificant.

People who are very successful, they’re very smart, but they’re people who have say eight or nine areas where they’re like a nine on a scale of 1 to 10. Like they have one area where they’re just like an 11 and a half on a scale of 1 to 10. And then on everything else, they’re an eight to a nine and have a lot of determination.

And that’s what leads to incredible success. And IQ is one of those things, but it’s not actually that important. It’s the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 January 2025:

1. OpenAI o3 Breakthrough High Score on ARC-AGI-Pub – François Chollet

OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.

This is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All intuition about AI capabilities will need to get updated for o3…

…The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (costs <$10k) and therefore qualifies as 1st place on the public leaderboard!

The low-efficiency score of 87.5% is quite expensive, but still shows that performance on novel tasks does improve with increased compute (at least up to this level.)

Despite the significant cost per task, these numbers aren’t just the result of applying brute force compute to the benchmark. OpenAI’s new o3 model represents a significant leap forward in AI’s ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.

Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: you could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy. Meanwhile o3 requires $17-20 per task in the low-compute mode. But cost-performance will likely improve quite dramatically over the next few months and years, so you should plan for these capabilities to become competitive with human work within a fairly short timeline.

o3’s improvement over the GPT series proves that architecture is everything. You couldn’t throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 – take the same architecture, train a bigger version on more data – is not enough. Further progress is about new ideas…

…Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible…

…To adapt to novelty, you need two things. First, you need knowledge – a set of reusable functions or programs to draw upon. LLMs have more than enough of that. Second, you need the ability to recombine these functions into a brand new program when facing a new task – a program that models the task at hand. Program synthesis. LLMs have long lacked this feature. The o series of models fixes that.

For now, we can only speculate about the exact specifics of how o3 works. But o3’s core mechanism appears to be natural language program search and execution within token space – at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea – this line of work has been a long time coming.

So while single-generation LLMs struggle with novelty, o3 overcomes this by generating and executing its own programs, where the program itself (the CoT) becomes the artifact of knowledge recombination. Although this is not the only viable approach to test-time knowledge recombination (you could also do test-time training, or search in latent space), it represents the current state-of-the-art as per these new ARC-AGI numbers.

Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of “programs” (in this case, natural language programs – the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason why solving a single ARC-AGI task can end up taking up tens of millions of tokens and cost thousands of dollars is because this search process has to explore an enormous number of paths through program space – including backtracking.
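
Chollet is explicit that this is speculation about o3’s internals, but the shape of “deep learning-guided program search” can be made concrete with a toy sketch. The `propose_next_steps` and `evaluator_score` functions below are hypothetical stand-ins for a base LLM proposing candidate reasoning steps and an evaluator model scoring partial chains of thought; nothing here reflects OpenAI’s actual implementation.

```python
import heapq
import random

# Hypothetical stand-ins: in a real system, propose_next_steps would be a base
# LLM suggesting candidate next reasoning steps, and evaluator_score would be a
# learned evaluator scoring a partial chain of thought. Here they are toys.
def propose_next_steps(chain, k=3):
    return [chain + [f"step-{len(chain)}-{i}"] for i in range(k)]

def evaluator_score(chain):
    return random.random() + 0.1 * len(chain)  # placeholder heuristic

def cot_beam_search(max_depth=4, beam_width=2):
    """Toy beam search over chains of thought (lists of text 'steps')."""
    beam = [[]]  # start from an empty chain of thought
    for _ in range(max_depth):
        candidates = []
        for chain in beam:
            for nxt in propose_next_steps(chain):
                candidates.append((evaluator_score(nxt), nxt))
        # Keep only the highest-scoring partial chains; exploring (and pruning)
        # this space is where the large token counts and costs come from.
        best = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        beam = [chain for _, chain in best]
    return max(beam, key=evaluator_score)

print(cot_beam_search())
```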

2. Energy Cheat Sheet – Brian Potter

Most energy we consume gets wasted. Of the 93.6 quads (~27,400 TWh) the US consumed in 2023, only around 1/3rd of that went towards producing useful work. The rest was lost due to various inefficiencies, such as heat engine and transmission losses…

…Another obvious fact is that despite the burgeoning construction of renewable energy infrastructure, the majority of our energy still comes from burning hydrocarbons. Petroleum, coal, and natural gas combined are responsible for roughly 82% of total energy consumption in the US.

Related to this fact is that electricity generation is a relatively small fraction of our energy system: roughly ⅓ of energy inputs go towards generating electricity. For residential and commercial consumption, only around half of energy use comes from electricity. For industrial and transportation energy (the two largest sources of consumption), electricity is around 13% and less than 0.1%.

What this chart makes clear, but also sort of abstracts away, is the enormous amount of infrastructure we’ve built for moving around hydrocarbons. The US has close to 1 million oil and natural gas wells, 3 million miles of natural gas pipeline, 145,000 gas stations, and capacity to refine 18.4 million barrels of oil a day.

This is why environmental advocates often focus on electrifying everything: decarbonizing energy infrastructure requires much more than just building low-carbon sources of energy like solar panels and wind turbines — it requires fundamentally reworking how our society moves energy around. It’s also why eliminating roadblocks and bottlenecks to energy infrastructure construction is so important.

We can also dive deeper and look at a sector-by-sector breakdown of energy use. The residential sector uses around 11.5 quads (3370 TWh) of energy, a little over 12% of total US energy consumption…

…One major takeaway here is that most residential energy consumption goes into heating things up: Space heating (5.74 quads), water heating (1.69 quads), and clothes dryers (0.26 quads) together account for about two-thirds of residential energy consumption. You sometimes see air conditioners decried as wasteful by energy-minded environmentalists, but air conditioning is a much smaller share of energy consumption than heating…

…Most transportation energy in the US is consumed in the form of gasoline and diesel fuel, with a relatively small amount of jet fuel. If we look at it by transportation mode, most energy (~78%) is consumed by cars, trucks, and motorcycles…

…The huge amount of energy used by transportation also means that households are using a lot of energy that isn’t captured by the residential energy consumption statistics above. In fact, in a year, the average US household consumes more energy from burning gasoline (~24,000 kilowatt-hours) than what’s used by the entire rest of the house (~22,500 kilowatt-hours).
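
As a rough sanity check on that comparison: gasoline’s energy content is about 33.7 kWh per gallon (the convention used for MPGe ratings), so the ~24,000 kWh figure corresponds to roughly 700 gallons of gasoline per household per year. A minimal sketch, with the article’s figure as input:

```python
# Back-of-the-envelope check: the ~24,000 kWh figure is from the article;
# 33.7 kWh per gallon is the standard energy content of gasoline (MPGe convention).
KWH_PER_GALLON = 33.7
household_gasoline_kwh = 24_000

gallons_per_year = household_gasoline_kwh / KWH_PER_GALLON
print(f"~{gallons_per_year:.0f} gallons of gasoline per household per year")  # ~710
```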

The commercial sector is not that different from the residential sector, with heating air and water using the largest fraction, and cooling and ventilation (i.e., moving air around) also using large fractions. As with residential, its energy consumption is roughly split between electricity and natural gas…

…With industrial energy use, we see a lot of the same patterns that we see in other sectors. One is that utility electricity is a relatively small amount of industrial energy consumption (less than 20%). Most industrial energy comes from burning fuel (mostly natural gas) directly. Once again, we see that heating things up accounts for a huge fraction of energy consumption: roughly half of all manufacturing energy goes into process heating. If we add process heat to residential and commercial air and water heating, we find that roughly 20% of total US energy consumption goes towards heating things up…

…It’s clear that most energy used in the US is ultimately wasted, with only a small fraction being used to perform useful work (moving cars, heating homes, operating electronics, and so on). Moving energy around and changing its form can’t be done perfectly efficiently (thanks in part to the 2nd law of thermodynamics), and all those conversions we require to get energy where it needs to be and in the form we need it whittle away the energy available to get things done…

…The biggest source of losses is probably heat engine inefficiencies. In our hydrocarbon-based energy economy, we often need to transform energy by burning fuel and converting the heat into useful work. There are limits to how efficiently we can transform heat into mechanical work (for more about how heat engines work, see my essay about gas turbines).

The thermal efficiency of an engine is the fraction of heat energy it can transform into useful work. A coal power plant typically operates at around 30 to 40% thermal efficiency. A combined cycle gas turbine will hit closer to 60% thermal efficiency. A gas-powered car, on the other hand, operates at around 25% thermal efficiency. The large fraction of energy lost by heat engines is why some thermal electricity generation plants list their capacity in MWe, the power output in megawatts of electricity…

…The low thermal efficiency of ICE cars and heat engines in general and the high efficiency of electrical equipment (especially things like heat pumps) are the biggest counterweight to the high energy capacity of hydrocarbons. The gas tank on an ICE car technically stores much more energy than a Tesla battery pack but only a small fraction of that gasoline energy can be converted into useful motion. Switching to EVs, even if that electricity is still provided by burning fossil fuels, could save large amounts of energy (and thus carbon emissions), as it could mean switching from a 25% efficient gasoline engine to a 60% efficient combined cycle gas turbine. And of course, with electric vehicles, there’s the possibility of powering them by non-carbon emitting sources of electricity like solar or wind. 
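
To make that counterweight concrete, here is a minimal sketch using only the efficiencies quoted above (roughly 25% for a gasoline engine and 60% for a combined cycle gas turbine); the transmission and charging loss figures are illustrative assumptions, not numbers from the article:

```python
# Rough well-to-wheels sketch using the efficiencies quoted above: ~25% for a
# gasoline engine, ~60% for a combined cycle gas turbine. The 5% transmission
# loss and 10% charging/drivetrain loss are illustrative assumptions only.
fuel_energy = 100.0  # arbitrary units of chemical energy in the fuel

ice_useful_work = fuel_energy * 0.25                 # burn the fuel in the car
ev_useful_work = fuel_energy * 0.60 * 0.95 * 0.90    # burn fuel at a CCGT, then charge an EV

print(f"ICE useful work: {ice_useful_work:.0f} units")  # ~25
print(f"EV useful work:  {ev_useful_work:.0f} units")   # ~51
```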

3. Stocks Are More Expensive Than They Used to Be – Michael Batnick

In January 2018, they wrote an article, CAPE Fear: Why CAPE Naysayers Are Wrong. The article featured yours truly…

…It’s hard to believe seven years have passed since this article. It’s harder to believe that the S&P 500 is up almost 100% since their article came out, and delivered the highest 7-year performance for any CAPE starting at 33x. I did not see this coming. At all.

My whole thing was, yes, valuations are high. But companies are better today and deserve the premium multiple. I was not saying that a high CAPE is bullish. In fact, I ended most of my posts on this topic with the message of, “Expect lower returns.” I’ve never been happier to be wrong.

I want to return to some of the arguments I made, and what the CAPE zealots missed.

To use a long-term average that goes back to the late 1800s is foolish for three reasons. First, we didn’t have CAPE data back in 1929. It was first “discovered” in the late 90s. The discovery of data in financial markets changes the very essence of it. Markets are not governed by the laws of physics. They’re alive. They adapt and evolve and adjust, like a microorganism.

Second, the CAPE ratio has been rising over time since the 1980s. We’ve only visited the long-term average once in the last 25 years, and that was at the bottom of the GFC. If that’s what it takes to return to the long-term average, maybe you should reconsider what an appropriate comp level really is.

Third, and most important, the companies are far better today than they were in the past.
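
For readers less familiar with the metric being debated, the CAPE (cyclically adjusted, or Shiller, P/E) divides the current price by the average of the past ten years of inflation-adjusted earnings. A minimal sketch, with made-up earnings figures purely for illustration:

```python
# Minimal sketch of the CAPE (Shiller P/E): current price divided by the average
# of the past ten years of inflation-adjusted earnings. The earnings numbers
# below are made up purely for illustration.
def cape(price, trailing_real_eps):
    """trailing_real_eps: the last 10 years of inflation-adjusted earnings per share."""
    assert len(trailing_real_eps) == 10
    return price / (sum(trailing_real_eps) / 10)

made_up_eps = [120, 130, 135, 140, 150, 160, 170, 175, 180, 190]
print(f"CAPE: {cape(5881, made_up_eps):.1f}x")  # ~37.9x with these made-up inputs
```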

4. AI’s Uneven Arrival – Ben Thompson

What o3 and inference-time scaling point to is something different: AIs that can actually be given tasks and trusted to complete them. This, by extension, looks a lot more like an independent worker than an assistant — ammunition, rather than a rifle sight. That may seem an odd analogy, but it comes from a talk Keith Rabois gave at Stanford:

So I like this idea of barrels and ammunition. Most companies, once they get into hiring mode…just hire a lot of people, you expect that when you add more people your horsepower or your velocity of shipping things is going to increase. Turns out it doesn’t work that way. When you hire more engineers you don’t get that much more done. You actually sometimes get less done. You hire more designers, you definitely don’t get more done, you get less done in a day.

The reason why is because most great people actually are ammunition. But what you need in your company are barrels. And you can only shoot through the number of unique barrels that you have. That’s how the velocity of your company improves is adding barrels. Then you stock them with ammunition, then you can do a lot. You go from one barrel company, which is mostly how you start, to a two barrel company, suddenly you get twice as many things done in a day, per week, per quarter. If you go to three barrels, great. If you go to four barrels, awesome. Barrels are very difficult to find. But when you have them, give them lots of equity. Promote them, take them to dinner every week, because they are virtually irreplaceable. They are also very culturally specific. So a barrel at one company may not be a barrel at another company because one of the ways, the definition of a barrel is, they can take an idea from conception and take it all the way to shipping and bring people with them. And that’s a very cultural skill set.

The promise of AI generally, and inference-time scaling models in particular, is that they can be ammunition; in this context, the costs — even marginal ones — will in the long run be immaterial compared to the costs of people, particularly once you factor in non-salary costs like coordination and motivation…

…What will become clear once AI ammunition becomes available is just how unsuited most companies are for high precision agents, just as P&G was unsuited for highly-targeted advertising. No matter how well-documented a company’s processes might be, it will become clear that there are massive gaps that were filled through experience and tacit knowledge by the human ammunition.

SaaS companies, meanwhile, are the ad agencies. The ad agencies had value by providing a means for advertisers to scale to all sorts of media across geographies; SaaS companies have value by giving human ammunition software to do their job. Ad agencies, meanwhile, made money by charging a commission on the advertising they bought; SaaS companies make money by charging a per-seat licensing fee. Look again at that S-1 excerpt I opened with:

Our business model focuses on maximizing the lifetime value of a customer relationship. We make significant investments in acquiring new customers and believe that we will be able to achieve a positive return on these investments by retaining customers and expanding the size of our deployments within our customer base over time…

The positive return on investment comes from retaining and increasing seat licenses; those seats, however, are proxies for actually getting work done, just as advertising was just a proxy for actually selling something. Part of what made direct response digital advertising fundamentally different is that it was tied to actually making a sale, as opposed to lifting brand awareness, which is a proxy for the ultimate goal of increasing revenue. To that end, AI — particularly AI’s like o3 that scale with compute — will be priced according to the value of the task they complete; the amount that companies will pay for inference time compute will be a function of how much the task is worth. This is analogous to digital ads that are priced by conversion, not CPM.

The companies that actually leveraged that capability, however, were not, at least for a good long while, the companies that dominated the old advertising paradigm. Facebook became a juggernaut by creating its own customer base, not by being the advertising platform of choice for companies like P&G; meanwhile, TV and the economy built on it stayed relevant far longer than anyone expected. And, by the time TV truly collapsed, both the old guard and digital advertising had evolved to the point that they could work together.

If something similar plays out with AI agents, then the most important AI customers will primarily be new companies, and probably a lot of them will be long tail type entities that take the barrel and ammunition analogy to its logical extreme. Traditional companies, meanwhile, will struggle to incorporate AI (outside of wholesale job replacement a la the mainframe); the true AI takeover of enterprises that retain real world differentiation will likely take years.

None of this is to diminish what is coming with AI; rather, as the saying goes, the future may arrive but be unevenly distributed, and, contrary to what you might think, the larger and more successful a company is the less they may benefit in the short term. Everything that makes a company work today is about harnessing people — and the entire SaaS ecosystem is predicated on monetizing this reality; the entities that will truly leverage AI, however, will not be the ones that replace them, but start without them.

5. Don’t let interest-rate predictions dictate your investment decisions – Chin Hui Leong

A little over a year ago, the US Federal Reserve signalled its intention to cut interest rates three times in 2024. This commentary sparked a flurry of predictions, with market watchers vying to outguess the Fed on the number, timing, and size of these cuts. Goldman Sachs, for instance, boldly predicted five cuts.

We ended up with just three interest-rate cuts in 2024 – a significant miss, to say the least…

…According to Visual Capitalist, four firms – Morgan Stanley, Bank of America, Citigroup and Nomura – pencilled in a one-percentage-point cut for 2024. Credit should be given where it’s due: their forecasts were right.

However, did getting these predictions right matter in the end? As it turns out, not so much.

Morgan Stanley, Bank of America and Citi set 2024’s S&P 500 price targets at 4,500, 5,000 and 5,100 respectively… 

…The S&P 500, of course, closed the year at 5,881…

…Forecasts and expectations may look similar, but they are different. My friend Eugene Ng puts it best: Forecasts rely on knowing when something will occur. Expectations, on the other hand, are the acknowledgement of what’s likely to occur without professing insight into when it will happen.

For example, it’s reasonable to expect the stock market to fall by 10 per cent or more sometime in the future. After all, history has shown that corrections are a common occurrence…

…In my eyes, calmness can be achieved by having the right expectations, and preparing well for any market turbulence even when we don’t know when the market will fall.

If you are prepared, you will have fewer worries. If you worry less, you will stand a better chance of doing better than average. And that’s more than any investor can hope for, whether the forecasts are right or wrong.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Deepmind), Meta Platforms (parent of Facebook), and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 05 January 2025:

1. Mike Alkin – Talking Uranium (Transcript here) – Bill Brewster and Mike Alkin

Alkin: So coming to this market, I did that. I spent a good almost couple of years doing supply/demand on my own. There’s 430 reactors around the world. And understanding the country where they operate, the attitude towards nuclear, understanding the math involved. Often as investors, you look for heuristics. How many reactors are there? How many pounds per reactor would there be? You’re looking for rules of thumb. As you start peeling the onion back, I realize that rules of thumb don’t apply here because the amount of uranium needed for the reactor fleet around the world is not always the same. It depends upon enrichment capacity. We won’t go down that rabbit hole, but there’s a whole other segment you need to learn.

As I was doing that, I would go to these conferences and I would talk to nuclear fuel buyers, people who buy this stuff. It was hard for me at first to really understand what I was dealing with because as somebody at that time having well over 20 years of experience as a hedge fund investor, I talked to people in all industries that were on all sides of the equation. But the people buying it typically were curious as to what we were thinking when we were questioning them. If we were talking to a buyer at a company that was buying a product, they would say “What are you as an investor hearing? What are you hearing from the other side? What are my competitors saying? What are you hearing about inventories?” They were inquisitive. That was not this cohort. As I started speaking to nuclear fuel buyers, I was met with an enormous wall put in front of me telling me, “I’m an outsider, I’m not a nuclear engineer, I don’t know what I’m doing, I should basically stay away and they’ve got it.”

I thought it was that attitude that just said to me, “Something’s not right here because the numbers I’m coming up with, whether I’m looking at inventories or the amount of the cost of the supply, or the actual demand” – for context, at the time the price of uranium was $17, $18, $19 a pound. It would say what it was trading for in the market. As I did the analysis, I realized that the average cost was somewhere in the mid-$50s. I’m not the sharpest tool in the shed but I know that if something costs you mid-$50s to make, you can’t sell it for $17 for very long. So it was then that I had to peel back the onion saying, “Why are they producing it at that price?” Then you start to understand that the uranium market is one driven mostly by long term contracts. Well north of 80% on average will trade in a long-term window with contracts that cover 5, 7, 10, 12, 15 years depending on the contract. But that’s where most of the pounds trade. After the Fukushima event, a lot of these uranium producers, when the spot market had declined precipitously, were still selling into much higher prices. My understanding of that when I was talking to fuel buyers at these nuclear conferences, they were telling me that the price of uranium was $17 and $18, it was going to $10, it was going to $5. There was all this uranium out there.

That’s not what my math was showing me. What my math was showing me was that the model was that the long term contracts that had been signed before Fukushima melted down in 2011 were going to start to expire and rather rapidly. Uranium producers could not sell $17, $18, $20 uranium when it cost them 2.5 times that. At some point, production would have to start to shut down.

So you ask, “Do you think you’re crazy?” Yes, because as I’m talking to people who are obviously very sharp – they’re nuclear engineers – but it’s understanding, as you realize, as an investor, you have to understand incentives and you have to understand market structure. Charlie Munger would always say, “Show me the incentive, I’ll show you the outcome.” It was as I was starting to go and talk to these folks and realizing a couple of things. Number one is, they had no interest in what I was learning on my journey. Even though I’m not a nuclear engineer, I’m still somebody who’s a market participant. I’m still somebody that while I don’t speak their language, sitting at a dinner table or a lunch table or at a bar having a beer with them, I certainly could hold my own in supply/demand conversation. And as I would talk about what I was learning and uncovering, I was shot down at every step. I thought, “Wow, that’s interesting because I’m seeing a recency bias. What is now will always be.” So they were kind of latched onto that.

Then as I started peeling that, I’m thinking, “Why is this?” I’ve been doing this a very long time. Over the years, I’ve been wrong many times. I’ve been right more often than not. But you’re wrong and you try and understand where you’ve been wrong. I was thinking, “What is it? Why are they so uninterested in hearing what an outsider’s view is?” As I started to explore that more, you start to understand the makeup and the cost structure of a nuclear reactor, which I have known, but it really started to come into clear vision for me was the fuel. Uranium is just one part of the fuel cycle that goes in. You have uranium, they convert uranium from a powder into a gas. It then gets enriched, it then gets fabricated into pellets. That takes 18 to 24 months to do this stuff. There’s many different stages of the fuel cycle. As I was starting to think about what are the costs of that, all those stages are probably around 20% to 25%. What’s the cost of the uranium? That depends on the price. But it could be mid-single digits, high-single digits, somewhere around that. As you start talking to them about that, you realize it’s not a meaningful cost.

For comparative purposes, if I’m running a natural gas power plant or a coal power plant, my feedstock, the natural gas and the coal are 80% to 90% of the cost of operating it. Here, the uranium is single digits cost of operating it. The vision that started to come to me was uninterested market participants. They’re in the market very infrequently. Why are they uninterested? Because the cost is de minimis. Not to say it’s meaningless, but it’s de minimis. Then as I started to explore and ask questions, “Why are you not as concerned about this?” I was obviously met with a wall.

But what started to come to me was – and I asked flat out at a particular dinner at a World Nuclear Conference – I asked one, actually there were four fuel buyers at a dinner, I said, “If you all had a really enterprising fuel buyer that did the supply/demand work and said, “I think consensus is wrong. Here we are, $17, $18, $20 a pound. We should be buying uranium because the forecasts going out of the future are for deficits to be forming.” Let me ask you a question. Do you all, if the price were to go parabolic and you had all these great cost savings for your plant, do you participate that in any way, shape or form? Are you rewarded financially? Are you rewarded with a promotion?” The answer was I got laughed at. “What are you talking about? We’re paid to secure fuel.” These were buyers. As you come to a market as an investor, you think buyers are traders – they’re commercial creatures. These aren’t. These are really smart nuclear engineers that happen to buy a product that happens to not be a major cost component. There’s infrequent price discovery on their part and so it’s a lesson in understanding incentives and market structure…

Alkin: One of the things you see now is you have expert networks who provide hedge funds and mutual funds experts to speak to in any industry. If you’re a hedge fund wanting to get up to speed right now on the nuclear power industry, you’re going to say, “Get me three nuclear fuel buyers. I’d like to speak to them about uranium.” They’re going to get on the phone and they’re going to speak to them. For years – though I’m sure they’ve been doing this – they can get on the phone and speak to three fuel buyers and they say, “Yeah, there’s plenty of uranium out there.” Those are the same folks who, when the price was $17, were telling me that, versus here you’re seeing floors and ceilings at $125 and $135. They are the gift that keeps on giving. Yet the way the structure of the research process is, they’re going to expert networks. They find these people, and if you don’t understand how the sausage is made, you’re going to be misled. They’re not purposely misleading you. It’s just what their own beliefs are. For me, that’s a beautiful thing. I’ve been doing this a long time now, almost 30 years as a professional investor, and I’ve never seen a cohort of people who are so uninterested in hearing the other side of the story. So far I’ve seen prices move up 4x against them and they still have the same attitude.

Brewster: To your point, it doesn’t sound like they’re very incentivized to care.

Alkin: There’s very little to no incentive to care, other than maybe you would think pride? I don’t know. But it doesn’t matter. It’s just not a thing. We actually chuckle because when we go to these conferences, you talk to them in a hallway or in a bar, it’s as though you’re an adversary. It’s very bizarre. They don’t have an incentive. It doesn’t matter what they pay. So that’s the bizarre thing.

2. Chip Cities Rise in Japan’s Fields of Dreams – Gearoid Reidy

In Chitose, a city of 100,000 in the northernmost main island of Hokkaido, billboards seek recruits for the Self-Defense Forces, which saw a 50% shortfall last year. When I arrived on a fully booked plane from Tokyo packed with salarymen in cheap suits and expensive watches, it was easy to see where the competition was coming from: a half-dozen towering cranes jutting into the sky, a jarring contrast against the surrounding countryside…

…Those cranes are building the first fab for Rapidus Corp., a public-private venture that aims to skip Japan to the head of the chip production queue. Founded just two years ago, it hopes to produce cutting-edge, 2-nanometer chips by 2027, in cooperation with IBM Corp. It’s fraught with risks, and the government’s record in promoting industry is spotty. But this is just the latest and most ambitious example of a series of bets on chips, with Prime Minister Shigeru Ishiba recently pledging an extra ¥10 trillion ($66 billion) on top of ¥3.9 trillion invested since 2021. Near the other end of the Japanese archipelago, 1,500 kilometers (930 miles) to the southwest, is another. In Kumamoto, on the island of Kyushu, mass production is soon set to begin at a $7 billion semiconductor plant.

Here, Taiwan Semiconductor Manufacturing Co., drawn by government subsidies and the region’s supply chain, opened its first Japanese plant in February. A second is in the works, with authorities lobbying for a third. It’s triggered an influx of Taiwanese workers into a city where until recently almost everyone was Japanese…

…As many as 6,000 laborers are employed to build Rapidus. But talk is of the arrival of permanent workers once test production begins. That’ll bring at least 1,000 high-earning jobs, along with their supply chains. On my visit, ASML Holding NV, the Dutch maker of chipmaking equipment, had just opened offices, with 50 staff expected. Every second building seems to be being torn down and rebuilt…

…The scale of the ambition creates the risk of spectacular failure, one many in Japan’s media fully expect. Skepticism is warranted, considering previous government-led efforts, from DRAM maker Elpida Memory Inc., sold to Micron Technology Inc. after its 2012 bankruptcy, to troubled Japan Display Inc.

The economy was already doing well even before talk of Rapidus, Mayor Ryuichi Yokota told me, describing the fab as a “Big Bang” that has the city scrambling. Yet at night, when the construction crews leave, the silence is deafening. I couldn’t feel the billions I expected to find flowing, just a cold wind that would soon begin to turn to snow…

…The risk from disaster is unpredictable; but what if these experiments simply don’t work out? Japan has spent billions on subsidies to bring a foreign company to Kumamoto. And when it comes to Rapidus, the risks are immense. Even if the company can find the talent it needs (the country is expected to have a shortfall of 40,000 engineers), the technology succeeds and yields are acceptable, it still has to outcompete rivals — including TSMC — to attract customers with an unproven product.

Chitose mayor Yokota shrugged off these concerns. “I’m convinced it will succeed,” he said, resolute that researchers currently studying with IBM in the US will return, like Meiji-era scholars, with secrets Japan can use to rebuild.

3. Before Berkshire: Warren Buffett’s Tab Card Triumph – Kingswell and Alice Schroeder

He decided that he would come in and invest in this company — Mid-Continent Tab Card Co. — but, interestingly, he did not take Wayne and John’s word for it. The numbers they gave him were really enticing, but again he went through and he acted like a horse handicapper.

Here’s another point of departure from what almost anybody else would do. Everybody that I know — or knew as an analyst — would have created a model for this company and would have projected out its earnings and would have looked at its return on investment in the future. Warren didn’t do that. In fact, in going through hundreds of his files, I’ve never seen anything that resembled a model.

What he did is he did what you would do with a horse. He figured out the one or two factors that could make the horse succeed or fail — and, in this case, it was sales growth and making the cost advantage continue to work. Then, he took all of the historical data, quarter by quarter for every single plant, he got the similar information as best he could from every competitor they had, and he filled pages with little hen scratches of all this information and he studied that information.

And, then, he made a yes/no decision. He looked at it: They were getting 36% margins [and] they were growing over 70% a year on a million of sales. Those were the historic numbers. He looked at them in great detail — just like a horse handicapper studying the tip sheet — and then he said to himself, “I want a 15% return on $2 million of sales.” And then he said, “Yeah, I can get that.” And he came in as an investor.

So what he did is he incorporated his whole earnings model and compounding discounted cash flow into that one sentence. “I want 15% on $2 million of sales.”

Why 15%? Because Warren is not greedy. He always wants a mere 15% day one return on an investment and then it compounds from there. That’s all he has ever wanted. He’s happy with that. It’s a very simple thing. There’s nothing fancy about it…

…The $2 million of sales was pretty simple, too. It had $1 million [and] it was growing 70%. There was a big margin of safety built into these numbers. It had a 36% profit margin and he said, “I’ll take half that.”

He ended up putting $60,000 of his personal non-partnership money into this company, which was about 20% of his net worth at the time. He got 16% of the company’s stock, plus some subordinated notes.
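
A back-of-the-envelope version of that handicapping, using only the figures quoted in the excerpt ($1 million of sales growing roughly 70% a year, a 36% historic margin, and Buffett’s assumed 15% margin on $2 million of sales); the time-to-double line is an added illustration:

```python
import math

# Only the figures quoted in the excerpt are used as inputs; the time-to-double
# line is an added illustration.
sales_now = 1_000_000        # ~$1M of sales
growth = 0.70                # growing ~70% per year
historic_margin = 0.36       # 36% historic profit margin
assumed_margin = 0.15        # "I want a 15% return on $2 million of sales"

years_to_double = math.log(2_000_000 / sales_now) / math.log(1 + growth)  # ~1.3 years
target_earnings = 2_000_000 * assumed_margin                              # $300,000

print(f"years for sales to reach $2M: {years_to_double:.1f}")
print(f"earnings at a 15% margin on $2M of sales: ${target_earnings:,.0f}")
print(f"cushion vs. the historic margin: {historic_margin / assumed_margin:.1f}x")
```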

4. China’s Bond Yields Scream the ‘D’ Word – Lingling Wei

Over the past week, just as Chinese leaders tried to get the public—and markets—excited with another round of stimulus talk, China’s 10-year sovereign yield kept falling to fresh lows. Now, the yield is around 1.7%, a full percentage-point plunge from a little over a year ago. The return on the 30-year government bond has also dropped below 2%.

The sovereign-debt yield still has a ways to go before falling to zero, but the speed of the drop is astonishing. The lower the yield falls, the deeper the market is signaling economic stress.

…In reality, Beijing is sticking to the formula of boosting demand through investment. The official thinking is, investment creates jobs, which would in turn create demand. That means more roads will be built, factories will be expanded and debts will continue to rise. Already, residents in some cities are complaining about the inconvenience from old roads being dredged up as authorities search for ways to invest.

One big irony is the source of bond buying—the force pushing down the yields.

State-owned banks, insurance firms and funds, the very institutions Beijing is counting on to support the economy, are the major purchasers of government bonds. These institutions would rather park their money in the safety of bonds than finance business projects or otherwise put it to work.

“What’s good to invest in these days when demand is so low?” a Chinese banker told me, referring to weak business and consumer spending.

5. An Interview with Gregory Allen About the State of China Chip Export Controls – Ben Thompson and Gregory Allen

Here’s the question though. China doesn’t generally seem to be operating, and for good reason under the circumstances, under a real stringent return on invested capital calculation. I mean the 7nm chips that are being produced, we know with I think a pretty high degree of certainty, the yields are terrible.

GA: The yields are dreadful.

But they’re doing it anyway just because it needs to be done and this sort of ties into another thing. You referenced Dylan Patel and SemiAnalysis, who have been pretty strident critics of the enforcement of chip controls. But I think a good point he has made is that China, unlike the US, is not necessarily constrained in power or in the ability to build a ton of data centers, and so there’s a bit where they could just sort of — it’s not great, but they could just be way less efficient and accomplish similar things. Is there a bit where these export controls are fashioned with Western/US constraints and concerns about how you go about building this stuff that might make them less impactful in the long run?

GA: Yeah, the export controls have not achieved their wildest dreams. There was a faction in the Biden administration that says, “Bwahaha, we found the secret weapon, and China’s AI dreams are gone” — that theory is just dead. Where we are now is at more of a cost imposition strategy. “We are going to make this as expensive and complicated as possible for you to do it, we’re going to try and slow you down, we’re going to try and increase your costs, and that is the race that we’re going to run”.

I mean, if you think about it, we’re switching from a mode in which the US AI ecosystem and the Chinese AI ecosystem were largely fused such that if we’re running a race, you can imagine there’s US people giving China Gatorade and those new Nike shoes that make you run faster. Now we’re moving to a moment where we’re trying to trip them in the race, that’s the change in mindset that we’ve experienced, and it’s not working in its most extreme form, but there is real cost imposition. It takes the form of the fact that SMIC has to operate at these dreadful yields and the economics are terrible, and the fact that when they’re building all of these data centers, they’re having to use lousy chips, they’re having to buy more of them, and they’re having to deal with the higher energy costs of all of that.

It’s true that China does have just this extraordinary willingness to spend, but the point is we’re in this race, we’re in this competition, and it gives us an edge, not an infinite edge, but a meaningful edge.

This is a field, maybe you don’t have an answer to this, but there are some that argue that actually the better approach to some of these chips is a much more expensive, a much more high speed memory approach that has much lower latency using SRAM instead of High Bandwidth Memory. Is there a possibility that we actually pushed China down a different route towards developing these chips that maybe ends up being better because we thought HBM was the right way?

GA: I think that’s probably not what’s going to happen. It’s definitely worth saying that that could happen, a version of that kind of happened with YMTC and their NAND memory. There were multiple different approaches they could have taken technologically. All the Western and US allied Asian firms picked one way because it was obviously the best economics, and they held all the intellectual property, they held all the patents and so YMTC basically said, “Okay, we’re going to go down this other road and because we’re so heavily subsidized, it doesn’t really matter that it’s going to be more expensive”, and they did ultimately figure out how to get it to work.

I think what you’re describing, the SRAM in massive quantities thing verges on the neuromorphic architecture, and it’s not that that’s impossible, and it’s not that that’s never going to happen, but it’s clearly not the right step for China right now. I think they have a path to domestic HBM production and that’s so much easier for them to chase than a SRAM revolution. I think traditionally they would just wait for somebody else to try and figure out and demonstrate that it’s possible and then they would throw infinite resources at it…

...For all of these chip controls, all this stuff that you’ve covered and written about, does any of it matter, if you add it all up, in comparison to that point that they don’t have EUV?

GA: EUV is the highest return on investment export control that we have had and are likely to have. It’s definitely the case that some of the other stuff hurts. If you talk about SMIC, for example, increasing their yields on their 7nm line and expanding the capacity of their 7nm line, they actually are bottlenecked by US equipment, a lot of US metrology equipment, etc. But if you want to talk about why they can’t—

But they do have the equipment, they just need to figure out how to duplicate it. The challenge with EUV is they don’t even have one, so duplicating it is that much harder.

GA: Yes exactly, it’s a lot harder to reverse engineer something that you don’t have a copy of, it really helps to have a copy of it. So I would say the EUV thing really matters, but there’s areas where China is facing headwinds that aren’t part of the EUV story.

So just to take one example, in DRAM, Micron still doesn’t use EUV in their production of DRAM, and they’re a globally competitive firm. So CXMT, the Chinese domestic champion of DRAM, the reason why they’re not currently globally competitive is not the absence of EUV, but I do think you could make a story that it is the absence of all this other stuff that we’ve been refusing to sell…

You’re not necessarily like a geopolitical analyst, but the thing that scares me about all this, I think I’ve asked you this every time, it still scares me, is we’re talking and saying the administration needs to do better at enforcing these laws that guarantee a power imbalance in the long run, that is usually very destabilizing. China might think, if we’re going to have a fundamental power imbalance, then how about we take Taiwan off the board because that will screw everyone? Now we’re equal again. Do you worry about this? You’re a strong advocate for doing this better.

GA: So. Number one is, I don’t know that I ever agree that the balance of power is the stable universe. In 1994, the Taiwanese defense budget was half of that of the Chinese defense budget, now the Chinese defense budget is infinity times that of the Taiwanese defense budget. And by contrast, in 1997, I think there was a single U.S aircraft carrier battle group that was more than capable of defeating the entire Chinese Navy and the entire Chinese Air Force, that was a massive power imbalance and it was a very stable relationship. And by the way, it was a relationship in which a lot of people got rich and had productive free trade and all these kinds of happy relationships. So the idea that power parity is the path to peace here, don’t know that I necessarily agree with that, I don’t think the historical record really bears that out.

Now, you could argue if we’re going to make bold moves and try and seize a decisive advantage, could those bold moves be destabilizing? Yeah, I definitely think so.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 22 December 2024:

1. Meet Willow, our state-of-the-art quantum chip – Hartmut Neven

Errors are one of the greatest challenges in quantum computing, since qubits, the units of computation in quantum computers, have a tendency to rapidly exchange information with their environment, making it difficult to protect the information needed to complete a computation. Typically the more qubits you use, the more errors will occur, and the system becomes classical.

Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes…

…This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits…

…There are other scientific “firsts” involved in this result as well. For example, it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system — crucial for any useful computation, because if you can’t correct errors fast enough, they ruin your computation before it’s done. And it’s a “beyond breakeven” demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakable sign that error correction is improving the system overall.

As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built…
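
“Below threshold” has a concrete quantitative meaning: once the physical error rate p is below the code’s threshold p_th, the logical error rate of a surface code falls roughly as (p/p_th)^((d+1)/2) as the code distance d grows, so bigger patches of qubits become exponentially more reliable. The sketch below uses assumed values for p, p_th, and the prefactor purely to illustrate the scaling; these are not Willow’s measured numbers.

```python
# Illustrative "below threshold" scaling for a surface code: a common rule of
# thumb is eps_logical ~ A * (p / p_th) ** ((d + 1) / 2) for code distance d.
# The values of p, p_th and A below are assumptions for illustration only,
# not Willow's measured numbers.
p, p_th, A = 0.003, 0.01, 0.1

for d in (3, 5, 7):
    eps_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"distance {d}: logical error per cycle ~ {eps_logical:.1e}")

# Because p < p_th, each step from d to d + 2 multiplies the error by p/p_th
# (here ~0.3x): errors go DOWN as the code gets bigger, i.e. "below threshold".
```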

…As a measure of Willow’s performance, we used the random circuit sampling (RCS) benchmark. Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today…

…Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch…

…Willow was fabricated in our new, state-of-the-art fabrication facility in Santa Barbara — one of only a few facilities in the world built from the ground up for this purpose. System engineering is key when designing and fabricating quantum chips: All components of a chip, such as single and two-qubit gates, qubit reset, and readout, have to be simultaneously well engineered and integrated. If any component lags or if two components don’t function well together, it drags down system performance…

…The next challenge for the field is to demonstrate a first “useful, beyond-classical” computation on today’s quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.

2. X (previously Twitter) thread on quantum computing and Google’s Willow – Jeffrey Scholz

Like a regular computer, a quantum computer keeps bits in groups. So a 64-bit quantum computer would have a vector of 64 2d vectors serving as its “word.”

Here is where the speedup happens: in a regular computer, each of the 64 bits doesn’t know anything about the value of any of the other 63 bits.

If we want one bit to affect another bit, we have to explicitly combine them with a logic gate.

However, in a quantum computer, each of the 64 qubits can “talk to each other” via “quantum entanglement.”

Running a quantum circuit means you plug in a quantum vector, run it through a bunch of matrix multiplications, then collapse the output.

The final vector will be the correct answer. Technically, quantum computers can give wrong answers, but if you run the computation multiple times, then you will get the correct answer on average…
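
That description (state vector in, matrix multiplications, then sample the output) can be simulated classically for tiny systems. Below is a minimal two-qubit sketch with NumPy that prepares an entangled Bell state and samples measurements repeatedly; real hardware behaves like this plus the noise discussed next.

```python
import numpy as np

# Gates are just matrices; a quantum "circuit" is a sequence of matrix
# multiplications applied to the state vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>: a 4-dimensional state vector for 2 qubits.
state = np.zeros(4)
state[0] = 1.0

# Apply H to the first qubit (tensored with identity), then CNOT.
state = np.kron(H, I) @ state
state = CNOT @ state

# "Collapsing the output": sample measurement outcomes from |amplitude|^2,
# and repeat many times, as the thread describes.
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
samples = rng.choice(4, size=1000, p=probs)
counts = np.bincount(samples, minlength=4)
print(dict(zip(["00", "01", "10", "11"], counts)))  # roughly 500 / 0 / 0 / 500
```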

…The current problem with quantum computers is that as the circuit gets bigger, they become less correct on average. All of the “talking to each other” creates so much noise the system stops working.

Once your probability of being correct drops below a certain threshold your quantum computer becomes useless. This is a major blocker for current quantum compute.

Let’s look at a specific (oversimplified but helpful) example. Suppose you shine a laser beam into an ice cube.

Actually simulating what the laser will do when it exits the ice cube is very hard because quantum phenomena are involved.

To actually compute what the laser will do means you have to explicitly compute quantum entanglement, which is slow for classical computers but “built in” to a quantum computer.

However, you can *estimate* the distribution of how the laser will scatter without a quantum computer, so you can have at least a rough idea if your answer might be correct…

…By analogy, this is what Google was doing. The computation Google was doing was a “pseudo-random quantum circuit” (think pseudorandom ice cube) but we know a quantum circuit is just matrix multiplications (on crack). Therefore, it is a bunch of random matrix multiplications with an output that looks right.

Google’s actual breakthrough was that the output of the circuit “looks correct” — which sounds underwhelming — and compared to the headlines, it definitely is. The academic breakthrough is that Google was able to use a larger circuit and notice an apparent *increase* in accuracy when modeling how a laser shines through an ice cube. That is noteworthy.

You can definitely tell if a computation has failed, and it seemed to be failing less as the circuit got bigger…

…However, note that the problem is “rigged” in favor of quantum computers. The benchmark is explicitly modeling a quantum phenomenon, so *of course* we get a speedup.

In other words, Google created a random distribution on the output that “seems correct.” Why does it “seem correct?” well because by design, the computation cannot be run on a classical computer. But if we can’t run it on a classical computer, how do we know the quantum computer is actually giving the right answer? The answer is we don’t, and this is a serious gap…

…Quantum computing is kind of at the stage right now where some smart teenager wired a few logic gates together in a random fashion and said “hey look, my circuit made a random output and didn’t explode!” Compared to previous attempts, it is an improvement. But he is still a long way from training an LLM.

3. Volatility: A Double-Edged Sword for Long-Term Equity Investors – Daniel Crowley

The ability to measure risk in a portfolio has long been a puzzle for the financial world. When Harry Markowitz introduced Modern Portfolio Theory in 1952, he revolutionized how institutions approached risk and return. His use of standard deviation, a measure of volatility, as a proxy for risk offered a clean, mathematical way to quantify the unpredictability of markets. It gave investors a seemingly precise tool to compare assets and assess portfolio risk. Over time, this approach became gospel, with concepts like beta and the Sharpe ratio reinforcing volatility as the core measure of risk.
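
For concreteness, this is what those Markowitz-style measures look like in practice: volatility as the annualized standard deviation of returns, and the Sharpe ratio as excess return per unit of volatility. The return series and risk-free rate below are simulated placeholders, not real market data.

```python
import numpy as np

# Illustrative daily returns; in practice these would come from price data.
rng = np.random.default_rng(42)
daily_returns = rng.normal(loc=0.0004, scale=0.012, size=252)  # one "year"

risk_free_daily = 0.03 / 252  # assume a 3% annual risk-free rate

# Volatility as Markowitz-style risk: standard deviation of returns,
# annualized by sqrt(252 trading days).
annual_vol = daily_returns.std(ddof=1) * np.sqrt(252)

# Sharpe ratio: excess return per unit of volatility.
excess = daily_returns - risk_free_daily
sharpe = excess.mean() / excess.std(ddof=1) * np.sqrt(252)

print(f"annualized volatility: {annual_vol:.1%}")
print(f"Sharpe ratio: {sharpe:.2f}")
```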

But here’s the problem: volatility tells only part of the story. Financial markets don’t follow the neat patterns of a normal distribution, which is what these models assume. Extreme events occur far more often than traditional models predict. We’ve seen this play out time and again—from the collapse of Long-Term Capital Management to the Great Financial Crisis. The models couldn’t account for the market’s tendency to behave irrationally and with far greater extremes than the math suggested. That’s why I’ve come to view volatility not as risk itself but as a signal, an invitation to investigate further…

…Volatility is often misunderstood because it treats upward and downward price movements as equal. A stock with erratic upward swings may have high volatility but poses little risk if the business fundamentals are sound. Conversely, a stock that steadily declines might appear “safe” on paper but can quietly destroy wealth.

The market’s reliance on volatility as a measure of risk often misses these nuances.

This misunderstanding creates a divide among investors. On one side are those who cling to volatility as the ultimate arbiter of risk, building models that rely on neat equations and assumptions about market behavior. On the other are those who dismiss it entirely, treating volatility as irrelevant noise.

My view lies somewhere in the middle. Volatility is neither good nor bad—it’s just a clue. It’s a signal to dig deeper and assess whether the market’s movements are justified by changes in a business’s intrinsic value.

What I’ve come to appreciate about volatility is its ability to surface opportunity. Markets are emotional, driven by fear, greed, and short-term thinking. Prices frequently diverge from reality, creating moments where high-quality businesses are available at steep discounts. When markets panic, as they did during the COVID-19 pandemic or the Great Financial Crisis, those who can stay calm and look beyond the noise can identify extraordinary opportunities.

Volatility, far from being a risk, is often the price of admission for outsized returns.

4. The AI nuclear renaissance – SMRs role – Rihard Jarc

The global nuclear power market is about 10% of global electricity (about $350-$400B annually) and around 32% of zero-carbon electricity generation.

As of 2023, nuclear energy accounted for about 18.6% of total electricity generation in the United States. The International Energy Agency (IEA) highlights that global nuclear power output must more than double by 2050 to meet net-zero emission targets. Most of the U.S.’s nuclear power plants are over 50 years old and nearing the end of their operational lives. While their lifespans have been extended to support the grid, they will need to be replaced in the coming decades…

…The introduction of ChatGPT and the AI boom that we have experienced in the last 2 years have only accelerated electricity demand, as AI workloads and AI chips consume much more energy than traditional data center workloads. This nuclear energy expert gives a good example:

» If you provide a simple search in Google, you consume 0.3 Wh (watt-hours) of electricity. If you do the same with ChatGPT or Alexa or Gemini, any AI that we can imagine, this 0.3 Wh transforms into 2.9 Wh, so it means 10X the consumption.«…
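
As a rough back-of-the-envelope sketch of what that 10X per-query figure means at scale, the calculation below converts per-query energy into annual totals; the 9 billion queries per day is our own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope: per-query energy scaled to annual totals.
# 0.3 Wh per conventional search and 2.9 Wh per AI query are the figures cited above;
# the 9 billion queries/day volume is an assumed, illustrative number.
queries_per_day = 9e9
wh_per_search, wh_per_ai_query = 0.3, 2.9

def annual_twh(wh_per_query: float) -> float:
    wh_per_year = wh_per_query * queries_per_day * 365
    return wh_per_year / 1e12  # Wh -> TWh

print(f"Conventional search: ~{annual_twh(wh_per_search):.1f} TWh/year")
print(f"AI-assisted queries: ~{annual_twh(wh_per_ai_query):.1f} TWh/year")
```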

…Driven by artificial intelligence (AI), cloud computing, and digital transformation, U.S. data centers consumed an estimated 150 TWh of electricity in 2023, equivalent to around 3% of the nation’s power demand. According to Goldman Sachs estimates, data center demand hovered at 340 TWh in 2023 globally, which is about 1.3% of worldwide electricity use. U.S. data center power use is expected to roughly triple between 2023 and 2030 and will require about 47 gigawatts of new generation capacity…

…Nuclear energy has become very attractive because companies want to be carbon-neutral and have stable power. An additional benefit of nuclear power is that it can provide more stable long-term contracts that are less sensitive to inflation and supply chain problems…

…Interest in nuclear energy, particularly Small Modular Reactors (SMRs), is growing, as they have been heralded as a way to streamline nuclear power production, offering flexibility, lower upfront costs, and modular deployment. The simplest way to imagine an SMR is as a smaller version of a traditional nuclear reactor. One of their most significant benefits is that they are modular: they are designed to be built in factories, not on-site. Because they are built in factories, they are easier to assemble and quality-control, with a more predictable supply chain and workforce. Once assembled, they are shipped to the site of the nuclear plant, where they are stacked together to form the whole plant. In terms of energy output, traditional nuclear plants produce between 1,000-1,600 megawatts electric (MWe) per reactor, while SMRs are around 50-300 MWe per module. Some SMRs are also said to be safer due to passive safety features, which rely on natural processes like convection to prevent meltdowns in emergencies. But they also come with cons. The primary one is that, because they are much smaller than traditional nuclear plants, they do not enjoy the cost benefits of economies of scale. As a result, producing the same amount of energy is more expensive than at a traditional nuclear plant…

…Over 25 countries, according to the International Atomic Energy Agency (IAEA), are investing in SMRs. In March, Wood Mackenzie estimated the pipeline of SMR projects was worth more than $176 billion and that SMRs could account for as much as 30% of the global nuclear fleet by 2050…

…We can look at the example of NuScale, which has its Pressurised Water Reactor design. Its levelized cost of electricity ranges from $89-135/MWh, while traditional nuclear plants are in the $110-160/MWh range. However, looking at the most common alternative for data centers, which is a combination of solar and gas, gas costs $45-70/MWh, and solar plus storage costs $30-60/MWh…
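
To put those $/MWh ranges in context, here is a simple hypothetical annual-cost comparison for a data center drawing a constant 100 MW; the load size, round-the-clock utilisation, and use of range midpoints are our own simplifying assumptions:

```python
# Hypothetical annual energy bill for a 100 MW data center running around the clock,
# using the midpoints of the $/MWh ranges quoted above (load size is an assumption).
load_mw = 100
hours_per_year = 8760
mwh_per_year = load_mw * hours_per_year  # 876,000 MWh

lcoe_midpoints = {
    "NuScale SMR":         (89 + 135) / 2,
    "Traditional nuclear": (110 + 160) / 2,
    "Natural gas":         (45 + 70) / 2,
    "Solar + storage":     (30 + 60) / 2,
}

for source, usd_per_mwh in lcoe_midpoints.items():
    annual_cost_musd = usd_per_mwh * mwh_per_year / 1e6
    print(f"{source:20s} ~${usd_per_mwh:5.0f}/MWh  ->  ~${annual_cost_musd:,.0f}M per year")
```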

…State-backed projects in countries like China and Russia have made more progress, leveraging integrated supply chains, controlled costs, and assured revenue streams. But even for them, the costs to build these reactors have come in much higher than first estimates…

…We must also face the reality that only 2 SMRs are operational right now, one in Russia and the other in China.

Another important topic when assessing nuclear energy is the problem of nuclear waste and its storage. Most SMR designs produce a similar amount of nuclear waste per unit of energy produced as traditional nuclear plants, so the problem of storing nuclear waste remains.

5. How to invest without relying on target prices – Chin Hui Leong

The US stock market is soaring to new heights. But what does that mean for your stock returns in 2025? I would like to give you a definite answer but if I did so, I would be lying to you. In fact, you should view anyone who gives you target prices with suspicion.

Here’s the hard truth: No one can control where the market is headed in the short term. Yet, the allure of target prices persists…

…The answer lies in the inherent difficulty in predicting the future of rapidly evolving technologies.

The best example is Amazon.com. In mid-2010, when I first invested in the company, it had just reported US$24.5 billion in annual revenue, primarily from its online retail business. Here is the twist: it was impossible to know what the business would look like a decade later…

…Fast forward to 2023, and AWS had become a financial cash cow with nearly US$90 billion in annual revenue and an impressive US$24.6 billion in operating income. In other words, AWS, an insignificant division back in 2009, had generated more operating income in 2023 than the entire company’s revenue in 2009…

…I like to go back to the reason why valuation is used in the first place: to reduce your investment risk. The way I see it, valuation is one of the many methods you can employ to manage risk. But valuation risk is not the only risk in investing.

A weak, shrinking business can pose risks that no amount of stock valuation can solve. Hence, starting with high-quality businesses is my preferred approach.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Amazon. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 08 December 2024:

1. Why China’s Economy Opened Up in the 1970s – Joe Weisenthal, Tracy Alloway, and Odd Arne Westad

Joe (13:32):

What does it mean when you talk about history being “contingent?” You used that word a couple of times and I actually don’t know if I fully understand what that means, but when you’re telling these stories, or this story, and you’re keeping in mind the contingency in history, can you talk a little bit more about this idea?

Odd (13:48):

So you’ll see from the book that we go in and out from the sort of micro to the macro level of telling history. And if you look at the night when the coup against the radicals — the so-called Gang of Four within the party — took place, which we describe in some detail, you know, what happens from hour to hour…

Joe (14:10):

Right, this was the moment in which the left faction, after Mao dies, was arrested, and allowed for a sort of more moderate path to emerge.

Odd (14:21):

That’s right. And it was in effect a military coup. I mean, it was undertaken by the military and the security forces against the people who Mao himself had put in charge of the party, including his widow who was most prominent of all, Jiang Qing. Now that night, and the following few days, things could have ended up very differently. I mean, Shanghai, the biggest city in China by far, was still under control of the radicals. There were military units that supported the radical approach to politics. This could have ended up very differently from what it did.

And as we describe in the book, some of the plotters, some of the coup-makers themselves, in those days that followed the coup itself, were completely surprised by how little resistance there had been from the left. And how little chaos there had been on the streets. So that’s what I mean with it being contingent. I mean, this is something that obviously connects to the larger picture that we see today — going back to your sort of three level version of what happened in China. But it didn’t seem that obvious at the time. And it could have gone in very different directions from what we’re seeing today.

Tracy (15:30):

How important was the fraying of the relationship between China and the Soviet Union in the 1960s, early 1970s to spurring or catalyzing that opening up? Because it does feel like the sudden emergence of the Soviet Union as an external enemy, it feels like that led China in some respects to open up to the US and some other countries.

Odd (15:56):

This is a sort of trajectory that I think it’s really important to get right, because what Mao and his group of leaders did in the late 1960s was to turn to the United States as an ally — a pseudo ally, security ally — against the Soviet Union because they were so deadly afraid that there would be a war with the Soviets — a war that China certainly would have lost, given the state that Chinese communists themselves had pulled China into during the Cultural Revolution. So what Mao did was to turn to the enemy far away, the United States, to help back him against an enemy much closer to home, the Soviet Union, which they had this falling out with mainly for ideological reasons.

From Mao’s perspective, this was always intended to be a strictly security oriented pseudo alliance. It was directed against the Soviet Union. Mao to the end of his days was puzzled that United States would support the real communists, meaning him, against the fake communists, meaning the Soviet Union. But as long as they were willing to do that, he was certainly willing to reap the benefits. But he never intended that this would have any effect in terms of the increasingly radical communist direction that he was taking for China internally, domestically.

So that’s when what happens in 1976, after Mao’s death, becomes so significant, because the people who then took over, they thought, ‘Aha! We have this relationship with the United States. They are supporting us for their own reasons in the Cold War against the Soviet Union. We can now also make use of this to supercharge Chinese reform.’ If it hadn’t been for that relationship, strictly security oriented, that already existed between China and the United States, I doubt that that would have been possible. So it’s very important, when thinking about the longer term US-China relationship, to think about that origin and how this actually got started. Very different from the way most people think about it, where the security element and the reform element are sort of conflated into one…

…Odd (36:05):

I think it was both. I mean in the Xi Jinping case, I think he was picked by the party as the, what Chinese would call, the core leader, back in the early twenty-teens, in response to what was seen as a bunch of real problems, from a Chinese Communist Party perspective, over liberalization, decentralization, corruption, strength of private companies that meddled in a lot of things that the communists didn’t want them to meddle in. They wanted to get a strong leader in who could deal with those issues, in a way that his predecessors, Jiang Zemin [and] Hu Jintao, had not been able to do. So they wanted a strong leader. It’s just that, I think even for many communist leaders of that generation, they got more than they bargained for. So that’s where the personality aspect comes in. They got a leader who really wanted to return, at least on some issues, to the Maoist or even the sort of pre-Mao period, in terms of the CCP’s history, and to emphasize the party’s position over what even many party leaders back 10 [or] 15 years ago thought would be good for China.

And it’s a classic example of responding to real world problems — not unknown in this country, right? — by going very far in one direction, hoping that that would resolve the problem that is there, and then getting stuck in a way with the kind of leader that you have in this case, in Xi Jinping. So I think that’s the story, the way we can tell it now. I hope at some point to be able to tell that story based on archives and primary documents, as an historian, we can’t do that yet. But I think at some point, we’ll be able to do that, and then it’ll be fascinating to test that hypothesis about how this happened.

Tracy (37:54):

So just on the revolution from below point, one of the things that you emphasize in the book is a lot of the stuff that happens in this time period is a result of people feeling that they are heading somewhere, that there’s a grander Chinese vision that can be achieved. And so that motivates people to actually do something. I’m curious, just going up to the present day, do you get a sense that people feel that? That there’s like a direction that China is heading in that it’s clear to people what they are trying to do?

Odd (38:33):

At the moment, absolutely not. I think it’s very, very clear that a lot of people in China do not understand where the country is heading and what the reasons are. And you know, you don’t spend much time in Beijing before you realize that these days. I think it was very different in the time period that we are talking about, which was generally a time of uplift, at least in economic and social terms. And it’s right to say, I mean as many historians have said, that there was an element of a bargain in this. That, at least for some Chinese, not everyone, but for some Chinese, maybe particularly in business, they would accept a dictatorship for what it was and then went on getting rich and establishing some of these great or middling fortunes that you find so many of in China today. And that is good. I mean that was positive. It was much, much better than the dark past that we described at the beginning of the book.

It was just that, China wasn’t able to take what, in our view, is a necessary step to improve its political system, its overall attempt at trying to become a more open, more pluralistic country in the period when the going was good, when there was a general sense that China was making advances, domestically and internationally. Now, I think even if people from within the Chinese Communist Party after Xi Jinping would try to move in a direction of increased liberalization — which I think they will have to do at some point because people are just very unhappy with the kind of system that is there at the moment — it would be much more difficult, because the going is not that good. And probably it’s never going to be that good again. I mean, it was a remarkable period of economic transformation, 10% per year growth rates. It would’ve been possible to carry out necessary reform. But these people didn’t want to do it because they had become so preoccupied with holding onto power themselves. And I think, historically, that that might turn out to be the biggest mistake that the Chinese Communist Party has made.

2. Tim Cook Wants Apple to Literally Save Your Life – Steven Levy and Tim Cook

Some companies charge for AI-enhanced services. Did you consider that?

We never talked about charging for it. We view it sort of like multitouch, which enabled the smartphone revolution and the modern tablet.

You’ve personally been using Apple Intelligence for a while. What has been most useful for you?

We’re an email-based company, and I get enormous numbers of emails from users, employees, partners, and so forth. Having it summarize and author responses is a game changer, as is having it prioritize things for you so you’re not doing your usual triage. Then, of course, there are fun things like the Image Playground.

I’ve heard you say that Apple Intelligence could make you funnier, which seems strange.

I think it can make you friendlier, which, in many ways, can be funnier as well.

Having AI speak for people makes me wonder whether the nature of communication will degrade. If Apple Intelligence writes something funny, who’s being funny, the sender or the AI?

It’s still coming from you. It’s your thoughts and your perspective. You and I both remember the productivity that came from the advent of the personal computer. It was no longer you punching your calculator, you were doing something on a spreadsheet. It was no longer you at the typewriter, you were using a word processor. Logic Pro helps musicians create music, but they’re still the author.

One of your demos involves a fictional recent graduate applying for a job. The cover letter is colloquial and somewhat sophomoric, but with Apple Intelligence a single click changes it to look like a savvy, smart person wrote it. If I’m a recruiter who hired that person, maybe I will feel tricked if they don’t live up to the professionalism of that letter.

I don’t think so. By using the tool, it comes across as more polished. It’s still your decision to use the tool. It’s like you and I collaborating on something—one plus one can equal more than two, right?…

When you’re thinking about things late at night, don’t you sometimes ask what it would mean if computers had superhuman intelligence?

Oh, of course. Not just for Apple, but for the world. There’s so much extraordinary benefit for humanity. Are there some things you have to have guardrails on? Of course. We’re very deeply considerate about things that we do and don’t do. I hope that others are as well. AGI itself is a ways away, at a minimum. We’ll sort out along the way what the guardrails need to be in such an environment…

Meta and Snap are leading us to mixed-reality glasses that we’d wear continually. Is the bigger, heavier Vision Pro ultimately headed that way?

Yes, it’s a progression over time in terms of what happens with form factors. AR is a huge deal. With Vision Pro, we’ve progressed to what is clearly the most advanced technology we’ve ever done, and I think the most advanced technology in the world in terms of electronics problems. We’ll see where it goes.

Apple has created a lot of consumer tools for medical technology. What’s the strategy for biological metrics and prosthetics?

It’s clear to me that if you zoom out way into the future, and you look back and ask what Apple’s biggest contribution was, it will be in the health area. That’s what I really believe. When we started pulling that string with the Apple Watch, it was a cascade of events. We started with something simple, like monitoring your heart rate, and then figured out we could pick up heart signals to get to an EKG and an AFib determination. Now we are monitoring sleep apnea. I’ve gotten so many notes over time from people who would have not survived had it not been for the alert on their wrist.

Apple plans to give AirPods the ability to correct for hearing loss. I bet the makers of expensive hearing aids are freaking out.

It’s not about competing against hearing aids on the market. It’s about trying to convince people who have hearing loss to use their AirPods. The vast majority of people with hearing issues have not been diagnosed. For some people, hearing aids have a stigma, and we can counter that with AirPods. And we can have people diagnose themselves. It’s the democratization of health…

We’re doing this interview at Apple Park, which is now seven years old. Have you been surprised by anything that couldn’t have been anticipated when it was just blueprints?

It’s promoted collaboration even more than I thought. That was a key component of the design, but there are so many places here where you just unexpectedly run into people. In the cafeteria, at the coffee bar, outside when you’re going across the pathway. Also, there’s a connection here to Steve that is incredible and very deep. We have the theater named after him and think about him all the time, but I can feel him in other spaces too.

3. 2024: The State of Generative AI in the Enterprise – Tim Tully, Joff Redfern, Derek Xiao, with Claude Sonnet 3.5

AI spending surged to $13.8 billion this year, more than 6x the $2.3 billion spent in 2023—a clear signal that enterprises are shifting from experimentation to execution, embedding AI at the core of their business strategies…

…Today, 60% of enterprise generative AI investments come from innovation budgets, reflecting the early stages of generative AI adoption. However, with 40% of generative AI spending sourced from more permanent budgets—58% of which is redirected from existing allocations—businesses are demonstrating a growing commitment to AI transformation…

…While foundation model investments still dominate enterprise generative AI spend, the application layer is now growing faster, benefiting from coalescing design patterns at the infrastructure level. Companies are creating substantial value by using these tools to optimize workflows across sectors, paving the way for broader innovation…

…In 2024, much of the action happened at the application layer. With many architectural design patterns established, app layer companies are leveraging LLMs’ capabilities across domains to unlock new efficiencies and capabilities. Enterprise buyers are seizing the moment, pouring $4.6 billion into generative AI applications in 2024, an almost 8x increase from the $600 million reported last year…

…Code copilots lead the charge with 51% adoption, making developers AI’s earliest power users…

…Support chatbots have captured significant usage, with 31% enterprise adoption…

…Enterprise search + retrieval and data extraction + transformation (28% and 27%, respectively) reflect a strong drive to unlock and harness the valuable knowledge hidden within data silos scattered across organizations…

…Meeting summarization ranks fifth in use cases (24% adoption), saving time and boosting productivity by automating note-taking and takeaways…

…When selecting generative AI applications, enterprises have clear priorities: Return on investment and industry-specific customization matter most when selecting new tools. Surprisingly, price isn’t a major issue; just 1% of the enterprise leaders we surveyed mentioned price as a selection concern. Buyers are playing the long game: They are far more focused on tools that can deliver measurable value (30%) and that understand the unique context of their work (26%) over those offering the lowest price tag (1%)…

…When AI pilots stutter or stall, it’s often due to challenges not adequately considered during the selection process. Although buyers aren’t checking price tags, implementation costs, cited in 26% of failed pilots, frequently catch them off guard. Data privacy hurdles (21%) and disappointing return on investment (ROI) (18%) also throw pilots off course. Technical issues, especially around hallucinations (15%), round out the top reasons for failure…

…Traditionally slow to adopt tech, healthcare is now leading generative AI adoption with $500 million in enterprise spend…

…Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows…

…With its complex data, strict regulations, and critical workflows, financial services ($100 million in enterprise AI spend) are primed for AI transformation…

…From Hollywood screens to creators’ smartphones, generative AI is reshaping media and entertainment ($100 million in enterprise AI spend)…

…Foundation models still dominate. The LLM layer commands $6.5 billion of enterprise investment…

…Rather than relying on a single provider, enterprises have adopted a pragmatic, multi-model approach. Our research shows organizations typically deploy three or more foundation models in their AI stacks, routing to different models depending on the use case or results…

…Among closed-source models, OpenAI’s early mover advantage has eroded somewhat, with enterprise market share dropping from 50% to 34%. The primary beneficiary has been Anthropic,* which doubled its enterprise presence from 12% to 24% as some enterprises switched from GPT-4 to Claude 3.5 Sonnet when the new model became state-of-the-art. When moving to a new LLM, organizations most commonly cite security and safety considerations (46%), price (44%), performance (42%), and expanded capabilities (41%) as motivations…

…To power RAG, enterprises must store and access relevant query knowledge efficiently. While traditional databases like Postgres (15%) and MongoDB (14%) remain common, AI-first solutions continue to gain ground. Pinecone,* an AI-native vector database, has already captured 18% of the market.

4. An Interview with Understanding AI Author Timothy B. Lee – Ben Thompson and Timothy B. Lee

As a side note, just as you sort of referenced it in passing, there is always the question of where are the productivity gains, when it came to, first the PC, and then the Internet? Is your sense that those just take a while to show up? Is there just a massive amount of consumer surplus that is not measured? What’s your big picture take on that question?

TL: There’s a couple of things. One is it takes a while to show up because to really get the big gains from a new general purpose technology, often you need to reorganize a lot of other business processes. There’s a famous analogy economists like to use for when they originally electrified the economy. The first thing they try to do is they tried to take the old steam-powered factories that just had one big crank shaft and put an electric motor in and that didn’t get you much improvement because the electricity was not cheap.

It was arguably worse.

TL: But then ten to twenty years later, people figured out, “Oh, we can have a bunch of small electric motors, one at each workstation, and now factories can be a lot more efficient”, but you had to build new factories and new businesses to do that…

Believe me, I think we’re around the same age, I know exactly what you mean and feel. That said, I feel like the big company — Wikipedia came out back when I was in college, or around that time and of course everyone, professors or teachers, banned the use of it. But what you quickly realized is that the key way to use Wikipedia is the sources. You go to Wikipedia, and then it has links to all the sources, then you have your original source documentation. I do feel like ChatGPT is just such a better version of that, particularly with the search version, and when it does sources, it’s just like, “What if we make a Wikipedia that just fills all sort of weight and space about knowledge”, and it’s pretty tough to beat in that regard.

TL: Yeah, absolutely. And as with Wikipedia, you have to be smart about it. You can’t assume that everything is accurate, you have to check your work. But I definitely find, anytime I have, if I’m trying to make a list of things and I want to know all the companies in a particular category, it’s a pain in the ass to find that on Google. Whereas if you ask ChatGPT, “Here’s like three companies in this category, give me more on the list”, it’ll know a bunch more of them. There’s so many things like that. So yeah, definitely, I don’t want to say never use it or it’s not useful. It’s definitely useful, but it’s 1% to 2% more productive over the course of a week rather than really transformational…

...Again, to go back to your perspective of looking at it over the last 18, 20 months since you started, do you think we’ve hit a wall with AI? You started wondering this publicly actually last December when Gemini came out and you felt a little underwhelmed, particularly given Google’s advantages. You weren’t sure at the time, was Google underperforming for Google specific reasons, maybe have we gotten as far as we can with GPT-4? What’s your evaluation 11 months on from that article?

TL: The thing I’ve noticed is that we keep hearing about there’s going to be a GPT-5—

It’s not here.

TL: There’s going to be a new big model and it hasn’t been released and I don’t have enough sources on the inside at those companies to know why that’s happening. But it could be they’re just still working on it and it’s going to come out next month and blow my mind, but every month that ticks by makes me a little more skeptical. Especially because the other trend we’ve seen is these companies are releasing these smaller models that are almost as good as the big models.

And then even to some extent, I was pretty impressed by o1, but what o1 did is kind of different. It wasn’t like scaling up the model, it’s like we’re going to do more inference time compute. In certain ways, it was much better, but it wasn’t better overall.

So it’s still a pretty rough hypothesis, but my hypothesis is that there’s kind of a limit to what the current LLM architectures can do and we’re sort of bumping up against that in various ways — I mean, another thing, we’ve had multimodal models that are much better, so we can do real-time voice and we can do images, so there are new things it can do. But in terms of just the increase of overall reasoning capability, it doesn’t seem like we’ve had a big jump, really since March of 2023 when GPT-4 came out, and so I’m not going to make a strong prediction because again, it could come out next month and amaze me, but every month that ticks by I get a little bit more wondering what’s going on.

What do you think is the limitation? Is it data, compute or is it just a fundamental limitation of the transformer architecture?

TL: My guess is it’s a fundamental limitation of the transformer architecture, and I think the main issue is that the transformer architecture requires all of the model state to be in these vectors for individual words, and then it keeps a record of that forever — the whole context, there’s no process where you summarize and abstract away. If you think about your life, you think about something that happened ten years ago, you don’t remember every single thing you said, everything that others said, you have an abstract memory that, “Oh, in 2014 I remember I lived in this place and I had this job”, and things you learn kind of work their way into the brain, but it’s organized in a good way. LLMs just don’t have a way to do that.

So if I think about how people expect that at some point you’re going to have an LLM who’s like a personal assistant who maybe will work with you over your career and know all your habits and make all your appointments and stuff, and to do that, I just think this architecture where you remember every token exactly and do attention over that whole corpus, with no way of synthesizing and abstracting and forgetting unimportant things, just as a computer scientist, that doesn’t seem viable to me…

Do you think there’s a bubble now then?

TL: That’s always a hard question to say. Part of what’s hard about bubbles is that often people start calling a bubble pretty early and then the bubble keeps growing and people keep saying there’s a bubble.

Right. If people think there’s a bubble, there is not a bubble, that’s my heuristic.

TL: Well, there’s that, but also, at some point, the stock or the house price or whatever will peak and then go down, and the people who said it was a bubble right at the top will be right, but some people who called it way at the beginning were probably wrong.

I do expect a period where AI gets overly frothy and then crashes. Whether we’re currently there or just headed for that is a little hard to say. I do not expect a dot-com-bust-level contraction, because as you were saying, I do think that this technology has clear benefits, it’s mostly big technology companies, it’s not as venture-funded. In fact, some of the early really crazy-funded companies have already been acquired.

So, yeah, I think the level of hype right now is a little too high and there’ll be some pullback, but I don’t think you’ll see a big crash and I don’t think you’ll see much of a pullback from deployment, because I think there really is enough value here that there’s going to be a big market for a lot of people working on it, and a lot of valuable stuff will come out of it in a pretty direct way.

I saw a new theory this week that actually really resonated with me. So this might be new to you, so I’m going to drop it to you on the spot. I think the big question on if you’re thinking about bubbles, you go back to a Carlota Perez model of the importance of bubbles and driving, you go back to the dot-com era, the really important part was the telecoms build out, which was, at the time, some people called it, and in retrospect, clearly insane. If you’re rolling out all this fiber and everyone’s doing it, the costs are going to go to zero, you’re all going to go bankrupt because it’s all financed by debt, as large infrastructure usually is. But the long-term payoff from that was massive, right? That, basically, booted off the whole Web 2.0 era where now everyone, suddenly, had broadband. Recessions suck, but there was a huge societal benefit that did come from that build out.

You go back to previous ones, whether it be electricity or steam, you had these similar cycles and the big question was, “What’s the societal beneficial output of an AI bubble if there is a bubble?” and chips never quite fit, because chips wear out and chips get better. So, if you buy a bunch of chips, but they’re five-year-old chips, what’s the benefit there? Doug O’Laughlin put this tweet out here, that has been really striking to me. He said, “Internet Bubble:Telecom::AI:Power/DCs”, and to me, that makes sense. If you’re going to actually build more nuclear power, or you’re going to do massive investments in solar and batteries, or whatever it might be to fuel these sorts of things, those are investments that, 1) can definitely make you go bankrupt because you’re taking out a bunch of debt to fund it, but 2) will retain value for many, many, many years to come. What do you think of that analogy? To me, it seems pretty compelling.

TL: Yeah, I one hundred percent agree with that. I mean, I was actually going to say the part of it that seems most bubbly is this stuff about Microsoft leasing Three Mile Island for 20 years. Again, as we were talking about before, “Do I think the scaling law thing is going to run out of steam?” My guess is it probably will. I don’t know if we’re on the verge of that, but, anyway, so I would not be surprised if people look back ten years from now, and say, “Oh, man, all that money companies spent on data centers and power, that was kind of a waste of money”. But then, like you said, the country needs more power, and at some point, probably, we’ll want to be training really big models and so, if we have a bunch of huge data centers that we can use to train models, probably, we’ll get some value out of that. It’s tech companies spending the money so the social cost is probably not that high.

5. 7% of Book Value; 1x EBITDA; Cash is 2.5x Larger than Market Cap – Dirtcheapstocks

Highlands REIT, Inc. (Ticker HHDS) was created in 2016 when it was spun out of InvenTrust Properties Corp.

HHDS was formed to hold non-core assets of InvenTrust.

Today, HHDS owns 13 apartment houses, 3 retail properties, 1 office property and 1 correctional facility…

…HHDS has:

  • $205MM of book value.
  • $16.7MM of net operating income (NOI) in 2023.
  • $17MM of NOI in 2022.
  • $85MM of net debt.
  • 57% of NOI generated from multifamily assets

What do you think? Is Highlands worth book value? Is it worth half of book value?

If we want to value the business at an 8 cap, the equity must be worth $124MM.

Within the last two weeks, HHDS has been valued as low as $14.4MM.

That’s less than 1x NOI, and 7% of book value…
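
The arithmetic behind those figures, laid out step by step with the numbers quoted above (the 8 cap valuation is the author's hypothetical, not a price target):

```python
# Cap-rate valuation vs. the quoted market value, using figures from the excerpt.
noi = 16.7             # $MM, 2023 net operating income
net_debt = 85.0        # $MM
book_value = 205.0     # $MM
market_cap_low = 14.4  # $MM, recent low quoted in the write-up

cap_rate = 0.08
enterprise_value = noi / cap_rate          # value of the properties at an 8 cap
implied_equity = enterprise_value - net_debt

print(f"Enterprise value at 8 cap: ${enterprise_value:.0f}MM")
print(f"Implied equity value:      ${implied_equity:.0f}MM")
print(f"Market cap / NOI:          {market_cap_low / noi:.2f}x")
print(f"Market cap / book value:   {market_cap_low / book_value:.0%}")
```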

…Most companies valued at $14MM might have a few hundred shareholders of record. Apple is valued at $3.5 Trillion, and it has 23,000 record holders.

Highlands has 143,000 record holders…

…Here’s my theory: When Highlands was spun out of InvenTrust, every shareholder was given ownership individually. There are 143,000 separate people/entities that own this stock. And this stock was an afterthought. It was just a few noncore assets being spun out of a $2 billion REIT…

…HHDS, perhaps wanting to ward off future material purchases by Mackenzie, announced a tender offer in October 2023. While Mackenzie was tendering at $0.04/share earlier that summer, HHDS was willing to pay $0.12 – $0.17/share. What’s more, HHDS was committing $20MM to the share buyback.

HHDS would repurchase 13-19% of its shares if fully subscribed.

A few weeks later, HHDS increased the buyback to $25MM!

In the end, $23.7MM was spent to buy in 169MM shares – nearly 20% of the outstanding share count…
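
A quick sanity check on the tender figures quoted above; the implied pre-tender share count is our own rough estimate derived from the "nearly 20%" figure:

```python
# Tender/buyback arithmetic using the figures quoted above.
spent_mm = 23.7            # $MM spent on the tender
shares_bought_mm = 169     # MM shares repurchased
pct_of_outstanding = 0.20  # "nearly 20%" of shares outstanding

avg_price = spent_mm / shares_bought_mm                              # implied average purchase price
implied_shares_outstanding = shares_bought_mm / pct_of_outstanding   # rough pre-tender share count

print(f"Average price paid:             ${avg_price:.3f}/share")     # ~$0.14, inside the $0.12-$0.17 range
print(f"Implied pre-tender share count: ~{implied_shares_outstanding:.0f}MM shares")
```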

…HHDS showed up as an expert market security, even though it’s SEC registered.

But I found that the traditional expert market brokers couldn’t buy shares.

Then I went to alternative market brokers. They’d be happy to take my money, and told me I could get as much volume at $0.10 as my heart desired.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta, Microsoft, and MongoDB. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 01 December 2024:

1. America, China, and the Death of the International Monetary Non-System – Russell Napier

Something changed in America in the 1990s. The U.S. federal funds rate began a decline from above 5 percent to reach the effective zero bound by 2009. U.S. ten-year Treasury yields declined from above 6 percent to levels not even recorded during the Great Depression. Credit to the U.S. nonfinancial corporate sector rose from 56 percent of GDP to a new all-time high of 87 percent, and U.S. Government debt rose from 60 percent of GDP to a recent high of 106 percent, very near the peak level recorded during World War II. The valuation of U.S. equities rose from a cyclically adjusted price-to-earnings ratio (CAPE) of 15x to the current level of 34x, having reached a new all-time high of 44x in 2000. U.S. tangible investment declined from 7 percent of GDP to as low as just 1 percent of GDP, a level only previously recorded in the Great Depression and briefly in the hiatus of investment after World War II…

…Today, we have an international monetary system that does not have a name…

…It is a non-system to the extent that its terms and conditions were never agreed upon by all the parties involved, but instead it was born from choices made by a few, most notably China, that the other parties accepted and adjusted to. The extremes of interest rates, debt levels, asset price valuation, and investment in tangible assets in the United States are just part of that global adjustment to the new international monetary system that grew from China’s unilateral decision to manage its exchange rate beginning in 1994. This system would never have been agreed to in any negotiation, as it was a system replete with distortions that would lead to dangerously large imbalances with dangerous political ramifications…

…The crucial distortion imposed by China’s decision in 1994 was a decoupling of developed world growth rates from interest rates, the discount rates used in asset valuations, which many assumed to be a new normal. When interest rates appear to be permanently depressed relative to growth rates, asset valuations rise, leverage increases, and investors are incentivized to pursue gain through rising asset prices rather than through investment in new productive capacity. The decoupling of growth and interest rates was driven by the People’s Bank of China’s (PBOC) appearance as a non-price-sensitive buyer of U.S. Treasury securities, and indirectly by the role China’s excessive fixed-asset investment played in reducing global inflation and hence interest rates…

…For developed-world companies facing the cheap resources, cheap finance, and cheap exchange rate of China, there was little incentive to invest in tangible assets at home. In the United States, in particular, where companies are managed to maximize return on equity and returns to shareholders, the corporation was able to benefit from both cheap Chinese production and the low interest rates that allowed balance sheets to be levered to buy back equity. In other countries, with different social contracts and less focus on rewarding management via stock options, closing productive capacity and pursuing financial engineering were more difficult. Thus, it was U.S. corporations that most fully adapted to the new international monetary system.

When the Bretton Woods system was established, severe restrictions were placed on the free movement of capital. The architects of that system recognized that maintaining exchange rate stability would not be possible if capital were allowed to move freely. Our current system permits, at least into and within the developed world, the free movement of capital. In this system, the private sector capital that left the developed world for China was transformed, via PBOC exchange rate inter­vention, into an accumulation of developed-world debt securities financed by the creation of renminbi reserves…

…. China’s inability to run sufficient surpluses since 2014 to generate sufficient broad money growth and prevent the escalation of its already high debt-to-GDP ratio is not widely recognized as a similar problem. Yet China’s move to a flexible exchange rate to avoid a debt deflation and create sufficient growth in broad money to reduce its debt burden will end the non-system as surely as President Nixon’s announcement that the U.S. dollar was no longer linked to gold ended Bretton Woods. Few analysts understand the impact that this move will have on the international monetary system and the long-accumulating distortions to credit, money, asset prices and the global economy.

When China moves to a flexible exchange rate, it is difficult to foresee how just one new international monetary system could replace the non-system. Given current geopolitical tensions, the prospect of China and the United States hashing out a new Bretton Woods–style agreement is highly unlikely…

…Predicting how any new U.S.-centric monetary system will develop is not easy, but such a system must allow for excessively high debts, the legacy of the non-system, to be inflated away. While much of the focus is on the high U.S. total nonfinancial debt-to-GDP ratio of 255 percent, there are many countries in the world struggling under even higher debt ratios: Canada, 311 percent; France, 315 percent; Japan, 400 percent; Netherlands, 316 percent; Switzerland, 297 percent, etc. The rise and rise of debt-to-GDP levels, a product of the gap between interest rates and growth rates under the non-system, will now have to be addressed.

With austerity, default, hyperinflation, or very high real GDP growth unlikely to be the solution, a new global monetary system will have to be created that offers a path of moderation toward reducing debt-to-GDP levels. That path of moderation is likely to take the form of financial repression—such as that imposed upon savers in the aftermath of World War II, to force their savings to fund the investment needed for postwar reconstruction, but at interest rates that did not reward them for the current and expected levels of inflation. That is a world in which bankers will create more credit and more money and more inflation than they have in recent decades. Higher nominal GDP growth combined with imposed purchases of low-yielding debt securities will, over time, reduce debt-to-GDP levels, just as it did in the decades following World War II. Whatever the new international monetary system looks like, it will have to accommodate the financial repression that will finally begin to reduce debt-to-GDP levels…

…In the long period in which developed-world debts will have to be inflated away, policymakers will have to take a view as to which section of society will bear the heaviest cost. One of the quickest and least painful ways to enforce a deleveraging is through encouraging a rapid re-equitization of the private sector. The ability of all corporations to deduct interest expense in calculating their taxes has to be reconsidered. In an era when much greater fixed-asset investment is essential, the tax privilege of deducting interest expense should not be available to corporations using debt to lever up an existing income stream; rather, the tax code should reward corporations using debt to build new businesses and new income streams. There are of course losers from such a change in taxation, but they are those who have been the winners from the prolonged period of falling interest rates and rising asset prices that have been the key feature of our now failing non-system. A long financial repression is in nobody’s interest, and the longer it prevails, the more likely it will create wealth redistributions that threaten social stability. Proactive intervention to force re-equitization upon a small section of society through the withdrawal of a tax privilege is painful for some but is a more equitable path to reducing high debt-to-GDP levels while facilitating greater investment.

To reduce the high and dangerous debt-to-GDP ratios of the developed world, nominal GDP must grow faster than total credit. This can be achieved by increasing the growth rate in bank credit while limiting the growth in nonbank credit. While the non-system was a key driver of the rise and rise of debt-to-GDP, the disintermediation of credit also played a key role. It is commercial bankers who create money, and if nominal GDP growth is to remain at a high enough level to reduce debt-to-GDP levels, bank balance sheets must grow faster than they have over the past three decades. Commercial banks create money when they expand their balance sheets, and if they do not create enough money, nominal GDP growth will remain low while credit growth, spurred by the growth in nonbank credit, can remain high. A combination of faster growth in bank credit combined with the restriction of the growth in nonbank credit will be at the core of reducing debt-to-GDP ratios. The targeted ending of interest deductibility in the computation of corporate income tax, mentioned earlier, can assist in promoting the growth in bank credit and hence money at the expense of growth in nonbank credit. If it is bankers who are at the vanguard of funding the necessary investment renaissance in the United States, and not credit markets, then the move to lower debt-to-GDP levels will be less painful than if we are forced to take the hard path of austerity, default, hyperinflation, or a very long financial repression. A new focus on the growth of bank credit and therefore money is at the core of any policy to reduce dangerously high debt-to-GDP ratios.
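
A stylised sketch of the mechanism Napier describes: if nominal GDP compounds faster than total credit, the debt-to-GDP ratio declines without default or austerity. The growth rates below are invented purely for illustration:

```python
# Stylised illustration of financial repression: nominal GDP grows faster than credit,
# so the debt-to-GDP ratio declines over time. Growth rates are illustrative assumptions.
start_ratio = 2.55          # U.S. total nonfinancial debt-to-GDP of 255%, as cited above
nominal_gdp_growth = 0.06   # assumed nominal GDP growth (inflation plus real growth)
credit_growth = 0.03        # assumed growth in total credit, held below GDP growth

for year in range(0, 26, 5):
    ratio = start_ratio * ((1 + credit_growth) / (1 + nominal_gdp_growth)) ** year
    print(f"Year {year:2d}: debt-to-GDP = {ratio:.0%}")
```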

2. Are U.S. Stocks Overvalued? – Ben Carlson

The S&P 500 is up nearly 90% since election day 2020 yet valuations are essentially identical.

How can that be?…

…Stock prices are up a lot but fundamentals have kept pace. In fact, the stock market has actually gotten less expensive over the past couple of years because of earnings growth…
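
The arithmetic is simple: a valuation multiple only expands if prices grow faster than earnings. A tiny hypothetical example with invented numbers:

```python
# If earnings grow as fast as (or faster than) prices, the multiple does not expand.
price_then, eps_then = 100.0, 5.0   # hypothetical index level and earnings per share
price_now, eps_now = 190.0, 10.0    # price up ~90%, earnings up ~100%

pe_then = price_then / eps_then
pe_now = price_now / eps_now
print(f"P/E then: {pe_then:.1f}x   P/E now: {pe_now:.1f}x")  # 20.0x vs 19.0x: cheaper despite the rally
```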

…It’s also important to point out that much of the valuation premium on the S&P 500 comes from the largest stocks…

…These stocks have high valuations for good reason — they’re some of the best-run corporations in the world…

…The good news for valuation-conscious investors is there is plenty of value outside of the mega-cap stocks. Valuations for small and mid cap stocks are still pretty cheap. They are far less expensive now than they were before the pandemic. Maybe there’s a reason for that but stocks don’t get cheap for no reason.

3. Amazon’s Moonshot Plan to Rival Nvidia in AI Chips – Matt Day, Ian King, and Dina Bass

Nvidia’s biggest customers — cloud providers like Amazon Web Services, Microsoft Corp.’s Azure and Alphabet Inc.’s Google Cloud Platform — are eager to reduce their reliance on, if not replace, Nvidia chips. All three are cooking up their own silicon, but Amazon, the largest seller of rented computing power, has deployed the most chips to date…

…Fifteen years ago, the company invented the cloud computing business and then, over time, started building the infrastructure that sustains it. Reducing its reliance on one incumbent after another, including Intel Corp., Amazon ripped out many of the servers and network switches in its data centers and replaced them with custom-built hardware. Then, a decade ago, James Hamilton, a senior vice president and distinguished engineer with an uncanny sense of timing, talked Jeff Bezos into making chips…

…After almost four decades in the business, Hamilton knows taking Amazon’s chip ambitions to the next level won’t be easy. Designing reliable AI hardware is hard. Maybe even harder is writing software capable of making the chips useful to a wide range of customers. Nvidia gear can smoothly handle just about any artificial intelligence task. The company is shipping its next-generation chips to customers, including Amazon, and has started to talk up the products that will succeed them a year from now. Industry observers say Amazon isn’t likely to dislodge Nvidia anytime soon…

… The unit’s first chip was designed to power something called inference — when computers trained to recognize patterns in data make a prediction, such as whether a piece of email is spam. That component, called Inferentia, rolled out to Amazon’s data centers by December 2019, and was later used to help the Alexa voice assistant answer commands. Amazon’s second AI chip, Trainium1, was aimed at companies looking to train machine learning models. Engineers also repackaged the chip with components that made it a better fit for inference, as Inferentia2.

Demand for Amazon’s AI chips was slow at first, meaning customers could get access to them immediately rather than waiting weeks for big batches of Nvidia hardware. Japanese firms looking to quickly join the generative AI revolution took advantage of the situation. Electronics maker Ricoh Co., for example, got help converting large language models trained on English-language data to Japanese.

Demand has since picked up, according to Gadi Hutt, an early Annapurna employee who works with companies using Amazon chips. “I don’t have any excess capacity of Trainium sitting around waiting for customers,” he says. “It’s all being used.”

Trainium2 is the company’s third generation of artificial intelligence chip. By industry reckoning, this is a make-or-break moment. Either the third attempt sells in sufficient volume to make the investment worthwhile, or it flops and the company finds a new path. “I have literally never seen a product deviate from the three-generation rule,” says Naveen Rao, a chip industry veteran who oversees AI work at Databricks Inc., a purveyor of data and analytics software.

Databricks in October agreed to use Trainium as part of a broad agreement with AWS. At the moment, the company’s AI tools primarily run on Nvidia. The plan is to displace some of that work with Trainium, which Amazon has said can offer 30% better performance for the price, according to Rao. “It comes down to sheer economics and availability,” Rao says. “That’s where the battleground is.”…

…Amazon’s Trainium2 will likely be deemed a success if it can take on more of the company’s internal AI work, along with the occasional project from big AWS customers. That would help free up Amazon’s precious supply of high-end Nvidia chips for specialized AI outfits. For Trainium2 to become an unqualified hit, engineers will have to get the software right — no small feat. Nvidia derives much of its strength from the comprehensiveness of its suite of tools, which let customers get machine-learning projects online with little customization. Amazon’s software, called Neuron SDK, is in its infancy by comparison.

Even if companies can port their projects to Amazon without much trouble, checking that the switch-over didn’t break anything can eat up hundreds of hours of engineers’ time, according to an Amazon and chip industry veteran, who requested anonymity to speak freely. An executive at an AWS partner that helps customers with AI projects, who also requested anonymity, says that while Amazon had succeeded in making its general-purpose Graviton chips easy to use, prospective users of the AI hardware still face added complexity.

“There’s a reason Nvidia dominates,” says Chirag Dekate, a vice president at Gartner Inc. who tracks artificial intelligence technologies. “You don’t have to worry about those details.”…

…  “We’re particularly impressed by the price-performance of Amazon Trainium chips,” says Tom Brown, Anthropic’s chief compute officer. “We’ve been steadily expanding their use across an increasingly wide range of workloads.”

Hamilton says Anthropic is helping Amazon improve quickly. But he’s clear-eyed about the challenges, saying it’s “mandatory” to create great software that makes it easy for customers to use AWS chips.

4. Key Square Capital 2024 January letter – Scott Bessent and the Key Square team

In essence, a second Trump administration would be expected to embrace a “Peace Through Strength” trade policy. Of course, in the case of recalcitrant trade partners, Trump can always offer them a negotiating session with former US Trade Representative Robert Lighthizer who will likely play a prominent role in his second term.

Our base case is that a re-elected Donald Trump will want to create an economic lollapalooza and engineer what he will likely call “the greatest four years in American history.” Economist Ed Yardeni believes that post-Covid America has the potential to have a boom similar to the “Roaring Twenties” of a century ago. We believe that a returning President Trump would like this to be his legacy. In this scenario, the greatest risk factor, in our opinion, would be a sudden rise in long-end rates.

The talk of revenge will likely be limited to a small group of political enemies, and the wider policies of the administration will be oriented toward de-regulation, energy independence, reviving U.S. manufacturing and extending the tax cuts. We find it unlikely that across-the-board tariffs, as currently reported by the media, would be enacted at the same time as he moves to fix the immigration crisis. The tariff gun will always be loaded and on the table but rarely discharged. Of course, strategic and national security issues around China will remain.

Another differentiated view that we have is that Trump will pursue a weak-dollar policy rather than implementing tariffs. Tariffs are inflationary and would strengthen the dollar – hardly a good starting point for a US industrial renaissance. Weakening the dollar early in his second administration would make U.S. manufacturing competitive. A weak dollar and plentiful, cheap energy could power a boom. The current Wall Street consensus is for a strong dollar based on the tariffs. We strongly disagree. A strong dollar should emerge by the end of his term if the US reshoring effort is successful.

5. Scott Bessent Sees a Coming ‘Global Economic Reordering.’ He Wants to Be Part of It – Peter Rudegeair and Gregory Zuckerman

In his first interview following his selection, Bessent said his policy priority will be to deliver on Trump’s various tax-cut pledges. Those include making his first-term cuts permanent, and eliminating taxes on tips, social-security benefits and overtime pay…

…Bessent became one of Trump’s closest advisers by adding depth to his economic proposals and defending his plans for more-activist trade policies. He has argued that the president-elect’s plans to extend tax cuts and deregulate parts of the U.S. economy would create an “economic lollapalooza.”…

…Bessent has long been worried about the U.S.’s heavy debt and thinks the main way it can be reduced is by boosting growth, which increases tax revenues.

He has advised Trump to pursue a policy he calls 3-3-3, inspired by former Japanese Prime Minister Shinzo Abe, who revitalized the Japanese economy in the 2010s with his “three-arrow” economic policy. Bessent’s “three arrows” include cutting the budget deficit to 3% of gross domestic product by 2028, spurring GDP growth of 3% through deregulation and producing an additional 3 million barrels of oil or its equivalent a day.

To get government spending under control, Bessent has advocated extending the 2017 Tax Cuts and Jobs Act but with what are called pay-fors to lower its cost. That would involve either reducing spending or increasing revenue elsewhere to offset the impact. He also proposed freezing nondefense discretionary spending and overhauling the subsidies for electric vehicles and other parts of the Inflation Reduction Act.

Earlier this year, Bessent thought about tariffs as a negotiating tool, telling investors in a letter that the “tariff gun will always be loaded and on the table but rarely discharged.” He has since argued for them more forcefully, especially as a source of tax revenue.

In a speech last month titled “Make the International Economic System Great Again,” Bessent argued for increasing tariffs on national-security grounds and for inducing other countries to lower trade barriers with the U.S.  


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 24 November 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 24 November 2024:

1. Cash! – The Brooklyn Investor

Over the past few years, people have kept talking about mean reversion to value and whatnot, but I have ignored that for the most part for the reasons I’ve been saying here. The growth / value spread just seems to me so much reflecting values being taken away from the old economy into the new one. Yes, sounds like 1999 bubble, but it just seems true. Retail just seems to be going down the drain, old school marketing / advertising just seems to be losing to online marketing etc…

…The massive transfer of wealth has been going on for decades, or more than a century. Industrialization just sucked the wealth and value out of skilled workers / craftsmen and transferred it to large corporations via factories. Formerly skilled workers were transferred into factories that required no skill (therefore, lower income). All the value-added accrued to the owners of the factories (capitalists). Same with national chain restaurants and retail. WMT transferred wealth from the local shops / restaurants to Arkansas; former store-owners ended up having to work at WMT for lower pay (as unskilled workers). This is nothing new.

Now, the same thing is happening at so many levels at the same time that it is quite frightening. Just as a simple example, I’ve mentioned this before, but companies like Squarespace and Wix (or free options like WordPress) have sort of wiped out a large part of the web development world. People who knew a little HTML / CSS / Javascript might have been able to make a living not too long ago, but not now. All that ‘wealth’ is transferred to the companies that provide the platform for people to build it themselves.

Photographers are complaining for similar reasons. You no longer need to hire a photographer for low-end projects. You can just buy photos from various photo sites for very low prices, or even have AI generate the exact photo you need. I have used AI to generate artwork, photos and text in various volunteer work, and it is scary. I thought to myself, jeez, I would have paid an art student $300 for this 3 years ago; now I do it for free online via AI…

…This is why when people say the stock market as a percentage of GDP is going up, the concentration of stocks in the market is getting too high etc., I think it is obvious that this is happening because the wealth and value is actually being more and more focused and concentrated, so the market is only reflecting reality…

…A similar group of very rich and smart people are saying that long term rates can’t stay low and they must move substantially higher due to these unsustainably large and growing federal deficits. Do I worry about that? Yes. But, I look to Japan as the model of an aging society and growing government deficits. Sure, there are plenty of differences (Japan is a high savings nation), but I still can’t get around the fact that slowing population growth and maturity of the U.S. economy would make growth harder to achieve going forward. Almost certainly, we can’t get back to the growth of the post-war baby boom generation. So given that, how do interest rates go up? Deficit-driven inflation? We haven’t really seen that in Japan, and even in the U.S. until Covid and Ukraine. So is the recent inflation really deficit-driven inflation? Or exogenous event-driven inflation? Maybe a combination of both.

This is not to say I don’t care about deficits. Of course it’s a problem, and we need to deal with it at some point. My opinion is just seeing things as an investor. I am just telling you why, as an investor, I am not yet concerned too much with the deficit and inflation.

2. Off The Beaten Path Investing – David Katunarić and Lawrence J. Goldstein

Goldstein: I started at Burnham when they had about 22 senior analysts following every industry in America, or so they thought. One day, after discovering the pink sheets, or actually, I found the pink sheets afterwards. I saw a list of trucking companies. It was in the Standard & Poor’s transportation manual, which came out weekly, supplements to put in the looseleaf book. I got a list of every trucking company in the United States and there must have been well over 50, maybe more, and every one of them had lower earnings or losses, except for four companies. Those four were Roadway Express, Denver Chicago Trucking, Merchant Fast Motor Lines and Overnite, spelled N-I-T-E. I called them first, and I ended up making a friend of J. Howard Cochran, the founder and president. At the beginning, he sent me a copy of his monthly financial statement. There were no rules against doing that. I remember they were printed in purple ink on a ditto machine. His first report he sent me was the five months ended May. He had earned in those five months, per share, I remember $1.86. He told me also that in the trucking business, the second half of the year is better than the first half. I said, “Let’s see, five months $1.86, times 2 is over $3.60, and I’m missing a month and the second half is better, so it’s got to be higher than that.” The stock was $1.75 or $1.25 off it. I couldn’t believe it. So I wrote a report, gave it to my boss, head of research.

He said to me, and I can hear it to this day, “Listen, kids, this is an institutional research department. We don’t write or recommend reports on dollar-stocks.” So I knew I was onto something. My boss was crazy. It ended up, by the way, they earned almost $4 a share that year. I got to laugh, it’s funny – I could buy the first share at $1.75, and I did. A number of years later, I think two decades later, or less than, Overnite sold out to, I think it was the Southern Pacific Railway, they sold out for $100 million. This thing was worth $500,000 when I met them. So the pink sheets made sense to look there. Basically, what I came to do was to look left when everybody’s looking right, look down when everybody’s looking up, and find companies that are off the beaten path, overlooked or ignored by otherwise intelligent investors…

…Katunaric: What would you say, Larry, in these 40-some years that you’re managing Santa Monica Partners, how has your investing approach changed since then? What are some lessons that sparked the change?

Goldstein: It’s not changed at all, except that you don’t write to the SEC and ask for the 10-Ks and Qs and the proxy and have it take two weeks if you get it. Now you hit a keyboard and you get it all. That’s changed. The second thing is now there are people like you. There are a lot of people – I don’t mean you personally – who are on top of what’s called microcaps. So everybody’s searching for the goal. Obviously you’ve developed a business and you want to develop a bigger business. But that’s what happened. Competition that didn’t exist. When I did it, there was one firm that got big, Tweedy Browne. You know them? What happened to them was terrible. They got so big they had to buy ordinary stocks…

…Goldstein: When I bought Mastercard, it was not a huge company. When they went public, if I remember right, it was $39, $38, $37. I can’t remember the exact price, and it’s since split 10-for-1. So my cost is, I guess, $3 and change. I forget the exact split. I have to look it up. Let’s say it’s $10, $15 – but I think my cost is less than $15.

Katunaric: I saw somewhere that it was a hundred-bagger since the IPO. Maybe I read it last year. I think it was one of the best performing ones, but I’m not sure also.

Goldstein: I’ll focus on that for a second. The reason I bought it was, in 1971, I went to my boss, Tubby Burnham, and I said, “There’s a business that’s going public, Madison Avenue.” Madison Avenue is where all the advertising agencies were in New York, every one of them. The company that was going public was the second ad company to go. The first one was a company called Papert, Koenig, and Lois. They had been public for some period of time and the stock did okay. The second one was Batten, Barton, Durstine, and Osborn, which subsequently changed its name to BBD&O, and later – it’s the same company – to Omnicom, which is among the world’s largest advertising agencies. Why did I want to buy it? I said to my boss, “Advertising companies are required if you have a consumer product to sell. It’s a royalty company. They get a royalty on every new consumer product that’s marketed to the world.” That’s what I thought it was. If you’re going to sell a new widget, you want to advertise it. They get a cut of that. So, a great business. I said, “That’s exactly what Mastercard is.” Everything that anybody buys, they get a cut. By the way, there’s no risk to their business. They don’t make loans. Banks make loans. They get a cut. Banks have risk, but Mastercard, it’s like every time you turn on the water, you get a free glass…

…I tell you, the biggest recommendation to me, and the biggest thing I don’t believe or understand is, Warren Buffett, he has never bought it, except for himself when he was a kid. He bought Oxy. I don’t know that much about Occidental, but there’s nothing better than TPL if you want to be in the oil business. They just own the stuff and you can take it out at your cost and pay them not only for that, but the right to get to the well and leave the well and for the water for fracking. If you run a hose or a pipeline, pay them. What better business is there than that? None.

Katunaric: I agree. You pitched me TPL extensively yesterday and the asset light nature of the business was really attractive.

3. Here’s How Trump Could Lose the Coming Trade War – Paul Krugman

All indications are that China’s era of torrid economic growth is behind it. For decades, Chinese growth was fueled mainly by two things: a rising working-age population and rapid productivity growth driven by borrowed technology. But the working-age population peaked around a decade ago and is now falling. And despite some impressive achievements, the overall rate of technological progress in China, which economists measure by looking at “total factor productivity,” appears to have slowed to a crawl…

…China, however, has built an economic system designed for the high-growth era — a system that suppresses consumer spending and encourages very high rates of investment.

This system was workable as long as supercharged economic growth created the need for ever more factories, office buildings and so on, so that high investment could find productive uses. But while an economy growing at, say, 9 percent a year can productively invest 40 percent of G.D.P., an economy growing at 3 percent can’t.
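
One way to make the arithmetic behind this claim concrete is the incremental capital-output ratio (investment share of GDP divided by the growth rate) – a standard back-of-the-envelope gauge that we are adding here for illustration; only the 40%, 9% and 3% figures come from the excerpt.

```python
# Incremental capital-output ratio (ICOR): investment share of GDP / GDP growth rate.
# A higher ICOR means each point of growth absorbs more investment, i.e. capital is
# being deployed less productively. The figures below are the ones quoted above.

def icor(investment_share_pct: float, growth_rate_pct: float) -> float:
    """Percentage points of GDP invested per percentage point of GDP growth."""
    return investment_share_pct / growth_rate_pct

print(f"ICOR at 9% growth, 40% investment: {icor(40, 9):.1f}")   # ~4.4
print(f"ICOR at 3% growth, 40% investment: {icor(40, 3):.1f}")   # ~13.3

# An ICOR around 13 is far above the roughly 3-5 range typically seen in healthy
# economies, which is one way to see why a 3%-growth economy cannot productively
# invest 40% of GDP.
```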

The answer seems obvious: redistribute income to households and reorient the economy away from investment toward consumption. But for whatever reason, China’s government seems unwilling to move in that direction…

…So what do you do if you have lots of capacity but your consumers can’t or won’t buy what you make? You try to export the problem, keeping the economy humming by running huge trade surpluses…

…China appears to be exporting close to $1 trillion more than it imports, and the trend is upward.

Hence the coming trade war. The rest of the world won’t passively accept Chinese surpluses on that scale…

…That’s why the Biden administration has been quietly pursuing a quite hard line on China, retaining Trump’s tariffs and trying to limit its progress in advanced technologies. It’s why the European Union has imposed high tariffs on electric vehicles made in China, which is probably only the beginning of expanded trade conflict…

…Trump’s insistence that tariffs don’t hurt consumers — even as businesses across America are planning to raise prices when his planned tariffs hit — strongly suggests that neither he nor anyone he listens to understands how global trade works. Not a good thing at a time of trade conflict.

4. Is the United States Going Broke? – Ben Carlson

There seem to be two extreme views when it comes to government debt levels.

One is the view that government debt doesn’t really matter all that much since we have the global reserve currency and the ability to print as much of that currency as we’d like.

The other view is that government debt levels are reaching a tipping point that will lead to calamity…

…It is true that U.S. government debt is enormous…

…Total government debt in the United States was around $23 trillion heading into the pandemic so debt levels are up 50% or so this decade alone.

It’s also true that the interest we pay on government debt has risen considerably because we’ve taken on so much and interest rates are so much higher than they were in the 2010s…

…But you can’t look at debt levels on their own. You have to think of them through the lens of a $30 trillion U.S. economy.
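
Putting the two figures quoted above together gives a rough sense of scale; this is just an illustrative calculation from the excerpt's own numbers, not data from the article's charts.

```python
# Back-of-the-envelope scale check using only the figures quoted in the excerpt:
# ~$23 trillion of government debt pre-pandemic, up ~50% this decade, set against
# a ~$30 trillion U.S. economy.

pre_pandemic_debt_tn = 23.0      # trillions of dollars, per the excerpt
increase_this_decade = 0.50      # "up 50% or so this decade alone"
gdp_tn = 30.0                    # "a $30 trillion U.S. economy"

debt_today_tn = pre_pandemic_debt_tn * (1 + increase_this_decade)
print(f"Implied debt today: ~${debt_today_tn:.1f} trillion")   # ~$34.5 trillion
print(f"Implied debt-to-GDP: ~{debt_today_tn / gdp_tn:.0%}")   # ~115%
```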

Here is interest expense as a percentage of GDP:..

…It’s shot up considerably in recent years but it’s still below 1990s levels. The Fed cutting interest rates should help on the margins…

…Spending was 45% of GDP during the pandemic. That was obviously unsustainable but things are now back to normal…

…The thing you have to understand is the United States government does not operate like a household when it comes to debt. You pay your mortgage off over time and eventually retire that debt.

The government’s budget is not at all like a household budget. First of all, the government can print its own currency. That helps in a pinch and it’s the main reason our government can’t go broke. Inflation is the true constraint when it comes to politicians spending money.

As long as the economy is growing, debt should be growing too…

…I would be more worried if you told me government and consumer debt were down in the coming decades. That would mean something is seriously wrong with the economy.

Debt grows because assets grow (remember government debt is an asset in the form of bonds for investors). Debt grows because the economy grows. Income grows. Prices grow. So of course debt will rise. 

5. Wall Street’s Elites Are Piling Into a Massive AI Gamble – Neil Callanan, Gillian Tan, Tasos Vossos, Carmen Arroyo, and Immanual John Milton

While much of the speculative hype around AI has played out in the stock market so far, as seen in chipmaker Nvidia Corp.’s share price, the giddiness is spreading to the sober suits of debt finance and private equity.

Analysis by Bloomberg News estimates at least $1 trillion of spending is needed for the data centers, electricity supplies and communications networks that will power the attempt to deliver on AI’s promise to transform everything from medicine to customer service. Others reckon the total cost could be double that…

…Further proof of the “unsatiable demand” for computing horsepower, according to real-estate broker Jones Lang LaSalle Inc., is the more than sevenfold increase over two years in construction work on US co-location centers, which lease out rack space to tech firms. Asking rents in those facilities have jumped as much as 37% in 12 months, the firm estimated in an August report.

All of this unbridled spending is revving up the issuance of both investment-grade debt and riskier leveraged loans, especially in the US, handily for private lenders and fee-starved investment bankers alike. Hedge funds are looking as well to profit from AI hysteria with novel types of debt structures.

It’s also opened up a new corner of the asset-backed securities market, where sales of debt backed by data centers have already jumped to a near-record $7.1 billion this year, according to data compiled by Bloomberg News. Chuck in fiber networks and other bits of kit, and it’ll be much higher. Matt Bissonette, who heads Guggenheim Securities’ business in this area, says the number of buyers for his data-center ABS products has roughly doubled in four years…

…While Blackstone hasn’t risked that kind of capital on construction before, developers of data centers can make stellar returns if all goes well. Property researcher Green Street reckons profit margins on London sites are about 65%.

Financiers are eager to back these grand projects because future occupants have usually pre-signed long leases, making them safer bets. Some banks are offering to lend as much as 70% or 80% of the cost and occasionally more when a lease is already signed, according to a person with knowledge of the matter…

…Lenders are more twitchy, however, about data centers explicitly earmarked for AI rather than more general purposes, according to a banker who works in the sector. Such deals can carry costlier debt and less leverage, he says, because the technology still has to prove its worth.

Separately, a senior partner at a leading private equity firm says he’s troubled by the emergence of speculative development, meaning construction takes place before a tenant has been found, as it’s hard to be sure of final demand. Some lawyers talk of “zombie projects” that may never be finished.

And not everyone believes that the “if you build it, they will come” approach is a surefire winner for those gambling on an era-changing AI breakthrough. Massachusetts Institute of Technology professor Daron Acemoglu says a lot of capital will be wasted.

Despite the misgivings, the appetite for deals from bankers and private lenders — especially for sites with blue-chip, signed-up occupants — is giving most data-center owners and developers a strong hand when pricing debt. A site leased long term by a tech giant can snag bank funding at a margin below two percentage points, says Brookland’s Hussain. Co-locators typically pay 2.5 percentage points or less, he adds.

“Recently, we raised €850 million ($907 million) in nine-year bonds at below 4% and refinanced and upsized our revolving credit facilities to $4.5 billion,” says Jordan Sadler, senior vice president at Digital Realty Trust Inc., a tech property firm that has signed joint ventures with Blackstone and others for almost $9 billion of hyperscale data-center developments…

…Across the Atlantic, one utility told the Federal Reserve Bank of Atlanta that electricity usage by data centers rose 17% in recent months. In Virginia, host to the world’s highest concentration of these sites, records for peak power demand were set six times in July, according to Dominion Energy Inc.

Trying to satisfy energy-devouring data centers means the utility sector’s capital spending is set to exceed $200 billion by next year, about double what it was a decade earlier. That would have stressed utility balance sheets, but a recent easing of how Moody’s Ratings views some of the industry’s riskier hybrid bonds — letting them be treated as half equity — has opened the floodgates to companies raising capital without being downgraded.

Sales of these bonds have risen almost eightfold this year to $15 billion, data compiled by Bloomberg shows. Only issues by bulge-bracket banks match that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Wix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 17 November 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 17 November 2024:

1. OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows – Stephanie Palazzolo, Erin Woo and Amir Efrati

In May, OpenAI CEO Sam Altman told staff he expected Orion, which the startup’s researchers were training, would likely be significantly better than the last flagship model, released a year earlier.

Though OpenAI had only completed 20% of the training process for Orion, it was already on par with GPT-4 in terms of intelligence and abilities to fulfill tasks and answer questions, Altman said, according to a person who heard the comment.

While Orion’s performance ended up exceeding that of prior models, the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4, the last two flagship models the company released, according to some OpenAI employees who have used or tested Orion.

Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks, according to the employees. Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee. That could be a problem, as Orion may be more expensive for OpenAI to run in its data centers compared to other models it has recently released, one of those people said.

The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process…

…However, OpenAI researcher Noam Brown said at the TEDAI conference last month that more-advanced models could become financially unfeasible to develop.

“After all, are we really going to train models that cost hundreds of billions of dollars or trillions of dollars?” Brown said. “At some point, the scaling paradigm breaks down.”…

…Orion was trained in part on AI-generated data, produced by other OpenAI models, including GPT-4 and recently released reasoning models, according to an OpenAI employee. However, such synthetic data, as it is known, is leading to a new problem in which Orion may end up resembling those older models in certain aspects, the employee said.

2. Prejudice And China – Louis-Vincent Gave

This led us to the comments made in September by Ford chief executive officer Jim Farley. Freshly returned from a visit to China, Farley told The Wall Street Journal that the growth of the Chinese auto sector poses an existential threat to his company, and that “executing to a Chinese standard is now going to be the most important priority.”

By any measure, this is an earth-shattering statement.

Making cars is complicated. Not as complicated as making airliners or nuclear power plants. But making cars is still the hallmark of an advanced industrial economy. So, the idea that China is suddenly setting the standards that others must now strive to meet is a sea-change compared with the world we lived in just five years ago…

…This brings me to the simplest, most obvious, and likeliest explanation why most CEOs and investors missed how China leapfrogged the West in industry after industry over the last five years: during that time, no one from the West bothered to visit China…

…Unlike Japan in the 1990s, China has not seen its banking system go bust and lose its ability to fund new projects. On the contrary, the surge in loans to industry over the past few years lies at the heart of China’s booming industrial productivity…

…This is another key difference between China today and Japan in the 1990s. China today is not only more efficient and more productive than a decade ago, it is probably more efficient and more productive than most other major industrial economies. And it boasts a very attractive cost structure. Until a few years ago, you would need to check your bank balance before going out for dinner in Tokyo. Today, you can stay in the Four Seasons in Beijing or Shanghai for less than US$250 a night. Perhaps the best illustration of how Japan’s past is a very poor guide to China’s present is the difference in their trade balances; a reflection of how different their competitiveness has been…

…This is not to understate the magnitude of the Chinese property bust. The rollover in real estate has been a massive drag on growth and on animal spirits over the past five years. But on this front, there is another key difference between China and Japan: in China, the contraction of real estate was the policy. It was not the unfortunate consequence of policies gone wrong. Reallocating capital away from real estate and towards industry was a stated goal of the government…

…There seem to be at least three separate visions of China.

The first is the China you read about in much of the Western media: a place of despond and despair. It is permanently on the cusp of social disorder and revolution, or it would be, were it not an Orwellian nightmare of state surveillance, supervision and repression that strangles creativity and stifles progress. This is the place that Westerners who have never visited China typically imagine, because it is the place portrayed by the media…

…The second is the vision of China you get from talking to Chinese millennials in tier-one cities. This version of China recalls the “lost decades” of Japanese deflationary depression…

…This brings me to the third vision of China: that it is only just beginning to leapfrog the West in a whole range of industries. This vision is starting to show up in the perception of Western brands in China, and in their sales. For example, Apple’s iPhones no longer figure in the five best-selling smartphone models in China. And Audi’s new electric cars made and sold in China will no longer carry the company’s iconic four-circle logo; the branding is now perceived to be more of a hindrance than a benefit.

To put it another way, following years of investment in transport infrastructure, education, industrial robots, the electricity grid and other areas, the Chinese economy today is a coiled spring. So far, the productivity gains engendered by these investments have manifested themselves in record trade surpluses and capital flight—into Sydney and Vancouver real estate, and Singapore and Hong Kong private banking.

This has mostly been because money earners’ confidence in their government has been low. From bursting the real estate bubble, through cracking down on big tech and private education, to the long Covid lockdowns, in recent years the Chinese government has done little to foster trust among China’s wealthy. It’s small surprise, then, that many rich Chinese have lost faith in their government’s ability to deliver a stable and predictable business environment.

This brings me to the recent stimulus announcements and the all-important question whether the measures rolled out will prove sufficient to revitalize domestic confidence in a meaningful way. Will it even be possible to lift confidence as long as the Damocles’ sword of a wider trade conflict with the US and yet more sanctions looms over the head of Chinese businesses?

From this perspective, perhaps the most bullish development for China would be for the new US administration (regardless who sits behind the Resolute desk) to come in and look to repair the damage done to relations by the 2018 semiconductor sanctions and the 2021 Anchorage meeting…

…When it comes to China’s relevance to investors, there are four ways of looking at things.

  • China can be uninvestible and unimportant. This is the pool that most investors have been swimming in for the last few years. But this is akin to saying that China is like Africa. It simply doesn’t pass the smell test. Instead of sliding into irrelevance, China’s impact on the global economy only continues to grow.
  • China can be uninvestible but important. This is essentially what Jim Farley, fresh back from his China trip, told The Wall Street Journal.
  • China can be investible but unimportant. This is the space Japan inhabited for a couple of decades, and into which Europe seems to be gently sliding. However, the idea that China today is where Japan has been for the last three decades is grossly misplaced on many fronts, including the competitiveness of its economy, its overall cost structure, and its weight in global indexes.
  • China can be investible and important. This is what David Tepper of Appaloosa Management argued on CNBC following the announcement of China’s stimulus (see Changing Narratives Around The World). For now, this is still a minority view, at least among Western investors. Not that Western investors matter all that much. What truly matters is whether Chinese investors themselves start rallying to this view. If they do, the unfolding bull markets in Chinese equities and the renminbi could really have legs.

3. $2 H100s: How the GPU Rental Bubble Burst – Eugene Cheah

ChatGPT was launched in November 2022, built on the A100 series. The H100s arrived in March 2023. The pitch to investors and founders was simple: Compared to A100s, the new H100s were 3x more powerful, but only 2x the sticker price.

If you were faster to ramp up on H100s, you, too, could build a bigger, better model, and maybe even leapfrog OpenAI to Artificial General Intelligence – if you had the capital to match their wallet!

With this desire, tens to hundreds of billions of dollars were invested in GPU-rich AI startups to build this next revolution. Which led to…

The sudden surge in H100 demand

Market prices shot through the roof: the original H100 rental rates started at approximately $4.70 an hour but were soon going for over $8, driven by desperate founders rushing to train their models to convince investors to fund their next $100 million round…

…For most of 2023, H100 prices felt like they would forever stay above $4.70 (unless you were willing to make a huge upfront down payment)

At the start of 2024, the H100 prices reached approximately $2.85 across multiple providers…

…In Aug 2024, if you’re willing to bid at auction for a small slice of H100 time (days to weeks), you can start finding H100 GPUs for $1 to $2 an hour.

We are looking at a >= 40% price drop per year, especially for small clusters. NVIDIA’s marketing projection of $4 per GPU hour across four years has evaporated in under 1.5 years.

And that is horrifying because it means someone out there is potentially left holding the bag – especially if they just bought new GPUs…

…The average H100 SXM GPU in a data center costs $50k or more to set up, maintain, and operate (i.e., most of the CAPEX), excluding electricity and cooling OPEX…

…If the price falls below $1.65/hour, you are doomed to make losses on the H100 over the five years as an infra provider – especially if you just bought the nodes and cluster this year…
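
The article's ~$1.65/hour breakeven and the pace of the price decline can be sanity-checked with a simple sketch; the $50k all-in cost, five-year horizon, and price points come from the excerpt, while the 70% utilisation rate is an assumption of ours.

```python
# Rough breakeven check for an H100 rental provider, using the excerpt's figures:
# ~$50k all-in cost per GPU (setup, maintenance, operation; power and cooling excluded),
# amortised over 5 years. The utilisation rate is an assumption added for illustration.

capex_per_gpu = 50_000      # dollars, per the excerpt
years = 5
utilisation = 0.70          # assumed fraction of hours actually rented out

rentable_hours = years * 365 * 24 * utilisation
breakeven_rate = capex_per_gpu / rentable_hours
print(f"Breakeven rental rate: ~${breakeven_rate:.2f}/hour")    # ~$1.63 at 70% utilisation

# Annualised decline implied by ~$4.70/hour falling to ~$1.50/hour in roughly 1.5 years:
start_price, end_price, elapsed_years = 4.70, 1.50, 1.5
annual_decline = 1 - (end_price / start_price) ** (1 / elapsed_years)
print(f"Implied annual price decline: ~{annual_decline:.0%}")   # ~53%, consistent with ">= 40%"
```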

…Many infrastructure providers, especially the older ones, were not naive about this. They had been burnt firsthand by massive GPU rental price drops after a major price pump in the crypto days – they had seen this cycle before.

So for this cycle, last year they pushed heavily for 3-5 year upfront commitments and/or payments at the $4+ price range (typically with 50% to 100% upfront). Today, they push the $2.85+ price range – locking in their profits…

…When a model creator is done training a model, they have no more use for the cluster. What do they do? They resell the capacity and start recouping some of the costs…

…This ended up creating a triple whammy that reduced demand for H100s!

1. Finetuning is significantly cheaper than training from scratch.

a. The compute demands for fine-tuning are significantly lower (typically 4 nodes or fewer, usually a single node) than for training from scratch (16 nodes or more for 7B-and-up models).

b. This industry-wide switch essentially killed a large part of the demand for smaller clusters.

2. Scaling back on foundation model investment (at small/mid-tier)

a. In 2023, there was a huge wave of small and medium foundation models in the text and image space.

b. Today, however, unless you are absolutely confident you can surpass llama3, or you are bringing something new to the table (e.g. a new architecture, 100x lower inference cost, 100+ languages, etc.), there are essentially no more foundation-model companies being founded from scratch.

c. In general, the small and medium open models created by the bigger players (Facebook, etc.) make it hard for smaller players to justify training foundation models – unless they have a strong differentiator (tech or data) or plans to scale to larger models.

d. This has lately been reflected with investors as well: there has been a sharp decline in funding for new foundation-model creators, with the vast majority of smaller groups having switched over to finetuning (a sentiment compounded by the recent less-than-desired exits of multiple companies).

e. Worldwide today, by my estimate, there are approximately:

<20 Large model creator teams (aka 70B++, may create small models as well)

<30 Small / Medium model creator teams (7B – 70B)

f. Collectively, there are fewer than 50 teams worldwide who would be in the market for 16 nodes of H100s (or much more) at any point in time to do foundation model training.

g. There are more than 50 clusters of H100 worldwide with more than 16 nodes.

3. Excess capacity from reserved nodes is coming online

a. Consider the cluster owners, especially the various foundation model startups and VCs who made long reservations in the initial “land grab” of 2023.

b. With the switch to finetuning, and the very long H100 wait times (which peaked at >= 6 months), it is very possible that many of these groups had already made the upfront payment before they changed course, essentially making their prepaid hardware “obsolete on arrival”.

c. Alternatively, those whose hardware arrived on time to train their first few models came to the same realization: it would be better to fine-tune their next iteration of models instead of building from scratch.

d. In both cases, they would have unused capacity, which comes online via “Compute Resellers” joining the market supply…. 

…Both AMD and Intel may be late to the game with their MI300 and Gaudi 3 respectively.

We have tested and verified this ourselves, having used these systems. They are generally:

  • Cheaper than an H100 in purchase cost
  • Equipped with more memory and compute than an H100, and they outperform it on a single node.
  • Overall, great hardware!

The catch? They have minor driver issues in training and are entirely unproven in large multi-node cluster training.

Which, as we covered, is largely irrelevant to the current landscape for anyone but those <50 teams. The market for H100s has been moving towards inference and single-node or small-cluster fine-tuning.

These GPUs have been proven to work for all of those use cases – the ones the vast majority of the market is asking for.

These two competitors are full drop-in replacements, with working off-the-shelf inference code (e.g. VLLM) or finetuning code for most common model architectures (primarily LLaMA3, followed by others)…

…Given that open-weights models have entered the GPT-4-class arena, falling H100 prices will be the multiplier that unlocks open-weights AI adoption.

It will be more affordable for hobbyists, AI developers, and engineers to run, fine-tune, and tinker with these open models.

Especially if there is no major leap for GPT5++, because it will mean that the gap between open-weights and closed-source models will blur.

This is strongly needed, as the market is currently not sustainable: there is too little value capture at the application layer from paying users (which trickles down to the platform, model, and infra layers). In a way, everyone is building shovels (including us) while applications with paying users are not being built (and not collecting revenue and value).

But when AI inference and fine-tuning become cheaper than ever, it can potentially kick off the AI application wave – if that has not already slowly started.

4. Politics, Portfolios & Perspective: Investing in a Crazy Election Year – Alliance Wealth Advisors

How we feel about the economy is directly correlated with whether the party we most closely identify with is in power. This is regardless of what the economic data actually tells us. In other words, our emotions get the best of us and cloud our ability to stay objective…

…In the past two presidential elections, there were many “expert” predictions claiming that electing both Donald Trump and Joe Biden would cause a significant stock market correction. Yet, both presided over stock market highs at various times. Anyone who made changes to their portfolio based on those election outcomes suffered a serious opportunity cost that will impact them for a long time…

…Politics aside, the stock market is a complex adaptive system, influenced by countless variables interacting with one another in constantly evolving ways. Companies are dynamic and run by smart people who learn to adapt to new environments. History has shown that companies can react to all kinds of changes and have always been able to grow their earnings over time. When they do, stock prices tend to follow.

5. Writes and Write-Nots – Paul Graham

I’m usually reluctant to make predictions about technology, but I feel fairly confident about this one: in a couple decades there won’t be many people who can write…

…The reason so many people have trouble writing is that it’s fundamentally difficult. To write well you have to think clearly, and thinking clearly is hard…

…Till recently there was no convenient escape valve for the pressure created by these opposing forces. You could pay someone to write for you, like JFK, or plagiarize, like MLK, but if you couldn’t buy or steal words, you had to write them yourself. And as a result nearly everyone who was expected to write had to learn how.

Not anymore. AI has blown this world open. Almost all pressure to write has dissipated. You can have AI do it for you, both in school and at work.

The result will be a world divided into writes and write-nots…

…Is that so bad? Isn’t it common for skills to disappear when technology makes them obsolete? There aren’t many blacksmiths left, and it doesn’t seem to be a problem.

Yes, it’s bad. The reason is something I mentioned earlier: writing is thinking. In fact there’s a kind of thinking that can only be done by writing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple. Holdings are subject to change at any time.

What We’re Reading (Week Ending 10 November 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 10 November 2024:

1. Why I’m Leaving OpenAI and What I’m Doing Next – Miles Brundage

So how are OpenAI and the world doing on AGI readiness?

In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.

To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).

Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.

As a sidenote, I think that AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense. One of the things my team has been working on lately is fleshing out the “levels of AI” framework referenced here. I hope that OpenAI and I will be able to publish a related paper before long. But for now I’d just note that when I say “ready for AGI,” I am using this as shorthand for something like “readiness to safely, securely, and beneficially develop, deploy, and govern increasingly capable AI systems.”…

…I think the upsides of AI are already big and could be dramatically bigger, as are the downsides. As someone who has worked in this field for longer than most, it has been very sad to see increasing polarization along the lines of whether people focus on one side of the cost/benefit ledger or the other, or have different risk priorities, etc. My view is that there is a lot to worry about and a lot to be excited about, we don’t have to choose one thing to care about, and we should find common ground where it exists.

I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion. Notably, this is true not just for risk mitigation but also for ensuring equitable distribution of the benefits, as is the case with, e.g., electricity and modern medicine as well. This is true for a few reasons, including, non-exhaustively, collective action problems, various unpriced negative externalities, and unequal starting positions of digital infrastructure access, wealth, etc. that affect who benefits and is harmed by default and to what degrees. As with railroads, electricity, etc., corporate and government policies will be critical to ensuring safe and fair outcomes.

I think AI capabilities are improving very quickly and policymakers need to act more urgently…

…I think quantitative evaluations of AI capabilities and extrapolations thereof, in combination with analysis of the impacts of certain policies, will be critical in truthfully and persuasively demonstrating that urgency. There’s great work happening on measuring frontier models from a safety perspective, measuring trends over time in AI, and a growing body of work assessing the labor market implications of AI, but more is definitely needed.

I think we don’t have all the AI policy ideas we need, and many of the ideas floating around are bad or too vague to be confidently judged. This is particularly true of international competition over AI, where I find the existing proposals to be especially bad (e.g. “race against [competing country] as quickly as possible”) and vague (e.g. “CERN for AI”), although it’s encouraging to see a growing trend towards more nuanced discussion of some of these ideas. There are also many aspects of frontier AI safety and security that will require creative solutions…

…I think that improving frontier AI safety and security is quite urgent, given the number of companies (dozens) that will soon (next few years at most) have systems capable of posing catastrophic risks. Given that that is not much time to set up entirely new institutions, I’m particularly interested in opportunities for action under existing legal authorities, as well as shaping the implementation of already-approved legislation such as the EU AI Act.

As noted above, and explained in more detail in this paper and similar work, companies and governments will not necessarily give AI safety and security the attention it deserves by default (this is not a comment specifically about OpenAI, as discussed above). There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting: people assume others are going to cut corners to gain an advantage and can’t tell what the ground truth is, or think they will change their minds later. Corner-cutting occurs across a range of areas, including prevention of harmfully biased and hallucinated outputs as well as investment in preventing the catastrophic risks on the horizon. There are, to be clear, some ways in which commercial incentives encourage safety, though I think it would be irresponsible to assume that those incentives will be sufficient, particularly for ambiguous, novel, diffuse, and/or low-probability/high-magnitude safety risks.

I’m excited about understanding how companies can credibly demonstrate safety while protecting valuable and potentially misusable IP. The difficulty of demonstrating compliance without compromising sensitive information is a major barrier to arms control agreements, which requires innovation to address. This issue is also at the core of effective domestic regulation. I’m excited to collaborate with people working on this and other related technical AI governance questions.

While some think that the right approach to the global AI situation is for democratic countries to race against autocratic countries, I think that having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security, an attack on Taiwan (given its central role in the AI chip supply chain), and other very bad outcomes. I would like to see academics, companies, civil society, and policymakers work collaboratively to find a way to ensure that Western AI development is not seen as a threat to other countries’ safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead.

Even if, as I think is very likely, Western countries continue to substantially outcompete China on AI, there is more than enough “gas in the tank” of computing hardware and algorithmic progress in autocratic countries for them to build very sophisticated capabilities, so cooperation will be essential. I realize many people think this sounds naive but I think those people haven’t thought through the situation fully or considered how frequently international cooperation (enabled by foresight, dialogue, and innovation) has been essential to managing catastrophic risks…

…I think it’s likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable (assuming appropriate policies to ensure fair distribution of that bounty). Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely. In the near-term, I worry a lot about AI disrupting opportunities for people who desperately want work, but I think it’s simultaneously true that humanity should eventually remove the obligation to work for a living and that doing so is one of the strongest arguments for building AI and AGI in the first place. Likely some will continue to work in the long-term but the incentive to do so might be weaker than before (whether this is true depends on a variety of cultural and policy factors). That is not something we’re prepared for politically, culturally, or otherwise, and needs to be part of the policy conversation. A naive shift towards a post-work world risks civilizational stagnation (see: WALL-E), and much more thought and debate about this is needed…

…Compared to software, data, and talent, computing hardware has unique properties that make it an important focal point for AI policy: “it is detectable, excludable, and quantifiable, and is produced via an extremely concentrated supply chain” (quoted from this paper I worked on). This makes it worrying that the part of the US government responsible for overseeing what happens when that compute is shipped overseas is severely understaffed and underfunded, and that more generally there is little serious policy discussion of what the endgame is here (besides occasionally tightening export controls and requiring companies to report their big datacenters and training runs).

To the extent that there is serious analysis of compute governance happening in the academic literature, it generally lags behind developments in industry by a fair amount – e.g., to those within frontier AI companies, it has become increasingly clear in recent years that scaling up inference, not just training, can enable higher performance, but public analysis of the policy implications of this has only begun in earnest relatively recently. Ideas for distributing computing power (and the associated benefits of AI) more widely, such as via the government providing greater compute for academics, are generally too little too late and neglect issues specific to the developing world, which is in a quite different situation.

2. Industry Is Not Destiny – Greg Obenshain

We’d go as far as to argue that industry analysis generally is much less valuable than fundamental investors or strategy consultants might hope.

Mauboussin’s new study, Measuring the Moat: Assessing the Magnitude and Sustainability of Value Creation, grapples with this issue. Mauboussin’s study includes a chart that is difficult to unsee once you’ve seen it (h/t Edward Conard’s Macro Roundup for highlighting this)…

…This chart shows that profitability varies more within industry (the vertical bars) than across industries (the dots). Over the long run, the fate of a company is not primarily determined by its industry—a finding consistent with Chicago school research from the 1980s that dealt a death blow to structure-conduct-performance theory in antitrust law.

Mauboussin notes that while industry analysis matters when it comes to deciding where to compete, ultimately the right unit of analysis is not the industry level but the company level…

…Industries with higher overall profitability have more companies that are profitable, but even within industries with low profitability, there are still companies that have returns well above the cost of capital and some companies that have profitability substantially above.

Industry is not destiny. Great companies can emerge from mediocre industries.

3. Watch Out: Wall Street Is Finding New Ways to Slice and Dice Loans – Matt Wirz

Goldman Sachs this month sold $475 million of public asset-backed securitization, or ABS, bonds backed by loans the bank makes to fund managers that tide them over until cash from investors comes in. The first-of-its-kind deal is a lucrative byproduct of the New York bank’s push into loans to investment firms, such as these so-called capital-call lines.

Goldman’s new deal reflects two trends transforming financial markets. Increasingly large managers of private-debt and private-equity funds are moving up in the Wall Street pecking order, but they often need money fast. Banks, once again, are reinventing themselves to adapt…

…The transactions are relatively small for now. Still, they are intertwining banks (in Wall Street parlance, the sell side) with investors (the buy side) in ways that are new and difficult to parse for analysts, regulators and others…

…Capital-call loans function like credit cards for private-fund managers. The funds borrow money to invest quickly in private debt, private equity, real estate and infrastructure. They then “call up” cash commitments from clients in the funds, mostly institutions such as pensions and insurers, and repay the loans when the clients deliver.

Defaults on capital-call commitments from large institutions “have been historically close to 0%,” according to a marketing document for Goldman’s bond viewed by The Wall Street Journal. That makes the bonds extremely safe, said debt fund managers to whom Goldman offered the deal.

Even so, the shiny new products that banks are inventing have yet to be tested through market cycles…

…As Goldman and other banks make more capital-call loans to private-fund managers, they are also buying insurance from many of the same investment firms to protect against potential losses from corporate, consumer and real-estate loans. The so-called synthetic risk transfers, or SRTs, help banks reduce risk to meet new regulatory requirements and give fund managers investments to put into their wildly popular private-credit funds.

Some private-credit funds are developing another product that is similar to capital-call lines: net-asset-value, or NAV, loans, made to private-equity fund managers. Rising interest rates have made it harder for private-equity funds to sell companies they own to repay their limited partners. NAV loans help them start returning cash to clients until they can dispose of the companies. Many of the firms that manage private-equity funds also manage private-credit funds…

…The International Monetary Fund published a report in April warning that “interconnections and potential contagion risks many large financial institutions face from exposures to the asset class are poorly understood and highly opaque.”

4. Big Banks Cook Up New Way to Unload Risk – Matt Wirz

U.S. banks have found a new way to unload risk as they scramble to adapt to tighter regulations and rising interest rates…

…These so-called synthetic risk transfers are expensive for banks but less costly than taking the full capital charges on the underlying assets. They are lucrative for the investors, who can typically get returns of around 15% or more, according to the people familiar with the transactions.

U.S. banks mostly stayed out of the market until this autumn, when they issued a record quantity as a way to ease their mounting regulatory burden…

…In most of these risk transfers, investors pay cash for credit-linked notes or credit derivatives issued by the banks. The notes and derivatives amount to roughly 10% of the loan portfolios being de-risked. Investors collect interest in exchange for shouldering losses if borrowers of up to about 10% of the pooled loans default…

…The deals function somewhat like an insurance policy, with the banks paying interest instead of premiums. By lowering potential loss exposure, the transfers reduce the amount of capital banks are required to hold against their loans.
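To make the loss-sharing mechanics concrete, here is a minimal, purely illustrative sketch using hypothetical round numbers consistent with the article's description (a first-loss note of roughly 10% of the portfolio and a roughly 15% return for investors); real structures are considerably more complex.

```python
# Simplified sketch of a synthetic risk transfer, using hypothetical figures
# consistent with the article's rough numbers (~10% first-loss note, ~15% yield).
loan_portfolio = 1_000_000_000     # $1bn of loans staying on the bank's balance sheet
note_size = 0.10 * loan_portfolio  # investors buy a $100m credit-linked note
note_coupon = 0.15                 # investors earn roughly 15% on the note

def split_losses(portfolio_losses: float) -> tuple[float, float]:
    """Investors absorb losses up to the note size; the bank is exposed beyond that."""
    investor_loss = min(portfolio_losses, note_size)
    bank_loss = max(portfolio_losses - note_size, 0.0)
    return investor_loss, bank_loss

# The coupon the bank pays works like an insurance premium.
annual_cost_to_bank = note_coupon * note_size  # ~$15m per year

for losses in (0, 30_000_000, 150_000_000):
    inv, bank = split_losses(losses)
    print(f"Losses ${losses:>11,}: investors absorb ${inv:,.0f}, bank absorbs ${bank:,.0f}")
print(f"Bank pays roughly ${annual_cost_to_bank:,.0f} a year for this protection,")
print("in exchange for holding less regulatory capital against the portfolio.")
```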

Banks globally will likely transfer risk tied to about $200 billion of loans this year, up from about $160 billion in 2022, according to a Wall Street Journal analysis of estimates by ArrowMark Partners, a Denver-based firm that invests in risk transfers…

…Banks started using synthetic risk transfers about 20 years ago, but they were rarely used in the U.S. after the 2008-09 financial crisis. Complex credit transactions became harder to get past U.S. bank regulators, in part because similar instruments called credit-default swaps amplified contagion when Lehman Brothers failed.

Regulators in Europe and Canada set clear guidelines for the use of synthetic risk transfers after the crisis. They also set higher capital charges in rules known as Basel III, prompting European and Canadian banks to start using synthetic risk transfers regularly.

U.S. regulations have been more conservative. Around 2020, the Federal Reserve declined requests for capital relief from U.S. banks that wanted to use a type of synthetic risk transfer commonly used in Europe. The Fed determined they didn’t meet the letter of its rules…

…The pressure began to ease this year when the Fed signaled a new stance. The regulator said it would review requests to approve the type of risk transfer on a case-by-case basis but stopped short of adopting the European approach.

5. Xi Stimulus Clues Found in Protest Data Showing Economic Stress – Rebecca Choong Wilkins

From a basement in Calgary, often accompanied by his pet cat, Lu Yuyu spends 10 hours a day scouring the internet to compile stats on social instability before they are scrubbed by China’s censors. The 47-year-old exile won’t reveal his exact method because it risks jeopardizing the overall goal of the project called “Yesterday,” which documents cases of group protests.

“These records provide an important basis for people to understand the truth of this period of history,” said Lu, who started the effort in January 2023 but didn’t make it public until he arrived in Canada a year ago. “I didn’t want to go to jail again,” he explained.

While Lu’s interests are political, his database — available for free — is among a growing number of metrics tracking dissent in China that investors are watching to figure out when Xi will open up the spigots to bolster growth. And some banks are now starting to develop similar products.

Morgan Stanley in September debuted a new gauge of distress that could be used to predict policy swings in China. Robin Xing, the bank’s chief China economist, says it’s nearing the low levels reached two other times in the past decade: in 2015, when Beijing took drastic steps to arrest a $7 trillion stock market rout, and in 2022 — the point at which the Communist Party abruptly dropped its strict Covid controls after simultaneous street protests in major cities…

…While China’s opaque political system makes it difficult to attribute policy moves to any single factor, investors and analysts who track instances of unrest say authorities may be especially sensitive to them when deciding on whether to roll out stimulus and how much to deploy. Economic protests have become more frequent in recent years as China’s youth unemployment rate soared and its housing crisis worsened…

…Getting a read on what’s happening on the ground is a challenge for academic researchers and finance professionals alike. Widespread censorship, heavy surveillance and suppression of dissent have made it hard to assess the depth of economic malaise in the country of 1.4 billion people…

…The rising prominence of dissent metrics is part of a blossoming industry of so-called alternative data aimed at decoding the state of the world’s second-biggest economy…

…Life has become tougher for many in recent years as pandemic lockdowns, a real estate crisis and trade tensions have slowed growth in China.

Incomes are still rising, but gains under Xi have been the weakest since the late 1980s. Faith in the country’s meritocracy also appears to be waning, leaving white-collar workers feeling increasingly disillusioned. Companies mired in fierce price wars are laying off employees, while college graduates are struggling to find work.

China Dissent Monitor’s data shows that cases of dissent rose 18% in the second quarter compared with the same period a year earlier, with the majority of events linked to financial issues.

“If you look at everything regarding social well-being — be it wage growth, urban unemployment rate, consumer confidence and even tracking labor incidents — I think it’s deteriorating,” Morgan Stanley’s Xing said.

Although protests aren’t particularly rare in China, they’re typically small in scale, uncoordinated with events elsewhere, and lacking in overt criticism of Beijing. Still, political criticism can bubble up, usually in cases linked to rural land actions where local governments find themselves the target of discontent, according to China Dissent Monitor research…

…Even so, there are few signs that the unrest is coalescing around a particular instance of perceived injustice or a single issue. Unlike the Tiananmen Square protests and unrest in the late 1980s, current dissent doesn’t present an existential threat to the regime. A more likely response is therefore a dose of economic medicine that will keep the market guessing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.