What We’re Reading (Week Ending 16 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve consistently been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 16 February 2025:

1. The real threat to American prosperity – Daron Acemoglu

American economic success in the era after the second world war depended on innovation, which in turn relied on strong institutions that encouraged people to invest in new technologies, trusting that their inventiveness would be rewarded. This meant a court system that functioned, so that the fruits of their investments could not be taken away from them by expropriation, corruption or chicanery; a financial system that would enable them to scale up their new technologies; and a competitive environment to ensure that incumbents or rivals couldn’t block their superior offerings. These kinds of institutions matter under all circumstances, but they are especially critical for economies that rely heavily on innovation.

Stability requires that people trust institutions, and institutions become more likely to fail when people think they are failing. This is what explained the sudden meltdown of US economic dynamism…

…Economic growth in the US was rapid for most of the post-1980 era, but about half of the country didn’t benefit much from this. In a pattern unparalleled in the industrialised world, Americans with less than a college degree experienced a real (inflation-adjusted) decline in their wages between 1980 and 2013, while those with postgraduate degrees experienced robust growth…

…Many Americans felt that they no longer had much of a political voice. In surveys, more than 80 per cent started saying that politicians did not care about what people like them thought…

…But perhaps the most important determinant of this dwindling trust in institutions was that the US had become much more polarised, making it increasingly difficult to satisfy the majority of the voters. The flames of grievance were powerfully fanned by social media, which deepened polarisation. This then further reduced trust in democracy and in public institutions. Worse, with intensifying distrust, something essential to democracy — compromise — became more and more challenging.

By the 2010s something unprecedented was happening. Ever since data on this had been collected, an overwhelming majority of Americans saw democracy as the “only game in town” and gave it strong support relative to alternatives such as monarchy, military dictatorship or rule by unelected experts. That began changing, especially among young people, who reported growing scepticism about democracy and much more lukewarm support for these institutions.

The cracks were visible long before Trump was first elected in November 2016. He was in many ways a symptom of those troubled times…

…Turning points are useful to locate because they are symbolic of deeper causes of social change. In hindsight, an obvious turning point came just before Trump’s second inauguration. Biden, who had four years ago made defence of democracy a main agenda item, pre-emptively pardoned his family and a number of politicians and public servants, including former Republican Congresswoman Liz Cheney and the former medical adviser to the president, Anthony Fauci. The optics were clear and ugly: Biden and his camp by this point had so little trust in US institutions that they thought only such pre-emptive pardons could stop Trump’s retribution (and making the reality worse than the optics, it was only the enemies of Trump who were close to Biden that counted)…

…While Trump’s domestic agenda intensified the loss of trust in US institutions and expertise in government, his relations with foreign allies did the same for the so-called rules-based order. Of course, there was some truth to critics’ contention that these rules were designed for America’s benefit and that when they didn’t serve it well, they were bent or broken by US politicians, diplomats and companies. But the world was not ready for Trump’s tariffs, threats and military expansionist rhetoric towards Panama, Greenland and even Canada.

This set the scene for a series of catastrophic governmental failures. With morale gone and key personnel fired, the US state was ill-equipped to deal with emergencies. When new pandemics arrived, the response was haphazard, and unpreparedness cost tens of thousands of lives. The few remaining independent media sources uncovered a glaring and dangerous lack of oversight of critical infrastructure, including nuclear reactors and cyber security.

But the real extent of the damage became clear only with the tech meltdown of 2030. Economists and historians have now shown that a lot of this was the outcome of institutional failures and growing concentration in the industry. After Trump lifted all roadblocks ahead of AI acceleration and cryptocurrency speculation, there was initially a boom in the tech sector. But within a few years the industry had become even more consolidated than before, and both insiders and outsiders came to realise that only companies favoured by the administration could survive…

…By late 2029, many commentators were questioning what was going on in the tech industry, which had invested heavily in AI but had little to show for this in terms of innovation or productivity growth. There was huge enthusiasm and investment in cryptoassets, which were one by one revealed to be scams costing regular Americans billions of dollars. The AI empire had no clothes by this point, because the competitive energy had been sucked out of it. It took a while longer for the market to realise that, but when it did, a massive stock market crash followed.

This is the kind of shock that a dynamic economy can recover from, with new innovators coming in, government experts using fiscal policy and other interventions to prevent the crash from translating into a deep recession, and all sorts of people still believing in their ability to make a difference. But once malaise about US institutions had sunk in and experts were no longer around in the government, the crash became a recession and then a depression.

The depression continued and intensified. Many now understood that institutions needed to be fixed, but after the damage that Biden and Trump had done and the polarisation that had reached even higher peaks, rebuilding them proved difficult. American innovators and scientists started emigrating to Canada and the European Union. Some even went to China.

America’s collapse thus followed Hemingway’s famous line on bankruptcy. It happened gradually, as shared prosperity, high-quality public services and the operation of democratic institutions weakened, and then suddenly, as Americans stopped believing in those institutions.

2. The Drug Industry Is Having Its Own DeepSeek Moment – David Wainer

In 2020, less than 5% of large pharmaceutical transactions worth $50 million or more upfront involved China. By 2024, that number had surged to nearly 30%, according to DealForma. A decade from now, many drugs hitting the U.S. market will have originated in Chinese labs…

…China’s biotech boom mirrors its rise in tech. In both cases, China has moved up the value chain, from manufacturing goods to becoming a more sophisticated hub for innovation, competing in industries once dominated by the U.S. There are several reasons for the industry’s growth. For one, many top scientists trained in the U.S. have returned to China over the past decade, fueling the emergence of biotech hubs around Shanghai. And just as DeepSeek built a formidable chatbot—allegedly on a lean budget with limited access to semiconductors—Chinese biotech companies are also scrappier, capitalizing on a highly skilled, lower-cost workforce that can move faster.

Additionally, companies can conduct clinical trials at a fraction of what they would cost in the U.S., while recent changes in the Chinese regulatory system have streamlined and accelerated the approval process to get a study started. 

For now, much of China’s biotech innovation is incremental rather than groundbreaking. Many companies focus on improving existing drugs—tweaking the chemistry, enhancing efficacy or differentiating them in key ways.

But Chinese innovation is steadily improving and is already starting to disrupt the U.S. drug-development ecosystem…

…Chief executives of large pharmaceutical companies are broadening their horizons. Why spend $10 billion acquiring a U.S. biotech with a mid-stage drug when a similar molecule can be licensed from China for a fraction of the price?…

…In late 2024, after scouring the market for obesity assets—presumably eyeing U.S. companies like Viking Therapeutics, which trades at a market value of around $3.7 billion—Merck chose to license an oral GLP-1 drug from China’s Hansoh Pharma. The deal: $112 million upfront, with potential milestone payments of up to $1.9 billion…

…These “bargain” deals are great for Big Pharma. But for U.S. biotech companies—and their venture-capital backers—they are creating real challenges. Investors increasingly struggle to value early-stage biotechs because it is difficult to predict what competition might emerge from China.

3. All of us could be wrong about DeepSeek and OpenAI – Chin Hui Leong

China’s DeepSeek has unleashed a new wave of AI hype.

But amid the noise, one thing is clear: everyone has an opinion, and no one has the answers…

…When Apple (NASDAQ: AAPL) unveiled its iPhone in 2007, many analysts dismissed its hardware-focused strategy.

Their argument hinged on a familiar pattern: over time, consumer hardware tends to become commoditised. If the iPhone becomes popular, they reasoned, its unique appeal would fade as competitors come in with cheaper imitations.

This wasn’t a baseless concern.

The era of the personal computer (PC), the previous dominant computing platform, was marked by fierce price competition among hardware manufacturers. Even Apple’s Macintosh PC had fallen victim to the cutthroat competition in the 1980s and 1990s.

In short, the precedent was clear: hardware eventually becomes a commodity.

However, this time, things would be different.

Today, nearly 18 years later, Apple boasts over 2.35 billion devices in circulation, generating upwards of US$200 billion in annual iPhone revenue. Clearly, the popular smartphone has defied the conventional wisdom of hardware commoditisation.

Therein lies a lesson.

When considering the future of AI, the iPhone’s success serves as a crucial reminder: be wary of preconceived notions…

…Too often, we fall prey to the “Highlander” fallacy, assuming that one side can only win if the other loses.

This zero-sum mindset blinds us to a range of possible future scenarios.

Think about the mobile operating system (OS) market.

On one side, you’ve got Apple’s closed iOS, with 2.35 billion devices, and on the other, Google’s open-source Android, with a massive three billion devices.

Crucially, they’ve each found their own area to thrive in.

Apple continues to dominate in the premium smartphone market, while Android is all about getting Google services out there.

Going back to AI models: can OpenAI replicate this coexistence, thriving alongside open-source models?

Could we see large, proprietary models handling general use cases while smaller, specialised models address niche needs? Could there be a main AI model, featuring a supporting cast of smaller models?

Your guess is as good as mine…

…Do you know who were among the biggest “losers” in the shift from desktop to mobile?

In my book, it may be Microsoft and Nvidia.

Nvidia tried to break into the smartphone market but threw in the towel when it failed to gain a foothold. Microsoft, on the other hand, had long held a monopoly in the desktop OS market but failed to extend its dominance to mobile devices.

But are we really going to brand Microsoft and Nvidia as losers, even though they got the short end of the stick in the smartphone arena?

Today, both are at the forefront of the AI revolution, proving that setbacks don’t preclude future triumphs…

…Amid the noise, it’s important to remember that ChatGPT is barely two years old, a stark reminder of the industry’s infancy.

If history teaches us anything, we may want to put our egos aside and accept that there are developments that cannot be known ahead of time.

The AI landscape is still being written.

4. Deep Research and Knowledge Value – Ben Thompson

I found a much more beneficial use case the next day. Before I conduct a Stratechery Interview I do several hours of research on the person I am interviewing, their professional background, the company they work for, etc.; in this case I was talking to Bill McDermott, the Chairman and CEO of ServiceNow, a company I am somewhat familiar with but not intimately so. So, I asked Deep Research for help…

…I found the results eminently useful, although the questions were pretty mid; I did spend some time doing some additional reading of things like earnings reports before conducting the Interview with my own questions. In short, it saved me a fair bit of time and gave me a place to start from, and that alone more than paid for my monthly subscription.

Another compelling example came in researching a friend’s complicated medical issue; I’m not going to share my prompt and results for obvious reasons. What I will note is that this friend has been struggling with this issue for over a year, and has seen multiple doctors and tried several different remedies. Deep Research identified a possible issue in ten minutes that my friend has only just learned about from a specialist last week; while it is still to be determined if this is the answer he is looking for, it is notable that Deep Research may have accomplished in ten minutes what has taken my friend many hours over many months with many medical professionals.

It is the final example, however, that is the most interesting, precisely because it is the question on which Deep Research most egregiously failed. I generated a report about another friend’s industry, asking for the major players, supply chain analysis, customer segments, etc. It was by far my most comprehensive and detailed prompt. And, sure enough, Deep Research came back with a fully fleshed out report answering all of my questions.

It was also completely wrong, but in a really surprising way. The best way to characterize the issue is to go back to that famous Donald Rumsfeld quote:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.

The issue with the report I generated — and once again, I’m not going to share the results, but this time for reasons that are non-obvious — is that it completely missed a major entity in the industry in question. This particular entity is not a well-known brand, but is a major player in the supply chain. It is a significant enough entity that any report about the industry that did not include them is, if you want to be generous, incomplete.

It is, in fact, the fourth categorization that Rumsfeld didn’t mention: “the unknown known.” Anyone who read the report that Deep Research generated would be given the illusion of knowledge, but would not know what they think they know…

…What Deep Research reveals is how much more could be known. I read a lot of things on the Internet, but it’s not as if I will ever come close to reading everything. Moreover, as the amount of slop increases — whether human or AI generated — the difficulty in finding the right stuff to read is only increasing. This is also one problem with Deep Research that is worth pointing out: the worst results are often, paradoxically, for the most popular topics, precisely because those are the topics that are the most likely to be contaminated by slop. The more precise and obscure the topic, the more likely it is that Deep Research will have to find papers and articles that actually cover the topic well…

…There is a good chance that Deep Research, particularly as it evolves, will become the most effective search engine there has ever been; it will find whatever information there is to find about a particular topic and present it in a relevant way. It is the death, in other words, of security through obscurity. Previously we shifted from a world where you had to pay for the news to the news being fed to you; now we will shift from a world where you had to spend hours researching a topic to having a topic reported to you on command.

Unless, of course, the information that matters is not on the Internet. This is why I am not sharing the Deep Research report that provoked this insight: I happen to know some things about the industry in question — which is not related to tech, to be clear — because I have a friend who works in it, and it is suddenly clear to me how much future economic value is wrapped up in information not being public. In this case the entity in question is privately held, so there aren’t stock market filings, public reports, barely even a webpage! And so AI is blind…

…That, by extension, is why AIs like Deep Research are one of the most powerful arguments yet for prediction markets. Prediction markets had their moment in the sun last fall during the U.S. presidential election, when they were far more optimistic about a Trump victory than polls. However, the potential — in fact, the necessity — of prediction markets is only going to increase with AI. AI’s capability of knowing everything that is public is going to increase the incentive to keep things secret; prediction markets in everything will provide a profit incentive for knowledge to be disseminated, by price if nothing else.

It is also interesting that prediction markets have become associated with crypto, another technology that is poised to come into its own in an AI-dominated world; infinite content generation increases the value of digital scarcity and verification, just as infinite transparency increases the value of secrecy. AI is likely to be the key to tying all of this together: a combination of verifiable information and understandable price movements may be the only way to derive any meaning from the slop that is slowly drowning the Internet.

This is the other reality of AI, and why it is inescapable. Just as the Internet’s transparency and freedom to publish has devolved into torrents of information of questionable veracity, requiring ever more heroic efforts to parse, and undeniable opportunities to thrive by building independent brands — like this site — AI will both be the cause of further pollution of the information ecosystem and, simultaneously, the only way out…

…Secrecy is its own form of friction, the purposeful imposition of scarcity on valuable knowledge. It speaks to what will be valuable in an AI-denominated future: yes, the real world and human-denominated industries will rise in economic value, but so will the tools and infrastructure that both drive original research and discoveries, and the mechanisms to price it. The power of AI, at least on our current trajectory, comes from knowing everything; the (perhaps doomed) response of many will be to build walls, toll gates, and marketplaces to protect and harvest the fruits of their human expeditions.

5. AI and the Mag 7 – Daniel Rasmussen

Last summer, Goldman Sachs was estimating a $1T spend on AI capex in the coming years, and the numbers have only gone up since then, with most of it concentrated in the Mag 7 that dominate the public markets…

…It’s necessary as an investor to at least consider how these bets might go awry…

…The skeptic’s case starts with the possibility that the Mag 7 is suffering from a classic case of “competition neglect,” where “subjects in competitive settings overestimate their own skill and speed in responding to common observable shocks and underestimate the skill and responsiveness of their competitors,” as Robin Greenwood and Samuel Hanson put it in their paper, “Waves in Ship Prices and Investment.” When shipping prices increase, shipping companies all decide to invest in ships—after all, their models are all saying these investments will be profitable at current rates. That investment not only drives up the price of building new ships, it causes a glut of supply once they are built, resulting in poor returns on these pro-cyclical investments, as low as -36%, according to Greenwood and Hanson. Meanwhile, those who invest at the bottom of that cycle—when current shipping prices are low and there’s no one else building at the shipyards—earn returns as high as 24%.

Rather than ships, today’s AI capex “is a euphemism for building physical data centers with land, power, steel and industrial capacity,” as Sequoia Capital’s David Cahn puts it…

…OpenAI, SoftBank, and the federal government’s $500 billion Project Stargate is the culmination of this race to convert tech companies into industrial manufacturers. But even winning this race could be a Pyrrhic victory. Capex at these levels is an asset-heavy business model. Asset-heavy business models historically have lower returns on capital, especially when sunk costs meet increased competition.

In this scenario, perhaps Stargate is the AI equivalent of overinvesting in new ships at the same moment that everyone else is overinvesting in ships, leading to a supply glut, price drops, and poor investment returns…

…We still don’t have many economical use cases for AI. Even in low-compute mode, a single prompt on ChatGPT’s o3 model costs $20 to perform. High-compute mode can cost much more…

…While Anthropic CEO Dario Amodei is confident AI can beat humans at most things in 2-3 years, that doesn’t mean we will all be using AI that way. There’s a difference between what can be automated and what is cost-effective to automate. Daron Acemoglu, Institute Professor at MIT, estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years. An MIT research paper looked at jobs in non-farm businesses and found 36% of tasks in jobs they studied could be automated by AI vision models, but only 8% were economically worth automating.

Scaling laws are an assumption that brute force will get us more and more powerful AI. For AI investors, it’s a playbook to outspend the competition, win the market, and trust that, eventually, more infrastructure and better chips will bring costs down and make more tasks economical to automate. But scale and high ROI are not usually achieved at the same time.

Shortly after it was announced, Stargate was overshadowed by bigger news about China’s DeepSeek model. While the exact specs are a subject of debate, DeepSeek shattered the cost-to-performance expectations that investors and the Mag 7 have been working from…

…We’ve only just entered the true product-building era for AI. How many people today think of the internet as a product? The internet is not a single thing but a collection of services and products on common digital infrastructure (e.g., TCP/IP protocol, which was built by DARPA with US taxpayer money and isn’t a business anyone is making money on). Similarly, AI models could, like other commodities, utilities, and infrastructure projects, become a part of everything we use rather than a distinct product. Usage patterns are starting to reflect this: we are using these models less directly and more through other services built on top of them.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve consistently been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 09 February 2025:

1. Robert Litan: An Economist Walks Into a Bar at TEDxKC (Transcript) – Robert Litan

First guy, he approaches the first woman that he sees, offers her a drink. She turns him down. He then decides to walk his way down the bar. And, of course, all the women watching this, they see what he’s up to. And they all turn him down…

…He hasn’t learned from this experience, in the real world. So he decides to go to the virtual world. He goes to the Internet and joins Cupid.com and he tries the same technique, and sure enough, with the same result. They all turn him down…

…Cupid.com is in trouble too. And the reason they are is that the women who have joined Cupid.com are being inundated with offers from men for dates. They get turned off, they quit. And if they quit, men quit. Cupid is in trouble. Who are you going to call to solve this problem? Now, the answer is more obvious than Ghostbusters. You call an economist. Don’t laugh, you call economists. In fact, you call two of them.

This is Muriel Niederle of Stanford, and Dan Ariely of Duke. And they spend a lot of time studying the problem of artificial scarcity and abundance in the online dating context, which is a reason Cupid called them up. And they wanted to know how to fix their problem, and the two economists said they had an idea that was as simple as it was profound. Just put a sharp limit on the number of date offers that men could make to women each month. This is the notion of artificial scarcity: taking what looks like an abundant resource, which is date offers, and artificially constraining them.

And the economists said to Cupid that if you do this, the men will take their offers seriously. They’ll look at more than just the women’s pictures and they’ll actually look at their profiles. And the women will know this, and they’ll be more likely to accept date proposals. Artificial scarcity helped save Cupid.com, and other dating sites that copied the technique…
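
To make the mechanics concrete, here’s a minimal sketch of such a monthly offer cap. The cap size, names, and class design are our own illustrative assumptions; the talk doesn’t say how Cupid.com actually implemented its limit.

```python
from collections import defaultdict

MONTHLY_OFFER_CAP = 10  # illustrative; the talk doesn't give the actual limit


class OfferQuota:
    """Artificial scarcity: each member may send only a fixed number of
    date offers per calendar month, so every offer has to count."""

    def __init__(self, cap: int = MONTHLY_OFFER_CAP):
        self.cap = cap
        self.sent = defaultdict(int)  # (member, month) -> offers sent so far

    def try_send_offer(self, member: str, month: str) -> bool:
        """Record an offer if the member still has quota; else refuse it."""
        key = (member, month)
        if self.sent[key] >= self.cap:
            return False  # quota exhausted; choose more carefully next month
        self.sent[key] += 1
        return True


quota = OfferQuota(cap=2)
print(quota.try_send_offer("member_42", "2025-02"))  # True
print(quota.try_send_offer("member_42", "2025-02"))  # True
print(quota.try_send_offer("member_42", "2025-02"))  # False: cap reached
```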

…Google collects about $50 billion a year from advertisers, large and small, seeking placement on that right-hand side. They auction off those sites. But that’s not how the system started, because when Google was launched, online advertising was in its infancy, and Google, believe it or not, went door to door, advertiser to advertiser, trying to get them to pay for an ad next to a search term. Highly laborious; you can quickly see that this is not going to scale as the number of searches explodes on Google.

And so the founders of Google asked two young engineers, Eric Veach and Salar Kamangar, to come up with an automatic system that would solve this problem. Well, they were instinctively attracted to auctions. But they were thinking about another problem. That is, if they auctioned off the sites, they feared that the advertisers would bid a very low price, and then incrementally raise their prices just a little bit, and keep the auctions going on forever. And if this happened, and a lot of searches were also going on at the same time, the whole site would crash.

So, as an engineering solution, they came up with this idea: the winning bidder for a placement would pay the second-highest price that was bid, plus one penny. This would cut off the auctions, greatly simplify the process, and in the process also solve another problem called “the winner’s curse”. I’m sure that many of you who have participated in auctions may have regretted winning because you felt like you paid too much. Pretty obvious point…
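
The rule is easy to state in code. Below is a minimal sketch of that second-price mechanism with made-up bidder names; it illustrates the idea, not Google’s actual ad auction (which, as we understand it, also weighs factors like ad quality).

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Sealed-bid second-price auction: the highest bidder wins but pays
    the runner-up's bid plus one penny, as described in the talk."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = round(ranked[1][1] + 0.01, 2)  # second-highest bid plus one penny
    return winner, price


# The winner's payment depends only on the runner-up's bid, so bidding your
# true value is safe -- overpaying (the "winner's curse") is designed away.
print(second_price_auction({"acme": 2.50, "globex": 1.80, "initech": 1.20}))
# -> ('acme', 1.81)
```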

…“You know, those two engineers, they have reinvented what this guy came up with.” This is William Vickrey. He was an economist at Columbia who proved mathematically that the second-price auction was the ideal solution to the winner’s curse. And you know what, that won him the Nobel Prize in Economics in 1996.

2. Emergent Layers, Chapter 2: Overserved and Underserved Customers – Alex Danco

Returning to disruption theory, the critical element we’re going to use from that framework is the idea of the overserved customer: the customer who is being served too much by incumbents. In mature industries, where everybody agrees what the scarce resource is and the core constraints are well understood and organized around, we see this happen a lot. As incumbent companies compete with each other for business, and customers are all being served adequately (for the understood job at hand), competition becomes a feature race where products improve or expand at a greater rate than customers’ capacity to use them. There’s a misalignment between what the customer needs and is getting, with that misalignment falling onto the side of “I’m spending way too much of my money or time for this.” Crucially, when customers are overserved for a particular job, it introduces the critical space and oxygen required for a new competitor with some sort of scalable, technological advantage to enter the market at the low end. The nature of over-service creates powerful incentives for incumbents to not engage with disruptive entrants, but rather to retreat upmarket towards higher profit margins…

…For a more recent but still “classic” example, let’s look at Airbnb. Airbnb was able to get off the ground because there was a critical subset of customers in the hospitality industry — initially young people, although not exclusively so — who were overserved by many aspects of the hotel industry. Hotels were serving customers along many axes of performance — comfort, privacy, loyalty reward programs, and so forth — that just weren’t very important to a specific subset of customers who didn’t care too much about all that stuff; they just want a place to stay. This gave Airbnb the critical oxygen necessary to get a foot in the door, and then expand upwards from a dramatically cheaper cost structure than Marriott can possibly compete with. The overserved customer is a very potent and dangerous one: they know what they’re looking for, and they don’t need to be educated when a new entrant comes along with the right proposition. If that new entrant gets a few critical things right, they’re looking at a large group of early adopters that need little prodding, little education and little advance notice. That’s a great basis to start a company.

Let’s now consider another kind of pain: underserved customers. Their pain appears to be more straightforward: they have some fundamental need that isn’t being met. But this situation is trickier than it seems: if a group of customers have a genuine need, then why aren’t companies stepping in to offer solutions? What’s the catch? It could be because the solutions are genuinely too hard, or face technical or feasibility obstacles. It could also be that customers aren’t aware they have the problem. Either way, that’s tough…

…Now let’s put these two types of customer pain together. What would happen if a customer were both overserved and underserved at the same time? Is this possible?

As it turns out, this situation is not only possible, but occurs regularly. And it’s highly volatile. The trick to figuring out how this works requires venturing one step beyond disruption theory, and recasting the job-to-be-done as a stack itself with a hierarchy of low-level to high-level needs…

…We can characterize the initial job where customers are being served as being at level j, where incumbents vie for customer dollars and products will inevitably trend towards over-service. Meanwhile, we can characterize the higher-order job as being at level j+1, which encompasses the customer’s higher-level objectives, and where companies are not, for whatever reason, currently serving anyone…

…Consider Uber: you have a large group of customers (myself included) who are overserved by owning their own vehicle. If your car sits idle & parked more than 95% of the time (which is about average in North America), you are clearly overserved by owning this car! Yet at the same time, that same set of customers is underserved at level j+1, or the reason why they own a car in the first place — “I need to get to specific places at specific times”. You have a schedule to keep, and it’s hard.

Notice that both of these conditions must hold true in order for Uber to work. If customers were not overserved, it would be difficult for them to abandon their current solution. (Consider someone who drives their vehicle for a living, many hours per day. They are significantly less overserved by their vehicle, and quite unlikely to switch to using Uber for the equivalent job.) At the same time, if they weren’t underserved for a higher-level job (get me places at a certain time), then the only way for a new solution to be truly compelling would be dramatically lower price — which makes for a tough business model. This is another thing outside observers get wrong about Uber when they exclaim, “I don’t see how this is cheaper than owning a car!” Well, here’s the thing — Uber doesn’t have to be cheaper than driving, because it’s superior to driving your own vehicle in many ways! You don’t have to worry about parking, insurance, drinking, maintenance, gas, or anything else. The simultaneous condition of being overserved and underserved by existing solutions is what made Uber so compelling, in a way that other ride-sharing services or carpooling didn’t quite get right. Uber works because it’s cheap, but its appeal is because it’s better…

…If customers only check off the “underserved” box, then it seems likely you’re dealing with a problem that’s a. very hard, or b. the customer isn’t aware they have. This isn’t a great position to be in — it’ll be very hard to build an initial solution and attract early adopters.

If they only check off the “overserved” box, then customers know what they want — but it may be that they’re only motivated by price. And that’s also not a great position to be in: you may get lots of adopters really quickly, but find it very difficult to extract any profit from them…

…The particular combination of customers overserved at level j while being underserved at level j+1, when it happens, explains how from time to time we see markets where demand is zero and then, all of a sudden, shoots straight up in a vertical line.

3. Why Housing May Be In for Another Cost Shock Next Year – Tracy Alloway, Joe Weisenthal, and Lee Everett

Lee (04:44):

It’s interesting. I think stress is hitting sort of all sides of the market. You have your bigger, more well established shops that have been managing through this, able to handle the higher rate environment, but have obviously taken a very real valuation hit on their existing portfolios. Like 20% to 30% depending upon the portfolio composition. At the same time you’ve had record demand hitting the sector because cost to buy housing is exceptionally unattainable today. And then on the other side you’re having a very material impact on the supply side and I think that’s what’s really unique. If you think back to September, the 10-year was around a 3.6%, I think, the day Chair Powell cut us by 50 basis points. Well, we’re at almost a 4.6% today and I remember that night you heard reports about developers out at local dinners and they were calling it Fed Day and getting ready to put shovels in the ground.

Joe (05:37):

Drinking champagne and stuff like that.

Lee (05:38):

Exactly. And what you’ve seen instead is increased stress on both the short end and the long end of the curve. That’s given you trouble on the short end, to start new housing, and trouble on the long end to afford longer term for ownership housing…

…Lee (11:29):

Yes, I think frankly we’re about to transition from what has been a very renter friendly market to again a landlord friendly market over the course of the next two to three years. And that’s going to be particularly driven by what we’re seeing on the supply side. We’re going to have over a million units come to market over a two-year period here in ’24 and ’25, but peak supply is hitting in the next six months and if you look at relative time from a) peak supply and then b) to getting to a level of lower supply than you saw last cycle, every major market in the country will be there by the end of 2026.

Joe (12:13):

Be where?

Lee (12:15):

Delivering less housing units than they did on average from ’17 to ’19 in apartment buildings. So you’re going to go below prior cycle supply very quickly. At the same time, we do have exceptionally strong labor markets here and the demand story has been outstanding. So 2024 is going to end the year, depending upon the data provider you use, as the first or third highest year for rental demand ever. 2021 was the prior record. So we’re seeing people form rental households at unprecedented rate in the US and as that supply comes down, you’re going to see that demand struggle to frankly find high quality, well-located assets to move in, and you’re likely to see that relationship flip at that point.

Tracy (13:08):

So the other thing that affects multifamily housing construction other than interest rates has to be just general confidence, I guess, in the direction of the economy, the direction of the world and certainly there’s a lot going on right now. We’re recording this on January 28th and there’s news that the Trump administration is freezing a whole bunch of federal spending. I think it’s something like 20% of federal spending. That includes presumably stuff like Section 8 and other affordable housing measures. Would that be expected to hit multifamily as well?

Lee (13:46):

Yeah, and I think it’s probably easiest to sort of start at the top, right? When you’re building multifamily, you’re generally trying to build to an acceptable return on cost, but frankly what we’re doing is putting an investor’s money together and generating returns for them. Multifamily isn’t built for free and it can’t be in this sort of economic world, and a general rule of thumb is a 6+% return on cost. So cost to build, you want to yield over 6% of that to get a building to pencil. That tracks up closer to 7% depending upon the institution. Because you need to build to that yield on cost, you have to have rents that are high enough to generate enough rental revenue to drive that return. So in order to build today, you have to build at exceptionally high rent levels, because of the cost to build and because of the cost of interest rates.

The only way to drop that is to drop the cost and that cost drop typically comes for affordable housing from the federal government, be it HUD grants that are then deployed through the local housing agency, be it LIHTC, be it any sort of an ensemble of ways to cut costs. That’s how you can get to affordable rents on the supply side. And then on the demand side, you can cut rents by literally giving people a rent check, which is what Section 8 is. And that again comes from the federal government via grants given to the local housing agencies to deploy. And if that money dries up, you have immense problems in terms of a) fueling the demand for these people, because you’re cutting rent on the Section 8 side and b) encouraging future construction of affordable apartment buildings…
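
To see why high construction costs force high rents, here is the yield-on-cost arithmetic as a quick sketch. Only the ~6% rule of thumb comes from the transcript; every other number below is our own illustrative assumption.

```python
# All figures hypothetical except the ~6% yield-on-cost rule of thumb.
total_cost = 30_000_000       # land + construction for a 100-unit building
units = 100
target_yield_on_cost = 0.06   # "a 6+% return on cost" to make the deal pencil
noi_margin = 0.60             # assume 60% of rent survives operating costs

required_noi = total_cost * target_yield_on_cost  # net operating income needed
required_rent_roll = required_noi / noi_margin    # gross annual rent needed
monthly_rent_per_unit = required_rent_roll / units / 12

print(f"${monthly_rent_per_unit:,.0f}/month per unit")  # -> $2,500/month
# Cutting costs (e.g. HUD grants, LIHTC) lowers total_cost, and with it
# the rent level needed to hit the same yield on cost.
```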

…Joe (17:47):

Let’s talk about deportation impacts on labor. What are the estimates for what percentage of the multifamily workforce, whether it’s construction or maintenance, whatever else, is undocumented labor?

Lee (18:01):

It’s estimated 20% of construction workers in this country are undocumented labor. I’d venture to guess it’s similar for the whole multifamily industry when you look at staffing and things along those lines, and I think when you look at a combination of deportation of construction workers as well as the sheer amount of labor it’s going to require to rebuild huge swaths of California, I think you could be looking at a massive deficit in labor within the construction space. And when you think about that, that’s going to be your strongest lever that’s going to hit your cost to build, and that’s what’s going to drive up those rents that are necessary: all of this immense pressure you’re going to see in labor costs.

4. Test-Time Search: A Path To AGI – Akash Bajwa

The GPT family of models performed poorly relative to o3 on the ARC benchmark because large models memorise knowledge rather than reasoning processes…

…As an example, Meta intentionally overtrained Llama 3 on 15 trillion tokens to lower inference costs (as they served their billions of users). The model weights become more optimised for common patterns and in-distribution tasks, trading off generalisability to novel tasks.
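
As a rough sense of scale, taking the 8B variant as the example: under the common Chinchilla heuristic of ~20 training tokens per parameter (our assumption, not from the article), 15 trillion tokens sits far past the compute-optimal point.

```python
# Back-of-envelope only; the ~20 tokens/parameter "compute-optimal" ratio is
# the Chinchilla heuristic (Hoffmann et al., 2022), applied as an assumption.
params = 8e9            # Llama 3 8B
tokens_trained = 15e12  # 15 trillion tokens

compute_optimal_tokens = 20 * params  # ~1.6e11 tokens
overtraining_factor = tokens_trained / compute_optimal_tokens
print(f"~{overtraining_factor:.0f}x past compute-optimal")  # -> ~94x
```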

This architecture combined with ‘internet scale’ data has produced incredible recent advances, but the next leap will come from a new paradigm – instead of outputs, models will be trained on reasoning steps…

…This new vector of scaling will rely on a combination of synthetic and human-generated reasoning data. As we’ll see, both will be expensive forms of reinforcement learning (o3’s performance of 87.5% on ARC AGI in high-compute mode cost thousands of dollars per task)…

…Synthetic data will be most useful for domains where functional verification is possible, e.g. code, maths and engineering…

…Scaling inference time compute is in line with the Bitter Lesson – there are only 2 techniques that scale indefinitely with compute: learning & search.

DeepMind’s AlphaGo used Monte Carlo Tree Search during test time to attain superhuman status – if stripped of this capability, it drops in Elo from ~5,200 to ~3,000 (top humans are around ~3,800)…

…The exorbitant costs stem from the many, many Chains Of Thought generated as the model searches for the chains that lead to the right answer – all of the other tokens are useless, but cost a lot to generate…

…Functionally verifiable domains are the most amenable to synthetic CoTs because engineering the reward is much easier than in domains where subjectivity is involved…

…Code execution provides an unambiguous, binary reward signal – either the code executes successfully or it fails, creating clearly defined success criteria for training.

In functionally verifiable domains, the correct CoT tokens become training data…
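
A minimal sketch of that reward signal, assuming a `sample_candidate` callable that returns candidate programs (a name we’ve invented for illustration); this shows verification-as-reward in miniature, not any lab’s actual training pipeline.

```python
import subprocess
import sys
import tempfile


def binary_reward(candidate_code: str, test_code: str) -> int:
    """Execution is the verifier: reward 1 if the candidate passes the
    tests (the process exits cleanly), 0 otherwise."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        return 0  # hung or looping candidates fail verification too
    return 1 if result.returncode == 0 else 0


def verified_samples(sample_candidate, test_code: str, n: int = 32) -> list[str]:
    """Sample n candidate chains/programs and keep only the verified ones:
    these become training data; the rest are discarded (but paid-for) tokens."""
    return [c for c in (sample_candidate() for _ in range(n))
            if binary_reward(c, test_code) == 1]
```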

…Over time, this should have a deflationary effect on the inference cost of reasoning models, as we’ve seen with frontier models in the pre-training paradigm…

…As pre-training gains plateau (or become too expensive), we’ve found a new vector of scaling (test time search) that is demonstrating a path to truly general intelligence.

Data acquisition/generation remains the bottleneck on progress, not compute. Microsoft’s announcement of $80bn in capex for 2025 underscores the Street’s underestimation of hyperscaler capex and compute buildout.

The implications of inference scaling run up and down the stack. Instead of the densely interconnected supercomputers of the pre-training paradigm, we will see more distribution of workloads, perhaps some even running locally. How will market share evolve as companies look to optimise test time search workloads – will AI ASICs eat into Nvidia market share?

Instead of prohibitively expensive pre-training runs, enterprises developing their own models may opt to train smaller models with reasoning cores and decide when to scale up test time search for certain economically valuable tasks. The result is the alchemy of capex to opex and fixed costs to variable costs. CIOs will decide which tasks merit more investment and test time search – inevitably, this will still be cheaper than human labour.

5. Don’t Freak Out – Ben Carlson

The common theme across the Apollo missions was the sheer amount of planning involved. There were months and months of simulations and training exercises to review every possible scenario. They wanted every process to be automatic.

But there was always the risk of an unplanned error, considering they were propelling these giant hunks of metal through space using rocket fuel that would allow them to reach speeds of more than 24,000 miles per hour…

…When Apollo 13 had an explosion mid-flight, it wasn’t something anyone thought could have been even a remote possibility. Astronaut Jack Swigert explained it after the fact like this:

Nobody thought the spacecraft would lose two fuel cells and two oxygen tanks. It couldn’t happen. If somebody had thrown that at us in the simulator, we’d have said, ‘Come on, you’re not being realistic.’

This is why NASA trained the astronauts in one skill more than any other leading up to their space flights — the art of not panicking. The only reason they could turn the Apollo 13 spacecraft around 200,000 miles from earth following an explosion onboard is because the astronauts and everyone on the ground remained levelheaded. No one freaked out.

Or if they were freaking out internally, they didn’t act on those emotions.

In a nutshell, that is successful investing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve consistently been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 02 February 2025:

1. DeepSeek: The View from China – Jordan Schneider, Irene Zhang, Angela Shen, and Yiwen

In this newsletter, we share a translation of insights from a January 26 closed-door session hosted by Shixiang 拾象, a VC spun out from Sequoia China. Attended by dozens of AI researchers, investors, and industry insiders, the event captures how the Chinese AI community is processing the DeepSeek shock…

…The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant…

…In the short-term, everyone will be driven to think about how to make AI more efficient. In the long-run, questions about computing power will remain. Demand for compute remains strong and no company has enough…

…Why did DeepSeek catch up so fast?

Reasoning models require high-quality data and training. For LLMs or multimodal AI, it’s difficult to catch up with a closed source model from scratch. The architecture of pure reasoning models hasn’t changed much, so it’s easier to catch up in reasoning.

One reason R1 caught up quickly was that the task was not particularly difficult. Reinforcement learning only made the model choices more accurate. R1 did not break through the efficiency of Consensus 32, spending 32 times the efficiency, which is equivalent to moving from deep processing to parallelization, which is not pushing the boundaries of intelligence, just making it easier….

…AI is similar to a step function, where the compute requirements for followers have decreased by a factor of 10. Followers have historically had lower compute costs, but explorers still need to train many models. The exploration of new algorithms and architectures will not stop. Behind the step function, there are significant investments by many people, meaning compute investments will continue to advance. Many resources will also be allocated to products. Apart from reasoning, there are other directions that are compute-intensive. While the vast amount of compute resources spent by explorers may not be visible, without such investment, the next “step” might not occur. Additionally, many are dissatisfied with current architectures and RL methods, and progress will continue.

When exploring directions, performance achieved with 10,000 GPUs may not always be significantly better than that of 1,000 GPUs, but there is a threshold somewhere. It’s unlikely that meaningful results can be achieved with only 100 GPUs because the iteration time for each solution would be too long…

…The question of why OpenAI and Anthropic did not do work in DeepSeek’s direction is a question of company-specific focus. OpenAI and Anthropic might have felt that investing their compute towards other areas was more valuable.

One hypothesis for why DeepSeek was successful is that unlike Big Tech firms, DeepSeek did not work on multi-modality and focused exclusively on language. Big Tech firms’ model capabilities aren’t weak, but they have to maintain a low profile and cannot release too often. Currently, multimodality is not very critical, as intelligence primarily comes from language, and multimodality does not contribute significantly to improving intelligence…

…2025 will, first and foremost, see interest in new architectures beyond Transformers. Some initial exploration is already underway, aiming to reduce costs while pushing the boundaries of intelligence. Secondly, the potential of reinforcement learning (RL) has yet to be tapped into completely. On the product side, there is significant interest in agents, though they have yet to see widespread application…

…It is reported that Meta is still in the process of reproducing DeepSeek, but so far, this has not significantly impacted their infrastructure or long-term roadmap. In the long run, beyond exploring the boundaries of the technology, cost efficiency must also be considered. Lowering costs will let us have more fun…

…From the developer’s perspective, models like Claude-3.5-Sonnet have been specifically trained for tool use, making them highly suitable for agent development. In contrast, models like DeepSeek have not yet focused on this area, but the potential for growth with DeepSeek is immense…

…Currently, reinforcement learning (RL) solves problems with standard answers but has not achieved breakthroughs beyond what AlphaZero accomplished. In fact, it is often simpler. Distillation addresses problems with standard answers, and RL methods work effectively when training with such answers. This explains why distillation and RL have made rapid progress in recent years.

Humanity’s demand for intelligence is vastly underestimated. Many critical problems, such as cancer and SpaceX’s heat shield materials, remain unsolved. Existing AI primarily automates tasks, but there are numerous unsolved challenges ahead. Looking forward, the potential for explosive growth is immense, and the advancement of intelligence cannot stop…

…Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.

The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization…

…Open source controls the margins of the whole market. If open source can do 95% of what closed source can do and closed source is too expensive, then open source can be used completely. If the capabilities of open source and closed source do not differ greatly, then this presents a big challenge for closed source…

…AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.

2. Explaining International Valuations –  Daniel Rasmussen

Perhaps the single greatest divergence in equity markets has been the continued outperformance of US versus international equities—and thus the widening of the valuation gap between the US and the rest of the world…

…By far the most significant difference, explaining about half the valuation gap, is the domicile of listing. US-listed stocks are substantially more expensive than internationally listed stocks for no reason other than the place of listing.

It’s particularly interesting that the regression shows having a higher percentage of sales in the US results in cheaper valuations. A key driver of this is that several of the US tech giants most responsible for high US equity valuations have a relatively low percentage of sales in the US (Alphabet, Microsoft, and Tesla at around 50%; Apple, Netflix, Meta, and NVIDIA at around 40%). The big question, then, is why half the valuation gap is explained simply by being listed on US exchanges. Even large internationally listed companies with >40% of their revenue coming from the US, like Toyota, Mitsubishi, Roche or Deutsche Telekom (which owns T-Mobile), trade at steeply discounted multiples relative to US peers.

Were a larger percentage of the valuation gap explained by fundamentals, we’d expect such a gap to persist. But given that the valuation gap is primarily explained simply by the location of listing, we think there’s a strong reason to expect a convergence—and therefore to favor international over US-listed stocks, despite their terrible relative performance over the past decade.

3. The Most Impressive Prediction of All Time – Jeffrey Emanuel

My candidate for the most impressive prediction of all time came from a person who is practically unknown in the West except for a relatively small group of historians and people interested in niche subjects. The person I’m thinking of is named Pyotr Durnovo, and he was an Imperial Russian government official who lived from 1842 to 1915.

We will discuss more about him later and how his life experience may have prepared him to be able to make such an impressive prediction, but the short version of it is that he initially studied to be in the Navy and served there for around a decade, and then became the Director of Police for the Ministry of Internal Affairs for the entire Russian Empire under Tsar Alexander III. Later, he served as the Minister of the Interior under Tsar Nicholas II (the one who was ultimately executed with his family by the Bolsheviks in 1917 during the Russian Revolution).

So what is this prediction he made, anyway, and why is it so impressive? Well, in 1914, six months prior to the outbreak of World War 1, Durnovo wrote a truly remarkable ~7,600-word memorandum for Tsar Nicholas II and his top 2 or 3 ministers, which we know was given to them, since it was found in Nicholas’ papers and later published in 1922 by communist historians after the revolution. If they had only read it carefully and taken its warnings more seriously, the world we live in today might look very different!…

…For one, it predicted an imminent war on the horizon, which he ultimately blamed on the collision course between England and Germany, the two greatest industrial powers at the time. This was certainly not some earth-shattering or special prediction; a lot of people predicted some kind of big conflict, and it was often said that “war was in the air” at the time…

…It’s how he analyzed the situation, and then used that reasoning to predict the exact groupings of countries that would participate in the conflict and on which side, and how the situation would evolve from there, that is so impressive…

…His predictions about alliances and national behaviors were almost unbelievably specific and ran counter to the conventional wisdom of the time:

  • He predicted that Italy would not side with Germany despite being part of the Triple Alliance, and would instead join the opposing side if victory seemed likely, seeking territory from both Austria and Turkey. This is exactly what happened; Italy joined the Allies in 1915 after negotiating for territorial concessions.
  • He predicted that Romania would remain neutral until it was clear which side would win, then join the victorious side to claim territory. This also came true — Romania entered the war in 1916 on the Allied side after significant Russian successes.
  • Most surprisingly, he predicted that Bulgaria would side against Serbia and by extension against Russia, despite Russia being Bulgaria’s historic liberator from Ottoman rule — a prediction that seemed almost unthinkable to most observers at the time. This came true exactly as he foresaw, with Bulgaria joining the Central Powers in 1915.
  • He correctly predicted that Serbia and Montenegro would side against Austria, while Greece would likely remain neutral until the outcome was more or less predetermined.
  • He predicted unrest among Muslims in the Caucasus and Turkestan (which occurred).
  • He predicted the possibility of Afghanistan moving against Russia (which happened in 1919).
  • He predicted serious complications in Poland (the Polish-Soviet War of 1919-1921).
  • He predicted an uprising in Finland if Sweden joined Germany (Finland did declare independence in 1917).

…If all of that weren’t already so ridiculous to get right, he went way beyond all that to realize that, regardless of who won, the war would lead to “social revolution” in both the defeated AND victorious countries, starting with the losing side and then spreading to the winners. This was perhaps his most extraordinary prediction, as it came true in spectacular fashion:

  • Russia, despite being on the winning side, experienced the Bolshevik Revolution in 1917; we will go into much more detail about these predictions below.
  • Germany, after losing the war, experienced the German Revolution of 1918-1919. Durnovo predicted that unrest and revolution would be tied specifically to economic factors and class interests rather than purely political ones: he outlined how German workers would turn against the agricultural interests that had dominated pre-war German policy once defeat cut off their export markets and industrial employment, and this exact dynamic played out in the German Revolution of 1918-1919.

Now, you might object here that “Well, it’s not that crazy to believe there might be a revolution in a country which suffered massive losses in a catastrophic war; lots of people might have predicted that.” But the thing is, Durnovo went so far beyond merely predicting that there would be a Russian Revolution. He basically predicted every contour of the Revolution, the driving forces behind it, how it impacted different segments of Russian society, and how it would all unfold, step by step!…

…So how was Durnovo able to accomplish this incredible feat of prediction? Obviously, he was a genius of the first order, which is perhaps not so surprising given that he was a close relative of the famous Tolstoy family. But raw IQ is certainly not enough, nor is being well informed and knowledgeable. What kind of man could see so clearly what virtually everyone else missed? He was a complex character whose very contradictions likely enabled his extraordinary insights; he was, at the same time:

  • A conservative police chief who often expressed liberal thoughts in private
  • A supposed reactionary who opposed anti-Semitic measures and defended Jews
  • A cynical operator who nevertheless would help others when he could
  • A man capable of both strict officialdom and surprising gentleness
  • A high official who preferred informal interactions (his subordinates would warn visitors not to address him as “Your Excellency”)

These contradictions suggest someone who wasn’t bound by conventional ideological frameworks or social expectations — a crucial trait for seeing beyond accepted wisdom. He also had a wide range of professional experience that prepared him to see things in a multi-faceted, sophisticated way; by 1915, he had done the following:

  • Naval officer (9 years of far-sea cruises)
  • Military legal training
  • Assistant Prosecutor in various parts of Russia
  • Director of Police Department for 10 years
  • Assistant Minister of Interior under multiple ministers
  • Minister of Interior
  • Member of State Council

This combination of experiences was extraordinary and atypical to say the least:

  • His naval and legal background gave him insight into the military, maritime trade, and the Russian legal system.
  • His prosecutorial work exposed him to conditions across Russia, not just in the big cities.
  • His police work gave him unparalleled insight into social discontent and the strategies and thinking of professional revolutionaries like Lenin, Stalin, and Trotsky.
  • His ministerial positions showed him the workings (and limitations) of state power.

He also occupied a unique position as both an insider and an outsider: 

  • He was from old nobility but not wealthy or particularly influential
  • He reached high office but was temporarily dismissed in disgrace (a sordid story in which Durnovo had his secret police officers search the private letters of a foreign ambassador — inside an embassy building, no less — so they could steal love letters sent by Durnovo’s mistress to the ambassador; when the ambassador complained to Tsar Alexander III, the Tsar was furious, ordering his minister to “remove this swine within twenty-four hours.”)
  • He was a conservative who often disagreed with other conservatives
  • He understood both state power and its limitations

This dual perspective may have freed him from the groupthink that afflicted both conservative and liberal circles.

4. USA, Inc – Michael Batnick

Consider this face blower of a stat from Goldman: “Since 1992, earnings growth in the US has outpaced earnings in non-US developed economies by an annual average of 2.4 percentage points.”

Most of the world is barely earning more than they were prior to the pandemic. The U.S. looks like an unstoppable freight train…

…The one-sided performance has driven valuations between the U.S. and the rest of the world to record levels. We’ve all seen a version of these charts before…

…BUT! These charts aren’t comparing apples with apples. Goldman notes that only 1% of the U.K. market is in technology companies. Another example they cite is that energy is 5% of S&P 500 earnings, 19% of the UK’s, and just 1% of Japan’s.

They did a great job adjusting for differences in sector weights…

…The U.S. still trades at a premium to the rest of the world ex-India, but not as much as the prior chart would have you believe. Before any adjustments, the Eurozone trades at a 39% discount to the U.S. And after the adjustments, that falls to 23%.

5. DeepSeek FAQ – Ben Thompson

Let’s work backwards: what was the V2 model, and why was it important?

The DeepSeek-V2 model introduced two important breakthroughs: DeepSeekMoE and DeepSeekMLA. The “MoE” in DeepSeekMoE refers to “mixture of experts”. Some models, like GPT-3.5, activate the entire model during both training and inference; it turns out, however, that not every part of the model is necessary for the topic at hand. MoE splits the model into multiple “experts” and only activates the ones that are necessary; GPT-4 was an MoE model believed to have 16 experts with approximately 110 billion parameters each.

DeepSeekMoE, as implemented in V2, introduced important innovations on this concept, including differentiating between more finely-grained specialized experts, and shared experts with more generalized capabilities. Critically, DeepSeekMoE also introduced new approaches to load-balancing and routing during training; traditionally MoE increased communications overhead in training in exchange for efficient inference, but DeepSeek’s approach made training more efficient as well.
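To make the routing idea concrete, here is a minimal sketch of top-k expert routing in Python. It is a toy illustration of the general MoE mechanism, not DeepSeek’s implementation; all names, dimensions, and the plain softmax router are our assumptions.

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route one token through only its top-k experts (toy MoE sketch)."""
    logits = gate_w @ x                        # router score for each expert
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over the selected experts
    # Only k experts run; every other expert's parameters stay idle.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
# Each "expert" is a tiny feed-forward block with its own weights.
experts = [lambda x, W=rng.normal(size=(d, d)): np.tanh(W @ x)
           for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
y = moe_forward(rng.normal(size=d), experts, gate_w)  # one token's output
```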

DeepSeekMLA was an even bigger breakthrough. One of the biggest limitations on inference is the sheer amount of memory required: you both need to load the model into memory and also load the entire context window. Context windows are particularly expensive in terms of memory, as every token requires both a key and corresponding value; DeepSeekMLA, or multi-head latent attention, makes it possible to compress the key-value store, dramatically decreasing memory usage during inference.
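A rough back-of-envelope calculation shows why the key-value store dominates memory at long context, and why compressing it matters. Every number below is an illustrative assumption, not DeepSeek’s actual configuration:

```python
# KV-cache memory for a long context window (all figures are assumptions).
layers, kv_heads, head_dim = 60, 128, 128
context_len, bytes_per_value = 128_000, 2          # 2 bytes for FP16/BF16

# Each token stores one key and one value vector per layer.
kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value * context_len
print(f"{kv_bytes / 1e9:.0f} GB of KV cache")      # ~503 GB at these settings

# Compressing keys and values into a small shared latent (the idea behind
# multi-head latent attention) shrinks the cache by roughly the ratio of
# the latent size to kv_heads * head_dim.
latent_dim = 512
mla_bytes = 2 * layers * latent_dim * bytes_per_value * context_len
print(f"{mla_bytes / 1e9:.0f} GB with a {latent_dim}-dim latent")  # ~16 GB
```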

I’m not sure I understood any of that.

The key implications of these breakthroughs — and the part you need to understand — only became apparent with V3, which added a new approach to load balancing (further reducing communications overhead) and multi-token prediction in training (further densifying each training step, again reducing overhead): V3 was shockingly cheap to train. DeepSeek claimed the model training took 2,788 thousand H800 GPU hours, which, at a cost of $2/GPU hour, comes out to a mere $5.576 million.

That seems impossibly low.

DeepSeek is clear that these costs are only for the final training run, and exclude all other expenses; from the V3 paper:

Lastly, we emphasize again the economical training costs of DeepSeek-V3, summarized in Table 1, achieved through our optimized co-design of algorithms, frameworks, and hardware. During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs. Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours. Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M. Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.

So no, you can’t replicate DeepSeek the company for $5.576 million.

I still don’t believe that number.

Actually, the burden of proof is on the doubters, at least once you understand the V3 architecture. Remember that bit about DeepSeekMoE: V3 has 671 billion parameters, but only 37 billion parameters in the active experts are computed per token; this equates to 333.3 billion FLOPs of compute per token. Here I should mention another DeepSeek innovation: while parameters were stored with BF16 or FP32 precision, they were reduced to FP8 precision for calculations; 2048 H800 GPUs have a capacity of 3.97 exaFLOPs, i.e. 3.97 billion billion FLOPs. The training set, meanwhile, consisted of 14.8 trillion tokens; once you do all of the math it becomes apparent that 2.8 million H800 hours is sufficient for training V3. Again, this was just the final run, not the total cost, but it’s a plausible number.
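For readers who want to check that math themselves, here is the arithmetic as a short Python sketch, using only the numbers quoted above. The ~25% utilization it implies is our inference, and is a realistic figure for a large training run:

```python
# Sanity-check of the plausibility argument, using the quoted numbers.
flops_per_token = 333.3e9     # compute per token, as stated above
tokens = 14.8e12              # training-set size
cluster_flops = 3.97e18       # 2048 H800s at FP8 peak, as stated above

total_flops = flops_per_token * tokens               # ~4.9e24 FLOPs
seconds_at_peak = total_flops / cluster_flops        # ~1.24e6 s (~14 days)
gpu_hours_at_peak = seconds_at_peak / 3600 * 2048    # ~0.71M GPU hours

# The claimed 2.788M GPU hours then implies a hardware utilization of:
implied_util = gpu_hours_at_peak / 2.788e6           # ~25%
print(f"{gpu_hours_at_peak / 1e6:.2f}M GPU hours at peak; "
      f"implied utilization ≈ {implied_util:.0%}")
```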

Scale AI CEO Alexandr Wang said they have 50,000 H100s.

I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs; they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.

Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of compute; that’s because DeepSeek programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is impossible to do in CUDA; DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.

Meanwhile, DeepSeek also makes their models available for inference: that requires a whole bunch of GPUs above-and-beyond whatever was used for training…

Is this why all of the Big Tech stock prices are down?

In the long run, model commoditization and cheaper inference — which DeepSeek has also demonstrated — are great for Big Tech. A world where Microsoft gets to provide inference to its customers for a fraction of the cost means that Microsoft has to spend less on data centers and GPUs, or, just as likely, sees dramatically higher usage given that inference is so much cheaper. Another big winner is Amazon: AWS has by-and-large failed to make their own quality model, but that doesn’t matter if there are very high quality open source models that they can serve at far lower costs than expected.

Apple is also a big winner. Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that. Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU (neural processing unit) have access to a shared pool of memory; this means that Apple’s high-end hardware actually has the best consumer chip for inference (Nvidia gaming GPUs max out at 32 GB of VRAM, while Apple’s chips go up to 192 GB of RAM).

Meta, meanwhile, is the biggest winner of all. I already laid out last fall how every aspect of Meta’s business benefits from AI; a big barrier to realizing that vision is the cost of inference, which means that dramatically cheaper inference — and dramatically cheaper training, given the need for Meta to stay on the cutting edge — makes that vision much more achievable.

Google, meanwhile, is probably in worse shape: a world of decreased hardware requirements lessens the relative advantage they have from TPUs. More importantly, a world of zero-cost inference increases the viability and likelihood of products that displace search; granted, Google gets lower costs as well, but any change from the status quo is probably a net negative…

…How did DeepSeek make R1?

DeepSeek actually made two models: R1 and R1-Zero. I actually think that R1-Zero is the bigger deal…

…R1-Zero, however, drops the HF part — it’s just reinforcement learning. DeepSeek gave the model a set of math, code, and logic questions, and set two reward functions: one for the right answer, and one for the right format that utilized a thinking process. Moreover, the technique was a simple one: instead of trying to evaluate step-by-step (process supervision), or doing a search of all possible answers (a la AlphaGo), DeepSeek encouraged the model to try several different answers at a time and then graded them according to the two reward functions.
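As a rough illustration of that sampling-and-grading loop, here is a schematic sketch. The answer template, the tag format, and the reward values are assumptions made for illustration; this is not DeepSeek’s training code:

```python
import re

def format_reward(sample: str) -> float:
    # Reward for using the expected think/answer template (assumed format).
    return 1.0 if re.search(r"<think>.+</think>\s*<answer>.+</answer>", sample, re.S) else 0.0

def correctness_reward(sample: str, target: str) -> float:
    # Reward when the final answer matches the known-correct one.
    m = re.search(r"<answer>(.*?)</answer>", sample, re.S)
    return 1.0 if m and m.group(1).strip() == target else 0.0

def grade_group(samples: list[str], target: str) -> list[float]:
    # Score several sampled answers to one question; the policy is then
    # nudged toward samples that beat the group average.
    rewards = [format_reward(s) + correctness_reward(s, target) for s in samples]
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]      # advantages for the RL update

samples = ["<think>2+2=4</think><answer>4</answer>", "five"]
print(grade_group(samples, "4"))            # [1.0, -1.0]
```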

What emerged is a model that developed reasoning and chains-of-thought on its own…

…Here again it seems plausible that DeepSeek benefited from distillation, particularly in terms of training R1. That, though, is itself an important takeaway: we have a situation where AI models are teaching AI models, and where AI models are teaching themselves.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms, Microsoft, Netflix, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 January 2025:

1. Thoughts On A Month With Devin – Hamel Husain, Isaac Flath, and Johno Whitaker

Unlike typical AI assistants, Devin operates through Slack and spins up its own computing environment. When you chat with Devin, you’re talking to an AI that has access to a full computing environment – complete with a web browser, code editor, and shell. It can install dependencies, read documentation, and even preview web applications it creates…

…The experience is designed to feel like chatting with a colleague. You describe what you want, and Devin starts working. Through Slack, you can watch it think through problems, ask for credentials when needed, and share links to completed work. Behind the scenes, it’s running in a Docker container, which gives it the isolation it needs to safely experiment while protecting your systems. Devin also provides a web interface, which allows you to access its environment and watch it work with IDEs, web browsers, and more in real time…

…Our first task was straightforward but real: pull data from a Notion database into Google Sheets. Devin tackled this with surprising competence. It navigated to the Notion API documentation, understood what it needed, and guided me through setting up the necessary credentials in Google Cloud Console. Rather than just dumping API instructions, it walked me through each menu and button click needed – saving what would typically be tedious documentation sleuthing. The whole process took about an hour (but only a few minutes of human interaction). At the end, Devin shared a link to a perfectly formatted Google Sheet containing our data.

The code it produced was a bit verbose, but it worked. This felt like a glimpse into the future – an AI that could handle the “glue code” tasks that consume so much developer time. Johno had similar success using Devin to create a planet tracker for debunking claims about historical positions of Jupiter and Saturn. What made this particularly impressive was that he managed this entirely through his phone, with Devin handling all the heavy lifting of setting up the environment and writing the code…

…Over the course of a month, we systematically documented our attempts across these categories:

  1. Creating new projects from scratch
  2. Performing research tasks
  3. Analyzing and modifying existing projects

The results were sobering. Out of 20 tasks, we had 14 failures, 3 successes (including our 2 initial ones), and 3 inconclusive results. Even more telling was that we couldn’t discern any pattern to predict which tasks would work. Tasks that seemed similar to our early successes would fail in unexpected ways…

…Working with Devin showed what autonomous AI development aspires to be. The UX is polished – chatting through Slack, watching it work asynchronously, seeing it set up environments and handle dependencies. When it worked, it was impressive.

But that’s the problem – it rarely worked. Out of 20 tasks we attempted, we saw 14 failures, 3 inconclusive results, and just 3 successes. More concerning was our inability to predict which tasks would succeed. Even tasks similar to our early wins would fail in complex, time-consuming ways…

…This reflects a pattern we’ve observed repeatedly in AI tooling. Social media excitement and company valuations have minimal relationship to real-world utility. We’ve found the most reliable signal comes from detailed stories of users shipping products and services. For now, we’re sticking with tools that let us drive the development process while providing AI assistance along the way.

2. Transcript: The Hidden History of Eurodollars, Part 1: Cold War Origins – Joe Weisenthal, Tracy Alloway, Lev Menand, and Josh Younger

Tracy (01:30):
It can be admittedly confusing. So why don’t we just define it right away. So eurodollars are dollar-denominated bank deposits held at foreign banks or overseas branches of US banks. And you can think of them as basically offshore dollars that sit outside the US banking system and kind of away from the Federal Reserve. They’re basically a very special form of money. You could call them shadow money.

Joe (01:57):
And it’s totally gigantic. So it’s almost $10 trillion. And I just find it so interesting, right? Because when I think of dollars, they’re either coming from, you know, the government spends dollars into existence or US bank credit. US banks [have a] license to de facto create dollars or deposits at will. And yet, eurodollars are kind of this weird thing, I guess because they’re not that.

Tracy (02:21):
Yeah, they’re not either of those. And eurodollars didn’t just spring up fully formed out of thin air. They were the result of a series of decisions all aimed at solving particular problems…

…Josh (04:27):
So eurodollars are among the most important financial instruments in the world and they are really the backbone of the global dollar system. But they come from very humble beginnings, very idiosyncratic start. And really it all started in Yugoslavia…

…So in November 1945, there’s a communist revolution, and the US is miffed in a bunch of ways, but one of them is that the old government owes them money. And so the question is, how are they going to get it? And a few months later, Tito asked for his gold back, because the Yugoslav government had $70 million worth of gold in New York. And the Secretary of State, who was George Marshall of the Marshall Plan, realizes he’s got a bargaining chip, which is the gold. It’s in New York, and they don’t get it back until they settle their claims.

Now, even people within the State Department were kind of skeptical of this, and the Yugoslav government is obviously furious. And so are the Russians, who at this point are quite closely aligned with Tito (Tito and Stalin only have their falling out a few years later)…

…The Russians get the sense that the US is willing to use gold as a bargaining chip. They’d previously actually been building up dollar balances in New York. This is kind of a misnomer about the post-war period. There’s this sense that the Russians are extracting all their resources from the US, but they’re actually building up reserves of dollars because the thought is ‘We’re probably going to need to trade with these people. We have a trading company based in the US and they need resources.’ And so they’re building up foreign currency deposits and gold, but in 1947, they realize it’s not going to go well, potentially. And they pull all the gold out. They actually just called banks in New York and they say ‘We want our gold back.’ A massive reversal of the policy.

And the question is, where’s it going to go? And so they need dollars because the US dollar is the currency of foreign exchange. If they want to trade with the West, they have to trade in dollars. They need gold because gold is the basis for the monetary system. And so the question is, where can they put gold and dollars in a safe place that’s still on the right side of what was then already known as the iron curtain?

And so it turns out Paris is the ticket. They’ve actually been secretly stockpiling cash and gold in Paris. They put it in briefcases. They would fly people to Paris and put it in the consulate offices. They would just build up piles of cash and gold. And in particular, there’s a bank — BCEN — I won’t try to do it in French. And BCEN is owned by, or run by, a notorious communist sympathizer, who has a very good relationship with the Politburo. And so this is a friendly bank. And so they take on deposit the Soviet money, and BCEN’s moniker in the Telex system they used to communicate was “Eurobank.”

And so, eurodollars were initially, in the late forties, just deposits issued by Eurobank, BCEN, generally for the Soviets, although also for the Chinese. And slowly this starts to percolate. There’s another communist-owned bank in London. There’s one in Brussels, which the CIA describes as run by ‘someone with few scruples’, I think is the way they put it. And so there’s some friendlies across Europe who are willing to take their money, and the eurodollar market begins this way, which is preemptive sanctions evasion, basically…

…And so the first use case of eurodollars is sanctions evasion. The second use is to facilitate cross-Iron Curtain trade, although that’s a pretty small business. And so the third, and much larger business, is cross-border interest rate arbitrage. And that sounds really technical, but what it’s really doing is using foreign exchange markets and derivative markets to source dollars that the UK in particular needs in this post-war environment.

So imagine a eurodollar bank, a euro bank, takes in a eurodollar deposit, which means it gets a dollar in cash — let’s think of a physical bill; that’s an asset. It issues a eurodollar liability. And then, what is it going to do next? Because it needs to do some sort of investing. And what it does is it exchanges that dollar asset for sterling cash, and it invests that sterling cash in some short-term sterling investment — short bills or something like that. And after it does that, it says ‘I want to hedge my foreign exchange risk, because now I have a dollar liability and a sterling asset. So I’m going to use the foreign exchange forward market to agree to sell that sterling back for dollars at some point in the future at a fixed price that we agree on today.’

So that’s the bank’s position. Who’s on the other side of that trade? Let’s say a corporation, a manufacturing entity, they make radios, and that radio production process requires inputs. Those inputs are imported. And so that radio production company needs dollars with which to buy the raw materials that it uses to make the radio that it then sells for dollars in foreign markets. And so, they get those dollars from the eurobank, in exchange for the sterling they have on hand, they go buy all the parts, but they want to make sure that they know how much they’re going to receive in local currency at the end of the production process. When they sell that radio abroad, they don’t want the value of the dollar to go down. So they sell those dollars forward in exchange for sterling. And so they’ve entered into a derivative agreement, which is the opposite of the one that the euro bank has or the euro banking system.

And so then they put together the radio, they sell it abroad, they receive dollar proceeds, they turn those into sterling, which is what they pay their employees in, that’s what they pay for their land and equipment in. And that exchange rate was the one they agreed upon in advance through the foreign exchange forward contract. And so, basically what’s happening is the euro banks are pulling in dollars from abroad, distributing them through the foreign exchange market that’s trading onshore to those that need dollars today, and then providing hedges to those that will receive dollars in the future. And in the case of the euro bank, the dollars they’ll owe in the future, potentially, to their eurodollar deposit holder.
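To make that round trip concrete, here is a stylized version of the hedge with made-up numbers; the deposit size, rates, and forward price below are pure assumptions for illustration:

```python
# A stylized 1950s-style covered round trip (all figures are assumptions).
deposit_usd = 100.0
spot = 2.80                   # dollars per pound sterling
r_sterling = 0.04             # annualized UK short-bill rate
tenor = 0.25                  # three months

pounds = deposit_usd / spot                       # swap dollars into sterling
pounds_later = pounds * (1 + r_sterling * tenor)  # invest in short sterling bills

# Sell the future sterling forward today, locking in a dollar amount and
# removing the FX risk on the bank's eurodollar liability.
forward = 2.78                # assumed forward rate (slight sterling discount)
dollars_back = pounds_later * forward
print(f"Hedged dollar proceeds on a $100 eurodollar deposit: ${dollars_back:.2f}")
```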

Lev (18:32):
Think about this from the perspective of the City of London coming out of the war and those bankers and the world that they grew up in, which is a world that we’ve completely forgotten, but was the world of sterling dominance before the First World War and the role that the empire played in financing global trade.

What we’re looking at in the 1950s is a group of London-based financial institutions trying to figure out a way to continue their dominance in a global economy that runs on dollars now and not on sterling. And so, the eurodollars are sort of worth the risk to the City of London, and to some extent to UK financial regulators like the Bank of England, because they need to fix their business model for a dollar world, and they want to get in on the dollar world…

…Josh (20:43):
And so this cross-border interest rate arbitrage is really just the way markets distribute the currency according to who needs it and provide the hedges that facilitate the functioning of British corporations as well. It’s what we’d call now like a use case, right? This is like a real underlying use case that doesn’t involve the Soviet Union for dollar deposits issued by non-US banks, which is, you can’t emphasize enough how fundamentally strange that is, because if I tried to make dollars by writing it on a piece of paper, I don’t think I’d get very far. But at the time, that’s essentially what these banks are doing.

And in particular London is a more, let’s say, reputable locale, particularly banks that are not known to be communist sympathizers. There’s a little bit of a funny thing about being a communist bank, but we won’t get into that specifically, but these are blue chip banks in London issuing dollar deposits. And that means you can use them for things and you can feel more comfortable…

…Lev (26:54):
Although, just let’s size this a little bit, right? It was a billion dollars in, say, 1960, which is maybe the equivalent of $50 billion today…

…So we have way more to go in terms of the growth of this market subsequent to 1960. It’s still pretty nascent in 1960…

…Josh (31:08):
So the question at this point is, it’s a nascent market, it’s half a Tether, and it’s unclear whether or not it will become a big major global actor. We know it eventually becomes that, but at the time, that’s super unclear. Soon, though, it becomes the solution to a big problem. Eurodollars are the solution to a big problem because, in the background of all of this buildup, there’s massive trouble brewing and the whole global edifice of the dollar system is starting to crack.

And the question is, you know, how are we going to save it? Or should we?

3. Emergent Layers, Chapter 1: Scarcity, Abstraction & Abundance – Alex Danco

One foundational principle of the tech world is that as it builds upwards and outwards into the rest of the world, it’s doing so by building on top of these abundant resources and progressively leveraging them. We can think about the world that we know and understand today — with its constraints, and business models and maturing industries that are generally understood by all — as forming a layer, which we’ll call layer i. In time, as certain elements become abstracted and subsequently abundant, others emerge as newly scarce, or in play for new reasons and in new business models. The critical skill for understanding how this works (which is worth practicing!) is being able to work one’s way up and down between stack layers so as to understand when an abundant and scalable element has blossomed at layer i of a stack, and its scarce, non-scalable counterpart has emerged at a new layer — which we’ll call layer i+1…

…Microsoft

The original scarce resource at layer i = PC hardware. In the early days of PCs, manufacturers could compete along many axes of performance — memory, speed, functionality, and so forth — while being sufficiently differentiated from one another. But it was very hard to standardize common functions and applications that people could run across any computer, making it difficult for these use cases to grow rapidly — until Bill Gates and Paul Allen realized, Hey, there isn’t a software industry yet but there’s gonna be, so we should start it. Microsoft abstracted away the capabilities of a computer into software, so now anyone else could write their own software on top of Microsoft’s software without having to worry about the underlying machinery. PCs became an abundantly available commodity, and Microsoft became dominant and mega-profitable. A new scarce resource emerged at layer i+1: the ability to connect these PCs and get them to talk to one another…

…Facebook

Scarce resource at layer i = connections between humans using the internet. The internet was awash in people and content, but authentic human interaction was still relatively scarce and difficult. As such, all of the attempts at connecting people to content and advertising and services were feature-stuffed, spammy, bloated and bad. The critical step forward that Facebook accomplished was abstracting away the “reciprocal friendship” into a functioning social graph. And we’ve seen what’s happened since: Facebook, and social connectivity in general, has exploded and become a newly abundant resource. Facebook became dominant and mega-profitable…

…One critical aspect of this layering is that at each higher level of abstraction, the lever with which one can create value and extract profit becomes successively longer. You can see this by looking at market cap per employee of these dominant companies:

Intel: 106k employees, 55B revenue, 149B mkt cap

Microsoft: 120k employees, 93B revenue, 429B mkt cap

Google / Alphabet: 60k employees 75B revenue, 510B mkt cap

Facebook: 13k employees, 6B revenue, 320B mkt cap…
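Dividing out the figures quoted above makes the lengthening lever explicit; a quick sketch:

```python
# Market cap per employee, using the figures quoted above.
companies = {
    "Intel":     (106_000, 149e9),
    "Microsoft": (120_000, 429e9),
    "Alphabet":  (60_000, 510e9),
    "Facebook":  (13_000, 320e9),
}
for name, (employees, mkt_cap) in companies.items():
    print(f"{name}: ${mkt_cap / employees / 1e6:.1f}M of market cap per employee")
# Intel ~$1.4M, Microsoft ~$3.6M, Alphabet ~$8.5M, Facebook ~$24.6M:
# each layer of abstraction supports more value per person.
```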

…A non-obvious but critical point to appreciate here is that for the first n movers mobilizing around a scarce element, the arrival and eventual dominance of the last mover will be seen as a Black Swan event of sorts. By abstracting away the scarce resource instead of organizing around its scarcity, these companies become the first to be fully playing in the sandbox at level i+1, as opposed to the non-scalable scarcity-governed sandbox at level i…

…The last decade saw plenty of startups go after the transportation market, and I’m sure all of them described themselves as “scalable” in their investor decks. Meanwhile, the whole valley was busy passing on Uber because it was initially just a better way to do a black car service, and few people understood the true scalable potential in abstracting away the driver-rider trust required for UberX. The take-home lesson here should be taken to heart: when the first n companies go after an issue, no matter what language they use in their pitch, their business models typically don’t truly venture beyond the constraints at layer i that anybody can see and understand. They’re easier to work through, make more sense to “rational investors”, and require fewer non-linear leaps of thinking to understand. As such, when the last mover emerges at level i+1, they’re a Black Swan event: few people foresaw their opportunity, their impact is enormous, and everybody rationalizes what happened after the fact…

…At level i+1 of the stack, the newly valuable resource is that which emerges as scarce out of the transition from scarcity to abstraction to abundance at layer i.

4. The Default Position: LevFin’s Latest Game Just Got Shut Down…Sort Of – JunkBondInvestor

Serta was no small player. We’re talking about the company behind Serta and Beautyrest—the beds you see in every department store in America. But by 2020, they were in serious trouble: drowning in debt, with sales tanking.

That’s when a group of savvy lenders saw their opportunity. Already holding a chunk of Serta’s debt, they approached with what would become lawyers’ new favorite playbook.

The deal? A group holding 51% of their term loans would provide new money, but only if they got to exchange their old loans for new “super-senior” debt that jumps to the front of the line. The other 49%? They didn’t even get a phone call.

Here’s a sobering fact: the non-participating lenders were so deeply subordinated that their recovery prospects plummeted. The new super-senior debt was worth nearly full value, while the excluded lenders saw their position crater.

But here’s where they screwed up.

Their loan agreement only allowed “open market purchases.” Serta’s lawyers tried arguing that their private backroom deal counted as “open market” because… well, just because.

The Fifth Circuit wasn’t having any of it. They said what everyone was thinking: A private deal with hand-picked lenders isn’t an “open market” any more than a private club is a public park…

…On the exact same day—I’m not making this up—a New York court looked at pretty much the identical deal from Mitel Networks and said “Sure, go right ahead.”…

…Mitel pulled the exact same move as Serta. They were drowning in debt, so they cut a deal with friendly lenders to jump them to the front of the line. New super-priority debt paper. Everyone else got pushed to the back.

So what made this different from Serta?

Three words. That’s it. Instead of requiring “open market purchases,” Mitel’s agreement just said they could “purchase by way of assignment.” No mention of open markets anywhere.

The New York court basically said: “Look, if you didn’t want the company doing private deals, you should have said so in the contract.” Those excluded lenders who were screaming about their “sacred rights”? The court told them their rights weren’t so sacred after all.

Here’s the brutal truth—the same transaction either flies or dies based entirely on a few words in your documents. If that doesn’t scare the hell out of every lender out there, it should.

5. Tyler Cowen – The #1 Bottleneck to AI progress Is Humans – Dwarkesh Patel and Tyler Cowen

Dwarkesh Patel 00:00:11
Why won’t we have explosive economic growth, 20% plus, because of AI?

Tyler Cowen 00:00:17
It’s very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can’t use AI very well.

Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I’m guessing education is 6 to 7%. The nonprofit sector, I’m not sure the number, but you add it all up, that’s half of the economy right there.

How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you’ll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.

Dwarkesh Patel 00:01:04
The mechanism behind cost disease is that there’s a limited number of laborers, and if there’s one high-productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?

Tyler Cowen 00:01:25
Cost disease is more general than that. Let’s say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.

Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.

Dwarkesh Patel 00:01:57
If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you’d have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?

Tyler Cowen 00:02:11
He and I would agree, I hope. I think I would tell him, “Hey, it’s going to take a long time.” And he’d say, “Hmm, I don’t see it happening yet. I think it’s going to take a long time.” And we’d shake hands and walk off into the sunset. And then I’d eat some of his rice or wheat or whatever, and that would be awesome.

Dwarkesh Patel 00:02:29
But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don’t just eat it away, you could agree with that, right?

Tyler Cowen 00:02:38
I don’t know what the word “could” means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you’d call prediction markets are not forecasting super rapid growth anytime soon…

…Dwarkesh Patel 00:03:13
In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, or an order-of-magnitude increase, in the population, you plug that number into his model and you get explosive economic growth.

Tyler Cowen 00:03:26
I don’t agree.

Dwarkesh Patel 00:03:27
Why not buy the models?

Tyler Cowen 00:03:28
His model is far too much a one-factor model, right? Population. I don’t think it’s very predictive. We’ve had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.

So it’s really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it’s unclear what’s happened, but it’s also fragile. There’s the perspective of the economist, but also that of the anthropologist, the sociologist.

They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there’s any simple lever you can push on, intelligence or not, that’s going to give you breakaway economic growth.

Dwarkesh Patel 00:04:11
What you just said, where you’re bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you’re going to be bottlenecked by the restaurants…

…Here’s a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.

We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.

Dwarkesh Patel 00:04:53
So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, and one in a thousand of them are geniuses, you have 164,000 geniuses. That’s why you have to do semiconductors in Taiwan, because that’s where they’re putting their nominal amount of geniuses. We’re putting ours in finance and tech.

If you look at that framework, we’d have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn’t going to grow any faster?

Tyler Cowen 00:05:32
I didn’t agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they’re amazingly low. They’re pretty insignificant.

People who are very successful are very smart, but they’re people who have, say, eight or nine areas where, on a scale of 1 to 10, they’re a nine. They have one area where they’re an 11 and a half on a scale of 1 to 10. And then on everything else, they’re an eight to a nine, and they have a lot of determination.

And that’s what leads to incredible success. And IQ is one of those things, but it’s not actually that important. It’s the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 January 2025:

1. OpenAI o3 Breakthrough High Score on ARC-AGI-Pub – François Chollet

OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.

This is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All intuition about AI capabilities will need to get updated for o3…

…The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (costs <$10k) and therefore qualifies as 1st place on the public leaderboard!

The low-efficiency score of 87.5% is quite expensive, but still shows that performance on novel tasks does improve with increased compute (at least up to this level).

Despite the significant cost per task, these numbers aren’t just the result of applying brute force compute to the benchmark. OpenAI’s new o3 model represents a significant leap forward in AI’s ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.

Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: you could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy. Meanwhile o3 requires $17-20 per task in the low-compute mode. But cost-performance will likely improve quite dramatically over the next few months and years, so you should plan for these capabilities to become competitive with human work within a fairly short timeline.

o3’s improvement over the GPT series proves that architecture is everything. You couldn’t throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 – take the same architecture, train a bigger version on more data – is not enough. Further progress is about new ideas…

…Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible…

…To adapt to novelty, you need two things. First, you need knowledge – a set of reusable functions or programs to draw upon. LLMs have more than enough of that. Second, you need the ability to recombine these functions into a brand new program when facing a new task – a program that models the task at hand. Program synthesis. LLMs have long lacked this feature. The o series of models fixes that.

For now, we can only speculate about the exact specifics of how o3 works. But o3’s core mechanism appears to be natural language program search and execution within token space – at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea – this line of work has been a long time coming.

So while single-generation LLMs struggle with novelty, o3 overcomes this by generating and executing its own programs, where the program itself (the CoT) becomes the artifact of knowledge recombination. Although this is not the only viable approach to test-time knowledge recombination (you could also do test-time training, or search in latent space), it represents the current state-of-the-art as per these new ARC-AGI numbers.

Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of “programs” (in this case, natural language programs – the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason why solving a single ARC-AGI task can end up taking up tens of millions of tokens and cost thousands of dollars is because this search process has to explore an enormous number of paths through program space – including backtracking.
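Since Chollet stresses that the specifics are speculation, the following is only a schematic sketch of the general shape he describes: sample many candidate chains of thought, score them with an evaluator, and execute the best one. Every function name here is hypothetical:

```python
def solve_with_cot_search(task, generate_cot, evaluate_cot, execute_cot, n=64):
    """Best-of-n search over natural-language "programs" (chains of thought).

    generate_cot: the base LLM proposing a candidate CoT for the task
    evaluate_cot: an evaluator model scoring how promising a CoT looks
    execute_cot:  running the chosen CoT to produce a final answer
    """
    candidates = [generate_cot(task) for _ in range(n)]
    best = max(candidates, key=lambda cot: evaluate_cot(task, cot))
    return execute_cot(task, best)
```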

2. Energy Cheat Sheet – Brian Potter

Most energy we consume gets wasted. Of the 93.6 quads (~27,400 TWh) the US consumed in 2023, only around 1/3rd of that went towards producing useful work. The rest was lost due to various inefficiencies, such as heat engine and transmission losses…

…Another obvious fact is that despite the burgeoning construction of renewable energy infrastructure, the majority of our energy still comes from burning hydrocarbons. Petroleum, coal, and natural gas combined are responsible for roughly 82% of total energy consumption in the US.

Related to this fact is that electricity generation is a relatively small fraction of our energy system: roughly ⅓ of energy inputs go towards generating electricity. For residential and commercial consumption, only around half of energy use comes from electricity. For industrial and transportation energy (the two largest sources of consumption), electricity is around 13% and less than 0.1%.

What this chart makes clear, but also sort of abstracts away, is the enormous amount of infrastructure we’ve built for moving around hydrocarbons. The US has close to 1 million oil and natural gas wells, 3 million miles of natural gas pipeline, 145,000 gas stations, and capacity to refine 18.4 million barrels of oil a day.

This is why environmental advocates often focus on electrifying everything: decarbonizing energy infrastructure requires much more than just building low-carbon sources of energy like solar panels and wind turbines — it requires fundamentally reworking how our society moves energy around. It’s also why eliminating roadblocks and bottlenecks to energy infrastructure construction is so important.

We can also dive deeper and look at a sector-by-sector breakdown of energy use. The residential sector uses around 11.5 quads (3370 TWh) of energy, a little over 12% of total US energy consumption…

…One major takeaway here is that most residential energy consumption goes into heating things up: space heating (5.74 quads), water heating (1.69 quads), and clothes dryers (0.26 quads) together account for two-thirds of residential energy consumption. You sometimes see air conditioners decried as wasteful by energy-minded environmentalists, but air conditioning is a much smaller share of energy consumption than heating…

…Most transportation energy in the US is consumed in the form of gasoline and diesel fuel, with a relatively small amount of jet fuel. If we look at it by transportation mode, most energy (~78%) is consumed by cars, trucks, and motorcycles…

…The huge amount of energy used by transportation also means that households are using a lot of energy that isn’t captured by the residential energy consumption statistics above. In fact, in a year, the average US household consumes more energy from burning gasoline (~24,000 kilowatt-hours) than what’s used by the entire rest of the house (~22,500 kilowatt-hours).
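That gasoline figure is easy to sanity-check: gasoline holds about 33.7 kWh of energy per gallon, so the claim implies roughly 700 gallons per household per year. The gallons figure below is our assumption; the rest follows from it:

```python
# Rough check of the household-gasoline claim above.
kwh_per_gallon = 33.7        # energy content of a gallon of gasoline
gallons_per_year = 700       # assumed household consumption (~2 typical cars)
gasoline_kwh = kwh_per_gallon * gallons_per_year
print(f"~{gasoline_kwh:,.0f} kWh/yr from gasoline")  # ~23,600 vs ~22,500 in-home
```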

The commercial sector is not that different from the residential sector, with heating air and water using the largest fraction, and cooling and ventilation (i.e., moving air around) also using large fractions. As with residential, its energy consumption is roughly split between electricity and natural gas…

…With industrial energy use, we see a lot of the same patterns that we see in other sectors. One is that utility electricity is a relatively small amount of industrial energy consumption (less than 20%). Most industrial energy comes from burning fuel (mostly natural gas) directly. Once again, we see that heating things up accounts for a huge fraction of energy consumption: roughly half of all manufacturing energy goes into process heating. If we add process heat to residential and commercial air and water heating, we find that roughly 20% of total US energy consumption goes towards heating things up…

…It’s clear that most energy used in the US is ultimately wasted, with only a small fraction being used to perform useful work (moving cars, heating homes, operating electronics, and so on). Moving energy around and changing its form can’t be done perfectly efficiently (thanks in part to the 2nd law of thermodynamics), and all those conversions we require to get energy where it needs to be and in the form we need it whittle away the energy available to get things done…

…The biggest source of losses is probably heat engine inefficiencies. In our hydrocarbon-based energy economy, we often need to transform energy by burning fuel and converting the heat into useful work. There are limits to how efficiently we can transform heat into mechanical work (for more about how heat engines work, see my essay about gas turbines).

The thermal efficiency of an engine is the fraction of heat energy it can transform into useful work. A coal power plant typically operates at around 30 to 40% thermal efficiency. A combined cycle gas turbine will hit closer to 60% thermal efficiency. A gas-powered car, on the other hand, operates at around 25% thermal efficiency. The large fraction of energy lost by heat engines is why some thermal electricity generation plants list their capacity in MWe, the power output in megawatts of electricity…

…The low thermal efficiency of ICE cars (and heat engines in general), together with the high efficiency of electrical equipment (especially things like heat pumps), is the biggest counterweight to the high energy density of hydrocarbons. The gas tank on an ICE car technically stores much more energy than a Tesla battery pack, but only a small fraction of that gasoline energy can be converted into useful motion. Switching to EVs, even if that electricity is still provided by burning fossil fuels, could save large amounts of energy (and thus carbon emissions), as it could mean switching from a 25% efficient gasoline engine to a 60% efficient combined cycle gas turbine. And of course, with electric vehicles, there's the possibility of powering them by non-carbon-emitting sources of electricity like solar or wind.
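Here is a minimal sketch of that efficiency chain. The 25% and 60% figures echo the article; the grid, charging, and drivetrain losses are my own rough assumptions for illustration:

```python
# Sketch of the efficiency argument above: even if an EV is charged
# entirely from gas-fired electricity, the fuel-to-wheels chain can
# beat a gasoline engine. All numbers are illustrative assumptions.
fuel_energy_kwh = 100  # start with 100 kWh of fuel energy

# Path 1: burn gasoline in an internal-combustion engine.
ice_efficiency = 0.25
ice_work = fuel_energy_kwh * ice_efficiency  # 25 kWh of motion

# Path 2: burn gas in a combined-cycle turbine, then charge an EV.
ccgt_efficiency = 0.60       # combined-cycle gas turbine
grid_and_charging = 0.90     # assumed transmission + charging losses
ev_drivetrain = 0.90         # assumed battery-to-wheels efficiency
ev_work = fuel_energy_kwh * ccgt_efficiency * grid_and_charging * ev_drivetrain

print(f"ICE useful work: {ice_work:.0f} kWh")   # 25 kWh
print(f"EV useful work:  {ev_work:.0f} kWh")    # ~49 kWh, roughly double
```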

3. Stocks Are More Expensive Than They Used to Be – Michael Batnick

In January 2018, they wrote an article, CAPE Fear: Why CAPE Naysayers Are Wrong. The article featured yours truly…

…It’s hard to believe seven years have passed since this article. It’s harder to believe that the S&P 500 is up almost 100% since their article came out, and delivered the highest 7-year performance for any CAPE starting at 33x. I did not see this coming. At all.

My whole thing was, yes, valuations are high. But companies are better today and deserve the premium multiple. I was not saying that a high CAPE is bullish. In fact, I ended most of my posts on this topic with the message of, “Expect lower returns.” I’ve never been happier to be wrong.

I want to return to some of the arguments I made, and what the CAPE zealots missed.

To use a long-term average that goes back to the late 1800s is foolish for three reasons. First, we didn’t have CAPE data back in 1929. It was first “discovered” in the late 90s. The discovery of data in financial markets changes the very essence of it. Markets are not governed by the laws of physics. They’re alive. They adapt and evolve and adjust, like a microorganism.

Second, the CAPE ratio has been rising over time since the 1980s. We’ve only visited the long-term average once in the last 25 years, and that was at the bottom of the GFC. If that’s what it takes to return to the long-term average, maybe you should reconsider what an appropriate comp level really is.

Third, and most important, the companies are far better today than they were in the past.
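For readers who want the mechanics: CAPE is just price divided by a ten-year average of inflation-adjusted earnings. Here is a minimal sketch with made-up inputs (the roughly 33x output simply mirrors the multiple discussed above; none of these earnings figures are real):

```python
# Minimal sketch of how the CAPE (cyclically adjusted P/E) ratio is
# computed: index price divided by the 10-year average of
# inflation-adjusted earnings. All inputs below are made up.
def cape(price: float, real_earnings_10y: list[float]) -> float:
    """price / average of ten years of inflation-adjusted earnings."""
    assert len(real_earnings_10y) == 10
    return price / (sum(real_earnings_10y) / 10)

# Hypothetical: an index at 4,800 with real earnings averaging ~145.
earnings = [120, 125, 118, 140, 150, 155, 148, 160, 165, 170]
print(f"CAPE: {cape(4_800, earnings):.1f}x")  # ~33x at these inputs
```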

4. AI’s Uneven Arrival – Ben Thompson

What o3 and inference-time scaling point to is something different: AIs that can actually be given tasks and trusted to complete them. This, by extension, looks a lot more like an independent worker than an assistant — ammunition, rather than a rifle sight. That may seem an odd analogy, but it comes from a talk Keith Rabois gave at Stanford:

So I like this idea of barrels and ammunition. Most companies, once they get into hiring mode…just hire a lot of people, you expect that when you add more people your horsepower or your velocity of shipping things is going to increase. Turns out it doesn’t work that way. When you hire more engineers you don’t get that much more done. You actually sometimes get less done. You hire more designers, you definitely don’t get more done, you get less done in a day.

The reason why is because most great people actually are ammunition. But what you need in your company are barrels. And you can only shoot through the number of unique barrels that you have. That’s how the velocity of your company improves is adding barrels. Then you stock them with ammunition, then you can do a lot. You go from one barrel company, which is mostly how you start, to a two barrel company, suddenly you get twice as many things done in a day, per week, per quarter. If you go to three barrels, great. If you go to four barrels, awesome. Barrels are very difficult to find. But when you have them, give them lots of equity. Promote them, take them to dinner every week, because they are virtually irreplaceable. They are also very culturally specific. So a barrel at one company may not be a barrel at another company because one of the ways, the definition of a barrel is, they can take an idea from conception and take it all the way to shipping and bring people with them. And that’s a very cultural skill set.

The promise of AI generally, and inference-time scaling models in particular, is that they can be ammunition; in this context, the costs — even marginal ones — will in the long run be immaterial compared to the costs of people, particularly once you factor in non-salary costs like coordination and motivation…

…What will become clear once AI ammunition becomes available is just how unsuited most companies are for high precision agents, just as P&G was unsuited for highly-targeted advertising. No matter how well-documented a company’s processes might be, it will become clear that there are massive gaps that were filled through experience and tacit knowledge by the human ammunition.

SaaS companies, meanwhile, are the ad agencies. The ad agencies had value by providing a means for advertisers to scale to all sorts of media across geographies; SaaS companies have value by giving human ammunition software to do their job. Ad agencies, meanwhile, made money by charging a commission on the advertising they bought; SaaS companies make money by charging a per-seat licensing fee. Look again at that S-1 excerpt I opened with:

Our business model focuses on maximizing the lifetime value of a customer relationship. We make significant investments in acquiring new customers and believe that we will be able to achieve a positive return on these investments by retaining customers and expanding the size of our deployments within our customer base over time…

The positive return on investment comes from retaining and increasing seat licenses; those seats, however, are proxies for actually getting work done, just as advertising was just a proxy for actually selling something. Part of what made direct response digital advertising fundamentally different is that it was tied to actually making a sale, as opposed to lifting brand awareness, which is a proxy for the ultimate goal of increasing revenue. To that end, AI — particularly AI’s like o3 that scale with compute — will be priced according to the value of the task they complete; the amount that companies will pay for inference time compute will be a function of how much the task is worth. This is analogous to digital ads that are priced by conversion, not CPM.

The companies that actually leveraged that capability, however, were not, at least for a good long while, the companies that dominated the old advertising paradigm. Facebook became a juggernaut by creating its own customer base, not by being the advertising platform of choice for companies like P&G; meanwhile, TV and the economy built on it stayed relevant far longer than anyone expected. And, by the time TV truly collapsed, both the old guard and digital advertising had evolved to the point that they could work together.

If something similar plays out with AI agents, then the most important AI customers will primarily be new companies, and probably a lot of them will be long-tail type entities that take the barrel and ammunition analogy to its logical extreme. Traditional companies, meanwhile, will struggle to incorporate AI (outside of wholesale job replacement a la the mainframe); the true AI takeover of enterprises that retain real-world differentiation will likely take years.

None of this is to diminish what is coming with AI; rather, as the saying goes, the future may arrive but be unevenly distributed, and, contrary to what you might think, the larger and more successful a company is the less they may benefit in the short term. Everything that makes a company work today is about harnessing people — and the entire SaaS ecosystem is predicated on monetizing this reality; the entities that will truly leverage AI, however, will not be the ones that replace them, but start without them.

5. Don’t let interest-rate predictions dictate your investment decisions – Chin Hui Leong

A little over a year ago, the US Federal Reserve signalled its intention to cut interest rates three times in 2024. This commentary sparked a flurry of predictions, with market watchers vying to outguess the Fed on the number, timing, and size of these cuts. Goldman Sachs, for instance, boldly predicted five cuts.

We ended up with just three interest-rate cuts in 2024 – a significant miss, to say the least…

…According to Visual Capitalist, four firms – Morgan Stanley, Bank of America, Citigroup and Nomura – pencilled in a one-percentage-point cut for 2024. Credit should be given where it’s due: their forecasts were right.

However, did getting these predictions right matter in the end? As it turns out, not so much.

Morgan Stanley, Bank of America and Citi set 2024’s S&P 500 price targets at 4,500, 5,000 and 5,100 respectively… 

…The S&P 500, of course, closed the year at 5,881…

…Forecasts and expectations may look similar, but they are different. My friend Eugene Ng puts it best: Forecasts rely on knowing when something will occur. Expectations, on the other hand, are the acknowledgement of what’s likely to occur without professing insight into when it will happen.

For example, it’s reasonable to expect the stock market to fall by 10 per cent or more sometime in the future. After all, history has shown that corrections are a common occurrence…

…In my eyes, calmness can be achieved by having the right expectations, and preparing well for any market turbulence even when we don’t know when the market will fall.

If you are prepared, you will have fewer worries. If you worry less, you will stand a better chance of doing better than average. And that’s more than any investor can hope for, whether the forecasts are right or wrong.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Deepmind), Meta Platforms (parent of Facebook), and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 January 2025)

Here are the articles for the week ending 05 January 2025:

1. Mike Alkin – Talking Uranium (Transcript here) – Bill Brewster and Mike Alkin

Alkin: So coming to this market, I did that. I spent a good almost couple of years doing supply/demand on my own. There’s 430 reactors around the world. And understanding the country where they operate, the attitude towards nuclear, understanding the math involved. Often as investors, you look for heuristics. How many reactors are there? How many pounds per reactor would there be? You’re looking for rules of thumb. As you start peeling the onion back, I realize that rules of thumb don’t apply here because the amount of uranium needed for the reactor fleet around the world is not always the same. It depends upon enrichment capacity. We won’t go down that rabbit hole, but there’s a whole other segment you need to learn.

As I was doing that, I would go to these conferences and I would talk to nuclear fuel buyers, people who buy this stuff. It was hard for me at first to really understand what I was dealing with because as somebody at that time having well over 20 years of experience as a hedge fund investor, I talked to people in all industries that were on all sides of the equation. But the people buying it typically were curious as to what we were thinking when we were questioning them. If we were talking to a buyer at a company that was buying a product, they would say “What are you as an investor hearing? What are you hearing from the other side? What are my competitors saying? What are you hearing about inventories?” They were inquisitive. That was not this cohort. As I started speaking to nuclear fuel buyers, I was met with an enormous wall put in front of me telling me, “I’m an outsider, I’m not a nuclear engineer, I don’t know what I’m doing, I should basically stay away and they’ve got it.”

I thought it was that attitude that just said to me, “Something’s not right here because the numbers I’m coming up with, whether I’m looking at inventories or the amount of the cost of the supply, or the actual demand” – for context, at the time the price of uranium was $17, $18, $19 a pound. That’s what it was trading for in the market. As I did the analysis, I realized that the average cost was somewhere in the mid-$50s. I’m not the sharpest tool in the shed but I know that if something costs you mid-$50s to make, you can’t sell it for $17 for very long. So it was then that I had to peel back the onion saying, “Why are they producing it at that price?” Then you start to understand that the uranium market is one driven mostly by long term contracts. Well north of 80% on average will trade in a long-term window with contracts that cover 5, 7, 10, 12, 15 years depending on the contract. But that’s where most of the pounds trade. After the Fukushima event, a lot of these uranium producers, when the spot market had declined precipitously, were still selling into much higher prices. When I was talking to fuel buyers at these nuclear conferences, they were telling me that the price of uranium was $17 and $18, it was going to $10, it was going to $5. There was all this uranium out there.

That’s not what my math was showing me. What my math was showing me was that the long-term contracts that had been signed before Fukushima melted down in 2011 were going to start to expire, and rather rapidly. Uranium producers could not sell $17, $18, $20 uranium when it cost them 2.5 times that. At some point, production would have to start to shut down.

So you ask, “Do you think you’re crazy?” Yes, because as I’m talking to people who are obviously very sharp – they’re nuclear engineers – but it’s understanding, as you realize, as an investor, you have to understand incentives and you have to understand market structure. Charlie Munger would always say, “Show me the incentive, I’ll show you the outcome.” It was as I was starting to go and talk to these folks and realizing a couple of things. Number one is, they had no interest in what I was learning on my journey. Even though I’m not a nuclear engineer, I’m still somebody who’s a market participant. I’m still somebody that while I don’t speak their language, sitting at a dinner table or a lunch table or at a bar having a beer with them, I certainly could hold my own in supply/demand conversation. And as I would talk about what I was learning and uncovering, I was shot down at every step. I thought, “Wow, that’s interesting because I’m seeing a recency bias. What is now will always be.” So they were kind of latched onto that.

Then as I started peeling that, I’m thinking, “Why is this?” I’ve been doing this a very long time. Over the years, I’ve been wrong many times. I’ve been right more often than not. But when you’re wrong, you try and understand where you’ve been wrong. I was thinking, “What is it? Why are they so uninterested in hearing what an outsider’s view is?” As I started to explore that more, you start to understand the makeup and the cost structure of a nuclear reactor, which I have known, but what really started to come into clear vision for me was the fuel. Uranium is just one part of the fuel cycle that goes in. You have uranium, they convert uranium from a powder into a gas. It then gets enriched, it then gets fabricated into pellets. That takes 18 to 24 months to do this stuff. There’s many different stages of the fuel cycle. As I was starting to think about what are the costs of that, all those stages are probably around 20% to 25%. What’s the cost of the uranium? That depends on the price. But it could be mid-single digits, high-single digits, somewhere around that. As you start talking to them about that, you realize it’s not a meaningful cost.

For comparative purposes, if I’m running a natural gas power plant or a coal power plant, my feedstock, the natural gas and the coal are 80% to 90% of the cost of operating it. Here, the uranium is single digits cost of operating it. The vision that started to come to me was uninterested market participants. They’re in the market very infrequently. Why are they uninterested? Because the cost is de minimis. Not to say it’s meaningless, but it’s de minimis. Then as I started to explore and ask questions, “Why are you not as concerned about this?” I was obviously met with a wall.

But what started to come to me was – and I asked flat out at a particular dinner at a World Nuclear Conference – I asked one, actually there were four fuel buyers at a dinner, I said, “If you all had a really enterprising fuel buyer that did the supply/demand work and said, “I think consensus is wrong. Here we are, $17, $18, $20 a pound. We should be buying uranium because the forecasts going out of the future are for deficits to be forming.” Let me ask you a question. Do you all, if the price were to go parabolic and you had all these great cost savings for your plant, do you participate that in any way, shape or form? Are you rewarded financially? Are you rewarded with a promotion?” The answer was I got laughed at. “What are you talking about? We’re paid to secure fuel.” These were buyers. As you come to a market as an investor, you think buyers are traders – they’re commercial creatures. These aren’t. These are really smart nuclear engineers that happen to buy a product that happens to not be a major cost component. There’s infrequent price discovery on their part and so it’s a lesson in understanding incentives and market structure…

Alkin: One of the things you see now is you have expert networks who provide hedge funds and mutual funds experts to speak to in any industry. If you’re a hedge fund wanting to get up to speed right now on the nuclear power industry, you’re going to say, “Get me three nuclear fuel buyers. I’d like to speak to them about uranium.” They’re going to get on the phone and they’re going to speak to them. For years – though I’m sure they’ve been doing this – they can get on the phone and speak to three fuel buyers and they say, “Yeah, there’s plenty of uranium out there.” Those are the same folks who, when the price was $17, were telling me that, versus here you’re seeing floors and ceilings at $125 and $135. They are the gift that keeps on giving. Yet the way the structure of the research process is, they’re going to expert networks. They find these people, and if you don’t understand how the sausage is made, you’re going to be misled. They’re not purposely misleading you. It’s just what their own beliefs are. For me, that’s a beautiful thing. I’ve been doing this a long time now, almost 30 years as a professional investor, and I’ve never seen a cohort of people who are so uninterested in hearing the other side of the story. So far I’ve seen prices move up 4x against them and they still have the same attitude.

Brewster: To your point, it doesn’t sound like they’re very incentivized to care.

Alkin: There’s very little to no incentive to care, other than maybe you would think pride? I don’t know. But it doesn’t matter. It’s just not a thing. We actually chuckle because when we go to these conferences, you talk to them in a hallway or in a bar, it’s as though you’re an adversary. It’s very bizarre. They don’t have an incentive. It doesn’t matter what they pay. So that’s the bizarre thing.

2. Chip Cities Rise in Japan’s Fields of Dreams – Gearoid Reidy

In Chitose, a city of 100,000 in the northernmost main island of Hokkaido, billboards seek recruits for the Self-Defense Forces, which saw a 50% shortfall last year. When I arrived on a fully booked plane from Tokyo packed with salarymen in cheap suits and expensive watches, it was easy to see where the competition was coming from: a half-dozen towering cranes jutting into the sky, a jarring contrast against the surrounding countryside…

…Those cranes are building the first fab for Rapidus Corp., a public-private venture that aims to skip Japan to the head of the chip production queue. Founded just two years ago, it hopes to produce cutting-edge, 2-nanometer chips by 2027, in cooperation with IBM Corp. It’s fraught with risks, and the government’s record in promoting industry is spotty. But this is just the latest and most ambitious example of a series of bets on chips, with Prime Minister Shigeru Ishiba recently pledging an extra ¥10 trillion ($66 billion) on top of ¥3.9 trillion invested since 2021. Near the other end of the Japanese archipelago, 1,500 kilometers (930 miles) to the southwest, is another. In Kumamoto, on the island of Kyushu, mass production is soon set to begin at a $7 billion semiconductor plant.

Here, Taiwan Semiconductor Manufacturing Co., drawn by government subsidies and the region’s supply chain, opened its first Japanese plant in February. A second is in the works, with authorities lobbying for a third. It’s triggered an influx of Taiwanese workers into a city where until recently almost everyone was Japanese…

…As many as 6,000 laborers are employed to build Rapidus. But talk is of the arrival of permanent workers once test production begins. That’ll bring at least 1,000 high-earning jobs, along with their supply chains. On my visit, ASML Holding NV, the Dutch maker of chipmaking lithography machines, had just opened offices, with 50 staff expected. Every second building seems to be being torn down and rebuilt…

…The scale of the ambition creates the risk of spectacular failure, one many in Japan’s media fully expect. Skepticism is warranted, considering previous government-led efforts, from DRAM maker Elpida Memory Inc., sold to Micron Technology Inc. after its 2012 bankruptcy, to troubled Japan Display Inc.

The economy was already doing well even before talk of Rapidus, Mayor Ryuichi Yokota told me, describing the fab as a “Big Bang” that has the city scrambling. Yet at night, when the construction crews leave, the silence is deafening. I couldn’t feel the billions I expected to find flowing, just a cold wind that would soon begin to turn to snow…

…The risk from disaster is unpredictable; but what if these experiments simply don’t work out? Japan has spent billions on subsidies to bring a foreign company to Kumamoto. And when it comes to Rapidus, the risks are immense. Even if the company can find the talent it needs (the country is expected to have a shortfall of 40,000 engineers), the technology succeeds and yields are acceptable, it still has to outcompete rivals — including TSMC — to attract customers with an unproven product.

Chitose mayor Yokota shrugged off these concerns. “I’m convinced it will succeed,” he said, resolute that researchers currently studying with IBM in the US will return, like Meiji-era scholars, with secrets Japan can use to rebuild.

3. Before Berkshire: Warren Buffett’s Tab Card Triumph – Kingswell and Alice Schroeder

He decided that he would come in and invest in this company — Mid-Continent Tab Card Co. — but, interestingly, he did not take Wayne and John’s word for it. The numbers they gave him were really enticing, but again he went through and he acted like a horse handicapper.

Here’s another point of departure from what almost anybody else would do. Everybody that I know — or knew as an analyst — would have created a model for this company and would have projected out its earnings and would have looked at its return on investment in the future. Warren didn’t do that. In fact, in going through hundreds of his files, I’ve never seen anything that resembled a model.

What he did is he did what you would do with a horse. He figured out the one or two factors that could make the horse succeed or fail — and, in this case, it was sales growth and making the cost advantage continue to work. Then, he took all of the historical data, quarter by quarter for every single plant, he got the similar information as best he could from every competitor they had, and he filled pages with little hen scratches of all this information and he studied that information.

And, then, he made a yes/no decision. He looked at it: They were getting 36% margins [and] they were growing over 70% a year on a million of sales. Those were the historic numbers. He looked at them in great detail — just like a horse handicapper studying the tip sheet — and then he said to himself, “I want a 15% return on $2 million of sales.” And then he said, “Yeah, I can get that.” And he came in as an investor.

So what he did is he incorporated his whole earnings model and compounding discounted cash flow into that one sentence. “I want 15% on $2 million of sales.”

Why 15%? Because Warren is not greedy. He always wants a mere 15% day one return on an investment and then it compounds from there. That’s all he has ever wanted. He’s happy with that. It’s a very simple thing. There’s nothing fancy about it…

…The $2 million of sales was pretty simple, too. It had $1 million [and] it was growing 70%. There was a big margin of safety built into these numbers. It had a 36% profit margin and he said, “I’ll take half that.”
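Schroeder's description boils down to a few lines of arithmetic. Here is a sketch using only the figures quoted above (the doubling-time calculation is my own addition):

```python
import math

# Sketch of the handicapping arithmetic described above; the dollar
# figures come from the passage, the rest is simple arithmetic.
starting_sales = 1_000_000   # ~$1M of sales at the time
growth_rate = 0.70           # growing ~70% a year
historic_margin = 0.36       # 36% profit margins

# At 70% growth, $1M of sales doubles in about 1.3 years:
years_to_double = math.log(2) / math.log(1 + growth_rate)

target_sales = 2_000_000               # "15% on $2 million of sales"
required_profit = 0.15 * target_sales  # $300,000

# Margin of safety: assume only half the historic margin is kept.
conservative_margin = historic_margin / 2            # 18%
implied_profit = conservative_margin * target_sales  # $360,000

print(f"Years to double sales: {years_to_double:.1f}")
print(f"Required profit: ${required_profit:,.0f}")
print(f"Profit at half the historic margin: ${implied_profit:,.0f}")
# 18% of sales still clears the 15% hurdle -- "Yeah, I can get that."
```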

He ended up putting $60,000 of his personal non-partnership money into this company, which was about 20% of his net worth at the time. He got 16% of the company’s stock, plus some subordinated notes.

4. China’s Bond Yields Scream the ‘D’ Word – Lingling Wei

Over the past week, just as Chinese leaders tried to get the public—and markets—excited with another round of stimulus talk, China’s 10-year sovereign yield kept falling to fresh lows. Now, the yield is around 1.7%, a full percentage-point plunge from a little over a year ago. The return on the 30-year government bond has also dropped below 2%.

The sovereign-debt yield still has a ways to go before falling to zero, but the speed of the drop is astonishing. The lower the yield falls, the deeper the economic stress the market is signaling.

…In reality, Beijing is sticking to the formula of boosting demand through investment. The official thinking is, investment creates jobs, which would in turn create demand. That means more roads will be built, factories will be expanded and debts will continue to rise. Already, residents in some cities are complaining about the inconvenience from old roads being dug up as authorities search for ways to invest.

One big irony is the source of bond buying—the force pushing down the yields.

State-owned banks, insurance firms and funds, the very institutions Beijing is counting on to support the economy, are the major purchasers of government bonds. These institutions would rather park their money in the safety of bonds than finance business projects or otherwise put it to work.

“What’s good to invest in these days when demand is so low?” a Chinese banker told me, referring to weak business and consumer spending.

5. An Interview with Gregory Allen About the State of China Chip Export Controls – Ben Thompson and Gregory Allen

Here’s the question though. China doesn’t generally seem to be operating, and for good reason under the circumstances, under a real stringent return on invested capital calculation. I mean the 7nm chips that are being produced, we know with I think a pretty high degree of certainty, the yields are terrible.

GA: The yields are dreadful.

But they’re doing it anyway just because it needs to be done and this sort of ties into another thing. You referenced Dylan Patel and SemiAnalysis, who have been pretty strident critics of the enforcement of chip controls. But I think a good point he has made is that China, unlike the US, is not necessarily constrained in power or in the ability to build a ton of data centers, and so there’s a bit where they could just sort of — it’s not great, but they could just be way less efficient and accomplish similar things. Is there a bit where these export controls are fashioned with Western/US constraints and concerns about how you go about building this stuff that might make them less impactful in the long run?

GA: Yeah, the export controls have not achieved their wildest dreams. There was a faction in the Biden administration that says, “Bwahaha, we found the secret weapon, and China’s AI dreams are gone” — that theory is just dead. Where we are now is at more of a cost imposition strategy. “We are going to make this as expensive and complicated as possible for you to do it, we’re going to try and slow you down, we’re going to try and increase your costs, and that is the race that we’re going to run”.

I mean, if you think about it, we’re switching from a mode in which the US AI ecosystem and the Chinese AI ecosystem were largely fused such that if we’re running a race, you can imagine there’s US people giving China Gatorade and those new Nike shoes that make you run faster. Now we’re moving to a moment where we’re trying to trip them in the race, that’s the change in mindset that we’ve experienced. It’s not working to its most extreme form, but there is real cost imposition: it takes the form of the fact that SMIC has to operate at these dreadful yields, the economics are terrible, and the fact that when they’re building all of these data centers, they’re having to use lousy chips, they’re having to buy more of them, and they’re having to deal with the higher energy costs of all of that.

It’s true that China does have just this extraordinary willingness to spend, but the point is we’re in this race, we’re in this competition, and it gives us an edge, not an infinite edge, but a meaningful edge.

This is a field, maybe you don’t have an answer to this, but there are some that argue that actually the better approach to some of these chips is a much more expensive, a much more high speed memory approach that has much lower latency using SRAM instead of High Bandwidth Memory. Is there a possibility that we actually pushed China down a different route towards developing these chips that maybe ends up being better because we thought HBM was the right way?

GA: I think that’s probably not what’s going to happen. It’s definitely worth saying that that could happen, a version of that kind of happened with YMTC and their NAND memory. There were multiple different approaches they could have taken technologically. All the Western and US-allied Asian firms picked one way because it was obviously the best economics, and they held all the intellectual property, they held all the patents, and so YMTC basically said, “Okay, we’re going to go down this other road and because we’re so heavily subsidized, it doesn’t really matter that it’s going to be more expensive”, and they did ultimately figure out how to get it to work.

I think what you’re describing, the SRAM in massive quantities thing verges on the neuromorphic architecture, and it’s not that that’s impossible, and it’s not that that’s never going to happen, but it’s clearly not the right step for China right now. I think they have a path to domestic HBM production and that’s so much easier for them to chase than a SRAM revolution. I think traditionally they would just wait for somebody else to try and figure out and demonstrate that it’s possible and then they would throw infinite resources at it…

...For all of these chip controls, all this stuff that you’ve covered and written about, does any of it matter, if you add it all up, in comparison to that point that they don’t have EUV?

GA: EUV is the highest return on investment export control that we have had and are likely to have. It’s definitely the case that some of the other stuff hurts. If you talk about SMIC, for example, increasing their yields on their 7nm line and expanding the capacity of their 7nm line, they actually are bottlenecked by US equipment, a lot of US metrology equipment, etc. But if you want to talk about why they can’t—

But they do have the equipment, they just need to figure out how to duplicate it. The challenge with EUV is they don’t even have one, so duplicating it is that much harder.

GA: Yes exactly, it’s a lot harder to reverse engineer something that you don’t have a copy of, it really helps to have a copy of it. So I would say the EUV thing really matters, but there’s areas where China is facing headwinds that aren’t part of the EUV story.

So just to take one example, in DRAM, Micron still doesn’t use EUV in their production of DRAM, and they’re a globally competitive firm. So CXMT, the Chinese domestic champion of DRAM, the reason why they’re not currently globally competitive is not the absence of EUV, but I do think you could make a story that it is the absence of all this other stuff that we’ve been refusing to sell…

You’re not necessarily like a geopolitical analyst, but the thing that scares me about all this, I think I’ve asked you this every time, it still scares me, is we’re talking and saying the administration needs to do better at enforcing these laws that guarantee a power imbalance in the long run, that is usually very destabilizing. China might think, if we’re going to have a fundamental power imbalance, then how about we take Taiwan off the board because that will screw everyone? Now we’re equal again. Do you worry about this? You’re a strong advocate for doing this better.

GA: So. Number one is, I don’t know that I ever agree that the balance of power is the stable universe. In 1994, the Taiwanese defense budget was half of that of the Chinese defense budget, now the Chinese defense budget is infinity times that of the Taiwanese defense budget. And by contrast, in 1997, I think there was a single U.S aircraft carrier battle group that was more than capable of defeating the entire Chinese Navy and the entire Chinese Air Force, that was a massive power imbalance and it was a very stable relationship. And by the way, it was a relationship in which a lot of people got rich and had productive free trade and all these kinds of happy relationships. So the idea that power parity is the path to peace here, don’t know that I necessarily agree with that, I don’t think the historical record really bears that out.

Now, you could argue: if we’re going to make bold moves and try and seize a decisive advantage, could those bold moves be destabilizing? Yeah, I definitely think so.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 December 2024)

Here are the articles for the week ending 22 December 2024:

1. Meet Willow, our state-of-the-art quantum chip – Hartmut Neven

Errors are one of the greatest challenges in quantum computing, since qubits, the units of computation in quantum computers, have a tendency to rapidly exchange information with their environment, making it difficult to protect the information needed to complete a computation. Typically the more qubits you use, the more errors will occur, and the system becomes classical.

Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes…

…This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits…

…There are other scientific “firsts” involved in this result as well. For example, it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system — crucial for any useful computation, because if you can’t correct errors fast enough, they ruin your computation before it’s done. And it’s a “beyond breakeven” demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakable sign that error correction is improving the system overall.

As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built…
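For intuition on what "below threshold" means, here is an illustrative sketch of the textbook surface-code scaling law, in which the logical error rate falls exponentially with code distance once the physical error rate is below the threshold. The constants are made up for illustration and are not Willow's actual figures; only the qualitative behavior matters:

```python
# Illustrative sketch of "below threshold" error correction. In the
# textbook surface-code model, the logical error rate scales roughly as
#   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the threshold, and d the
# code distance. A, p, and p_th below are made-up values.
def logical_error_rate(p: float, p_th: float, d: int, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) / 2)

for d in (3, 5, 7):
    below = logical_error_rate(p=0.001, p_th=0.01, d=d)
    above = logical_error_rate(p=0.02, p_th=0.01, d=d)
    print(f"d={d}: below threshold {below:.2e}, above threshold {above:.2e}")
# Below threshold, each increase in d suppresses errors further;
# above threshold, growing the code only makes things worse.
```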

…As a measure of Willow’s performance, we used the random circuit sampling (RCS) benchmark. Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today…

…Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch…

…Willow was fabricated in our new, state-of-the-art fabrication facility in Santa Barbara — one of only a few facilities in the world built from the ground up for this purpose. System engineering is key when designing and fabricating quantum chips: All components of a chip, such as single and two-qubit gates, qubit reset, and readout, have to be simultaneously well engineered and integrated. If any component lags or if two components don’t function well together, it drags down system performance…

…The next challenge for the field is to demonstrate a first “useful, beyond-classical” computation on today’s quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.

2. X (previously Twitter) thread on quantum computing and Google’s Willow – Jeffrey Scholz

Like a regular computer, a quantum computer keeps bits in groups. So a 64-bit quantum computer would have a vector of 64 2d vectors serving as its “word.”

Here is where the speedup happens: in a regular computer, each of the 64 bits doesn’t know anything about the values of the other 63 bits.

If we want one bit to affect another bit, we have to explicitly combine them with a logic gate.

However, in a quantum computer, the 64 qubits can “talk to each other” via “quantum entanglement.”

Running a quantum circuit means you plug in a quantum vector, run it through a bunch of matrix multiplications, then collapse the output.

The final vector will be the correct answer. Technically, quantum computers can give wrong answers, but if you run the computation multiple times, then you will get the correct answer on average…
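The "quantum circuits are matrix multiplications" framing can be made concrete in a few lines of numpy. This is a generic two-qubit example (a Hadamard followed by a CNOT, producing a Bell state), not anything specific to Willow:

```python
import numpy as np

# Minimal sketch of "a quantum circuit is matrix multiplications":
# a 2-qubit state is a length-4 complex vector; gates are unitary
# matrices; measurement samples from the squared amplitudes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # superpose the first qubit
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2                     # Born rule: |amplitude|^2
outcomes = np.random.choice(4, size=10, p=probs)
print(probs)                                   # [0.5, 0, 0, 0.5] -- Bell state
print([format(int(o), "02b") for o in outcomes])  # only '00' and '11' appear
```

Running it "multiple times" (the `size=10` samples) is exactly the repeated-measurement step described above: individual shots are random, but the distribution carries the answer.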

…The current problem with quantum computers is that as the circuit gets bigger, they become less correct on average. All of the “talking to each other” creates so much noise the system stops working.

Once your probability of being correct drops below a certain threshold your quantum computer becomes useless. This is a major blocker for current quantum compute.

Let’s look at a specific (oversimplified but helpful) example. Suppose you shine a laser beam into an ice cube.

Actually simulating what the laser will do when it exits the ice cube is very hard because quantum phenomena are involved.

To actually compute what the laser will do means you have to explicitly compute quantum entanglement, which is slow for classical computers but “built in” to a quantum computer.

However, you can *estimate* the distribution of how the laser will scatter without a quantum computer, so you can have at least a rough idea if your answer might be correct…

…By analogy, this is what Google was doing. The computation Google was doing was a “pseudo-random quantum circuit” (think pseudorandom ice cube), but we know a quantum circuit is just matrix multiplications (on crack). Therefore, it is a bunch of random matrix multiplications with an output that looks right.

Google’s actual breakthrough was that the output of the circuit “looks correct” — which sounds underwhelming — and compared to the headlines, it definitely is. The academic breakthrough is that Google was able to use a larger circuit and notice an apparent *increase* in accuracy when modeling how a laser shines through an ice cube. That is noteworthy.

You can definitely tell if a computation has failed, and it seemed to be failing less as the circuit got bigger…

…However, note that the problem is “rigged” in favor of quantum computers. The benchmark is explicitly modeling a quantum phenomenon, so *of course* we get a speedup.

In other words, Google created a random distribution on the output that “seems correct.” Why does it “seem correct?” well because by design, the computation cannot be run on a classical computer. But if we can’t run it on a classical computer, how do we know the quantum computer is actually giving the right answer? The answer is we don’t, and this is a serious gap…

…Quantum computing is kind of at the stage right now where some smart teenager wired a few logic gates together in a random fashion and said “hey look, my circuit made a random output and didn’t explode!” Compared to previous attempts, it is an improvement. But he is still a long way from training an LLM.

3. Volatility: A Double-Edged Sword for Long-Term Equity Investors – Daniel Crowley

The ability to measure risk in a portfolio has long been a puzzle for the financial world. When Harry Markowitz introduced Modern Portfolio Theory in 1952, he revolutionized how institutions approached risk and return. His use of standard deviation as a proxy for volatility offered a clean, mathematical way to quantify the unpredictability of markets. It gave investors a seemingly precise tool to compare assets and assess portfolio risk. Over time, this approach became gospel, with concepts like beta and the Sharpe ratio reinforcing volatility as the core measure of risk.

But here’s the problem: volatility tells only part of the story. Financial markets don’t follow the neat patterns of a normal distribution, which is what these models assume. Extreme events occur far more often than traditional models predict. We’ve seen this play out time and again—from the collapse of Long-Term Capital Management to the Great Financial Crisis. The models couldn’t account for the market’s tendency to behave irrationally and with far greater extremes than the math suggested. That’s why I’ve come to view volatility not as risk itself but as a signal, an invitation to investigate further…
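For concreteness, here is what the conventional volatility-as-risk toolkit actually computes. The return series below is randomly generated and the risk-free rate is an assumption; nothing here is market data:

```python
import numpy as np

# Minimal sketch of volatility as it's conventionally measured:
# annualized standard deviation of returns, plus a Sharpe ratio.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0004, scale=0.01, size=252)  # one "year"

annual_vol = daily_returns.std() * np.sqrt(252)
annual_return = daily_returns.mean() * 252
risk_free = 0.03  # assumed risk-free rate

sharpe = (annual_return - risk_free) / annual_vol
print(f"Annualized volatility: {annual_vol:.1%}")
print(f"Sharpe ratio: {sharpe:.2f}")
# Note what this number cannot see: it treats upside and downside
# swings identically, which is exactly the critique above.
```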

…Volatility is often misunderstood because it treats upward and downward price movements as equal. A stock with erratic upward swings may have high volatility but poses little risk if the business fundamentals are sound. Conversely, a stock that steadily declines might appear “safe” on paper but can quietly destroy wealth.

The market’s reliance on volatility as a measure of risk often misses these nuances.

This misunderstanding creates a divide among investors. On one side are those who cling to volatility as the ultimate arbiter of risk, building models that rely on neat equations and assumptions about market behavior. On the other are those who dismiss it entirely, treating volatility as irrelevant noise.

My view lies somewhere in the middle. Volatility is neither good nor bad—it’s just a clue. It’s a signal to dig deeper and assess whether the market’s movements are justified by changes in a business’s intrinsic value.

What I’ve come to appreciate about volatility is its ability to surface opportunity. Markets are emotional, driven by fear, greed, and short-term thinking. Prices frequently diverge from reality, creating moments where high-quality businesses are available at steep discounts. When markets panic, as they did during the COVID-19 pandemic or the Great Financial Crisis, those who can stay calm and look beyond the noise can identify extraordinary opportunities.

Volatility, far from being a risk, is often the price of admission for outsized returns.

4. The AI nuclear renaissance – SMRs role – Rihard Jarc

The global nuclear power market is about 10% of global electricity (about $350-$400B annually) and around 32% of zero-carbon electricity generation.

As of 2023, nuclear energy accounted for about 18.6% of total electricity generation in the United States. The International Energy Agency (IEA) highlights that global nuclear power output must more than double by 2050 to meet net-zero emission targets. Most of the U.S.’s nuclear power plants are over 50 years old and nearing the end of their operational lives. While their lifespans have been extended to support the grid, they will need to be replaced in the coming decades…

…The introduction of ChatGPT and the AI boom of the last 2 years have only accelerated this demand, as AI workloads and AI chips consume much more energy than traditional data center workloads. This Nuclear Energy expert gives a good example:

» If you provide a simple search in Google, you consume 0.3 Wh of electricity. If you do the same with ChatGPT or Alexa or Gemini, any AI that we can imagine, this 0.3 Wh transforms into 2.9 Wh, so it means 10X the consumption.«…

…Driven by artificial intelligence (AI), cloud computing, and digital transformation, U.S. data centers consumed an estimated 150 TWh of electricity in 2023, equivalent to around 3% of the nation’s power demand. According to Goldman Sachs estimates, data center demand hovered at 340 TWh in 2023 globally, which is about 1.3% of worldwide electricity use. U.S. data center power use is expected to roughly triple between 2023 and 2030 and will require about 47 gigawatts of new generation capacity…
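To connect the per-query figures to these national totals, here is a back-of-envelope sketch. The per-query energy figures echo the quote above; the query volume is a purely hypothetical assumption:

```python
# Back-of-envelope sketch connecting per-query energy to national totals.
WH_PER_SEARCH = 0.3    # conventional estimate for a web search
WH_PER_AI_QUERY = 2.9  # commonly cited estimate for an AI query

queries_per_day = 1e9  # hypothetical: one billion AI queries a day
annual_twh = queries_per_day * WH_PER_AI_QUERY * 365 / 1e12

print(f"AI query multiplier: {WH_PER_AI_QUERY / WH_PER_SEARCH:.0f}x")
print(f"Hypothetical annual AI query energy: {annual_twh:.1f} TWh")
# ~1.1 TWh/year -- small next to the ~150 TWh US data centers already
# use, suggesting most of the load sits in training and serving at scale.
```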

…Nuclear energy has become very attractive because companies want to be carbon-neutral and have stable power. An additional benefit of nuclear power is that it can provide more stable long-term contracts that are less sensitive to inflation and supply chain problems…

…Interest in nuclear energy, particularly Small Modular Reactors (SMRs), is growing as they have been heralded as a solution to streamline nuclear power production, offering flexibility, lower upfront costs, and modular deployment. The simplest way to imagine an SMR is that it is a smaller version of the traditional nuclear reactor. One of their most significant benefits is that they are modular. They are designed to be built in factories, not on-site. Because they are built in factories, they are easier to assemble and control, from quality checks to a more predictable supply chain and workforce. When assembled, they are then shipped to the site of the nuclear plant, where they are stacked together to form the whole plant. In terms of energy output, traditional nuclear plants have outputs between 1,000-1,600 megawatts electric (MWe) per reactor, while SMRs are around 50-300 MWe per module. Some SMRs are also said to be safer due to passive safety features, which rely on natural processes like convection to prevent meltdowns in emergencies. But they also come with cons. The primary one is that they are much smaller than traditional nuclear plants, so they do not have the cost benefits of economies of scale. Because of that, producing the same amount of energy is more expensive than at a traditional nuclear plant…

…Over 25 countries, according to the International Atomic Energy Agency (IAEA), are investing in SMRs. In March, Wood Mackenzie estimated the pipeline of SMR projects was worth more than $176 billion and that SMRs could account for as much as 30% of the global nuclear fleet by 2050…

…We can look at the example of NuScale, which has its Pressurised Water Reactor design. Their levelized cost of electricity ranges from $89-135/MWh, while traditional nuclear plants are in the $110-160/MWh. However, looking at the most traditional alternative in data centers, which is combined solar and gas, gas costs $45-70/MWh, and solar plus storage costs $30-60/MWh…
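LCOE itself is a simple ratio: discounted lifetime costs over discounted lifetime generation. Here is a minimal sketch with illustrative inputs, loosely inspired by a small modular reactor but not NuScale's actual figures (the capex, opex, capacity, and discount rate are all my assumptions):

```python
# Minimal LCOE (levelized cost of electricity) sketch: lifetime costs
# divided by lifetime generation, both discounted. All inputs are
# illustrative assumptions, not figures from the article.
def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Discounted lifetime cost per discounted lifetime MWh."""
    costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    energy = sum(
        annual_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return costs / energy

# Hypothetical small reactor: 77 MWe at a 90% capacity factor,
# $500M capex, $20M/year opex, 40-year life, 7% discount rate.
annual_mwh = 77 * 8760 * 0.90
print(f"LCOE: ${lcoe(0.5e9, 20e6, annual_mwh, 40, 0.07):,.0f}/MWh")
# ~$95/MWh at these assumptions -- in the ballpark of the ranges above.
```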

…State-backed projects in countries like China and Russia have made more progress, leveraging integrated supply chains, controlled costs, and assured revenue streams. But even for them, the costs to build these reactors are still much higher than first estimates…

…We must also face reality, which is that only 2 SMRs are operational right now, one in Russia and the other in China.

Another important topic when assessing nuclear energy is the problem of nuclear waste and its storage. Most SMR designs produce a similar amount of nuclear waste per unit of energy produced as traditional nuclear plants, so the problem of storing nuclear waste remains.

5. How to invest without relying on target prices – Chin Hui Leong

The US stock market is soaring to new heights. But what does that mean for your stock returns in 2025? I would like to give you a definite answer but if I did so, I would be lying to you. In fact, you should view anyone who gives you target prices with suspicion.

Here’s the hard truth: No one can control where the market is headed in the short term. Yet, the allure of target prices persists…

…The answer lies in the inherent difficulty in predicting the future of rapidly evolving technologies.

The best example is Amazon.com. In mid-2010, when I first invested in the company, it had just reported US$24.5 billion in annual revenue, primarily from its online retail business. Here is the twist: it was impossible to know what the business would look like a decade later…

…Fast forward to 2023, and AWS had become a financial cash cow with nearly US$90 billion in annual revenue and an impressive US$24.6 billion in operating income. In other words, AWS, an insignificant division back in 2009, had generated more operating income in 2023 than the entire company’s revenue in 2009…

…I like to go back to the reason why valuation is used in the first place: to reduce your investment risk. The way I see it, valuation is one of the many ways you can employ to manage risk. But valuation is not the only risk in investing.

A weak, shrinking business can pose risks that no amount of stock valuation can solve. Hence, starting with high-quality businesses is my preferred approach.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Amazon. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 December 2024)

Here are the articles for the week ending 08 December 2024:

1. Why China’s Economy Opened Up in the 1970s – Joe Weisenthal, Tracy Alloway, and Odd Arne Westad

Joe (13:32):

What does it mean when you talk about history being “contingent?” You used that word a couple of times and I actually don’t know if I fully understand what that means, but when you’re telling these stories, or this story, and you’re keeping in mind the contingency in history, can you talk a little bit more about this idea?

Odd (13:48):

So you’ll see from the book that we go in and out from the sort of micro to the macro level of telling history. And if you look at the night when the coup against the radicals — the so-called Gang of Four within the party — took place, which we describe in some detail, you know, what happens from hour to hour…

Joe (14:10):

Right, this was the moment in which the left faction, after Mao dies, was arrested, and allowed for a sort of more moderate path to emerge.

Odd (14:21):

That’s right. And it was in effect a military coup. I mean, it was undertaken by the military and the security forces against the people who Mao himself had put in charge of the party, including his widow who was most prominent of all, Jiang Qing. Now that night, and the following few days, things could have ended up very differently. I mean, Shanghai, the biggest city in China by far, was still under control of the radicals. There were military units that supported the radical approach to politics. This could have ended up very differently from what it did.

And as we describe in the book, some of the plotters, some of the coup-makers themselves, in those days that followed the coup itself, were completely surprised by how little resistance there had been from the left. And how little chaos there had been on the streets. So that’s what I mean with it being contingent. I mean, this is something that obviously connects to the larger picture that we see today — going back to your sort of three level version of what happened in China. But it didn’t seem that obvious at the time. And it could have gone in very different directions from what we’re seeing today.

Tracy (15:30):

How important was the fraying of the relationship between China and the Soviet Union in the 1960s, early 1970s to spurring or catalyzing that opening up? Because it does feel like the sudden emergence of the Soviet Union as an external enemy, it feels like that led China in some respects to open up to the US and some other countries.

Odd (15:56):

This is a sort of trajectory that I think it’s really important to get right, because what Mao and his group of leaders did in the late 1960s was to turn to the United States as an ally — a pseudo ally, security ally — against the Soviet Union because they were so deadly afraid that there would be a war with the Soviets — a war that China certainly would have lost, given the state that Chinese communists themselves had pulled China into during the Cultural Revolution. So what Mao did was to turn to the enemy far away, the United States, to help back him against an enemy much closer to home, the Soviet Union, which they had this falling out with mainly for ideological reasons.

From Mao’s perspective, this was always intended to be a strictly security oriented pseudo alliance. It was directed against the Soviet Union. Mao to the end of his days was puzzled that the United States would support the real communists, meaning him, against the fake communists, meaning the Soviet Union. But as long as they were willing to do that, he was certainly willing to reap the benefits. But he never intended that this would have any effect in terms of the increasingly radical communist direction that he was taking for China internally, domestically.

So that’s when what happens in 1976, after Mao’s death, becomes so significant, because the people who then took over, they thought, ‘Aha! We have this relationship with the United States. They are supporting us for their own reasons in the Cold War against the Soviet Union. We can now also make use of this to supercharge Chinese reform.’ If it hadn’t been for that relationship, strictly security oriented, that already existed between China and the United States, I doubt that that would have been possible. So it’s very important, when thinking about the longer term US-China relationship, to consider that origin and how this actually got started. Very different from the way most people think about it, where the security element and the reform element are sort of conflated into one…

…Odd (36:05):

I think it was both. I mean in the Xi Jinping case, I think he was picked by the party as what Chinese would call the core leader, back in the early twenty-teens, in response to what was seen as a bunch of real problems, from a Chinese Communist Party perspective: over-liberalization, decentralization, corruption, the strength of private companies that meddled in a lot of things that the communists didn’t want them to meddle in. They wanted to get a strong leader in who could deal with those issues, in a way that his predecessors, Jiang Zemin [and] Hu Jintao, had not been able to do. So they wanted a strong leader. It’s just that, I think even for many communist leaders of that generation, they got more than they bargained for. So that’s where the personality aspect comes in. They got a leader who really wanted to return, at least on some issues, to the Maoist or even the sort of pre-Mao period in the CCP’s history, and to emphasize the party’s position over what even many party leaders back 10 [or] 15 years ago thought would be good for China.

And it’s a classic example of responding to real world problems — not unknown in this country, right? — by going very far in one direction, hoping that that would resolve the problem that is there, and then getting stuck in a way with the kind of leader that you have in this case, in Xi Jinping. So I think that’s the story, the way we can tell it now. I hope at some point to be able to tell that story based on archives and primary documents; as historians, we can’t do that yet. But I think at some point, we’ll be able to do that, and then it’ll be fascinating to test that hypothesis about how this happened.

Tracy (37:54):

So just on the revolution from below point, one of the things that you emphasize in the book is a lot of the stuff that happens in this time period is a result of people feeling that they are heading somewhere, that there’s a grander Chinese vision that can be achieved. And so that motivates people to actually do something. I’m curious, just going up to the present day, do you get a sense that people feel that? That there’s like a direction that China is heading in that it’s clear to people what they are trying to do?

Odd (38:33):

At the moment, absolutely not. I think it’s very, very clear that a lot of people in China do not understand where the country is heading and what the reasons are. And you know, you don’t spend much time in Beijing before you realize that these days. I think it was very different in the time period that we are talking about, which was generally a time of uplift, at least in economic and social terms. And it’s right to say, I mean as many historians have said, that there was an element of a bargain in this. That, at least for some Chinese, not everyone, but for some Chinese, maybe particularly in business, they would accept the dictatorship for what it was and then go on getting rich and establishing some of these great or middling fortunes that you find so many of in China today. And that is good. I mean that was positive. It was much, much better than the dark past that we described at the beginning of the book.

It was just that China wasn’t able to take what, in our view, is a necessary step — improving its political system, its overall attempt at becoming a more open, more pluralistic country — in the period when the going was good, when there was a general sense that China was making advances, domestically and internationally. Now, I think even if people from within the Chinese Communist Party after Xi Jinping would try to move in a direction of increased liberalization — which I think they will have to do at some point because people are just very unhappy with the kind of system that is there at the moment — it would be much more difficult, because the going is not that good. And probably it’s never going to be that good again. I mean, it was a remarkable period of economic transformation, 10% per year growth rates. It would’ve been possible to carry out necessary reform. But these people didn’t want to do it because they had become so preoccupied with holding onto power themselves. And I think, historically, that that might turn out to be the biggest mistake that the Chinese Communist Party has made.

2. Tim Cook Wants Apple to Literally Save Your Life – Steven Levy and Tim Cook

Some companies charge for AI-enhanced services. Did you consider that?

We never talked about charging for it. We view it sort of like multitouch, which enabled the smartphone revolution and the modern tablet.

You’ve personally been using Apple Intelligence for a while. What has been most useful for you?

We’re an email-based company, and I get enormous numbers of emails from users, employees, partners, and so forth. Having it summarize and author responses is a game changer, and so is having it prioritize things for you so you’re not doing your usual triage. Then, of course, there are fun things like the Image Playground.

I’ve heard you say that Apple Intelligence could make you funnier, which seems strange.

I think it can make you friendlier, which, in many ways, can be funnier as well.

Having AI speak for people makes me wonder whether the nature of communication will degrade. If Apple Intelligence writes something funny, who’s being funny, the sender or the AI?

It’s still coming from you. It’s your thoughts and your perspective. You and I both remember the productivity that came from the advent of the personal computer. It was no longer you punching your calculator, you were doing something on a spreadsheet. It was no longer you at the typewriter, you were using a word processor. Logic Pro helps musicians create music, but they’re still the author.

One of your demos involves a fictional recent graduate applying for a job. The cover letter is colloquial and somewhat sophomoric, but with Apple Intelligence a single click changes it to look like a savvy, smart person wrote it. If I’m a recruiter who hired that person, maybe I will feel tricked if they don’t live up to the professionalism of that letter.

I don’t think so. By using the tool, it comes across as more polished. It’s still your decision to use the tool. It’s like you and I collaborating on something—one plus one can equal more than two, right?…

When you’re thinking about things late at night, don’t you sometimes ask what it would mean if computers had superhuman intelligence?

Oh, of course. Not just for Apple, but for the world. There’s so much extraordinary benefit for humanity. Are there some things you have to have guardrails on? Of course. We’re very deeply considerate about things that we do and don’t do. I hope that others are as well. AGI itself is a ways away, at a minimum. We’ll sort out along the way what the guardrails need to be in such an environment…

Meta and Snap are leading us to mixed-reality glasses that we’d wear continually. Is the bigger, heavier Vision Pro ultimately headed that way?

Yes, it’s a progression over time in terms of what happens with form factors. AR is a huge deal. With Vision Pro, we’ve progressed to what is clearly the most advanced technology we’ve ever done, and I think the most advanced technology in the world in terms of electronics problems. We’ll see where it goes.

Apple has created a lot of consumer tools for medical technology. What’s the strategy for biological metrics and prosthetics?

It’s clear to me that if you zoom out way into the future, and you look back and ask what Apple’s biggest contribution was, it will be in the health area. That’s what I really believe. When we started pulling that string with the Apple Watch, it was a cascade of events. We started with something simple, like monitoring your heart rate, and then figured out we could pick up heart signals to get to an EKG and an AFib determination. Now we are monitoring sleep apnea. I’ve gotten so many notes over time from people who would have not survived had it not been for the alert on their wrist.

Apple plans to give AirPods the ability to correct for hearing loss. I bet the makers of expensive hearing aids are freaking out.

It’s not about competing against hearing aids on the market. It’s about trying to convince people who have hearing loss to use their AirPods. The vast majority of people with hearing issues have not been diagnosed. For some people, hearing aids have a stigma, and we can counter that with AirPods. And we can have people diagnose themselves. It’s the democratization of health…

We’re doing this interview at Apple Park, which is now seven years old. Have you been surprised by anything that couldn’t have been anticipated when it was just blueprints?

It’s promoted collaboration even more than I thought. That was a key component of the design, but there are so many places here where you just unexpectedly run into people. In the cafeteria, at the coffee bar, outside when you’re going across the pathway. Also, there’s a connection here to Steve that is incredible and very deep. We have the theater named after him and think about him all the time, but I can feel him in other spaces too.

3. 2024: The State of Generative AI in the Enterprise – Tim Tully, Joff Redfern, Derek Xiao, with Claude Sonnet 3.5

AI spending surged to $13.8 billion this year, more than 6x the $2.3 billion spent in 2023—a clear signal that enterprises are shifting from experimentation to execution, embedding AI at the core of their business strategies…

…Today, 60% of enterprise generative AI investments come from innovation budgets, reflecting the early stages of generative AI adoption. However, with 40% of generative AI spending sourced from more permanent budgets—58% of which is redirected from existing allocations—businesses are demonstrating a growing commitment to AI transformation…

…While foundation model investments still dominate enterprise generative AI spend, the application layer is now growing faster, benefiting from coalescing design patterns at the infrastructure level. Companies are creating substantial value by using these tools to optimize workflows across sectors, paving the way for broader innovation…

…In 2024, much of the action happened at the application layer. With many architectural design patterns established, app layer companies are leveraging LLMs’ capabilities across domains to unlock new efficiencies and capabilities. Enterprise buyers are seizing the moment, pouring $4.6 billion into generative AI applications in 2024, an almost 8x increase from the $600 million reported last year…

…Code copilots lead the charge with 51% adoption, making developers AI’s earliest power users…

…Support chatbots have captured significant usage, with 31% enterprise adoption…

…Enterprise search + retrieval and data extraction + transformation (28% and 27%, respectively) reflect a strong drive to unlock and harness the valuable knowledge hidden within data silos scattered across organizations…

…Meeting summarization ranks fifth in use cases (24% adoption), saving time and boosting productivity by automating note-taking and takeaways…

…When selecting generative AI applications, enterprises have clear priorities: Return on investment and industry-specific customization matter most when selecting new tools. Surprisingly, price isn’t a major issue; just 1% of the enterprise leaders we surveyed mentioned price as a selection concern. Buyers are playing the long game: They are far more focused on tools that can deliver measurable value (30%) and that understand the unique context of their work (26%) over those offering the lowest price tag (1%)…

…When AI pilots stutter or stall, it’s often due to challenges not adequately considered during the selection process. Although buyers aren’t checking price tags, implementation costs, cited in 26% of failed pilots, frequently catch them off guard. Data privacy hurdles (21%) and disappointing return on investment (ROI) (18%) also throw pilots off course. Technical issues, especially around hallucinations (15%), round out the top reasons for failure…

…Traditionally slow to adopt tech, healthcare is now leading generative AI adoption with $500 million in enterprise spend…

…Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows…

…With its complex data, strict regulations, and critical workflows, financial services ($100 million in enterprise AI spend) are primed for AI transformation…

…From Hollywood screens to creators’ smartphones, generative AI is reshaping media and entertainment ($100 million in enterprise AI spend)…

…Foundation models still dominate. The LLM layer commands $6.5 billion of enterprise investment…

…Rather than relying on a single provider, enterprises have adopted a pragmatic, multi-model approach. Our research shows organizations typically deploy three or more foundation models in their AI stacks, routing to different models depending on the use case or results…
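
The report doesn’t describe how that routing is implemented. As a purely hypothetical sketch of the pattern — the model names, task labels, and `send` stub below are invented for illustration and are not any vendor’s actual API — a thin routing layer might look like this:

```python
# Hypothetical multi-model router: pick a foundation model per use case.
# Model names, task labels, and send() are illustrative placeholders.

ROUTES = {
    "code": "code-tuned-model",      # e.g. a model strong at programming
    "support": "small-cheap-model",  # e.g. a low-cost model for chat triage
    "analysis": "frontier-model",    # e.g. a top-end model for hard tasks
}

def send(model: str, prompt: str) -> str:
    # Stub: a real system would call the chosen provider's API here.
    return f"[{model}] response to: {prompt!r}"

def route(task_type: str, prompt: str) -> str:
    # Fall back to the cheapest model for unrecognized task types.
    model = ROUTES.get(task_type, "small-cheap-model")
    return send(model, prompt)

print(route("code", "Write a binary search"))
```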

…Among closed-source models, OpenAI’s early mover advantage has eroded somewhat, with enterprise market share dropping from 50% to 34%. The primary beneficiary has been Anthropic,* which doubled its enterprise presence from 12% to 24% as some enterprises switched from GPT-4 to Claude 3.5 Sonnet when the new model became state-of-the-art. When moving to a new LLM, organizations most commonly cite security and safety considerations (46%), price (44%), performance (42%), and expanded capabilities (41%) as motivations…

…To power RAG, enterprises must store and access relevant query knowledge efficiently. While traditional databases like Postgres (15%) and MongoDB (14%) remain common, AI-first solutions continue to gain ground. Pinecone,* an AI-native vector database, has already captured 18% of the market.
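
For readers unfamiliar with how retrieval-augmented generation (RAG) uses such a database: documents and the incoming query are embedded as vectors, and the nearest documents by cosine similarity are handed to the model as context. A minimal sketch of that retrieval step — `embed` below is a deterministic stand-in with no real semantics, so the match it returns is arbitrary; a real system would call an embedding model and a vector store such as the ones named above:

```python
# Minimal sketch of the retrieval step in RAG, with a toy embedding.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for an embedding model: deterministic but meaningless.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine

docs = ["refund policy text", "shipping times text", "warranty terms text"]
index = np.stack([embed(d) for d in docs])   # what a vector DB would store

query = embed("how long does delivery take?")
scores = index @ query                       # cosine similarities
print(docs[int(np.argmax(scores))])          # context to hand to the LLM
```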

4. An Interview with Understanding AI Author Timothy B. Lee – Ben Thompson and Timothy B. Lee

As a side note, just as you sort of referenced it in passing, there is always the question of where are the productivity gains, when it came to, first the PC, and then the Internet? Is your sense that those just take a while to show up? Is there just a massive amount of consumer surplus that is not measured? What’s your big picture take on that question?

TL: There’s a couple of things. One is it takes a while to show up because to really get the big gains from a new general purpose technology, you often need to reorganize a lot of other business processes. There’s a famous analogy economists like to use from when they originally electrified the economy. The first thing they tried to do was to take the old steam-powered factories that just had one big crankshaft and put an electric motor in, and that didn’t get you much improvement because the electricity was not cheap.

It was arguably worse.

TL: But then ten to twenty years later, people figured out, “Oh, we can have a bunch of small electric motors, one at each workstation, and now factories can be a lot more efficient”, but you had to build new factories and new businesses to do that…

Believe me, I think we’re around the same age, I know exactly what you mean and feel. That said, I feel like the big comp is Wikipedia — it came out back when I was in college, or around that time, and of course everyone, professors or teachers, banned the use of it. But what you quickly realized is that the key way to use Wikipedia is the sources. You go to Wikipedia, and then it has links to all the sources, then you have your original source documentation. I do feel like ChatGPT is just such a better version of that, particularly with the search version, and when it does sources, it’s just like, “What if we make a Wikipedia that just fills in all sorts of white space about knowledge”, and it’s pretty tough to beat in that regard.

TL: Yeah, absolutely. And as with Wikipedia, you have to be smart about it. You can’t assume that everything is accurate, you have to check your work. But I definitely find, anytime I’m trying to make a list of things and I want to know all the companies in a particular category, it’s a pain in the ass to find that on Google. Whereas if you ask ChatGPT, “Here’s like three companies in this category, give me more on the list”, it’ll know a bunch more of them. There’s so many things like that. So yeah, definitely, I don’t want to say never use it or it’s not useful. It’s definitely useful, but it makes me 1% to 2% more productive over the course of a week rather than being really transformational…

...Again, to go back to your perspective of looking at it over the last 18, 20 months since you started, do you think we’ve hit a wall with AI? You started wondering this publicly actually last December when Gemini came out and you felt a little underwhelmed, particularly given Google’s advantages. You weren’t sure at the time: was Google underperforming for Google-specific reasons, or maybe had we gotten as far as we can with GPT-4? What’s your evaluation 11 months on from that article?

TL: The thing I’ve noticed is that we keep hearing about there’s going to be a GPT-5—

It’s not here.

TL: There’s going to be a new big model and it hasn’t been released, and I don’t have enough sources inside those companies to know why that’s happening. But it could be they’re just still working on it and it’s going to come out next month and blow my mind, but every month that ticks by makes me a little more skeptical. Especially because the other trend we’ve seen is these companies are releasing these smaller models that are almost as good as the big models.

And then even to some extent, I was pretty impressed by o1, but what o1 did is kind of different. It wasn’t like scaling up the model, it’s like we’re going to do more inference time compute. In certain ways, it was much better, but it wasn’t better overall.

So my still pretty rough hypothesis, but my hypothesis is that there’s kind of a limit to what the current LLM architectures can do and we’re sort of bumping up against that in various ways — I mean, another thing, we’ve had multimodal models that are much better, so we can do real-time voice and we can do images, so there are new things it can do. But in terms of just the increase of overall reasoning capability, it doesn’t seem like we’ve had a big jump, really since March of 2023 when GPT-4 came out, and so I’m not going to make a strong prediction because again, it could come out next month and amaze me, but every month that ticks by I wonder a little bit more what’s going on.

What do you think is the limitation? Is it data, compute or is it just a fundamental limitation of the transformer architecture?

TL: My guess is it’s a fundamental limitation of the transformer architecture, and I think the main issue is that the transformer architecture requires all of the model state to be in these vectors for individual words, and then it keeps a record of that forever — the whole context; there’s no process where you summarize and abstract away. If you think about your life, you think about something that happened ten years ago, you don’t remember every single thing you said, everything that others said, you have an abstract memory that, “Oh, in 2014 I remember I lived in this place and I had this job”, and things you learn kind of work their way into the brain, but it’s organized in a good way. LLMs just don’t have a way to do that.

So if I think about how people expect that at some point you’re going to have an LLM that’s like a personal assistant, that maybe will work with you over your career and know all your habits and make all your appointments and stuff — to do that with this architecture, where you remember every token exactly and do attention over that whole corpus, with no way of synthesizing and abstracting and forgetting unimportant things, just as a computer scientist, that doesn’t seem viable to me…
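
Lee’s objection can be made concrete with back-of-the-envelope arithmetic. In a standard decoder-only transformer, the key-value cache stores two vectors per layer, per attention head, per token — forever, with nothing summarized away. A rough sketch of how that grows (our own illustration, not from the interview; all model dimensions are hypothetical):

```python
# Back-of-the-envelope KV-cache growth for a decoder-only transformer.
# All model dimensions below are hypothetical, chosen only for illustration.

def kv_cache_bytes(tokens: int, layers: int = 32, heads: int = 32,
                   head_dim: int = 128, bytes_per_value: int = 2) -> int:
    """Bytes needed to store keys and values for `tokens` tokens.

    Each layer stores one key vector and one value vector per head per
    token, so the cache grows linearly with context length -- nothing is
    ever summarized or forgotten.
    """
    per_token = layers * heads * head_dim * bytes_per_value * 2  # K and V
    return tokens * per_token

for n in (1_000, 100_000, 10_000_000):  # a short chat, a long book, years of use
    print(f"{n:>10,} tokens -> {kv_cache_bytes(n) / 1e9:,.1f} GB")
```

At these (made-up but realistic) dimensions, the cache costs about half a megabyte per token, so a career-long, never-forgetting context quickly runs into terabytes — which is the intuition behind Lee’s “doesn’t seem viable” remark.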

Do you think there’s a bubble now then?

TL: That’s always a hard question to say. Part of what’s hard about bubbles is that often people start calling a bubble pretty early and then the bubble keeps growing and people keep saying there’s a bubble.

Right. If people think there’s a bubble, there is not a bubble, that’s my heuristic.

TL: Well, there’s that, but also, at some point, the stock or the house price or whatever will peak and then go down, and the people who said it was a bubble right at the top will be right, but some people who called it way at the beginning were probably wrong.

I do expect a period where AI gets overly frothy and then crashes. Whether we’re currently there or just headed for that, is a little hard to say. I do not expect a dot-com bust level expansion, because as you were saying, I do think that this technology has clear benefits, it’s mostly big technology companies, it’s not as venture-funded. In fact, some of the early really crazy-funded companies have already been acquired.

So, yeah, I think the level of hype right now is a little too high and there’ll be some pullback, but I don’t think you’ll see a big crash and I don’t think you’ll see much of a pullback from deployment, because I think there really is enough value here that there’s going to be a big market for a lot of people working on it, and a lot of valuable stuff will come out of it in a pretty direct way.

I saw a new theory this week that actually really resonated with me. So this might be new to you, so I’m going to drop it on you on the spot. I think the big question, if you’re thinking about bubbles — you go back to a Carlota Perez model of the importance of bubbles in driving infrastructure buildouts — you go back to the dot-com era, the really important part was the telecoms build out, which, at the time, some people called insane, and in retrospect it clearly was. If you’re rolling out all this fiber and everyone’s doing it, the costs are going to go to zero, you’re all going to go bankrupt because it’s all financed by debt, as large infrastructure usually is. But the long-term payoff from that was massive, right? That, basically, kicked off the whole Web 2.0 era where now everyone, suddenly, had broadband. Recessions suck, but there was a huge societal benefit that did come from that build out.

You go back to previous ones, whether it be electricity or steam, you had these similar cycles and the big question was, “What’s the societally beneficial output of an AI bubble if there is a bubble?” and chips never quite fit, because chips wear out and chips get better. So, if you buy a bunch of chips, but they’re five-year-old chips, what’s the benefit there? Doug O’Laughlin put this tweet out there, and it has been really striking to me. He said, “Internet Bubble:Telecom::AI:Power/DCs”, and to me, that makes sense. If you’re going to actually build more nuclear power, or you’re going to do massive investments in solar and batteries, or whatever it might be to fuel these sorts of things, those are investments that, 1) can definitely make you go bankrupt because you’re taking out a bunch of debt to fund them, but 2) will retain value for many, many, many years to come. What do you think of that analogy? To me, it seems pretty compelling.

TL: Yeah, I one hundred percent agree with that. I mean, I was actually going to say the part of it that seems most bubbly is this stuff about Microsoft leasing Three Mile Island for 20 years. Again, as we were talking about before — “Do I think the scaling law thing is going to run out of steam?” — my guess is it probably will. I don’t know if we’re on the verge of that, but, anyway, I would not be surprised if people look back ten years from now, and say, “Oh, man, all that money companies spent on data centers and power — that was kind of a waste of money”. But then, like you said, the country needs more power, and at some point, probably, we’ll want to be training really big models and so, if we have a bunch of huge data centers that we can use to train models, probably, we’ll get some value out of that. It’s tech companies spending the money, so the social cost is probably not that high.

5. 7% of Book Value; 1x EBITDA; Cash is 2.5x Larger than Market Cap – Dirtcheapstocks

Highlands REIT, Inc. (Ticker HHDS) was created in 2016 when it was spun out of InvenTrust Properties Corp.

HHDS was formed to hold non-core assets of InvenTrust.

Today, HHDS owns 13 apartment houses, 3 retail properties, 1 office property and 1 correctional facility…

…HHDS has:

  • $205MM of book value.
  • $16.7MM of net operating income (NOI) in 2023.
  • $17MM of NOI in 2022.
  • $85MM of net debt.
  • 57% of NOI generated from multifamily assets.

What do you think? Is Highlands worth book value? Is it worth half of book value?

If we want to value the business at an 8 cap, the $16.7MM of NOI implies roughly $209MM of property value; subtract the $85MM of net debt, and the equity must be worth about $124MM.

Within the last two weeks, HHDS has been valued as low as $14.4MM.

That’s less than 1x NOI, and 7% of book value…
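
As a quick check of that arithmetic, using only the figures quoted above:

```python
# Cap-rate valuation check, using the figures quoted above (all in $MM).
noi = 16.7           # 2023 net operating income
cap_rate = 0.08      # an "8 cap"
net_debt = 85.0
book_value = 205.0
market_cap = 14.4    # recent low

property_value = noi / cap_rate           # 208.75
equity_value = property_value - net_debt  # 123.75 -> the ~$124MM above

print(f"implied equity value: ${equity_value:.1f}MM")
print(f"market cap / NOI:     {market_cap / noi:.2f}x")       # less than 1x NOI
print(f"market cap / book:    {market_cap / book_value:.0%}")  # ~7% of book
```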

…Most companies valued at $14MM might have a few hundred shareholders of record. Apple is valued at $3.5 Trillion, and it has 23,000 record holders.

Highlands has 143,000 record holders…

…Here’s my theory: When Highlands was spun out of InvenTrust, every shareholder was given ownership individually. There are 143,000 separate people/entities that own this stock. And this stock was an afterthought. It was just a few noncore assets being spun out of a $2 billion REIT…

…HHDS, perhaps wanting to ward off future material purchases by Mackenzie, announced a tender offer in October 2023. While Mackenzie was tendering at $0.04/share earlier that summer, HHDS was willing to pay $0.12 – $0.17/share. What’s more, HHDS was committing $20MM to the share buyback.

HHDS would repurchase 13-19% of its shares if fully subscribed.

A few weeks later, HHDS increased the buyback to $25MM!

In the end, $23.7MM was spent to buy in 169MM shares – nearly 20% of the outstanding share count…

…HHDS showed up as an expert market security, even though it’s SEC registered.

But I found that the traditional expert market brokers couldn’t buy shares.

Then I went to alternative market brokers. They’d be happy to take my money, and told me I could get as much volume at $0.10 as my heart desired.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta, Microsoft, and MongoDB. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 December 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 01 December 2024:

1. America, China, and the Death of the International Monetary Non-System – Russell Napier

Something changed in America in the 1990s. The U.S. federal funds rate began a decline from above 5 percent to reach the effective zero bound by 2009. U.S. ten-year Treasury yields declined from above 6 percent to levels not even recorded during the Great Depression. Credit to the U.S. nonfinancial corporate sector rose from 56 percent of GDP to a new all-time high of 87 percent, and U.S. Government debt rose from 60 percent of GDP to a recent high of 106 percent, very near the peak level recorded during World War II. The valuation of U.S. equities rose from a cyclically adjusted price-to-earnings ratio (CAPE) of 15x to the current level of 34x, having reached a new all-time high of 44x in 2000. U.S. tangible investment declined from 7 percent of GDP to as low as just 1 percent of GDP, a level only previously recorded in the Great Depression and briefly in the hiatus of investment after World War II…

…Today, we have an international monetary system that does not have a name…

…It is a non-system to the extent that its terms and conditions were never agreed upon by all the parties involved, but instead it was born from choices made by a few, most notably China, that the other parties accepted and adjusted to. The extremes of interest rates, debt levels, asset price valuation, and investment in tangible assets in the United States are just part of that global adjustment to the new international monetary system that grew from China’s unilateral decision to manage its exchange rate beginning in 1994. This system would never have been agreed to in any negotiation, as it was a system replete with distortions that would lead to dangerously large imbalances with dangerous political ramifications…

…The crucial distortion imposed by China’s decision in 1994 was a decoupling of developed world growth rates from interest rates, the discount rates used in asset valuations, which many assumed to be a new normal. When interest rates appear to be permanently depressed relative to growth rates, asset valuations rise, leverage increases, and investors are incentivized to pursue gain through rising asset prices rather than through investment in new productive capacity. The decoupling of growth and interest rates was driven by the People’s Bank of China’s (PBOC) appearance as a non-price-sensitive buyer of U.S. Treasury securities, and indirectly by the role China’s excessive fixed-asset investment played in reducing global inflation and hence interest rates…

…For developed-world companies facing the cheap resources, cheap finance, and cheap exchange rate of China, there was little incentive to invest in tangible assets at home. In the United States, in particular, where companies are managed to maximize return on equity and returns to shareholders, the corporation was able to benefit from both cheap Chinese production and the low interest rates that allowed balance sheets to be levered to buy back equity. In other countries, with different social contracts and less focus on rewarding management via stock options, closing productive capacity and pursuing financial engineering were more difficult. Thus, it was U.S. corporations that most fully adapted to the new international monetary system.

When the Bretton Woods system was established, severe restrictions were placed on the free movement of capital. The architects of that system recognized that maintaining exchange rate stability would not be possible if capital were allowed to move freely. Our current system permits, at least into and within the developed world, the free movement of capital. In this system, the private sector capital that left the developed world for China was transformed, via PBOC exchange rate intervention, into an accumulation of developed-world debt securities financed by the creation of renminbi reserves…

…China’s inability to run sufficient surpluses since 2014 to generate sufficient broad money growth and prevent the escalation of its already high debt-to-GDP ratio is not widely recognized as a similar problem. Yet China’s move to a flexible exchange rate to avoid a debt deflation and create sufficient growth in broad money to reduce its debt burden will end the non-system as surely as President Nixon’s announcement that the U.S. dollar was no longer linked to gold ended Bretton Woods. Few analysts understand the impact that this move will have on the international monetary system and the long-accumulating distortions to credit, money, asset prices and the global economy.

When China moves to a flexible exchange rate, it is difficult to foresee how just one new international monetary system could replace the non-system. Given current geopolitical tensions, the prospect of China and the United States hashing out a new Bretton Woods–style agreement is highly unlikely…

…Predicting how any new U.S.-centric monetary system will develop is not easy, but such a system must allow for excessively high debts, the legacy of the non-system, to be inflated away. While much of the focus is on the high U.S. total nonfinancial debt-to-GDP ratio of 255 percent, there are many countries in the world struggling under even higher debt ratios: Canada, 311 percent; France, 315 percent; Japan, 400 percent; Netherlands, 316 percent; Switzerland, 297 percent, etc. The rise and rise of debt-to-GDP levels, a product of the gap between interest rates and growth rates under the non-system, will now have to be addressed.

With austerity, default, hyperinflation, or very high real GDP growth unlikely to be the solution, a new global monetary system will have to be created that offers a path of moderation toward reducing debt-to-GDP levels. That path of moderation is likely to take the form of financial repression—such as that imposed upon savers in the aftermath of World War II, to force their savings to fund the investment needed for postwar reconstruction, but at interest rates that did not reward them for the current and expected levels of inflation. That is a world in which bankers will create more credit and more money and more inflation than they have in recent decades. Higher nominal GDP growth combined with imposed purchases of low-yielding debt securities will, over time, reduce debt-to-GDP levels, just as it did in the decades following World War II. Whatever the new international monetary system looks like, it will have to accommodate the financial repression that will finally begin to reduce debt-to-GDP levels…

…In the long period in which developed-world debts will have to be inflated away, policymakers will have to take a view as to which section of society will bear the heaviest cost. One of the quickest and least painful ways to enforce a deleveraging is through encouraging a rapid re-equitization of the private sector. The ability of all corporations to deduct interest expense in calculating their taxes has to be reconsidered. In an era when much greater fixed-asset investment is essential, the tax privilege of deducting interest expense should not be available to corporations using debt to lever up an existing income stream; rather, the tax code should reward corporations using debt to build new businesses and new income streams. There are of course losers from such a change in taxation, but they are those who have been the winners from the prolonged period of falling interest rates and rising asset prices that have been the key feature of our now failing non-system. A long financial repression is in nobody’s interest, and the longer it prevails, the more likely it will create wealth redistributions that threaten social stability. Proactive intervention to force re-equitization upon a small section of society through the withdrawal of a tax privilege is painful for some but is a more equitable path to reducing high debt-to-GDP levels while facilitating greater investment.

To reduce the high and dangerous debt-to-GDP ratios of the developed world, nominal GDP must grow faster than total credit. This can be achieved by increasing the growth rate in bank credit while limiting the growth in nonbank credit. While the non-system was a key driver of the rise and rise of debt-to-GDP, the disintermediation of credit also played a key role. It is commercial bankers who create money, and if nominal GDP growth is to remain at a high enough level to reduce debt-to-GDP levels, bank balance sheets must grow faster than they have over the past three decades. Commercial banks create money when they expand their balance sheets, and if they do not create enough money, nominal GDP growth will remain low while credit growth, spurred by the growth in nonbank credit, can remain high. A combination of faster growth in bank credit combined with the restriction of the growth in nonbank credit will be at the core of reducing debt-to-GDP ratios. The targeted ending of interest deductibility in the computation of corporate income tax, mentioned earlier, can assist in promoting the growth in bank credit and hence money at the expense of growth in nonbank credit. If it is bankers who are at the vanguard of funding the necessary investment renaissance in the United States, and not credit markets, then the move to lower debt-to-GDP levels will be less painful than if we are forced to take the hard path of austerity, default, hyperinflation, or a very long financial repression. A new focus on the growth of bank credit and therefore money is at the core of any policy to reduce dangerously high debt-to-GDP ratios.
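
A stylized illustration of the mechanism Napier describes (our own, with invented growth rates): when nominal GDP compounds faster than total credit, the debt-to-GDP ratio falls steadily without any default or austerity.

```python
# Stylized path for Napier's mechanism: hold credit growth below nominal
# GDP growth and the debt-to-GDP ratio melts away. Growth rates invented.
ratio = 2.55            # start at 255% of GDP, the U.S. figure cited above
gdp_growth = 0.06       # 6% nominal GDP growth (inflation + real growth)
credit_growth = 0.03    # credit growth capped, e.g. by financial repression

for year in range(21):
    if year % 5 == 0:
        print(f"year {year:2d}: debt/GDP = {ratio:.0%}")
    ratio *= (1 + credit_growth) / (1 + gdp_growth)
# prints roughly 255%, 221%, 191%, 166%, 144% at years 0, 5, 10, 15, 20
```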

2. Are U.S. Stocks Overvalued? – Ben Carlson

The S&P 500 is up nearly 90% since election day 2020, yet valuations are essentially identical.

How can that be?…

…Stock prices are up a lot but fundamentals have kept pace. In fact, the stock market has actually gotten less expensive over the past couple of years because of earnings growth…
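
The arithmetic behind that claim is worth spelling out with invented numbers: a valuation multiple is just price divided by earnings, so if earnings grow as fast as prices, the multiple does not move.

```python
# Toy illustration: price up 90% with an unchanged P/E multiple.
price_then, earnings_then = 100.0, 5.0
price_now, earnings_now = 190.0, 9.5   # both up 90%

print(price_then / earnings_then)  # 20.0x
print(price_now / earnings_now)    # 20.0x -- same multiple, higher price
```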

…It’s also important to point out that much of the valuation premium on the S&P 500 comes from the largest stocks…

…These stocks have high valuations for good reason — they’re some of the best-run corporations in the world…

…The good news for valuation-conscious investors is there is plenty of value outside of the mega-cap stocks. Valuations for small and mid cap stocks are still pretty cheap. They are far less expensive now than they were before the pandemic. Maybe there’s a reason for that, but stocks don’t get cheap for no reason.

3. Amazon’s Moonshot Plan to Rival Nvidia in AI Chips – Matt Day, Ian King, and Dina Bass

Nvidia’s biggest customers — cloud providers like Amazon Web Services, Microsoft Corp.’s Azure and Alphabet Inc.’s Google Cloud Platform — are eager to reduce their reliance on, if not replace, Nvidia chips. All three are cooking up their own silicon, but Amazon, the largest seller of rented computing power, has deployed the most chips to date…

…Fifteen years ago, the company invented the cloud computing business and then, over time, started building the infrastructure that sustains it. Reducing its reliance on one incumbent after another, including Intel Corp., Amazon ripped out many of the servers and network switches in its data centers and replaced them with custom-built hardware. Then, a decade ago, James Hamilton, a senior vice president and distinguished engineer with an uncanny sense of timing, talked Jeff Bezos into making chips…

…After almost four decades in the business, Hamilton knows taking Amazon’s chip ambitions to the next level won’t be easy. Designing reliable AI hardware is hard. Maybe even harder is writing software capable of making the chips useful to a wide range of customers. Nvidia gear can smoothly handle just about any artificial intelligence task. The company is shipping its next-generation chips to customers, including Amazon, and has started to talk up the products that will succeed them a year from now. Industry observers say Amazon isn’t likely to dislodge Nvidia anytime soon…

… The unit’s first chip was designed to power something called inference — when computers trained to recognize patterns in data make a prediction, such as whether a piece of email is spam. That component, called Inferentia, rolled out to Amazon’s data centers by December 2019, and was later used to help the Alexa voice assistant answer commands. Amazon’s second AI chip, Trainium1, was aimed at companies looking to train machine learning models. Engineers also repackaged the chip with components that made it a better fit for inference, as Inferentia2.

Demand for Amazon’s AI chips was slow at first, meaning customers could get access to them immediately rather than waiting weeks for big batches of Nvidia hardware. Japanese firms looking to quickly join the generative AI revolution took advantage of the situation. Electronics maker Ricoh Co., for example, got help converting large language models trained on English-language data to Japanese.

Demand has since picked up, according to Gadi Hutt, an early Annapurna employee who works with companies using Amazon chips. “I don’t have any excess capacity of Trainium sitting around waiting for customers,” he says. “It’s all being used.”

Trainium2 is the company’s third generation of artificial intelligence chip. By industry reckoning, this is a make-or-break moment. Either the third attempt sells in sufficient volume to make the investment worthwhile, or it flops and the company finds a new path. “I have literally never seen a product deviate from the three-generation rule,” says Naveen Rao, a chip industry veteran who oversees AI work at Databricks Inc., a purveyor of data and analytics software.

Databricks in October agreed to use Trainium as part of a broad agreement with AWS. At the moment, the company’s AI tools primarily run on Nvidia. The plan is to displace some of that work with Trainium, which Amazon has said can offer 30% better performance for the price, according to Rao. “It comes down to sheer economics and availability,” Rao says. “That’s where the battleground is.”…

…Amazon’s Trainium2 will likely be deemed a success if it can take on more of the company’s internal AI work, along with the occasional project from big AWS customers. That would help free up Amazon’s precious supply of high-end Nvidia chips for specialized AI outfits. For Trainium2 to become an unqualified hit, engineers will have to get the software right — no small feat. Nvidia derives much of its strength from the comprehensiveness of its suite of tools, which let customers get machine-learning projects online with little customization. Amazon’s software, called Neuron SDK, is in its infancy by comparison.

Even if companies can port their projects to Amazon without much trouble, checking that the switch-over didn’t break anything can eat up hundreds of hours of engineers’ time, according to an Amazon and chip industry veteran, who requested anonymity to speak freely. An executive at an AWS partner that helps customers with AI projects, who also requested anonymity, says that while Amazon had succeeded in making its general-purpose Graviton chips easy to use, prospective users of the AI hardware still face added complexity.

“There’s a reason Nvidia dominates,” says Chirag Dekate, a vice president at Gartner Inc. who tracks artificial intelligence technologies. “You don’t have to worry about those details.”…

…  “We’re particularly impressed by the price-performance of Amazon Trainium chips,” says Tom Brown, Anthropic’s chief compute officer. “We’ve been steadily expanding their use across an increasingly wide range of workloads.”

Hamilton says Anthropic is helping Amazon improve quickly. But he’s clear-eyed about the challenges, saying it’s “mandatory” to create great software that makes it easy for customers to use AWS chips.

4. Key Square Capital 2024 January letter – Scott Bessent and the Key Square team

In essence, a second Trump administration would be expected to embrace a “Peace Through Strength” trade policy. Of course, in the case of recalcitrant trade partners, Trump can always offer them a negotiating session with former US Trade Representative Robert Lighthizer who will likely play a prominent role in his second term.

Our base case is that a re-elected Donald Trump will want to create an economic lollapalooza and engineer what he will likely call “the greatest four years in American history.” Economist Ed Yardeni believes that post-Covid America has the potential to have a boom similar to the “Roaring Twenties” of a century ago. We believe that a returning President Trump would like this to be his legacy. In this scenario, the greatest risk factor, in our opinion, would be a sudden rise in long-end rates.

The talk of revenge will likely be limited to a small group of political enemies, and the wider policies of the administration will be oriented toward de-regulation, energy independence, reviving U.S. manufacturing and extending the tax cuts. We find it unlikely that across-the-board tariffs, as currently reported by the media, would be enacted at the same time as he moves to fix the immigration crisis. The tariff gun will always be loaded and on the table but rarely discharged. Of course, strategic and national security issues around China will remain.

Another differentiated view that we have is that Trump will pursue a weak dollar policy rather than implementing tariffs. Tariffs are inflationary and would strengthen the dollar—hardly a good starting point for a US industrial renaissance. Weakening the dollar early in his second administration would make U.S. manufacturing competitive. A weak dollar and plentiful, cheap energy could power a boom. The current Wall Street consensus is for a strong dollar based on the tariffs. We strongly disagree. A strong dollar should emerge by the end of his term if the US reshoring effort is successful.

5. Scott Bessent Sees a Coming ‘Global Economic Reordering.’ He Wants to Be Part of It – Peter Rudegeair and Gregory Zuckerman

In his first interview following his selection, Bessent said his policy priority will be to deliver on Trump’s various tax-cut pledges. Those include making his first-term cuts permanent, and eliminating taxes on tips, social-security benefits and overtime pay…

…Bessent became one of Trump’s closest advisers by adding depth to his economic proposals and defending his plans for more-activist trade policies. He has argued that the president-elect’s plans to extend tax cuts and deregulate parts of the U.S. economy would create an “economic lollapalooza.”…

…Bessent has long been worried about the U.S.’s heavy debt and thinks the main way it can be reduced is by boosting growth, which increases tax revenues.

He has advised Trump to pursue a policy he calls 3-3-3, inspired by former Japanese Prime Minister Shinzo Abe, who revitalized the Japanese economy in the 2010s with his “three-arrow” economic policy. Bessent’s “three arrows” include cutting the budget deficit to 3% of gross domestic product by 2028, spurring GDP growth of 3% through deregulation and producing an additional 3 million barrels of oil or its equivalent a day.

To get government spending under control, Bessent has advocated extending the 2017 Tax Cuts and Jobs Act but with what are called pay-fors to lower its cost. That would involve either reducing spending or increasing revenue elsewhere to offset the impact. He also proposed freezing nondefense discretionary spending and overhauling the subsidies for electric vehicles and other parts of the Inflation Reduction Act.

Earlier this year, Bessent thought about tariffs as a negotiating tool, telling investors in a letter that the “tariff gun will always be loaded and on the table but rarely discharged.” He has since argued for them more forcefully, especially as a source of tax revenue.

In a speech last month titled “Make the International Economic System Great Again,” Bessent argued for increasing tariffs on national-security grounds and for inducing other countries to lower trade barriers with the U.S.  


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 24 November 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 24 November 2024:

1. Cash! – The Brooklyn Investor

Over the past few years, people have kept talking about mean reversion to value and whatnot, but I have ignored that for the most part for the reasons I’ve been saying here. The growth / value spread just seems to me to largely reflect value being taken away from the old economy and moved into the new one. Yes, sounds like 1999 bubble, but it just seems true. Retail just seems to be going down the drain, old school marketing / advertising just seems to be losing to online marketing etc…

…The massive transfer of wealth has been going on for decades, or more than a century. Industrialization just sucked the wealth and value out of skilled workers / craftsmen and transferred it to large corporations via factories. Formerly skilled workers were transferred into factories that required no skill (and therefore paid lower incomes). All the value-added accrued to the owners of the factories (capitalists). Same with national chain restaurants and retail. WMT transferred wealth from the local shops / restaurants to Arkansas; former store-owners ended up having to work at WMT for lower pay (as unskilled workers). This is nothing new.

Now, the same thing is happening at so many levels at the same time that it is quite frightening. Just as a simple example, I’ve mentioned this before, but companies like Squarespace and Wix (or free options like WordPress) have sort of wiped out a large part of the web development world. People who knew a little HTML / CSS / Javascript might have been able to make a living not too long ago, but not now. All that ‘wealth’ is transferred to the companies that provide the platform for people to build it themselves.

Photographers are complaining for similar reasons. You no longer need to hire a photographer for low-end projects. You can just buy photos from various photos sites for very low prices, or even have AI generate the exact photo you need. I have used AI to generate artwork, photos and text in various volunteer work, and it is scary. I thought to myself, jeez, I would have paid an art student $300 for this 3 years ago; now I do it for free online via AI…

…This is why when people say the stock market as a percentage of GDP is going up, the concentration of stocks in the market is getting too high etc., I think it is obvious that this is happening because the wealth and value is actually being more and more focused and concentrated, so the market is only reflecting reality…

…A similar group of very rich and smart people are saying that long term rates can’t stay low and they must move substantially higher due to these unsustainably large and growing federal deficits. Do I worry about that? Yes. But I look to Japan as the model of an aging society and growing government deficits. Sure, there are plenty of differences (Japan is a high savings nation), but I still can’t get around the fact that slowing population growth and the maturity of the U.S. economy would make growth harder to achieve going forward. Almost certainly, we can’t get back to the growth of the post-war baby boom generation. So given that, how do interest rates go up? Deficit-driven inflation? We haven’t really seen that in Japan, or even in the U.S., until Covid and Ukraine. So is the recent inflation really deficit-driven inflation? Or exogenous event-driven inflation? Maybe a combination of both.

This is not to say I don’t care about deficits. Of course it’s a problem, and we need to deal with it at some point. This is just my view as an investor. I am just telling you why, as an investor, I am not yet too concerned with the deficit and inflation.

2. Off The Beaten Path Investing – David Katunarić and Lawrence J. Goldstein

Goldstein: I started at Burnham when they had about 22 senior analysts following every industry in America, or so they thought. One day, after discovering the pink sheets, or actually, I found the pink sheets afterwards. I saw a list of trucking companies. It was in the Standard & Poor’s transportation manual, which came out weekly, supplements to put in the looseleaf book. I got a list of every trucking company in the United States and there must have been well over 50, maybe more, and every one of them had lower earnings or losses, except for four companies. Those four were Roadway Express, Denver Chicago Trucking, Merchant Fast Motor Lines and Overnite, spelled N-I-T-E. I called them first, and I ended up making a friend of J. Howard Cochran, the founder and president. At the beginning, he sent me a copy of his monthly financial statement. There were no rules against doing that. I remember they were printed in purple ink on a ditto machine. His first report he sent me was the five months ended May. He had earned in those five months, per share, I remember $1.86. He told me also that in the trucking business, the second half of the year is better than the first half. I said, “Let’s see, five months $1.86, times 2 is over $3.60, and I’m missing a month and the second half is better, so it’s got to be higher than that.” The stock was $1.75 or $1.25 off it. I couldn’t believe it. So I wrote a report, gave it to my boss, head of research.

He said to me, and I can hear it to this day, “Listen, kids, this is an institutional research department. We don’t write or recommend reports on dollar-stocks.” So I knew I was onto something. My boss was crazy. It ended up, by the way, they earned almost $4 a share that year. I got to laugh, it’s funny – I could buy the first share at $1.75, and I did. A number of years later, I think two decades later, or less than, Overnite sold out to, I think it was the Southern Pacific Railway, they sold out for $100 million. This thing was worth $500,000 when I met them. So the pink sheets made sense to look there. Basically, what I came to do was to look left when everybody’s looking right, look down when everybody’s looking up, and find companies that are off the beaten path, overlooked or ignored by otherwise intelligent investors…

…Katunarić: What would you say, Larry? In the 40-some years that you’ve been managing Santa Monica Partners, how has your investing approach changed? What are some lessons that sparked the change?

Goldstein: It’s not changed at all, except that you no longer write to the SEC to ask for the 10-Ks and 10-Qs and the proxy and then wait two weeks, if you get them at all. Now you hit a keyboard and you get it all. That’s changed. The second thing is that now there are people like you. There are a lot of people – I don’t mean you personally – who are on top of what’s called microcaps. So everybody’s searching for the gold. Obviously you’ve developed a business and you want to develop a bigger business. But that’s what happened: competition that didn’t exist before. When I did it, there was one firm that got big, Tweedy Browne. You know them? What happened to them was terrible. They got so big they had to buy ordinary stocks…

…Goldstein: When I bought Mastercard, it was not a huge company. When they went public, if I remember right, it was $39, $38, $37; I can’t remember the exact price, and it’s since split 10-for-1. So my cost is, I guess, $3 and change. I forget the exact split; I’d have to look it up. Let’s say it’s $10 or $15, but I think my cost is less than $15.

Katunarić: I saw somewhere that it was a hundred-bagger since the IPO; maybe I read that last year. I think it was one of the best-performing ones, but I’m not sure.

Goldstein: I’ll focus on that for a second. The reason I bought it goes back to 1971, when I went to my boss, Tubby Burnham, and said, “There’s a Madison Avenue business that’s going public.” Madison Avenue is where all the advertising agencies were in New York, every one of them. The company that was going public was only the second ad agency to go. The first one was a company called Papert, Koenig, and Lois. They had been public for some period of time and the stock did okay. The second one was Batten, Barton, Durstine, and Osborn, which subsequently changed its name to BBDO, and then, still the same company, to Omnicom, which is the world’s first- or second-largest advertising agency. Why did I want to buy it? I said to my boss, “Advertising companies are required if you have a consumer product to sell. It’s a royalty company. They get a royalty on every new consumer product that’s marketed to the world.” That’s what I thought it was: if you’re going to sell a new widget, you want to advertise it, and they get a cut of that. So, a great business. I said, “That’s exactly what Mastercard is.” Everything that anybody buys, they get a cut. By the way, there’s no risk to their business. They don’t make loans; banks make loans. They get a cut. Banks have risk, but Mastercard, it’s like every time you turn on the water, you get a free glass…
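The split math behind Goldstein’s cost basis is straightforward. A minimal sketch, assuming the $39 IPO price and 10-for-1 split he recalls; the recent share price below is an illustrative placeholder, not a figure from the interview:

```python
# Split-adjusted cost basis on the Mastercard position,
# using the IPO price and split Goldstein recalls.
ipo_price = 39.0    # per-share IPO price, as he remembers it
split_factor = 10   # the 10-for-1 split he mentions

cost_basis = ipo_price / split_factor
print(f"Split-adjusted cost: ${cost_basis:.2f}")  # $3.90, i.e. "$3 and change"

# Illustrative placeholder; plug in an actual quote to sanity-check
# Katunarić's "hundred-bagger since the IPO" remark.
recent_price = 500.0
print(f"Multiple on cost: {recent_price / cost_basis:.0f}x")  # ~128x
```

At any share price in that neighborhood, the hundred-bagger claim holds on a split-adjusted basis.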

…I tell you, the biggest recommendation to me, and the biggest thing I don’t believe or understand, is that Warren Buffett has never bought it, except for himself when he was a kid. He bought Oxy instead. I don’t know that much about Occidental, but there’s nothing better than TPL (Texas Pacific Land) if you want to be in the oil business. They just own the stuff: you can take it out at your cost and pay them not only for that, but for the right to get to the well and to leave the well, and for the water for fracking. If you run a hose or a pipeline, you pay them. What better business is there than that? None.

Katunarić: I agree. You pitched me TPL extensively yesterday, and the asset-light nature of the business was really attractive.

3. Here’s How Trump Could Lose the Coming Trade War – Paul Krugman

All indications are that China’s era of torrid economic growth is behind it. For decades, Chinese growth was fueled mainly by two things: a rising working-age population and rapid productivity growth driven by borrowed technology. But the working-age population peaked around a decade ago and is now falling. And despite some impressive achievements, the overall rate of technological progress in China, which economists measure by looking at “total factor productivity,” appears to have slowed to a crawl…

…China, however, has built an economic system designed for the high-growth era — a system that suppresses consumer spending and encourages very high rates of investment.

This system was workable as long as supercharged economic growth created the need for ever more factories, office buildings and so on, so that high investment could find productive uses. But while an economy growing at, say, 9 percent a year can productively invest 40 percent of G.D.P., an economy growing at 3 percent can’t.
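Krugman doesn’t spell out the arithmetic behind that claim, but a standard way to see it is through the incremental capital-output ratio (ICOR): the investment share an economy can productively absorb is roughly its growth rate times its ICOR. A minimal sketch; the ICOR of 4.4 is our illustrative assumption, chosen to match his 9%/40% example:

```python
# Sustainable investment share via the incremental capital-output
# ratio (ICOR): investment/GDP ~= growth rate * ICOR.
# The ICOR value is an illustrative assumption, not Krugman's figure.
icor = 4.4  # units of capital needed per unit of additional annual output

for growth in (0.09, 0.03):
    share = growth * icor
    print(f"At {growth:.0%} growth, productive investment is roughly {share:.0%} of GDP")
```

On those assumptions, a 3%-growth economy can productively absorb only around 13% of GDP in investment, a third of what it could at 9% growth, which is the overcapacity problem the following paragraphs describe.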

The answer seems obvious: redistribute income to households and reorient the economy away from investment toward consumption. But for whatever reason, China’s government seems unwilling to move in that direction…

…So what do you do if you have lots of capacity but your consumers can’t or won’t buy what you make? You try to export the problem, keeping the economy humming by running huge trade surpluses…

…China appears to be exporting close to $1 trillion more than it imports, and the trend is upward.

Hence the coming trade war. The rest of the world won’t passively accept Chinese surpluses on that scale…

…That’s why the Biden administration has been quietly pursuing a quite hard line on China, retaining Trump’s tariffs and trying to limit its progress in advanced technologies. It’s why the European Union has imposed high tariffs on electric vehicles made in China, which is probably only the beginning of expanded trade conflict…

…Trump’s insistence that tariffs don’t hurt consumers — even as businesses across America are planning to raise prices when his planned tariffs hit — strongly suggests that neither he nor anyone he listens to understands how global trade works. Not a good thing at a time of trade conflict.

4. Is the United States Going Broke? – Ben Carlson

There seem to be two extreme views when it comes to government debt levels.

One is the view that government debt doesn’t really matter all that much since we have the global reserve currency and the ability to print as much of that currency as we’d like.

The other view is that government debt levels are reaching a tipping point that will lead to calamity…

…It is true that U.S. government debt is enormous…

Total government debt in the United States was around $23 trillion heading into the pandemic, so debt levels are up 50% or so this decade alone.

It’s also true that the interest we pay on government debt has risen considerably, because we’ve taken on so much more debt and interest rates are so much higher than they were in the 2010s…

…But you can’t look at debt levels on their own. You have to think of them through the lens of a $30 trillion U.S. economy.
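Putting the excerpt’s own numbers together makes the scale argument concrete; a quick sketch using only figures quoted above:

```python
# Carlson's figures: pre-pandemic debt, growth since, size of the economy.
pre_pandemic_debt = 23.0  # $ trillions, heading into the pandemic
growth_since = 0.50       # "up 50% or so this decade alone"
gdp = 30.0                # "a $30 trillion U.S. economy"

current_debt = pre_pandemic_debt * (1 + growth_since)
print(f"Implied current debt: ~${current_debt:.1f} trillion")  # ~$34.5T
print(f"Debt-to-GDP: ~{current_debt / gdp:.0%}")               # ~115%
```

A debt load around 115% of GDP sounds alarming in isolation, which is exactly why Carlson insists on judging interest expense relative to GDP rather than the headline debt number.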

Here is interest expense as a percentage of GDP…

…It’s shot up considerably in recent years but it’s still below 1990s levels. The Fed cutting interest rates should help on the margins…

…Spending was 45% of GDP during the pandemic. That was obviously unsustainable but things are now back to normal…

…The thing you have to understand is the United States government does not operate like a household when it comes to debt. You pay your mortgage off over time and eventually retire that debt.

The government’s budget is not at all like a household budget. First of all, the government can print its own currency. That helps in a pinch and it’s the main reason our government can’t go broke. Inflation is the true constraint when it comes to politicians spending money.

As long as the economy is growing, debt should be growing too…

…I would be more worried if you told me government and consumer debt were down in the coming decades. That would mean something is seriously wrong with the economy.

Debt grows because assets grow (remember government debt is an asset in the form of bonds for investors). Debt grows because the economy grows. Income grows. Prices grow. So of course debt will rise. 

5. Wall Street’s Elites Are Piling Into a Massive AI Gamble – Neil Callanan, Gillian Tan, Tasos Vossos, Carmen Arroyo, and Immanual John Milton

While much of the speculative hype around AI has played out in the stock market so far, as seen in chipmaker Nvidia Corp.’s share price, the giddiness is spreading to the sober suits of debt finance and private equity.

Analysis by Bloomberg News estimates at least $1 trillion of spending is needed for the data centers, electricity supplies and communications networks that will power the attempt to deliver on AI’s promise to transform everything from medicine to customer service. Others reckon the total cost could be double that…

…Further proof of the “insatiable demand” for computing horsepower, according to real-estate broker Jones Lang LaSalle Inc., is the more than sevenfold increase over two years in construction work on US co-location centers, which lease out rack space to tech firms. Asking rents in those facilities have jumped as much as 37% in 12 months, the firm estimated in an August report.

All of this unbridled spending is revving up the issuance of both investment-grade debt and riskier leveraged loans, especially in the US, handily for private lenders and fee-starved investment bankers alike. Hedge funds are looking as well to profit from AI hysteria with novel types of debt structures.

It’s also opened up a new corner of the asset-backed securities market, where sales of debt backed by data centers have already jumped to a near-record $7.1 billion this year, according to data compiled by Bloomberg News. Chuck in fiber networks and other bits of kit, and it’ll be much higher. Matt Bissonette, who heads Guggenheim Securities’ business in this area, says the number of buyers for his data-center ABS products has roughly doubled in four years…

…While Blackstone hasn’t risked that kind of capital on construction before, developers of data centers can make stellar returns if all goes well. Property researcher Green Street reckons profit margins on London sites are about 65%.

Financiers are eager to back these grand projects because future occupants have usually pre-signed long leases, making them safer bets. Some banks are offering to lend as much as 70% or 80% of the cost and occasionally more when a lease is already signed, according to a person with knowledge of the matter…

…Lenders are more twitchy, however, about data centers explicitly earmarked for AI rather than more general purposes, according to a banker who works in the sector. Such deals can carry costlier debt and less leverage, he says, because the technology still has to prove its worth.

Separately, a senior partner at a leading private equity firm says he’s troubled by the emergence of speculative development, meaning construction takes place before a tenant has been found, as it’s hard to be sure of final demand. Some lawyers talk of “zombie projects” that may never be finished.

And not everyone believes that the “if you build it, they will come” approach is a surefire winner for those gambling on an era-changing AI breakthrough. Massachusetts Institute of Technology professor Daron Acemoglu says a lot of capital will be wasted.

Despite the misgivings, the appetite for deals from bankers and private lenders — especially for sites with blue-chip, signed-up occupants — is giving most data-center owners and developers a strong hand when pricing debt. A site leased long term by a tech giant can snag bank funding at a margin below two percentage points, says Brookland’s Hussain. Co-locators typically pay 2.5 percentage points or less, he adds.

“Recently, we raised €850 million ($907 million) in nine-year bonds at below 4% and refinanced and upsized our revolving credit facilities to $4.5 billion,” says Jordan Sadler, senior vice president at Digital Realty Trust Inc., a tech property firm that has signed joint ventures with Blackstone and others for almost $9 billion of hyperscale data-center developments…

…Across the Atlantic, one utility told the Federal Reserve Bank of Atlanta that electricity usage by data centers rose 17% in recent months. In Virginia, host to the world’s highest concentration of these sites, records for peak power demand were set six times in July, according to Dominion Energy Inc.

Trying to satisfy energy-devouring data centers means the utility sector’s capital spending is set to exceed $200 billion by next year, about double what it was a decade earlier. That would have stressed utility balance sheets, but a recent easing of how Moody’s Ratings views some of the industry’s riskier hybrid bonds — letting them be treated as half equity — has opened the floodgates to companies raising capital without being downgraded.

Sales of these bonds have risen almost eightfold this year to $15 billion, data compiled by Bloomberg shows. Only issues by bulge-bracket banks match that.
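The rating mechanics driving that issuance are simple: if a rating agency treats a hybrid bond as half equity, only half of its face value counts toward adjusted debt in leverage metrics. A stylized sketch with illustrative figures (none of these numbers are from the article):

```python
# Stylized effect of 50% equity credit on an issuer's adjusted leverage.
# All figures are illustrative placeholders, not from the article.
senior_debt = 20.0   # $bn of existing debt
ebitda = 5.0         # $bn of annual EBITDA
hybrid_issue = 2.0   # $bn of new hybrid bonds

fully_debt = (senior_debt + hybrid_issue) / ebitda
half_equity = (senior_debt + 0.5 * hybrid_issue) / ebitda

print(f"Hybrid counted fully as debt:  {fully_debt:.2f}x EBITDA")   # 4.40x
print(f"Hybrid counted as half equity: {half_equity:.2f}x EBITDA")  # 4.20x
```

That modest difference in adjusted leverage can be what keeps an issuer inside its rating threshold, which is why the Moody’s change opened the floodgates.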


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Wix. Holdings are subject to change at any time.