What We’re Reading (Week Ending 23 March 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 23 March 2025:

1. Investing Politics: Globalization Backlash and Government Disruption! – Aswath Damodaran

Globalization has taken different forms through the ages, with some violent and toxic variants, but the current version of globalization kicked into high gear in the 1980s, transforming every aspect of our lives…

…The biggest winner from globalization has been China, which has seen its economic and political power surge over the last four decades. Note that the rise has not been all happenstance, and China deserves credit for taking advantage of the opportunities offered by globalization, making itself first the hub for global manufacturing and then using its increasing wealth to build its infrastructure and institutions…

…China’s share of global GDP increased ten-fold between 1980 and 2023…

…Between 2010 and 2023, China accounted for almost 38% of global economic growth, with only the United States having a larger share…

…Consumers have benefited from globalization in many ways, starting with more products to choose from and often at lower prices than in pre-globalization days. From being able to eat whatever we want to, anytime of the year, to wearing apparel that has become so cheap that it has become disposable, many of us, at least on the surface, have more buying power…

…Over the last few decades, not only have more companies been able to list themselves on financial markets, but these markets have become more central to public policy. In many cases, the market reaction to spending, tax or economic proposals has become the determinant of whether they get adopted or continued. As financial markets have risen in value and importance, the cities (New York, London, Frankfurt, Shanghai, Tokyo and Mumbai) where these markets are centered have gained in importance and wealth, if not in livability, at the expense of the rest of the world…

…The rise of China from globalization also illustrates the fading of Japan and Europe over the period, with the former declining from 17.8% of global GDP in 1995 to 3.96% in 2023 and the latter seeing its share dropping from 25.69% of global GDP in 1990 to 14.86%…

…I listed consumers as winners from globalization, and they were, on the dimensions of choice and cost, but they also lost in terms of control of where their products were made, and by whom. To provide a simplistic example, the shift from buying your vegetables, fish and meat from local farmers, fishermen and butchers to factory farmers and supermarkets may have made the food more affordable, but it has come at a cost…

…While there are a host of other factors that have also contributed to the decline of small businesses, globalization has been a major contributor, as smaller businesses now find themselves competing against companies who make their products thousands of miles away, often with very different cost structures and rules restricting them. Larger businesses not only had more power to adapt to the challenges of globalization, but have found ways to benefit from it, by moving their production to the cheapest and least restrictive locales. In one of my data updates for this year, I pointed to the disappearance of the small firm effect, where small firms historically have earned higher returns than large cap companies, and globalization is a contributing factor…

…The flip side of the rise of China and other countries as manufacturing hubs, with lower costs of operation, has been the loss of manufacturing clout and jobs for the West…

…In the United States, the number of manufacturing jobs peaked at close to 20 million in 1979 and dropped to about 13 million in 2024, and manufacturing wages have lagged wage growth in other sectors for much of that period…

…I believe that globalization has been a net plus for the global economy, but one reason it is in retreat is because of a refusal on the part of its advocates to acknowledge its costs and the dismissal of opposition to any aspect of globalization as nativist and ignorant…

…Trump, a real estate developer with multiple international properties, is an imperfect spokesperson of the anti-globalization movement, but it is undeniable that he has tapped into, and benefited from, its anger. While he was restrained by norms and tradition in his first term, those constraints seem to have loosened in this second go around, and he has wielded tariffs as a weapon and is open about his contempt for global organizations. While economists are aghast at the spectacle, and the economic consequences are likely to be damaging, it is not surprising that a portion of the public, perhaps even a majority, are cheering Trump on.

To those who are nostalgic for a return to the old times, I don’t believe that the globalization genie can go back into the bottle, as it has permeated not only every aspect of business, but also significant portions of our personal lives. The world that will prevail, if a trade war plays out, will be very different than the one that existed before globalization took off…

…On the revenue growth front, companies that derive most or all of their revenues domestically will benefit and companies that are dependent on foreign sales will be hurt by tariff wars…

…Collectively, about 28% of the revenues, in 2023, of the companies in the S&P 500 came from foreign markets, but technology companies are the most exposed (with 59% of revenues coming from outside the country) and utilities the least exposed (just 2%). It is also worth noting that the larger market cap companies of the S&P 500 have a higher foreign market revenue exposure than smaller market cap companies…

…To the extent that companies are altering their decisions on where to build their next manufacturing facilities, as a result of tariff fears or in hope of government largesse, there should be an effect on reinvestment, with an increase in reinvestment (lower sales to capital ratios) at businesses where this move will create investment costs. Looking across businesses, this effect is likely to be more intense at manufacturing companies, where moving production is more expensive and difficult to do, than at technology or service firms…

…While it is easy to blame market uncertainty on Trump, tariffs and trade wars for the moment, the truth is that the forces that have led us here have been building for years, both in our political and economic arenas. In short, even if the tariffs cease to be front page news, and the fears of an immediate trade war ease, the underlying forces of anti-globalization that gave rise to them will continue to play out in global commerce and markets. For investors, that will require a shift away from the large cap technology companies that have been the market leaders in the last two decades back to smaller cap companies with a more domestic focus.

2. Big Retailers’ Hardball Tariff Playbook: Haggle, Diversify, Raise Prices – Hannah Miao and Sarah Nassauer

Some suppliers say Walmart, Home Depot and other retailers are pushing a variation of the same demand: Make a price concession or shift production out of China. Otherwise, the suppliers risk losing some business…

…Some of the requests have raised the ire of Chinese officials. Authorities in China summoned Walmart for a meeting in recent days after some suppliers complained the largest U.S. retailer by annual revenue was pressuring them to cut prices and absorb the tariff cost…

…Some pricing negotiations are hitting an impasse because many of these manufacturers are often already operating on razor-thin margins, according to suppliers. And retailers don’t want to raise prices for shoppers so they can continue to compete for market share…

…After the first 10% tariff on Chinese goods in February, Home Depot asked one of its U.S. suppliers of lighting and home decor to absorb the cost, according to an executive at the supplier. The supplier agreed to a two-month, 10% discount, part of which would be covered by its Chinese manufacturer.

After the second 10% tariff in March, the supplier declined another request from Home Depot to lower prices again. Instead, the supplier is moving production to Southeast Asia so it can eventually charge the home-improvement retailer the original price, the executive said…

…The tariff planning is especially complicated because companies have little sense of which tariff threats will materialize and where new ones could emerge, retailers and suppliers say…

…In some cases, retailers and manufacturers have decided it is worth it to keep production in China to maintain quality. Costco, the warehouse chain, plans to continue selling patio furniture made in China—even at an elevated price—because it is higher quality than versions made in other countries, Costco executives said. Costco and its supplier will absorb some of the cost increase and pass some on to shoppers, they said.

3. An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company – Ben Thompson and Sam Altman

What’s going to be more valuable in five years? A 1-billion daily active user destination site that doesn’t have to do customer acquisition, or the state-of-the-art model?

SA: The 1-billion user site I think.

Is that the case regardless, or is that augmented by the fact that it seems, at least at the GPT-4 level, I mean, I don’t know if you saw today LG just released a new model. There’s going to be a lot of, I don’t know, no comments about how good it is or not, but there’s a lot of state-of-the-art models.

SA: My favorite historical analog is the transistor for what AGI is going to be like. There’s going to be a lot of it, it’s going to diffuse into everything, it’s going to be cheap, it’s an emerging property of physics and it on its own will not be a differentiator.

What will be the differentiator?

SA: Where I think there’s strategic edges, there’s building the giant Internet company. I think that should be a combination of several different key services. There’s probably three or four things on the order of ChatGPT, and you’ll want to buy one bundled subscription of all of those. You’ll want to be able to sign in with your personal AI that’s gotten to know you over your life, over your years to other services and use it there. There will be, I think, amazing new kinds of devices that are optimized for how you use an AGI. There will be new kinds of web browsers, there’ll be that whole cluster, someone is just going to build the valuable products around AI. So that’s one thing.

There’s another thing, which is the inference stack, so how you make the cheapest, most abundant inference. Chips, data centers, energy, there’ll be some interesting financial engineering to do, there’s all of that.

And then the third thing is there will be just actually doing the best research and producing the best models. I think that is the triumvirate of value, but most models except the very, very leading edge, I think will commoditize pretty quickly.

So when Satya Nadella said models are getting commoditized, that OpenAI is a product company, that’s still a friendly statement, we’re still on the same team there?

SA: Yeah, I don’t know if it came across as a compliment to most listeners, I think he meant that as a compliment to us…

It doesn’t. But to that point, you just released a big API update, including access to the same computer use model that undergirds Operator, a selling point for GPT Pro. You also released the Responses API and I thought the most interesting part about the Responses API is you’re saying, “Look, we think this is much better than the Chat Completions API, but of course we’ll maintain that, because lots of people have built on that”. It’s sort of become the industry standard, everyone copied your API. At what point does this API stuff and having to maintain old ones and pushing out your features to the new ones turn into a distraction and a waste of resources when you have a Facebook-level opportunity in front of you?

SA: I really believe in this product suite thing I was just saying. I think that if we execute really well, five years from now, we have a handful of multi-billion user products, small handful and then we have this idea that you sign in with your OpenAI account to anybody else that wants to integrate the API, and you can take your bundle of credits and your customized model and everything else anywhere you want to go. And I think that’s a key part of us really being a great platform.

Well, but this is the tension Facebook ran into. It’s hard to be a platform and an Aggregator, to use my terms. I think mobile was great for Facebook because it forced them to give up on pretensions of being a platform. You couldn’t be a platform, you had to just embrace being a content network with ads. And ads are just more content and it actually forced them into a better strategic position.

SA: I don’t think we’ll be a platform in a way that an operating system is a platform. But I think in the same way that Google is not really a platform, but people use sign in with Google and people take their Google stuff around the web and that’s part of the Google experience, I think we’ll be a platform in that way…

From my perspective, when you talk about serving billions of users and being a consumer tech company, this means advertising. Do you disagree?

SA: I hope not. I’m not opposed. If there is a good reason to do it, I’m not dogmatic about this. But we have a great business selling subscriptions.

There’s still a long road to being profitable and making back all your money. And then the thing with advertising is it increases the breadth of your addressable market and increases the depth because you can increase your revenue per user and the advertiser foots the bill. You’re not running into any price elasticity issues, people just use it more.

SA: Currently, I am more excited to figure out how we can charge people a lot of money for a really great automated software engineer or other kind of agent than I am making some number of dimes with an advertising based model…

Especially Deep Research, it’s amazing. But I am maybe more skeptical about people’s willingness to go out and pay for something, even if the math is obvious, even if it makes them that much more productive. And meanwhile, I look at this bit where you’re talking about building memory. Part of what made the Google advertising model so brilliant is they didn’t actually need to understand users that much because people typed into the search bar what they were looking for. People are typing a tremendous amount of things into your chatbot. And even if you served the dumbest advertising ever, in many respects, and even if you can’t track conversions, your targeting capability is going to be out of this world. And, by the way, you don’t have an existing business model to worry about undercutting. My sense is this is so counter to what everyone at OpenAI signed up for, that’s the biggest hurdle. But to me, from a business analyst, this seems super obvious and you’re already late.

SA: The kind of thing I’d be much more excited to try than traditional ads is a lot of people use Deep Research for e-commerce, for example, and is there a way that we could come up with some sort of new model, which is we’re never going to take money to change placement or whatever, but if you buy something through Deep Research that you found, we’re going to charge like a 2% affiliate fee or something. That would be cool, I’d have no problem with that. And maybe there’s a tasteful way we can do ads, but I don’t know. I kind of just don’t like ads that much…

…SA: Totally. I think DeepSeek was — they made a great team and they made a great model, but the model capability was, I think, not the thing there that really got them the viral moment. But it was a lesson for us about when we leave a feature hidden, we left chains of thought hidden, we had good reasons for doing it, but it does mean we leave space for somebody else to have a viral moment. And I think in that way it was a good wake-up call. And also, I don’t know, it convinced me to really think differently about what we put in the free tier and now the free tier is going to get GPT-5 and that’s cool.

Ooo, ChatGPT-5 hint. Well, I’ll ask you more about that later.

In your recent proposal about the AI Action Plan, OpenAI expressed concern about companies building on DeepSeek’s models, which are, in one of the phrases about them, “freely available”. Isn’t the solution, if that’s a real concern, to make your models freely available?

SA: Yeah, I think we should do that.

So when-

SA: I don’t have a launch to announce, but directionally, I think we should do that.

You said before, the one billion destination site is more valuable than the model. Should that flow all the way through to your release strategy and your thoughts about open sourcing?

SA: Stay tuned.

Okay, I’ll stay tuned. Fair enough.

SA: I’m not front-running, but stay tuned…

Is there a bit where isn’t hallucination good? You released a sample of a writing model, and it sort of tied into one of my longstanding takes that everyone is working really hard to make these probabilistic models behave like deterministic computing, and almost missing the magic, which is they’re actually making stuff up. That’s actually pretty incredible.

SA: 100%. If you want something deterministic, you should use a database. The cool thing here is that it can be creative and sometimes it doesn’t create quite the thing you wanted. And that’s okay, you click it again.

Is that an AI lab problem that they’re trying to do this? Or is that a user expectation problem? How can we get everyone to love hallucinations?

SA: Well, you want it to hallucinate when you want and not hallucinate when you don’t want. If you’re asking, “Tell me this fact about science,” you’d like that not to be a hallucination. If you’re like, “Write me a creative story,” you want some hallucination. And I think the problem, the interesting problem is how do you get models to hallucinate only when it benefits the user?…

I think some skeptics, including me, have framed some aspects of your calls for regulation as an attempt to pull up the ladder on would-be competitors. I’d ask a two-part question. Number one, is that unfair? And if the AI Action Plan did nothing other than institute a ban on state level AI restrictions and declare that training on copyright materials fair use, would that be sufficient?

SA: First of all, most of the regulation that we’ve ever called for has been just say on the very frontier models, whatever is the leading edge in the world, have some standard of safety testing for those models. Now, I think that’s good policy, but I sort of increasingly think the world, most of the world does not think that’s good policy, and I’m worried about regulatory capture. So obviously, I have my own beliefs, but it doesn’t look to me like we’re going to get that as policy in the world and I think that’s a little bit scary, but hopefully, we’ll find our way through as best as we can and probably it’ll be fine. Not that many people want to destroy the world.

But for sure, you don’t want to go put regulatory burden on the entire tech industry. Like we were calling for something that would have hit us and Google and a tiny number of other people. And again, I don’t think the world’s going to go that way and we’ll play on the field in front of us. But yes, I think saying that fair use is fair use and that states are not going to have this crazy complex set of differing regulations, those would be very, very good.

You are supporting export controls (or by you, I mean OpenAI) in this policy paper. You talked about the whole stack, that triumvirate. Do you worry about a world where the US is dependent on Taiwan and China is not?

SA: I am worried about the Taiwan dependency, yes…

Okay, sure. Intel needs a customer. That’s what they need more than anything, a customer that is not Intel. Get OpenAI, become the leading customer for the Gaudi architecture, commit to buying a gazillion chips and that will help them. That will pull them through. There’s your answer.

SA: If we were making a chip with a partner that was working with Intel and a process that was compatible and we had, I think, a sufficiently high belief in their ability to deliver, we could do something like that. Again, I want to do something. So I’m not trying to dodge…

So Dario and Kevin Weil, I think, have both said or in various aspects that 99% of code authorship will be automated by sort of end of the year, a very fast timeframe. What do you think that fraction is today? When do you think we’ll pass 50% or have we already?

SA: I think in many companies, it’s probably past 50% now. But the big thing I think will come with agentic coding, which no one’s doing for real yet.

What’s the hangup there?

SA: Oh, we just need a little longer.

Is it a product problem or is it a model problem?

SA: Model problem.

Should you still be hiring software engineers? I think you have a lot of job listings.

SA: I mean, my basic assumption is that each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers…

What is AGI? And there’s a lot of definitions from you. There’s a lot of definitions in OpenAI. What is your current, what’s the state-of-the-art definition of AGI?

SA: I think what you just said is the key point, which is it’s a fuzzy boundary of a lot of stuff and it’s a term that I think has become almost completely devalued. Some people, by many people’s definition, we’d be there already, particularly if you could go like transport someone from 2020 to 2025 and show them what we’ve got.

Well, this was AI for many, many years. AI was always what we couldn’t do. As soon as we could do it, it’s machine learning. And as soon as you didn’t notice it, it was an algorithm.

SA: Right. I think for a lot of people, it’s something about like a fraction of the economic value. For a lot of people, it’s something about a general purpose thing. I think they can do a lot of things really well. For some people, it’s about something that doesn’t make any silly mistakes. For some people, it’s about something that’s capable of self-improvement, all those things. It’s just there’s not good alignment there.

What about an agent? What is an agent?

SA: Something that can like go autonomously, do a real chunk of work for you.

To me, that’s the AGI thing. That is employee replacement level.

SA: But what if it’s only good at like some class of tasks and can’t do others? I mean, some employees are like that too…

Given that, does that make you more optimistic, less optimistic? Do you see this bifurcation that I think there’s going to be between agentic people? This is a different agentic word, but see where we’re going. We need to invent more words here. We’ll ask ChatGPT to hallucinate one for us. People who will go and use the API and the whole Microsoft Copilot idea is you have someone accompanying you and it’s a lot of high talk, “Oh, it’s not going to replace jobs, it’s going to make people more productive”. And I agree that will happen for some people who go out to use it. But you look back, say, at PC history. The first wave of PCs were people who really wanted to use PCs. PCs, a lot of people didn’t. They had one put on their desk and they had to use it for a specific task. And really, you needed a generational change for people to just default to using that. Is AI, is that the real limiting factor here?

SA: Maybe, but that’s okay. Like as you mentioned, that’s kind of standard for other tech evolutions.

But you go back to the PC example, actually, the first wave of IT was like the mainframe, wiped out whole back rooms. And because actually, it turned out the first wave is the job replacement wave because it’s just easier to do a top-down implementation.

SA: My instinct is this one doesn’t quite go like that, but I think it’s always like super hard to predict.

What’s your instinct?

SA: That it kind of just seeps through the economy and mostly kind of like eats things little by little and then faster and faster.

You talk a lot about scientific breakthroughs as a reason to invest in AI, but Dwarkesh Patel recently raised the point that there haven’t been any yet. Why not? Can AI actually create or discover something new? Are we over-indexing on models that just aren’t that good and that’s the real issue?

SA: Yeah, I think the models just aren’t smart enough yet. I don’t know. You hear people with Deep Research say like, “Okay, the model is not independently discovering new science, but it is helping me discover new science much faster.” And that, to me, is like pretty much as good.

Do you think a transformer-based architecture can ever truly create new things or is it just spitting out the median level of the Internet?

SA: Yes.

Well, what’s going to be the breakthrough there?

SA: I mean, I think we’re on the path. I think we just need to keep doing our thing. I think we’re like on the path…

Do humans have innate creativity or is it just recombining knowledge in different sorts of ways?

SA: One of my favorite books is The Beginning of Infinity by David Deutsch, and early on in that book, there’s a beautiful few pages about how creativity is just taking something you saw before and modifying it a little bit. And then if something good comes out of it, someone else modifies it a little bit and someone else modifies it a little bit. And I can sort of believe that. And if that’s the case, then AI is good at modifying things a little bit.

To what extent is the view that you could believe that grounded in your long-standing beliefs versus what you’ve observed, because I think this is a very interesting — not to get all sort of high-level metaphysical or feel, like I said, theological almost — but there does seem to be a bit where one’s base assumptions fuel one’s assumptions about AI’s possibilities. And then, most of Silicon Valley is materialistic, atheistic, however you want to put it. And so of course, we’ll figure it out, it’s just a biological function, we can recreate it in computers. If it turns out we never actually do create new things, but we augment humans creating new things, would that change your core belief system?

SA: It’s definitely part of my core belief system from before. None of this is anything new, but no, I would assume we just didn’t figure out the right AI architecture yet and at some point, we will.

4. The Last Decision by the World’s Leading Thinker on Decisions – Jason Zweig

Kahneman was widely mourned nearly a year ago when his death was announced. Only close friends and family knew, though, that it transpired at an assisted-suicide facility in Switzerland. Some are still struggling to come to terms with his decision…

…But I never got to say goodbye to Danny and don’t fully understand why he felt he had to go. His death raises profound questions: How did the world’s leading authority on decision-making make the ultimate decision? How closely did he follow his own precepts on how to make good choices? How does his decision fit into the growing debate over the downsides of extreme longevity? How much control do we, and should we, have over our own death?…

…I think Danny wanted, above all, to avoid a long decline, to go out on his terms, to own his own death. Maybe the principles of good decision-making that he had so long espoused—rely on data, don’t trust most intuitions, view the evidence in the broadest possible perspective—had little to do with his decision.

His friends and family say that Kahneman’s choice was purely personal; he didn’t endorse assisted suicide for anyone else and never wished to be viewed as advocating it for others.

Some of Kahneman’s friends think what he did was consistent with his own research. “Right to the end, he was a lot smarter than most of us,” says Philip Tetlock, a psychologist at the University of Pennsylvania. “But I am no mind reader. My best guess is he felt he was falling apart, cognitively and physically. And he really wanted to enjoy life and expected life to become decreasingly enjoyable. I suspect he worked out a hedonic calculus of when the burdens of life would begin to outweigh the benefits—and he probably foresaw a very steep decline in his early 90s.”

Tetlock adds, “I have never seen a better-planned death than the one Danny designed.”…

…As I wrote in a column about Kahneman last year: “I once showed him a letter I’d gotten from a reader telling me—correctly but rudely—that I was wrong about something. ‘Do you have any idea how lucky you are to have thousands of people who can tell you you’re wrong?’ Danny said.”…

…Kahneman knew the psychological importance of happy endings. In repeated experiments, he had demonstrated what he called the peak-end rule: Whether we remember an experience as pleasurable or painful doesn’t depend on how long it felt good or bad, but rather on the peak and ending intensity of those emotions.

“It was a matter of some consternation to Danny’s friends and family that he seemed to be enjoying life so much at the end,” says a friend. “‘Why stop now?’ we begged him. And though I still wish he had given us more time, it is the case that in following this carefully thought-out plan, Danny was able to create a happy ending to a 90-year life, in keeping with his peak-end rule. He could not have achieved this if he had let nature take its course.”

Did turning 90 play a role in his decision? Kahneman and Tversky’s early research showed that when people are uncertain, they will estimate numbers by “anchoring,” or seizing on any figure that happens to be handy, regardless of how relevant it is to the decision.

Another of Kahneman’s principles was the importance of taking what he called the outside view: Instead of regarding each decision as a special case, you should instead consider it as a member of a class of similar situations. Gather data on comparable examples from that reference class, then consider why your particular case might have better or worse prospects.

One possible approach: Kahneman could have gathered data to determine whether people who live to the age of 95 or beyond tend to regret not dying at the age of 90—adjusting for the difficulty of getting reliable reports from patients with dementia and other debilitating conditions. Perhaps he did something along those lines; I don’t know…

…As Danny’s final email continued:

I discovered after making the decision that I am not afraid of not existing, and that I think of death as going to sleep and not waking up. The last period has truly not been hard, except for witnessing the pain I caused others. So if you were inclined to be sorry for me, don’t be.

As death approaches, should we make the best of whatever time we have left with those we love the most? Or should we spare them, and ourselves, from as much as possible of our inevitable decline? Is our death ours alone to own?

Danny taught me the importance of saying “I don’t know.” And I don’t know the answers to those questions. I do know the final words of his final email sound right, yet somehow feel wrong:

Thank you for helping make my life a good one.

5. Dead of Winter – Doomberg

After a somewhat colder-than-average winter and the cessation of gas flows through the very Sudzha pipeline that Russian special forces just snaked through for their surprise assault, European natural gas storage is at dangerously low levels for this time of year:…

…The last energy crisis began many months before Russia’s military crossed into Ukraine. Europe’s vulnerability—driven by a foolish decision to forgo topping off its reserves in the summer of 2021—almost certainly convinced Russian President Vladimir Putin that he held sufficient leverage to risk war in early 2022. Three years later, with the conflict seemingly entering its final stages, surely the continent isn’t repeating the mistakes of the recent past? Perhaps it is:

“As the first proper winter in Europe in three years is drawing to an end, the continent faces a race against time—and prices—to restock with natural gas for next winter…

Europe could need as many as 250 additional [liquefied natural gas] LNG cargoes to arrive in the summer to refill its inventories back up to 90% by November 1, as the current EU regulation stipulates, per Reuters calculations reported by columnist Ron Bousso… The LNG market appears to be tightening, with supply not rising fast enough in early 2025 to meet demand.”…

…Europe’s vulnerability is now measurably higher compared to three years ago. Russian natural gas no longer flows through the Nord Stream and Yamal pipelines, nor various connections through Ukraine, eliminating access to a total capacity of 18 billion cubic feet per day (bcf/d). Only the two pipelines entering Turkey via the Black Sea—TurkStream and Blue Stream—are still pumping gas. The balance of European demand will need to be met by expensive LNG imports, primarily from the US and Qatar.

Unfortunately for Europe, the LNG market has been facing challenges just as the continent appears poised to rely on it more than ever…

…Despite these many delays, some relief is in sight for 2025 with two major LNG expansions activating in the US. Cheniere’s Corpus Christi Stage 3 expansion produced its first cargo in February, adding 1.3 bcf/d of capacity. The first phase of Plaquemines LNG—built by the controversial firm Venture Global and itself a 1.3 bcf/d facility—is in the commissioning process, a milestone celebrated last week by Chris Wright, Trump’s new Secretary of Energy…

…The one event that could significantly disrupt energy markets and pose a serious challenge to Brussels would be a major terrorist attack on European infrastructure. For example, if either of the large pipelines passing through Turkey were taken offline, prices would likely spike sharply. The loss of a large LNG import terminal, such as those in Spain or France, would also create severe strain. When operating on the edge, even small disturbances can knock the system out of equilibrium.

While Europe is playing an extremely risky game, absent the edge cases, it is likely to muddle through the next year without a full-blown disaster.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Costco, Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 March 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 16 March 2025:

1. Want even tinier chips? Use a particle accelerator – The Economist

A more radical solution is to use a free-electron laser (FEL), where electrons travelling near the speed of light are manipulated to emit EUV radiation. The FEL process begins with a powerful electron gun that injects a beam of the particles into a miniature racetrack. The electrons then pass through a linear accelerator, which propels them to nearly the speed of light. Once accelerated, they enter a roughly 200-metre-long structure called an undulator, where a series of magnets generate a field whose polarity flips periodically. This wiggles the electrons, causing them to emit a beam of EUV photons with a specific wavelength.
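For readers curious about the underlying physics, the relationship below is the standard textbook expression for the fundamental on-axis wavelength of undulator radiation; it is not quoted in the article, but it is what ties the emitted EUV wavelength to the electron-gun and undulator settings described above:

```latex
% Fundamental on-axis wavelength of undulator radiation (standard FEL result,
% not taken from the article). \lambda_u is the undulator period, \gamma the
% electron Lorentz factor (set by the accelerator energy), and K the
% dimensionless undulator strength parameter (set by the magnet field).
\[
  \lambda \;\approx\; \frac{\lambda_u}{2\gamma^{2}}\left(1 + \frac{K^{2}}{2}\right)
\]
```

In other words, shortening the output wavelength is, in principle, a matter of raising the beam energy (larger γ) or adjusting the undulator period and field strength (λ_u and K), which is what makes such a source comparatively easy to retune.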

Nicholas Kelez, the boss of xLight, a Silicon Valley startup developing FEL-based lithography, described the technology as a more powerful and tuneable “new light bulb” that he believes can be swapped into existing optical lithography machines. xLight expects to deliver the first commercial system within four years…

…Generating light using an FEL has some advantages over using lasers. The first is power: a lithography machine with an FEL-based light source can be around six times more energy-efficient than a laser-plasma tool. Dispensing with molten-tin droplets also reduces the risk of contamination. Tuning such a machine for smaller wavelengths is also, at least theoretically, much easier: all that needs doing is tweaking the settings on the electron gun and the undulator. It would also be cheaper. A single FEL system can be repurposed to provide light for multiple lithography machines, allowing its operator to distribute the fixed costs across multiple chip-etching tools. Nakamura Norio from KEK estimates that the construction cost is around half that of a laser-based EUV tool and the running costs are around a fifteenth.

2. Is Manus a technological breakthrough or a marketing bubble? – Yuan Gan and Robert Wu

So, is Manus an empty marketing frenzy or a technological breakthrough comparable to DeepSeek? To answer this question, I reviewed every official example provided by Manus, the GAIA benchmark paper with its 467 questions, compared Manus’ performance with competitors on GAIA, and looked at the code of the “open-source versions of Manus.” Here are my findings:

  1. Is Manus an empty marketing frenzy or a technological breakthrough comparable to DeepSeek? Neither. Manus is neither a marketing gimmick nor a fundamental technological revolution; it represents a breakthrough at the product level. Unlike DeepSeek, which focuses on a fundamental breakthrough in foundational model capabilities, Manus has made significant achievements in the direction of AI agents—reaching SOTA (State of the Art) levels on the authoritative evaluation metric GAIA, significantly ahead of peer products.
  2. Can the various open-source alternatives emerging these past few days replace Manus? No. The current open-source alternatives have a clear gap compared to Manus. Actual testing and data metrics show that Manus’ success rate in executing complex tasks far exceeds that of various open-source versions, by several times. Moreover, Manus has been specifically optimized for specific application scenarios, a fine-tuning that simple open-source replication cannot achieve.
  3. Is Manus a mature universal agent? No, Manus has not yet become a truly universal agent. To achieve this goal, it still needs to overcome three “major mountains”: a fundamental improvement in foundational model capabilities, a rich and diverse ecosystem of partners, and a solid and scalable engineering infrastructure…

…Whether it’s from the test experiences of netizens over the past few days or from the scores Manus has obtained on the GAIA benchmark, we can see that Manus and other universal AI agents are not yet mature.

So, how far away is a universal agent from being mature and commercially available?

I believe that to be mature and usable, it still needs to overcome three major challenges: foundational model capabilities, partner ecosystems, and engineering infrastructure. Currently, universal agents still rely on foundational large language models for task decomposition and execution, especially in the execution phase. In this phase, large language models face significant challenges in utilizing web information and computer operations…

…In the actual experience of OpenAI Operator, a significant issue is the restricted interaction between agents and external services. For example, when Operator accesses Reddit, GitHub, or other websites to complete tasks, it is often identified as abnormal traffic and blocked.

Currently, most agents access network services anonymously or with a generic identity, lacking a clear identity marker, leading to:

  • Being identified and blocked by websites’ anti-crawling mechanisms, including search engines like Google.
  • Inability to access services that require account login, such as obtaining information from Twitter and Facebook.
  • Inability to access personalized content and services, such as letting the agent view one’s own email…

…Unlike traditional internet services, which can often abstract services into “calling a microservice” as an instantaneous stateless operation, agent services are almost all long-duration, multi-state conversational interactions. After the product capabilities reach maturity, how to efficiently provide agent services to millions, or even tens of millions, of users is a significant engineering challenge.
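To make that contrast concrete, here is a minimal illustrative sketch (our own, not code from Manus or the article; all names are hypothetical) of the difference between a stateless service call and the long-lived, multi-step session an agent requires:

```python
import uuid

# Stateless, microservice-style call: one request in, one response out,
# nothing retained between calls, so it is easy to scale horizontally.
def translate(text: str) -> str:
    return text.upper()  # placeholder for a real, near-instantaneous operation

# Agent-style session: long-running and stateful, accumulating a plan,
# intermediate results and context across many steps, so each active user
# ties up resources for the full duration of the task.
class AgentSession:
    def __init__(self, task: str):
        self.session_id = uuid.uuid4().hex
        self.task = task
        self.steps: list[str] = []            # actions taken so far
        self.artifacts: dict[str, str] = {}   # partial results keyed by action

    def step(self, action: str, result: str) -> None:
        self.steps.append(action)
        self.artifacts[action] = result

    def done(self) -> bool:
        return len(self.steps) >= 3  # stand-in for a real completion check

# The stateless call returns immediately; the session persists across steps.
print(translate("hello"))
session = AgentSession("research flight options and draft an itinerary")
session.step("search_flights", "found 12 options")
session.step("compare_prices", "3 candidates under budget")
session.step("draft_itinerary", "itinerary.md written")
print(session.session_id, session.done())
```

Serving millions of concurrent sessions of the second kind, rather than millions of calls of the first kind, is the engineering-infrastructure challenge the passage describes.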

3. ‘Project Osprey:’ How Nvidia Seeded CoreWeave’s Rise – Cory Weinberg and Anissa Gardizy

Then Nvidia made a curious move: It agreed to spend $1.3 billion over four years to rent its own chips from CoreWeave, an upstart cloud computing firm trying to take on the giants. The deal, which CoreWeave dubbed “Project Osprey,” made Nvidia its second-largest customer last year after Microsoft, documents show, a detail not disclosed in CoreWeave’s filing last week for an initial public offering.

The deal shows how crucial Nvidia’s support has been to CoreWeave. The chip giant invested $100 million in the startup in early 2023. It funneled hundreds of thousands of high-end graphics processing units to CoreWeave. And it agreed to rent back its chips from CoreWeave through August 2027. Revenue from the deal represented 15% of CoreWeave’s sales last year…

…Prospective investors have been grappling with how long they can count on CoreWeave to sustain such big customers. Contracts with Microsoft and Nvidia, which together made up more than three-quarters of the company’s sales last year, expire between 2027 and 2029.

CoreWeave, meanwhile, has fueled its expansion with $8 billion in debt and $15 billion of long-term leases for data centers and office buildings, making it essential to attract new customers to supplement its original deals…

…It’s unclear exactly why Nvidia wanted to rent its own chips, but the company had several reasons to be interested in CoreWeave. Nvidia rents servers from cloud providers—including from CoreWeave—for its internal research and development teams.

Nvidia also rents servers from cloud providers for its DGX cloud computing service, which re-rents the servers to Nvidia cloud computing customers. CoreWeave has told investors it supports DGX…

…CoreWeave, in turn, purchased $380 million in chips and other hardware from Nvidia in 2023, documents shared with investors last year show. That spending total likely grew significantly last year. The company wrote in the prospectus that it purchases all of its chips from Nvidia, without specifying the total…

…The money Nvidia paid CoreWeave last year—in excess of $25 million a month, according to sales projections—was far more than what some of the other customers CoreWeave displayed in its IPO prospectus were spending. For example, CoreWeave cited model fine-tuning startup Replicate and quant trading firm Jane Street, which expected to spend hundreds of thousands of dollars a month with CoreWeave last year, internal materials show. 

4. Twitter thread on Diamond Online’s interview of Tatsuro Kiyohara – Japan Stock Feed

Interviewer: The Japanese stock market has been experiencing sharp declines. What’s your outlook for the next year?

Kiyohara: First and foremost, let me be clear—I don’t claim to have an accurate read on the market. The only time my predictions have been right is when investors lose their composure and panic. Over the past five years, I’ve only “gotten it right” twice—once during the COVID-19 crash in March 2020 and again on August 5, 2024. Those were the only times I felt truly confident and decided to aggressively buy Japanese stocks. Given how often I’ve been wrong, asking me for a market outlook is, quite frankly, insane…

…Interviewer: I see… but despite that, many individual investors would still love to hear your insights. After all, you’re the legendary salaryman investor who built an ¥80 billion fortune. Could you at least offer some guidance?

Kiyohara: If I told you, “The Nikkei will hit 42,000 by year-end,” what meaning would that have? That kind of prediction is pointless. But since that’s not a very helpful answer, let’s take a different approach. If I were still actively managing money today, what kind of portfolio would I hold? Let’s imagine what my positioning would look like…

…Kiyohara: If I were managing ¥10 billion ($66 million) today, my portfolio would be structured as follows:

▶️80% Long (¥8 billion in buy positions)

– ¥5 billion in small-cap stocks

– ¥2 billion in large-cap or mid-cap stocks

– ¥1 billion in REITs (Real Estate Investment Trusts)

▶️20% Short (¥2 billion in sell positions)

– Short positions in four highly liquid large-cap stocks

– (At the time of this interview, the market is already declining, so these aren’t necessarily the best picks, but as an example, I’d consider names like Advantest, Fujikura, Sanrio, and Mitsubishi Heavy Industries.)…

…Kiyohara: If you buy during a crash and think, “What if the market drops even further?”—you’re doing it wrong. “But that’s easier said than done,” you might say. Fair enough. So here’s how I train my mindset during downturns: “If I don’t buy today, then when?”…

…Interviewer: For more advanced investors, is an 80% long / 20% short portfolio a bullish position?

Kiyohara: Not necessarily.

– My core holdings are in net-cash-rich small-cap stocks, which means they are largely immune to overall market movements

– Higher interest rates are actually a positive for these companies, since they have strong balance sheets.

– In my view, this portfolio is neutral, reflecting a stance of “There could be a market drop, but it’s not a certainty.”…
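As a small back-of-envelope illustration (ours, not Kiyohara’s), here is the gross and net market exposure implied by the hypothetical ¥10 billion portfolio sketched above:

```python
# Exposure arithmetic for the hypothetical portfolio described in the
# interview; the allocation figures are from the thread, the interpretation
# below is our own illustration.
total = 10.0  # ¥ billions under management

longs = {
    "small-cap stocks": 5.0,
    "large/mid-cap stocks": 2.0,
    "REITs": 1.0,
}
shorts = {
    "liquid large-cap shorts": 2.0,
}

long_exposure = sum(longs.values())      # 8.0 -> 80% long
short_exposure = sum(shorts.values())    # 2.0 -> 20% short
gross = long_exposure + short_exposure   # 10.0 -> 100% gross
net = long_exposure - short_exposure     # 6.0 -> 60% net long

print(f"Long {long_exposure / total:.0%}, Short {short_exposure / total:.0%}")
print(f"Gross {gross / total:.0%}, Net long {net / total:.0%}")
```

On paper the book is about 60% net long, but because the long side sits in net-cash-rich small caps that Kiyohara regards as largely insensitive to the broad market, he characterises the overall stance as roughly neutral rather than bullish.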

…Kiyohara: I’ve said before that I’m terrible at predicting markets, but if we zoom out, I’m bullish on Japan. For years, Japan’s stock market was trapped in a box. Foreigners would buy, the market would rise. They’d sell, and the market would drop. It was a simple cycle. It was like a lottery—you had to ‘buy a ticket’ (stocks) to have a chance, but ultimately, you were at the mercy of foreign investors’ flows…

…For the first time, Japanese companies truly belong to their shareholders. That’s a massive structural shift—a revolution. Yes, risks remain. But this governance transformation is so significant that it outweighs them. That’s why, even if a crash comes, I focus on making money from the rebound rather than betting on the decline. Put simply: If share buybacks and dividend hikes continue, stock prices will rise.

5. What it will really take to feed the world – Bill Gates

Many discussions about feeding the world focus on increasing agricultural productivity through improved seeds, healthier soils, better farming practices, and more productive livestock (all priorities for the Gates Foundation). Vaclav, however, insists we already produce more than enough food to feed the world. The real challenge, he says, is what happens after the food is grown…

…Some of the world’s biggest food producers have the highest rates of undernourishment. Globally, we produce around 3,000 calories per person per day—more than enough to feed everyone—but a staggering one-third of all food is wasted. (In some rich countries, that figure climbs to 45 percent.) Distribution systems fail, economic policies backfire, and food doesn’t always go where it’s needed.

I’ve seen this firsthand through the Gates Foundation’s work in sub-Saharan Africa, where food insecurity is driven by low agricultural productivity and weak infrastructure. Yields in the region remain far lower than in Asia or Latin America, in part because farmers rely on rain-fed agriculture rather than irrigation and have limited access to fertilizers, quality seeds, and digital farming tools. But even when food is grown, getting it to market is another challenge. Poor roads drive up transport costs, inadequate storage leads to food going bad, and weak trade networks make nutritious food unaffordable for many families…

…While severe hunger has declined globally, micronutrient deficiencies remain stubbornly common, even in wealthy countries. One of the most effective solutions has been around for nearly a century: food fortification. In the U.S., flour has been fortified with iron and vitamin B since the 1940s. This simple step has helped prevent conditions like anemia and neural tube defects and improve public health at scale—close to vaccines in terms of lives improved per dollar spent…

…CRISPR gene editing, for instance, could help develop crops that are more resilient to drought, disease, and pests—critical for farmers facing the pressures of climate change. Vaclav warns that we can’t count on technological miracles alone, and I agree. But I also believe that breakthroughs like CRISPR could be game-changing, just as the Green Revolution once was…

…And some of these solutions aren’t about producing more food at all—they’re about wasting less of what we already have. Better storage and packaging, smarter supply chains, and flexible pricing models could significantly reduce spoilage and excess inventory. In a conversation we had about the book, Vaclav pointed out that Costco (which might seem like the pinnacle of U.S. consumption) stocks fewer than 4,000 items, compared to 40,000-plus in a typical North American supermarket.

That kind of efficiency—focusing on fewer, high-turnover products—reduces waste, lowers costs, and ultimately eases pressure on global food supply, helping make food more affordable where it is needed most.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Costco and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 March 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 09 March 2025:

1. The Troubled Energy Transition – Daniel Yergin, Peter Orszag, and Atul Arya

The fundamental objective of the energy transition is to replace most of today’s energy system with a completely different system. Yet throughout history, no energy source, including traditional biomass of wood and waste, has declined globally in absolute terms over an extended period.

The first energy transition began in 1709, when a metalworker named Abraham Darby figured out that coal provided “a more effective means of iron production” than wood. And the ensuing “transition” took place over at least a century. Although the nineteenth century has been called “the century of coal,” the energy scholar Vaclav Smil has observed that coal did not overtake traditional biomass energy sources (such as wood and crop residues) until the beginning of the twentieth century. Oil, discovered in western Pennsylvania in 1859, would overtake coal as the world’s top energy source in the 1960s. Yet that did not mean that the absolute amount of coal used globally was falling—in 2024, it was three times what it had been in the 1960s.

The same pattern is playing out today. About 30 percent of the world’s population still depends on traditional biomass for cooking, and demand for hydrocarbons has yet to peak or even plateau. The portion of total energy usage represented by hydrocarbons has changed little since 1990, even with the massive growth in renewables. (In the same period, overall energy use has increased by 70 percent.) And the global population is expected to grow by approximately two billion in the coming decades, with much of that growth taking place in the global South. In Africa—a demographically young continent whose population has been projected to increase from 18 percent of the global population today to 25 percent by 2050—almost 600 million people live without electricity, and roughly one billion lack access to clean cooking fuel. Traditional biomass energy still fuels almost half the continent’s total energy consumption…

…Technological, policy, and geopolitical uncertainty makes it challenging to estimate the costs associated with achieving net zero by 2050. But one thing is certain: the costs will be substantial.

The most recent estimate comes from the Independent High-Level Expert Group on Climate Finance, whose numbers provided a framework for the COP29 meeting—the UN’s annual forum on climate change—in Azerbaijan. It projected that the investment requirement globally for climate action will be $6.3 to $6.7 trillion per year by 2030, rising to as much as $8 trillion by 2035. It further estimated that the global South countries will account for almost 45 percent of the average incremental investment needs from now to 2030, and they have already been falling behind in meeting their financing needs, especially in sub-Saharan Africa.

Based on such estimates, the magnitude of energy-transition costs would average about five percent a year of global GDP between now and 2050. If global South countries are largely exempted from these financial burdens, global North countries would have to spend roughly ten percent of annual GDP—for the United States, over three times the share of GDP represented by defense spending and roughly equal to what the U.S. government spends on Medicare, Medicaid, and Social Security combined…
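To see roughly how those shares can be arrived at, here is a back-of-envelope sketch: the annual cost figures are the article’s, while the world-GDP level and the advanced economies’ share of it are our own rough assumptions for illustration only (the article’s “about five percent” is an average over 2025 to 2050, so a single-year snapshot will not match it exactly):

```python
# Back-of-envelope check of the GDP-share claims. Annual cost estimates
# ($6.3-6.7T/yr by 2030, up to ~$8T by 2035) are from the article; the
# world-GDP level and the global North's share are assumptions for
# illustration, not figures from the authors.
world_gdp = 110.0    # assumed world GDP, $ trillions (approximate)
north_share = 0.55   # assumed advanced-economy share of world GDP

for annual_cost in (6.5, 8.0):  # $ trillions per year
    global_burden = annual_cost / world_gdp
    north_burden = annual_cost / (world_gdp * north_share)
    print(f"${annual_cost:.1f}T/yr -> {global_burden:.1%} of world GDP, "
          f"{north_burden:.1%} of global-North GDP if borne there alone")
```

Under these assumptions the burden works out to roughly six to seven percent of world GDP, and eleven to thirteen percent of global-North GDP if the North bears it alone, the same order of magnitude as the authors’ five and ten percent figures.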

…In other words, achieving net zero will also require an unprecedented reorganization of capital flows from the global North to the global South, which will necessitate substantial investments in renewable-energy infrastructure at a time when, according to the International Monetary Fund, 56 percent of low-income countries are “at high levels of debt distress.” While innovative financing mechanisms (such as debt-for-climate and debt-for-nature swaps) will help, low sovereign-debt ratings throughout the developing world present a major obstacle to outside investment and raise capital costs. As a result, the bulk of the financial burden will be borne by advanced economies. But even there, debt has risen considerably—average public debt today is over 100 percent of GDP, a level not seen since World War II and a major constraint on governments’ ability to finance the transition through public spending…

…At the moment, almost half the population of the developing world—three billion people—annually uses less electricity per capita than the average American refrigerator does. As energy use grows, “carbonizing” will precede “decarbonizing.” Natural gas is a readily available option, and it’s a better alternative to coal, as well as to traditional biomass fuels that produce harmful indoor air pollution. Although global oil demand seems slated to plateau in the early 2030s, natural gas consumption is expected to continue to increase well into the 2040s. Production of liquefied natural gas is on track to increase by 65 percent by 2040, meeting energy security needs in Europe, replacing coal in Asia, and driving economic growth in the global South…

…The clash of priorities between the North and the South is especially striking when it comes to carbon tariffs. Many global North governments have, as part of their efforts to reduce emissions, put up barriers preventing other countries from taking the same carbon-based economic development path that they took to achieve prosperity. The European Union has launched the first phase of its Carbon Border Adjustment Mechanism. The CBAM is intended to support European climate objectives globally by initially imposing import tariffs on products such as steel, cement, aluminum, and fertilizer based on the carbon emissions embedded in their production and then expanding to more imports. Critics in the global North have argued that such measures would be ineffective because of the enormous complexity of supply chains and the associated difficulty of tracking embedded carbon in imports. Critics in the global South see the CBAM as a barrier to their economic growth. Ajay Seth, India’s economic affairs secretary, has argued that CBAM would force higher costs on the Indian economy: “With income levels which are one-twentieth of the income levels in Europe, can we afford a higher price? No, we can’t.”…

…The International Energy Agency has projected that global demand for the minerals needed for “clean energy technologies” will quadruple by 2040. At the top of the list are such critical minerals as lithium, cobalt, nickel, and graphite, as well as copper. Between 2017 and 2023 alone, demand for lithium increased by 266 percent; demand for cobalt rose by 83 percent; and demand for nickel jumped by 46 percent. Between 2023 and 2035, S&P expects the demand for lithium to increase by another 286 percent; cobalt, by 96 percent; and nickel, by 91 percent. Electric vehicles require two and a half to three times more copper than an internal combustion engine car; battery storage, offshore and onshore wind systems, solar panels, and data centers all require significant amounts of copper. S&P’s analysis of future copper demand found that global copper supply will have to double by the middle of the 2030s to meet current policy ambitions for net-zero emissions by 2050. This is extremely unlikely, considering that, based on S&P data that tracked 127 mines that have come online globally since 2002, it takes more than 20 years to develop a major new mine; in the United States, it takes an average of 29 years…
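
The timing mismatch in the copper numbers can be made explicit with a quick Python sketch using the lead times quoted above (the start year is an assumption for illustration):

start_year = 2025            # assumed year a new mine project begins
global_lead_time = 20        # years, per the S&P figure quoted above
us_lead_time = 29            # years, the U.S. average quoted above

print(f"Global: a mine started now comes online around {start_year + global_lead_time}")  # ~2045
print(f"US: a mine started now comes online around {start_year + us_lead_time}")          # ~2054
# Both dates fall well after the mid-2030s, when copper supply would need to have
# doubled to meet current net-zero policy ambitions.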

…China already has a dominant position in mining and a predominant position in the processing of minerals into metals essential for renewable energy infrastructure. It accounts for over 60 percent of the world’s rare-earth mining production (compared with nine percent for the United States) and more than 90 percent of the processing and refining of rare earths. It produces 77 percent of the world’s graphite, processes 98 percent of it, and processes over 70 percent of the world’s lithium and cobalt and almost half the copper.

Beijing aims to extend this dominance to what it calls the “global new energy industrial chain,” with its commanding position in batteries, solar panels, and electric vehicles, as well as in deploying massive amounts of capital toward energy infrastructure in the developing world. With China’s huge scale and low costs, Beijing describes this effort as an extensive and integrated approach to developing and dominating the renewable energy sector. From 2000 to 2022, it issued $225 billion in loans for energy projects in 65 strategically significant nations, with about 75 percent of that directed toward coal, oil, and gas development. Between 2016 and 2022, China provided more energy project financing around the world than any major Western-backed multilateral development bank, including the World Bank…

…Electrification trends suggest that power demand in the United States will double between now and 2050. Electricity consumption is already outpacing recent demand forecasts. PJM, which manages electricity transmission from Illinois to New Jersey, almost doubled its growth projection between 2022 and 2023 and is warning of the danger of shortfalls in electricity before the end of the decade…

…Today’s energy transition is meant to be fundamentally distinct from every previous energy transition: it is meant to be transformative rather than additive. But so far it is “addition,” not replacement. The scale and variety of the challenges associated with the transition mean that it will not proceed as many expect or in a linear way: it will be multidimensional, proceeding at different rates with a different mix of technologies and different priorities in different regions. That reflects the complexities of the energy system at the foundation of today’s global economy. It also makes clear that the process will unfold over a long period and that continuing investment in conventional energy will be a necessary part of the energy transition.

2. An Interview with Benedict Evans About AI Unknowns – Ben Thompson and Benedict Evans

Well, you wrote about Deep Research a couple of weeks ago and you were pretty disappointed in the output. They used a smartphone report as the demo and it’s interesting, because the Deep Research case that convinced me was actually interview prep, and the key thing about it was, it was a lot of qualitative information that was helpful, and I wasn’t looking for quantitative information. Does that ring true of your experience?

BE: It does, yes. There’s a lot of different things one can say about this, and most of what I said was, it’s kind of interesting and puzzling rather than just, “It’s crap”. It’s very easy to say, “This is amazing and it changes the world, and it’s the fifth industrial revolution”, and it’s very easy to say, “This is all a bunch of crap and it’s the biggest waste of time and money since NFTs, please subscribe to my Substack”, and leave it at that. But what I struggle with is, it’s actually much more interesting and more complicated.

There’s a simple statement which is, “These things are good at things that don’t have wrong answers”. The quote someone used a couple of years ago was, “They tend to be good at things that computers are bad at and bad at things that computers are good at”, they’re very bad at precise specific information retrieval, which is what computers begin with. But on the other hand, you can ask them a question like, “What would be a good thing to take on a picnic?”, and that’s a question that a computer just couldn’t answer, that’s not a SQL query, and an LLM can answer that.
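
A rough sketch, in Python with SQLite, of the contrast Evans is drawing (the table and numbers are hypothetical): precise retrieval is a query with exactly one right answer, while the picnic question has no such query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (country TEXT, units INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [("US", 120), ("IN", 150)])

# Deterministic: the answer is exactly what is stored, nothing more or less.
print(conn.execute("SELECT SUM(units) FROM sales").fetchone()[0])  # 270

# Probabilistic: "What would be a good thing to take on a picnic?" has no SQL
# equivalent; an LLM returns a plausible answer, not a guaranteed-correct one.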

I think a lot of the product challenge and use case challenge around these things is trying to work out how you translate that into something that you’re actually trying to do. You gave the example of interview prep, which is — actually I don’t do interviews, but that would be something where, yeah, I can see that would be useful. The funny thing here is that OpenAI, I wasn’t testing it myself, I went and looked at OpenAI’s own product page, this is their test where they’re telling me this is what it’s for and this is what it’s good at, and they proceed to show it doing precise information retrieval, which of course, it can’t do. So just for the people who haven’t looked into OpenAI’s product page, it suggests some use cases, and one of them is, “Make a table of a bunch of countries with smartphone adoption by operating system”, and also stuff like, “Who wants to learn languages”.

The wrong report for the wrong guy.

BE: Yeah. The problem is, as many people may know, I used to be a telecoms analyst, so I looked at this and thought, “Okay, well let me have a look”. Problem one is, it used Statista, which is basically an SEO spam house that aggregates other people’s data. Saying “Source: Statista” at best is kind of saying “Source: a Google Search”, they’re not actually telling me what the source is and secondly, StatCounter, which tells you traffic. And I looked at this and I thought — I won’t monologue too long, I promise — I looked at this and I thought-

No, this is great. I completely agree with where you’re going.

BE: -there’s two questions here. The first is, is this model accurately working out what sources it should use? And then, is it getting the correct data from those sources?

And the question that OpenAI have posed is smartphone adoption. Well, what does that mean exactly? Are you asking me about unit sales? Are you asking about the install base? Are you asking me about usage? Are you asking me about outdoor usage? Because a use case that they propose actually was something like a translation app. Adoption isn’t any of those, it might be any of those, depending on context.

ChatGPT has come up with, number one, StatCounter, which is a metric of usage, not the install base and then, it’s come up with Statista, which is actually going to Kantar, which I think is doing install base, but I’m not sure, and those two come up with two different numbers, and the number that’s in ChatGPT, the Deep Research report, is a third number.

The thing that I thought about this was, you’ve asked it a probabilistic question, not a deterministic question. You’ve asked it a “What should I take on a picnic?”-type question, you haven’t asked it a precise database-y question where a computer would know the answer. You’ve asked it to work out what you want, and then you’ve asked it to work out where to get it from, and then you’ve asked it to do the database retrieval and actually report what’s on those pages. It’s kind of done okay at the first two, or it’s done what I would expect an intern to do on the first two, and as I wrote, if I had an intern, I would’ve said, “This is why you wouldn’t use either of those two”.

Yeah.

BE: But an intern wouldn’t know, and then it’s copied the number down wrong, which is where you smack the intern on the back of the head and tell them to go back and do it again. There’s a lot of different ways you can talk about this. Are you using these things for precise information retrieval? Are you using them for things that don’t really have wrong answers? Are you using them for qual or for quant? Are you using them for brainstorming? How do you work out what your problem is and whether it would be good at that, and how it would map against that?

But at the end of the day, I went and asked it to do a thing and it told me it had done the thing, and it’s wrong. In fact, it’s worse than that. OpenAI asked it to do the thing and it did the thing, and it was wrong.

And then put it up as a demo!

BE: There’s this really profoundly important thing here, which I had this feeling looking at the new Claude model today as well, or yesterday, is people talk about these models getting better a lot, but if you’ve given me a table with 20 entries and some of them are wrong, what do you mean when you say the model’s better? Do you mean that all the entries are right? Or do you mean that the entries are more likely to be right? Those are very different things. The idea that we would get to these models to the point that they always would be right and you would know that in a way that you would know a database would be right, we have no idea if that’s possible. That’s a profound scientific question in the field. What do you do with something that can do amazing things but you don’t know if it’s right?…
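
One way to make Evans’s point concrete: even a high per-entry accuracy compounds badly across a table. A small Python sketch, assuming each entry is independently correct with some probability (the probabilities are illustrative, not measured):

entries = 20
for per_entry_accuracy in (0.95, 0.99, 1.00):
    whole_table_correct = per_entry_accuracy ** entries
    print(f"{per_entry_accuracy:.0%} per entry -> "
          f"{whole_table_correct:.0%} chance the whole 20-row table is right")
# 0.95**20 is about 36%, 0.99**20 is about 82%, 1.00**20 is 100%:
# "more likely to be right" and "always right" are very different claims.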

You made this point, this tool’s the most helpful if you are already an expert because it saves you time and you can identify the errors. But if you’re not an expert, it’s incredibly dangerous. I wrote about this a couple of weeks ago, I asked for this report on an industry I happen to know a lot about, and Deep Research completely missed a big player because it was a private company, there was no listings about it even though anyone in the field would have known about it. You now have an unknown known, there’s something that you think you know, but actually, you don’t know. You’ve been convinced to be more ignorant.

BE: Yeah, listening to you talk, I’m reminded, actually, of one of these very old fallacies from engineers, which is to say, “The users have to learn how it works”, which is the thing you see with open source over and over again or, “The users will have to learn”. Of course you can say that, but the users won’t learn and it’s not the user’s job to learn how it works. The more that you force people to have to understand how it works, the more you limit the potential adoption of it.

Zooming back a little bit, something that I have in draft at the moment is that, if we think about where we’ve come in the last whatever it is, two and a bit years since GPT 3.5 launched, at the beginning of 2023, say, you could think there was a cluster of questions that would determine what was going to happen. How much will these scale? How big will the models get? How expensive will it be? What will happen to the error rates? What will reasoning be? Are there barriers to entry? Are there winner-takes-all effects? Is there enough data? You can make a list of a dozen questions, they all kind of link together.

Out of that there’s one possible output which is there’s one computer that runs the whole world, and the other extreme is it ends up being like databases or indeed like machine learning in which if you were to say today, “How many databases are there?”, that’s just a meaningless question. What are you talking about?

Since then, none of those questions have really been answered, except for the extent that it seems clear right now that these things are going to be commodities, although still quite expensive commodities. But anyone who’s got a billion dollars can have one, you don’t need a hundred billion dollars, and there’s not a lot of winner-takes-all effect the way there was with smartphones. Anyone who’s got $500 million or $100 million dollars, or something, pick a number, can have an outlet, can have a frontier model. But all the other questions, we don’t know. We don’t know what will happen to the error rate. We don’t know how big the models will get or how long the scaling works.

One of the things that kind of came out of that was, there’s a path that says, you can just go to ChatGPT and say, “I want to move to Taiwan, how do I do that?”, “I need to file my taxes in New York, London, and California, do it for me”. And the model can go and read the right websites and ask you for a photograph of your bank statement and just make you the PDF and do it for you…

…BE: And suddenly, there’s this explosion of complexity in the data centers, and we have to know about it. You have to know chips, and there are all these papers. But I did this deck, I do this annual presentation, and I ended my presentation, the section that talked about scaling, with a quote from the Godfather that says, “If history teaches you anything, it’s that you can kill anybody”, and I crossed out, “Kill anybody” and said, “Commodity computing tends to get cheap”. You’ve got all of this complexity in the creation of the models and the data center and everything else, and yet, I don’t know, I look at Grok and I think, okay, in less than two years, you managed to produce a state-of-the-art model. Is that really, really good or really, really bad?

That’s bearish for model creation.

BE: That is not as positive as you think. Yes, they’re a great team, and well done for building a 100,000 GPU cluster, but what this tells us is it’s a commodity…

…BE: There’s no difference, that’s what that means. But yeah, I think you can extend this to the doomers, where it was clear that the people who were thinking this stuff is going to take over the world in the next three months just had no conception of how the world worked outside of their shared group house in the Berkeley Hills. The puzzle and the analogy I always used to give, looking at, going back to talking about use cases, is imagine the first people seeing VisiCalc, the first spreadsheet in the late ’70s.

Yep.

BE: So if you saw this and you were an accountant, it blew your mind because the line is like you change the interest rate here and all the other numbers on the spreadsheet change.

Yep. [John] Gruber and I were just talking about this the other day, and you could watch it change!

BE: You say that now and people now are like, “Yes…?”, but back then you did spreadsheets on paper with a pencil and so if you’re an accountant, you have to have this. Certainly, you can look up the pricing of the Apple II that you needed to run VisiCalc, the full setup with a floppy drive and a printer and a monitor was like 15 grand adjusted for inflation. But if you were a lawyer and you see it, you think, “Well, that’s great, my accountant should see this, but that’s not what I do all day”.

Yep.

BE: Now, show me a word processor that can do word counts and footnotes and line numbers. That, I will pay for, that solves a problem that I have. And the challenge of the text box and the blinking cursor is either you really know you’ve got a problem that it solves, which is coding and marketing, or you’re the kind of person that’s instinctively looking for tools to solve things in their company, which is the bottom-up IT adoption and it’s no coding and everything else, but that’s a very small portion of the population.

And then it’s everybody else who didn’t see that they had that problem until an enterprise SaaS company came and showed them that they were spending three hours a week on it and sold them something for 10 grand a seat to fix it. Otherwise, you’ve got this prompt. What do I do with it?

I completely agree with you and this is where one of the analogies I’ve been thinking about is going back to the first — arguably the greatest direct job displacement in IT was actually the first mainframes, where it’s just like, “Okay, we don’t need an entire backroom of bookkeepers, we don’t need an entire backroom of ERP trackers, we can just do it on a computer”, and it was like a one-to-one replacement. What’s interesting about this is right now, AI is a bottoms-up phenomenon because you need so much agency to go out and find a way to use this, and because the model doesn’t learn, you have to learn how to use the model. It’s like people wanting to bring PCs to work in the late ’70s, and it’s like, “What are you trying to do here?”.

BE: And if you look at what these people were doing, all the books and magazines at the time were, “You should learn to code. Or at a minimum, you need to buy a database software program”. So it wasn’t you buy Quicken, it’s you buy a database plus software program and you make your own Quicken by yourself.

That’s such a great point, it’s the same thing it was with code. That was the thing at the beginning and you can do something yourself, it’s an excellent point. But it does make me think that if you’re a top-down decision maker, you can, number one, decide that the cost-benefit is worth actually removing an entire category or entire department. And number two, you can tolerate the error rate because you’ll do a cost-benefit analysis that says, “Okay, I’m at X percent error rate, this is going to cost me Y amount of money. How does that balance versus this collection of humans who are also going to make X amount of errors and cost me Y amount of money?” And you go in and you just shift people out and that is actually going to be a more meaningful change or where it’s going to start. It’s going to be less about getting people to fundamentally change how they work and more about, “You know what? We can do it good enough with AI for this whole category of work. Sorry, you guys are laid off”.

BE: Yeah, I think it depends. And if you go back and think about how the last waves of automated enterprise software worked, whether it’s SaaS or on-prem or whatever, there’s two or three things here. So one of them is you don’t just replace a whole department with one piece of software.

No, not now. I’m talking about back in the ’70s.

BE: Yeah. But the other part of that is that didn’t result in fewer white-collar workers or fewer accountants. Excel didn’t result in fewer accountants. I mean, my joke was always that, young people won’t believe this, but before Excel, investment bankers worked really long hours. Now they can get their work finished at lunchtime on Fridays and go home to The Hamptons. And of course, that’s not what happened.

Well, why is it that that isn’t what happened? I think, and it comes back to what I was saying earlier, the base case here is that you work in invoice processing and now you have a new piece of software that’s better at resolving failed invoice payments, and that is worth X-hundred thousand dollars a year to your company, and so they come out and they buy this piece of software. Or it’s a feature that gets added to their existing software or it plugs into SAP or Salesforce or Oracle or whatever it is, and it’s that piece of software and today, the typical big company has 400 or 500 SaaS applications, maybe more. And the HR team, account department has 45 different applications, there’s the German tax planning thing, and there’s the thing that manages stock options, and there’s the thing that manages training for new recruits, and the thing that makes sure that everybody’s taking their compliance exam, and the other thing that makes sure everyone’s done their compliance exam, and the training thing, and you just keep adding these all up. And they all find value and they all find budget and they all move costs from one place to another and the big company has 400 or 500 other new things.

The base case is that generative AI will be another 400 or 500 of these, and it will replace half of them, and it will double the value of the other half. 250 of them will get killed and 250 of them will get a bunch of new features and there’ll be another 250 of them on top, and now you’ll have 750 or 1,000 new applications in this company and there will be bits of LLM scattered all the way through them just as there’s bits of machine learning scattered all the way through them, just as there’s databases scattered all the way through them. It will just be kind of a building block in making software. Does that mean that there’s less net employment or does that just mean that a bunch of jobs go away and get replaced by new jobs?…

…BE: If you’re in the accounting industry, this changes the whole nature of your industry, so that’s one observation.

The second observation would be — yes, you don’t know what those changes will be, you don’t know how it is, what all the new things it will be, you start by doing what you already know you need to do. And then over time, you realize there’s new things you can do with this, which is your point about feeds, and you could say the same thing about the emergence of Instagram and online dating, all the stuff that happened that wasn’t obvious at the time.

However, I think there’s a completely opposite point, which is equally interesting, about how new this is or how different this is, which is that if you’re looking at the Internet in 1995, you kind of knew how fast computers were and how fast they’d be in like five years, and you knew how many people had broadband, and you had a pretty good sense — you could plot a line on a chart of how many people are going to have broadband and how fast, on a three to five-year view and you kind of knew how much PCs cost, and you knew what annual PC sales were, and you could make a guess that, okay, this is going to mean that PC sales quadruple and everyone will go out and buy a PC, which is more or less what happened, and you knew how many middle-class households there were in America, so you kind of knew what the PC market could, in principle, do. The same thing with smartphones, you knew how fast 3G was, you knew how fast 4G was, you knew how fast the chips were.

What I was getting at is, with LLMs, we don’t know that. We don’t know what that roadmap is for the fundamental technical capabilities of the thing, which is different to anything from the web to flight or cars. We weren’t looking at a smartphone and saying, “Well, this is where we are today, but maybe in two or three years, it will be able to unroll and fill the whole wall, or maybe it’ll have a year’s battery life”. You kind of knew what the really basic core physics constraints were, and we don’t know that for this stuff.

Well, especially with this accuracy point. Daniel Gross made this point a few weeks ago too, I think it’s really profound, that there’s just a really stark fundamental difference between 100% accuracy and 99% accuracy.

BE: Well, this is a problem with saying “better” models. What do you mean “better”? Do you mean it was 82 and now it’s 83? Or do you mean it was 80 and now it’s 100 and it will always be 100? That’s a completely different thing.

3. Reality has a surprising amount of detail – John Salvatier

It’s tempting to think ‘So what?’ and dismiss these details as incidental or specific to stair carpentry. And they are specific to stair carpentry; that’s what makes them details. But the existence of a surprising number of meaningful details is not specific to stairs. Surprising detail is a near universal property of getting up close and personal with reality.

You can see this everywhere if you look. For example, you’ve probably had the experience of doing something for the first time, maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were. Then you got more practice and then you told yourself ‘man, it was so simple all along, I don’t know why I had so much trouble’. We run into a fundamental property of the universe and mistake it for a personal failing.

If you’re a programmer, you might think that the fiddliness of programming is a special feature of programming, but really it’s that everything is fiddly, but you only notice the fiddliness when you’re new, and in programming you do new things more often.

You might think the fiddly detailiness of things is limited to human centric domains, and that physics itself is simple and elegant. That’s true in some sense – the physical laws themselves tend to be quite simple – but the manifestation of those laws is often complex and counterintuitive…

…The more difficult your mission, the more details there will be that are critical to understand for success.

You might hope that these surprising details are irrelevant to your mission, but not so. Some of them will end up being key. Wood’s tendency to warp means it’s more accurate to trace a cut than to calculate its length and angle. The possibility of superheating liquids means it’s important to use a packed bed when boiling liquids in industrial processes lest your process be highly inefficient and unpredictable. The massive difference in weight between a rocket full of fuel and an empty one means that a reusable rocket can’t hover if it can’t throttle down to a very small fraction of its original thrust, which in turn means it must plan its trajectory very precisely to achieve 0 velocity at exactly the moment it reaches the ground.
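
The rocket example can be made concrete with a short Python sketch. The numbers below are rough, Falcon-9-style assumptions, not figures from the article:

g = 9.81                       # m/s^2
dry_mass = 25_000              # kg, assumed nearly-empty booster at landing
min_throttle_thrust = 340_000  # N, assumed one engine at its lowest setting

thrust_to_weight = min_throttle_thrust / (dry_mass * g)
print(f"Thrust-to-weight at minimum throttle: {thrust_to_weight:.2f}")
# About 1.4: even the lowest thrust setting accelerates the empty rocket upward,
# so it cannot hover and must plan to reach zero velocity exactly at touchdown.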

You might also hope that the important details will be obvious when you run into them, but not so. Such details aren’t automatically visible, even when you’re directly running up against them. Things can just seem messy and noisy instead. ‘Spirit’ thermometers, made using brandy and other liquors, were in common use in the early days of thermometry. They were even considered as a potential standard fluid for thermometers. It wasn’t until the careful work of Swiss physicist Jean-André De Luc in the 18th century that physicists realized that alcohol thermometers are highly nonlinear and highly variable depending on concentration, which is in turn hard to measure.

You’ve probably also had experiences where you were trying to do something and growing increasingly frustrated because it wasn’t working, and then finally, after some time you realize that your solution method can’t possibly work.

Another way to see that noticing the right details is hard, is that different people end up noticing different details…

…Before you’ve noticed important details they are, of course, basically invisible. It’s hard to put your attention on them because you don’t even know what you’re looking for. But after you see them they quickly become so integrated into your intuitive models of the world that they become essentially transparent. Do you remember the insights that were crucial in learning to ride a bike or drive? How about the details and insights you have that led you to be good at the things you’re good at?

This means it’s really easy to get stuck. Stuck in your current way of seeing and thinking about things. Frames are made out of the details that seem important to you. The important details you haven’t noticed are invisible to you, and the details you have noticed seem completely obvious and you see right through them. This all makes it difficult to imagine how you could be missing something important…

…If you’re trying to do impossible things, this effect should chill you to your bones. It means you could be intellectually stuck right at this very moment, with the evidence right in front of your face and you just can’t see it.

This problem is not easy to fix, but it’s not impossible either. I’ve mostly fixed it for myself. The direction for improvement is clear: seek detail you would not normally notice about the world. When you go for a walk, notice the unexpected detail in a flower or what the seams in the road imply about how the road was built. When you talk to someone who is smart but just seems so wrong, figure out what details seem important to them and why. In your work, notice how that meeting actually wouldn’t have accomplished much if Sarah hadn’t pointed out that one thing. As you learn, notice which details actually change how you think.

If you wish to not get stuck, seek to perceive what you have not yet perceived.

4. America’s Growing Trade Deficit Is Selling the Nation Out From Under Us. Here’s a Way to Fix the Problem – And We Need to Do It Now – Warren Buffett

Take a fanciful trip with me to two isolated, side-by-side islands of equal size, Squanderville and Thriftville. Land is the only capital asset on these islands, and their communities are primitive, needing only food and producing only food. Working eight hours a day, in fact, each inhabitant can produce enough food to sustain himself or herself. And for a long time that’s how things go along…

…Eventually, though, the industrious citizens of Thriftville decide to do some serious saving and investing, and they start to work 16 hours a day. In this mode, they continue to live off the food they produce in the eight hours of work but begin exporting an equal amount to their one and only trading outlet, Squanderville.

The citizens of Squanderville are ecstatic about this turn of events, since they can now live their lives free from toil but eat as well as ever. Oh, yes, there’s a quid pro quo – but to the Squanders, it seems harmless: All that the Thrifts want in exchange for their food is Squanderbonds (which are denominated, naturally, in Squanderbucks).

Over time Thriftville accumulates an enormous amount of these bonds, which at their core represent claim checks on the future output of Squanderville. A few pundits in Squanderville smell trouble coming. They foresee that for the Squanders both to eat and pay off – or simply service – the debt they’re piling up will eventually require them to work more than eight hours a day…

…Meanwhile, the citizens of Thriftville begin to get nervous. Just how good, they ask, are the IOUs of a shiftless island? So the Thrifts change strategy: Though they continue to hold some bonds, they sell most of them to Squanderville residents for Squanderbucks and use the proceeds to buy Squanderville land. And eventually the Thrifts own all of Squanderville.

At that point, the Squanders are forced to deal with an ugly equation: They must now not only return to working eight hours a day in order to eat—they have nothing left to trade—but must also work additional hours to service their debt and pay Thriftville rent on the land so imprudently sold. In effect, Squanderville has been colonized by purchase rather than conquest.

It can be argued, of course, that the present value of the future production that Squanderville must forever ship to Thriftville only equates to the production Thriftville initially gave up and that therefore both have received a fair deal. But since one generation of Squanders gets the free ride and future generations pay in perpetuity for it, there are—in economist talk—some pretty dramatic “intergenerational inequities.”…

…Sooner or later the Squanderville government, facing ever greater payments to service debt, would decide to embrace highly inflationary policies—that is, issue more Squanderbucks to dilute the value of each. After all, the government would reason, those irritating Squanderbonds are simply claims on specific numbers of Squanderbucks, not on bucks of specific value. In short, making Squanderbucks less valuable would ease the island’s fiscal pain.

That prospect is why I, were I a resident of Thriftville, would opt for direct ownership of Squanderville land rather than bonds of the island’s government. Most governments find it much harder morally to seize foreign-owned property than they do to dilute the purchasing power of claim checks foreigners hold. Theft by stealth is preferred to theft by force…

…The time to halt this trading of assets for consumables is now, and I have a plan to suggest for getting it done. My remedy may sound gimmicky, and in truth it is a tariff called by another name. But this is a tariff that retains most free-market virtues, neither protecting specific industries nor punishing specific countries nor encouraging trade wars. This plan would increase our exports and might well lead to increased overall world trade. And it would balance our books without there being a significant decline in the value of the dollar, which I believe is otherwise almost certain to occur.

We would achieve this balance by issuing what I will call Import Certificates (ICs) to all U.S. exporters in an amount equal to the dollar value of their exports. Each exporter would, in turn, sell the ICs to parties—either exporters abroad or importers here—wanting to get goods into the U.S. To import $1 million of goods, for example, an importer would need ICs that were the byproduct of $1 million of exports. The inevitable result: trade balance.

Because our exports total about $80 billion a month, ICs would be issued in huge, equivalent quantities—that is, 80 billion certificates a month—and would surely trade in an exceptionally liquid market. Competition would then determine who among those parties wanting to sell to us would buy the certificates and how much they would pay. (I visualize that the certificates would be issued with a short life, possibly of six months, so that speculators would be discouraged from accumulating them.)

For illustrative purposes, let’s postulate that each IC would sell for 10 cents—that is, 10 cents per dollar of exports behind them. Other things being equal, this amount would mean a U.S. producer could realize 10% more by selling his goods in the export market than by selling them domestically, with the extra 10% coming from his sales of ICs.

In my opinion, many exporters would view this as a reduction in cost, one that would let them cut the prices of their products in international markets. Commodity-type products would particularly encourage this kind of behavior. If aluminum, for example, was selling for 66 cents per pound domestically and ICs were worth 10%, domestic aluminum producers could sell for about 60 cents per pound (plus transportation costs) in foreign markets and still earn normal margins. In this scenario, the output of the U.S. would become significantly more competitive and exports would expand. Along the way, the number of jobs would grow…

…To see what would happen to imports, let’s look at a car now entering the U.S. at a cost to the importer of $20,000. Under the new plan and the assumption that ICs sell for 10%, the importer’s cost would rise to $22,000. If demand for the car was exceptionally strong, the importer might manage to pass all of this on to the American consumer. In the usual case, however, competitive forces would take hold, requiring the foreign manufacturer to absorb some, if not all, of the $2,000 IC cost.
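
The arithmetic in Buffett’s two examples can be sketched in a few lines of Python, using the article’s illustrative 10-cents-per-dollar IC price:

ic_rate = 0.10  # ICs assumed to sell for 10 cents per dollar of exports

# Exporter side: aluminum at 66 cents/lb domestically. Selling abroad also yields
# ICs worth 10% of the export value, so the producer can cut its foreign price
# and still earn roughly normal margins.
domestic_price = 0.66
foreign_price = domestic_price / (1 + ic_rate)
print(f"Break-even export price: about {foreign_price:.2f} $/lb")  # ~0.60

# Importer side: a $20,000 car must be backed by $20,000 of ICs.
car_cost = 20_000
print(f"Importer's cost with ICs: ${car_cost * (1 + ic_rate):,.0f}")  # $22,000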

There is no free lunch in the IC plan: It would have certain serious negative consequences for U.S. citizens. Prices of most imported products would increase, and so would the prices of certain competitive products manufactured domestically. The cost of the ICs, either in whole or in part, would therefore typically act as a tax on consumers.

That is a serious drawback. But there would be drawbacks also to the dollar continuing to lose value or to our increasing tariffs on specific products or instituting quotas on them—courses of action that in my opinion offer a smaller chance of success. Above all, the pain of higher prices on goods imported today dims beside the pain we will eventually suffer if we drift along and trade away ever larger portions of our country’s net worth.

I believe that ICs would produce, rather promptly, a U.S. trade equilibrium well above present export levels but below present import levels. The certificates would moderately aid all our industries in world competition, even as the free market determined which of them ultimately met the test of “comparative advantage.”

This plan would not be copied by nations that are net exporters, because their ICs would be valueless. Would major exporting countries retaliate in other ways? Would this start another Smoot-Hawley tariff war? Hardly. At the time of Smoot-Hawley we ran an unreasonable trade surplus that we wished to maintain. We now run a damaging deficit that the whole world knows we must correct…

…The likely outcome of an IC plan is that the exporting nations—after some initial posturing—will turn their ingenuity to encouraging imports from us. Take the position of China, which today sells us about $140 billion of goods and services annually while purchasing only $25 billion. Were ICs to exist, one course for China would be simply to fill the gap by buying 115 billion certificates annually. But it could alternatively reduce its need for ICs by cutting its exports to the U.S. or by increasing its purchases from us. This last choice would probably be the most palatable for China, and we should wish it to be so.

If our exports were to increase and the supply of ICs were therefore to be enlarged, their market price would be driven down. Indeed, if our exports expanded sufficiently, ICs would be rendered valueless and the entire plan made moot. Presented with the power to make this happen, important exporting countries might quickly eliminate the mechanisms they now use to inhibit exports from us.

5. The hidden cost of AI: Trading long-term resilience for short-term efficiency – Eric Markowitz

AI is the latest in a long lineage of efficiency-maximizing tools. It promises to make research instantaneous, strip away uncertainty, and optimize everything from hiring to investment analysis. But for all the gains in speed and precision, we rarely stop to ask: What are we losing in return? Because there is no such thing as a free lunch…

…AI makes the world feel more scientific than ever. It can generate business strategies, write persuasive emails, and surface patterns invisible to human analysis. But the most important decisions — the ones that lead to breakthroughs, revolutions, and paradigm shifts — are rarely the result of pure data analysis.

Some of the best ideas in history looked irrational at first. They required deep research, yes — but more importantly, they required taste. (And perhaps a bit of luck.)

Taste is an underrated concept in a world obsessed with efficiency. It’s the ability to recognize something valuable before the numbers prove it. The ability to see beyond spreadsheets and sentiment analysis and understand how an idea actually fits into the world. If everyone has access to the same AI-generated insights, the only thing that remains scarce is independent thinking. And that is precisely where the edge lies.

None of this is an argument against AI. It’s an argument for knowing what not to outsource to our robot overlords. AI is a tool. A powerful one. But it is not a substitute for intuition, nor a replacement for deep thinking. The institutions and ideas that endure the longest are those that understand what to hold onto even as the world around them changes. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 March 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 02 March 2025:

1. Satya Nadella – Microsoft’s AGI Plan & Quantum Breakthrough – Dwarkesh Patel and Satya Nadella

Dwarkesh Patel

Where is the value going to be created in AI?

Satya Nadella

That’s a great one. So I think there are two places where I can say with some confidence. One is the hyperscalers that do well, because the fundamental thing is if you sort of go back to even how Sam and others describe it, if intelligence is log of compute, whoever can do lots of compute is a big winner.

The other interesting thing is, if you look at underneath even any AI workload, like take ChatGPT, it’s not like everybody’s excited about what’s happening on the GPU side, it’s great. In fact, I think of my fleet even as a ratio of the AI accelerator to storage, to compute. And at scale, you’ve got to grow it…
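
The “intelligence is log of compute” heuristic Nadella cites can be illustrated with a tiny Python sketch (the compute figures are arbitrary placeholders): each tenfold increase in compute adds only a fixed increment of capability, which is why sheer scale matters to hyperscalers.

import math

for compute_flops in (1e21, 1e22, 1e23, 1e24):  # illustrative values only
    capability_index = math.log10(compute_flops)
    print(f"{compute_flops:.0e} FLOPs -> capability index {capability_index:.0f}")
# Every 10x of compute adds the same +1 to the index under this heuristic.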

…Satya Nadella

So in fact it’s manna from heaven to have these AI workloads because guess what? They’re more hungry for more compute, not just for training, but we now know, for test time. When you think of an AI agent, it turns out the AI agent is going to exponentially increase compute usage because you’re not even bound by just one human invoking a program. It’s one human invoking programs that invoke lots more programs. That’s going to create massive, massive demand and scale for compute infrastructure. So our hyperscale business, Azure business, and other hyperscalers, I think that’s a big thing.

Then after that, it becomes a little fuzzy. You could say, hey, there is a winner-take-all model. I just don’t see it. This, by the way, is the other thing I’ve learned: being very good at understanding what are winner-take-all markets and what are not winner-take-all markets is, in some sense, everything. I remember even in the early days when I was getting into Azure, Amazon had a very significant lead and people would come to me, and investors would come to me, and say, “Oh, it’s game over. You’ll never make it. Amazon, it’s winner-take-all.”

Having competed against Oracle and IBM in client-server, I knew that the buyers will not tolerate winner-take-all. Structurally, hyperscale will never be a winner-take-all because buyers are smart.

Consumer markets sometimes can be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you got to be one of the multiple suppliers.

That, I think, is what will happen even on the model side. There will be open-source. There will be a governor. Just like on Windows, one of the big lessons learned for me was, if you have a closed-source operating system, there will be a complement to it, which will be open source.

And so to some degree that’s a real check on what happens. I think in models there is one dimension of, maybe there will be a few closed source, but there will definitely be an open source alternative, and the open-source alternative will actually make sure that the closed-source, winner-take-all is mitigated.

That’s my feeling on the model side. And by the way, let’s not discount if this thing is really as powerful as people make it out to be, the state is not going to sit around and wait for private companies to go around and… all over the world. So, I don’t see it as a winner-take-all.

Then above that, I think it’s going to be the same old stuff, which is in consumer, in some categories, there may be some winner-take-all network effect. After all, ChatGPT is a great example.

It’s an at-scale consumer property that has already got real escape velocity. I go to the App Store, and I see it’s always there in the top five, and I say “wow, that’s pretty unbelievable”.

So they were able to use that early advantage and parlay that into an app advantage. In consumer, that could happen. In the enterprise again, I think there will be, by category, different winners. That’s sort of at least how I analyze it…

…Satya Nadella

The way I come at it, Dwarkesh, it’s a great question because at some level, if you’re going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth.

Before I get to what Microsoft’s revenue will look like, there’s only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?

So in 2025, as we sit here, I’m not an economist, at least I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let’s have that Industrial Revolution type of growth.

That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That’s the real marker. It can’t just be supply-side.

In fact that’s the thing, a lot of people are writing about it, and I’m glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.

But that’s to me the moment. Us self-claiming some AGI milestone, that’s just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.

Dwarkesh Patel

Okay, so if the world grew at 10%, the world economy is $100 trillion or something, if the world grew at 10%, that’s like an extra $10 trillion in value produced every single year. If that is the case, you as a hyperscaler… It seems like $80 billion is a lot of money. Shouldn’t you be doing like $800 billion?

If you really think in a couple of years, we could be really growing the world economy at this rate, and the key bottleneck would be: do you have the compute necessary to deploy these AIs to do all this work?

Satya Nadella

That is correct. But by the way, the classic supply side is, “Hey, let me build it and they’ll come.” That’s an argument, and after all we’ve done that, we’ve taken enough risk to go do it.

But at some point, the supply and demand have to map. That’s why I’m tracking both sides of it. You can go off the rails completely when you are hyping yourself with the supply-side, versus really understanding how to translate that into real value to customers.

That’s why I look at my inference revenue. That’s one of the reasons why even the disclosure on the inference revenue… It’s interesting that not many people are talking about their real revenue, but to me, that is important as a governor for how you think about it.

You’re not going to say they have to symmetrically meet at any given point in time, but you need to have existence proof that you are able to parlay yesterday’s, let’s call it capital, into today’s demand, so that then you can again invest, maybe exponentially even, knowing that you’re not going to be completely rate mismatched.

Dwarkesh Patel

I wonder if there’s a contradiction in these two different viewpoints, because one of the things you’ve done wonderfully is make these early bets. You invested in OpenAI in 2019, even before there was Copilot and any applications.

If you look at the Industrial Revolution, these 6%, 10% build-outs of railways and whatever things, many of those were not like, “We’ve got revenue from the tickets, and now we’re going to…”

Satya Nadella

There was a lot of money lost.

Dwarkesh Patel

That’s true. So, if you really think there’s some potential here to 10x or 5x the growth rate of the world, and then you’re like, “Well, what is the revenue from GPT-4?”

If you really think that’s the possibility from the next level up, shouldn’t you just, “Let’s go crazy, let’s do the hundreds of billions of dollars of compute?” I mean, there’s some chance, right?

Satya Nadella

Here’s the interesting thing, right? That’s why even that balanced approach to the fleet, at least, is very important to me. It’s not about building compute. It’s about building compute that can actually help me not only train the next big model but also serve the next big model. Until you do those two things, you’re not going to be able to really be in a position to take advantage of even your investment.

So, that’s kind of where it’s not a race to just building a model, it’s a race to creating a commodity that is getting used in the world to drive… You have to have a complete thought, not just one thing that you’re thinking about.

And by the way, one of the things is that there will be overbuild. To your point about what happened in the dotcom era, the memo has gone out that, hey, you know, you need more energy, and you need more compute. Thank God for it. So, everybody’s going to race.

In fact, it’s not just companies deploying, countries are going to deploy capital, and there will be clearly… I’m so excited to be a leaser, because, by the way, I build a lot, I lease a lot. I am thrilled that I’m going to be leasing a lot of capacity in ’27, ’28 because I look at the builds, and I’m saying, “This is fantastic.” The only thing that’s going to happen with all the compute builds is the prices are going to come down…

…Satya Nadella

This has been another 30-year journey for us. It’s unbelievable. I’m the third CEO of Microsoft who’s been excited about quantum.

The fundamental breakthrough here, or the vision that we’ve always had is, you need a physics breakthrough in order to build a utility-scale quantum computer that works. We took the path of saying, the one way for having a less noisy or more reliable qubit is to bet on a physical property that by definition is more reliable and that’s what led us to the Majorana zero modes, which were theorized in the 1930s. The question was, can we actually physically fabricate these things? Can we actually build them?

So the big breakthrough effectively, and I know you talked to Chetan, was that we now finally have existence proof and a physics breakthrough of Majorana zero modes in a new phase of matter effectively. This is why we like the analogy of thinking of this as the transistor moment of quantum computing, where we effectively have a new phase, which is the topological phase, which means we can even now reliably hide the quantum information, measure it, and we can fabricate it. And so now that we have it, we feel like with that core foundational fabrication technique out of the way, we can start building a Majorana chip.

That Majorana One which I think is going to basically be the first chip that will be capable of a million qubits, physical. And then on that, thousands of logical qubits, error-corrected. And then it’s game on. You suddenly have the ability to build a real utility-scale quantum computer, and that to me is now so much more feasible. Without something like this, you will still be able to achieve milestones, but you’ll never be able to build a utility-scale computer. That’s why we’re excited about it…
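
A rough Python sketch of the physical-to-logical arithmetic implied here. The error-correction overhead is an assumption for illustration; the real ratio depends on the code used and the physical error rate:

physical_qubits = 1_000_000
assumed_overhead = 500          # assumed physical qubits per error-corrected logical qubit
logical_qubits = physical_qubits // assumed_overhead
print(f"~{logical_qubits:,} logical qubits")  # "thousands", as Nadella describes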

…Satya Nadella

It’s a great question. One thing that I’ve been excited about is, even in today’s world… we had this quantum program, and we added some APIs to it. The breakthrough we had maybe two years ago was to think of this HPC stack, and AI stack, and quantum together.

In fact, if you think about it, AI is like an emulator of the simulator. Quantum is like a simulator of nature. What is quantum going to do? By the way, quantum is not going to replace classical. Quantum is great at what quantum can do, and classical will also…

Quantum is going to be fantastic for anything that is not data-heavy but is exploration-heavy in terms of the state space. It should be data-light but exponential states that you want to explore. Simulation is a great one: chemical physics, what have you, biology.

One of the things that we’ve started doing is really using AI as the emulation engine. But you can then train. So the way I think of it is, if you have AI plus quantum, maybe you’ll use quantum to generate synthetic data that then gets used by AI to train better models that know how to model something like chemistry or physics or what have you. These two things will get used together.

So even today, that’s effectively what we’re doing with the combination of HPC and AI. I hope to replace some of the HPC pieces with quantum computers.

2. Microsoft’s Majorana 1 chip carves new path for quantum computing – Catherine Bolgar

Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.

It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.

In the same way that the invention of semiconductors made today’s smartphones, computers and electronics possible, topoconductors and the new type of chip they enable offer a path to developing quantum systems that can scale to a million qubits and are capable of tackling the most complex industrial and societal problems, Microsoft said…

…This new architecture used to develop the Majorana 1 processor offers a clear path to fit a million qubits on a single chip that can fit in the palm of one’s hand, Microsoft said. This is a needed threshold for quantum computers to deliver transformative, real-world solutions – such as breaking down microplastics into harmless byproducts or inventing self-healing materials for construction, manufacturing or healthcare. All the world’s current computers operating together can’t do what a one-million-qubit quantum computer will be able to do…

…The topoconductor, or topological superconductor, is a special category of material that can create an entirely new state of matter – not a solid, liquid or gas but a topological state. This is harnessed to produce a more stable qubit that is fast, small and can be digitally controlled, without the tradeoffs required by current alternatives…

…This breakthrough required developing an entirely new materials stack made of indium arsenide and aluminum, much of which Microsoft designed and fabricated atom by atom…

…Commercially important applications will also require trillions of operations on a million qubits, which would be prohibitive with current approaches that rely on fine-tuned analog control of each qubit. The Microsoft team’s new measurement approach enables qubits to be controlled digitally, redefining and vastly simplifying how quantum computing works.

This progress validates Microsoft’s choice years ago to pursue a topological qubit design – a high risk, high reward scientific and engineering challenge that is now paying off. Today, the company has placed eight topological qubits on a chip designed to scale to one million…

…But reaching the next horizon of quantum computing will require a quantum architecture that can provide a million qubits or more and reach trillions of fast and reliable operations. Today’s announcement puts that horizon within years, not decades, Microsoft said.

Because they can use quantum mechanics to mathematically map how nature behaves with incredible precision – from chemical reactions to molecular interactions and enzyme energies – million-qubit machines should be able to solve certain types of problems in chemistry, materials science and other industries that are impossible for today’s classical computers to accurately calculate…

…Most of all, quantum computing could allow engineers, scientists, companies and others to simply design things right the first time – which would be transformative for everything from healthcare to product development. The power of quantum computing, combined with AI tools, would allow someone to describe what kind of new material or molecule they want to create in plain language and get an answer that works straightaway – no guesswork or years of trial and error.

“Any company that makes anything could just design it perfectly the first time out. It would just give you the answer,” Troyer said. “The quantum computer teaches the AI the language of nature so the AI can just tell you the recipe for what you want to make.”…

…Qubits can be created in different ways, each with advantages and disadvantages. Nearly 20 years ago, Microsoft decided to pursue a unique approach: developing topological qubits, which it believed would offer more stable qubits requiring less error correction, thereby unlocking speed, size and controllability advantages. The approach posed a steep learning curve, requiring uncharted scientific and engineering breakthroughs, but it was also the most promising path to creating scalable and controllable qubits capable of doing commercially valuable work.

The disadvantage is – or was – that until recently the exotic particles Microsoft sought to use, called Majoranas, had never been seen or made. They don’t exist in nature and can only be coaxed into existence with magnetic fields and superconductors. The difficulty of developing the right materials to create the exotic particles and their associated topological state of matter is why most quantum efforts have focused on other kinds of qubits…

…Majoranas hide quantum information, making it more robust, but also harder to measure. The Microsoft team’s new measurement approach is so precise it can detect the difference between one billion and one billion and one electrons in a superconducting wire – which tells the computer what state the qubit is in and forms the basis for quantum computation.

The measurements can be turned on and off with voltage pulses, like flicking a light switch, rather than finetuning dials for each individual qubit. This simpler measurement approach that enables digital control simplifies the quantum computing process and the physical requirements to build a scalable machine…

…Majorana 1, Microsoft’s quantum chip that contains both qubits as well as surrounding control electronics, can be held in the palm of one’s hand and fits neatly into a quantum computer that can be easily deployed inside Azure datacenters.

3. The most underreported and important story in AI right now is that pure scaling has failed to produce AGI – Gary Marcus

On the order of half a trillion dollars has been invested on a premise that I have long argued was unlikely to succeed: the idea (sometimes informally referred to as the scaling hypothesis) that we could get to “artificial general intelligence” simply by adding more and more data and GPUs…

…Virtually all of the generative AI industry has been built on this presumption; projects like the OpenAI/Oracle/Softbank joint venture Stargate, allegedly another half trillion dollars, are also largely based on this premise…

…But I always knew it couldn’t last forever. When I said as much, the field was absolutely furious at me…

…The first signs that the pure scaling of data and compute might in fact be hitting a wall came from industry leaks from people like famed investor Marc Andreessen, who said in early November 2024 that current models are “sort of hitting the same ceiling on capabilities.” Then, in December, Microsoft CEO Satya Nadella echoed many of my 2022 themes, saying at a Microsoft Ignite event, “in the last multiple weeks there is a lot of debate on have we hit the wall with scaling laws. Is it gonna continue? Again, the thing to remember at the end of the day these are not physical laws. There are just empirical observations that hold true just like Moore’s Law did for a long period of time and so therefore it’s actually good to have some skepticism some debate.”…

…Finally, and perhaps most significantly: Elon Musk said over that weekend that Grok 3, with 15x the compute of Grok 2, and immense energy (and construction and chip) bills, would be “the smartest AI on the earth.” Yet the world quickly saw that Grok 3 is still afflicted by the kind of unreliability that has hobbled earlier models. The famous ML expert Andrej Karpathy reported that Grok 3 occasionally stumbles on basics like math and spelling. In my own experiments, I quickly found a wide array of errors, such as hallucinations (e.g., it told me with certainty that there was a significant magnitude-5.6 earthquake on Feb. 10 in Billings, Montana, when no such thing had happened) and extremely poor visual comprehension (e.g., it could not properly label the basic parts of a bicycle).

Nadella, in his December speech, pointed to test-time compute, in which systems are allowed extra time for “reasoning” as the next big thing, and to some degree he is right; it is the next big thing, a new thing to try to scale, since merely scaling compute and data is no longer bringing the massive returns it once did. At least for a while, adding more and more computing time will help, at least on some kinds of problems…

…although DeepSeek lowered the costs of training these new systems, they are still expensive to operate, which is why companies like OpenAI are limiting their usage. When customers begin to realize that even with the greater expenses, errors still seep in, they are likely to be disappointed. One irate customer cc:d me yesterday on a several page demand for a refund, writing in part that “GPT-4o Pro [which includes access to test time compute] has consistently underperformed,” and enumerated problems such as “Severely degraded memory” and “Hallucinations and Unreliable Answers.”…

…the illustrious Stanford Natural Language Processing group reached a similar conclusion, reading between the lines of OpenAI’s recent announcement in the same way I did. In their words, Altman’s recent OpenAI roadmap was “the final admission that the 2023 strategy of OpenAI, Anthropic, etc. (‘simply scaling up model size, data, compute, and dollars spent will get us to AGI/ASI’) is no longer working!”

In short, half a trillion dollars have been invested in a misguided premise; a great deal more funding seems to be headed in the same direction for now.

4. Is Microsoft’s Copilot push the biggest bait-and-switch in AI? – Tien Tzuo

Over a year ago, Microsoft launched Copilot Pro, an AI assistant embedded in its Office suite, with a $20/month price. The uptake apparently was abysmal. By October, they admitted that the way they were selling Copilot was not working out.

So what did they do? They forced it on Microsoft users by including Copilot in Office, and hiking up subscription fees. Microsoft first made this change in Asia, then fully pulled the plug across the globe last month, impacting 84 million subscribers. To add insult to injury, Microsoft renamed the product to Microsoft 365 Copilot. You didn’t want to pay for Copilot? Well, now you are…

…Not only is Microsoft’s Copilot rollout deceptive, it’s also embarrassingly disastrous.

This goes to show that even tech giants, including a major backer of AI pioneer OpenAI, can suffer the hype and competitive pressure surrounding AI. And it’s a stark reminder that what businesses should really be focused on instead is value — communicating it clearly and delivering it tangibly to customers…

…Well, there’s a good contrast to Microsoft’s approach — from Adobe.

Adobe took a different approach with its AI rollout last fall, resisting the temptation to immediately monetize its new video generator, instead using it to boost adoption and engagement. By positioning AI as a value-add rather than a paid extra, Adobe was playing the long game, building a loyal user base that would be ripe for future upselling once they experienced AI’s benefits for themselves.

5. Broken Markets!? – The Brooklyn Investor

So, I keep hearing that markets are broken, or that the market is as expensive as ever. I know I keep saying this and I am sounding like a broken record, but I am not so sure…

…But if you look at individual stocks, markets are clearly differentiating between individual stocks. Look at Nvidia vs. Intel. If nobody was really evaluating them and the market was ignoring fundamentals, you would think both stocks would be performing similarly. But they are clearly not. People are clearly differentiating between winners and losers. It’s a separate question whether they are over-discounting their views. That’s a different discussion, and contrary to the view that passive investing is killing fundamental analysis.

Another example: JPM, the better bank, is trading at 2.4x book, versus C, which is a crappier one, selling at 0.8x book. You can’t complain that the market is broken just because you don’t agree with it. On the other hand, Buffett in the 50s loved the fact that institutional investors of the time completely ignored company analysis / valuation…

…Look at all the rich folks at the Berkshire Hathaway annual meeting. Look at BRK stock since 1970. How often did it look ‘overpriced’? What about the market? What if people sold out when they thought BRK was overpriced? Or the market? Would they be as rich as they are now? Probably not. Maybe there are some smart folks that got in and out of BRK over the years, but I would bet that the richest of them are the ones that just held it and did nothing.

I keep telling people this, but if you look at all the richest people in the world, a lot of them are people who held a single asset, and held it for decades. Look at Bill Gates. What if he had been hip to value investing and knew more about stocks and values? He may have sold out of MSFT when it looked really overvalued. What about Buffett? What about Bezos?

A lot of the rich used to be real estate moguls, and I thought a lot of them were wealthy because real estate was not very liquid, so they had no choice but to hold on even in bad times. Stocks have what you may call the “curse of liquidity”. It’s so easy to say, holy sh*t, something bad is going to happen, and *click*, you can get out of the market. Back in the 90s, we used to fear a 1-800 crash; people would call their brokers’ automated trade execution lines, 1-800-Sell-Everything, and go “Get me out of the market!!! Sell everything NOW!!!”, and the market would not be able to open the next morning. Some of us truly feared that, and hedge funds talked about that sort of thing all the time. But you can’t do that with your house.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 23 February 2025:

1. Weekend Thoughts: Demand Shocks and How Utilities Went From Sleepy to AI Darlings – Andrew Walker

That history has completely changed with AI over the past 18 months. Suddenly, the U.S. is experiencing real electricity demand growth for the first time in decades, and the rush for AI players to secure an enormous amount of electricity for their big AI datacenters has left the U.S. short electricity and scrambling to reopen old plants. The combination has created a bonanza for utilities; the whole sector screamed higher in 2024, and many of the utilities proved to be better AI plays than MSFT or even NVDA in 2024!…

…I mention it because I think that’s a really interesting setup to pattern match / look for in the future. Find a staid, boring industry that most investors won’t look at that trades for a reasonable multiple (as utilities did pre-AI boom), and buy them just before a demand shock. You could argue that if you do / time it right, your downside is protected (it’s not like the stocks are going to go down when a demand shock doesn’t come; they’re not pricing one in!), and if you’re right you can make multiples of your money.

It worked for utilities in 2024, and it worked for all sorts of COVID beneficiaries in 2020/2021 (cleaning products, home gym equipment, etc.).

Now, there is one interesting wrinkle to this thesis: industries with long lead times benefit the most from a demand shock. Consider power: if you have a demand shock on the power side, then you eventually need to build more power plants to handle that shock. Building power plants takes a long time; I’d guess it takes 3-5 years from start to finish to build a baseload natural gas plant, and probably 20 years to build a nuke (if you can even get one built!). So a demand shock in power can create super normal pricing for a very long time. Most other demand shocks can be responded to much faster. A simple example: a demand shock in oil can be met relatively quickly; it only takes a few months to start spinning up new Permian wells!

2. How I avoided China’s biggest bond default – Willie Keng

In 2016, I told myself:

“Stay away from this Chinese company at ALL cost.”

It was a Thursday. I was in a room filled with high-level executives dressed in black suits – the company’s chief financial officer, finance managers, bankers, analysts and fund managers were there…

…Back then, even Chinese property developers from blue-chips to junk-rated companies all wanted to raise debt. China’s skyline was dominated by grey, skeletal skyscrapers wrapped in scaffolding. Tower cranes with long, mechanical arms swung on these buildings — there was the noise of rapid urban growth.

And China’s property bond market was hot like an iron.

It was common to have 3-4 such company meetings in a single day. These were eye-opening — and very exciting times, as developers rushed to raise as much “cheap debt” as possible…

…In 2016, I saw the early warning signs at China’s biggest property developer, Evergrande…

…During that management meeting held on Thursday, I concluded its leverage was too high…

…In 2021, Evergrande defaulted.

I’ve dug out some short-cut questions from my old notebooks:

1. How do you make money? (where are your sources of revenues from?)…
…11. How much debt are you refinancing over the next few years?
12. Who are your main bankers?
13. How do you hedge your interest rates? At what cost?
14. How much in committed credit facilities do you have with banks?
15. Why are you borrowing X debt, and what are you using it for?…
…23. How are you planning to refinance your short-term loans and who are the lenders? At what interest rate? Is it possible to get “secured” and “unsecured” funding?

3. The Magnificent Seven, MKL – The Brooklyn Investor

The question is, basically, what’s up with the Mag 7? How crazy is it? Is it Nifty-fifty all over again?…

The Mag 7 is trading at 41x 2024 earnings (which is close to ttm earnings), 33.5x 2025 expected earnings, and 28x 2026 expected earnings. I know, expected earnings is sort of rubbish. Nobody gets that right. But we gotta start somewhere, right?

By the way, if you exclude NVDA and TSLA, the above would be 32.6x, 29.6x and 25.8x P/Es, respectively. In this case, this is valid to do, because you can actually create a portfolio of high ROE, decent growth stocks at that valuation.
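
To make the arithmetic concrete, here is a minimal sketch of how a basket-level P/E like the ones quoted above is typically computed: total market capitalisation divided by total earnings, with excluded names dropped from both the numerator and the denominator. The tickers and figures below are placeholders for illustration only, not the actual inputs behind the 41x or 32.6x numbers.

```python
# Minimal sketch of a basket-level P/E calculation. The figures below are
# hypothetical placeholders, not the actual Mag 7 market caps or earnings.

def aggregate_pe(companies, exclude=()):
    """Total market cap divided by total earnings, skipping tickers in `exclude`."""
    included = [c for c in companies if c["ticker"] not in exclude]
    total_cap = sum(c["market_cap"] for c in included)
    total_earnings = sum(c["earnings"] for c in included)
    return total_cap / total_earnings

# Hypothetical inputs, in billions of dollars.
basket = [
    {"ticker": "AAPL", "market_cap": 3500, "earnings": 100},
    {"ticker": "MSFT", "market_cap": 3100, "earnings": 90},
    {"ticker": "NVDA", "market_cap": 3000, "earnings": 60},
    {"ticker": "GOOGL", "market_cap": 2300, "earnings": 95},
    {"ticker": "AMZN", "market_cap": 2200, "earnings": 55},
    {"ticker": "META", "market_cap": 1500, "earnings": 55},
    {"ticker": "TSLA", "market_cap": 1100, "earnings": 15},
]

print(round(aggregate_pe(basket), 1))                            # whole basket
print(round(aggregate_pe(basket, exclude={"NVDA", "TSLA"}), 1))  # ex-NVDA and TSLA
```

Dropping members whose own multiples sit above the basket average (as NVDA and TSLA do in the excerpt) mechanically lowers the aggregate P/E, which is why the ex-NVDA/TSLA figures come out cheaper.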

And then look at the ROE of each of these companies. And then look at the historic growth rates, 5 and 10 year of these companies…

… Let’s say the Mag 7 was a modern-day, techful version of a conglomerate like BRK. Its subsidiaries have grown tremendously in the past 5 and 10 years. Earnings will collectively grow 23% in 2025 and 19% in 2026 (willful suspension of disbelief may be key here), and look at the high ROE of each division (OK, I was too lazy to calculate a weighted average).

And this conglomerate is trading for 34x earnings! Or less than 30x if you don’t want NVDA and TSLA. Think about that for a second. How many ideas with those metrics can you find?…

… It’s easy to call AMZN a retailer, for example. YouTube is a big part of Google, and the rest of Google is advertising. So is Facebook. They compete with linear television, radio and other forms of entertainment in the past, and they make money from advertising, just like old media (including magazines too…). So we can call it media and advertising, not even “new” media. Just media. Tesla is an automaker. AAPL is more like the old Sony; consumer electronics. Basically every single consumer electronic product ever invented rolled into one product. They do media too; music, streaming etc. Gaming too. Only NVDA and MSFT sort of feel like the conventional ‘tech’.

My point was going to be, the Mag 7 domination may or may not be a problem, but it is quite diversified as a virtual conglomerate.

4. The Great EBITDA Illusion – Stephen Clapham

KPMG examined 1,800 transactions between 2013 and 2018 and found that both the number and the value of adjustments to EBITDA increased: the number rose from 5.8 to 7.9 per transaction, and the value increased…

…Pro-forma adjustments have risen by 10% and were in over 60% of deals. These include cost-related adjustments, to reflect the future value of cost reductions, and revenue run-rate adjustments, including planned price increases, or the expected impact of a new product line. In my experience, cost savings tend to have a higher deliverability as they are more within management’s control; but it’s a rare business which can increase price without affecting volumes, while capacity increases are often tricky to model…

…S&P’s Leveraged Finance group have also looked at the topic of EBITDA adjustments, but through a different lens.

“Our six-year study on EBITDA addbacks appears to show a positive correlation between the magnitude of addbacks at deal inception and the severity of management projection misses.”

They highlight that addbacks represent a median 30% of management adjusted EBITDA at deal inception. They consider management projections to be aggressive and U.S. speculative-grade corporate issuers generally “present earnings, debt, and leverage projections in their marketing materials at deal inception that they cannot realize”…

…This is of real significance, especially to lenders…

…Forecasts made in M&A deals turn out badly, with leverage nearly twice the projection in year 1 and worse by the end of year 2. Most of the miss is down to over-estimating adjusted EBITDA. The median miss in year one was 34%, rising to 35% in year two…

…Leverage forecasts made in leveraged buyout transactions are much worse with actual leverage of 8.2x vs a 3.9x forecast…
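
The arithmetic behind why addbacks matter is worth spelling out. The sketch below is illustrative only: it takes the median ~30% addback share and the 3.9x marketed leverage cited in the excerpts and shows how the same debt load looks once addbacks are stripped out, and what the reported 8.2x actual leverage implies about realised EBITDA. These are back-of-the-envelope numbers, not figures taken from the S&P dataset itself.

```python
# Illustrative back-of-the-envelope arithmetic, not data from the S&P study:
# how addbacks flatter a marketed leverage ratio (leverage = debt / EBITDA).

adjusted_ebitda = 100.0   # management-adjusted EBITDA at deal inception (arbitrary base)
addback_share = 0.30      # median addback share of adjusted EBITDA cited above
marketed_leverage = 3.9   # leverage projected on the adjusted figure

debt = marketed_leverage * adjusted_ebitda                  # 390
ebitda_ex_addbacks = adjusted_ebitda * (1 - addback_share)  # 70

print(round(debt / ebitda_ex_addbacks, 1))  # ~5.6x on addback-stripped EBITDA
print(round(debt / 8.2, 1))                 # ~47.6: EBITDA implied by 8.2x actual leverage
```

In other words, even before any operating shortfall, stripping the median addback pushes the same deal from 3.9x to roughly 5.6x; getting all the way to 8.2x requires realised EBITDA to come in far below even the addback-stripped figure.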

…The S&P report concludes:

“Our six-year study continues to underscore that addbacks and company-adjusted EBITDA are a poor predictor of profitability. Our substantial dataset makes it clear that management teams and equity sponsors regularly miss their projections by a large margin, and that the magnitude of the misses is positively correlated with addbacks and firms that we rate lower. This suggests that inflated addbacks may help companies with higher financial risk get deals done.”

The data is clear and there is no reason to doubt it. What surprises me is that private equity and credit funds continue to engage in such practices and that allocators and credit investors appear relaxed. That may be justified given past performance, but as I have written here several times, I don’t believe that the historical record is anywhere near sustainable.

5. There Goes My Hero – Ben Carlson

My family took its first and only Disney trip in the summer of 1990.

We rode some rollercoasters. Went to one of the waterparks. Decently fun trip from what I can remember as a 4th grader.

The strange part was that my older brother Jon was lethargic the whole trip. I still remember a picture of him taking a nap on a bench in the middle of the day. Something was off.

I was nine, so I didn’t think anything of it. My mother, a registered nurse, knew something was wrong so when we got home, they took Jon to the doctor.

He was diagnosed with a rare form of leukemia just before heading into the 7th grade…

…Jon endured months of chemotherapy and radiation, after which the only solution was a bone marrow transplant. My parents weren’t a match. Luckily, my sister and I both were.

I was the bone marrow donor. There was no guarantee it would work, but miraculously, it did. Jon’s cancer went into remission.

It was a terrible year for our family but Jon was a trooper. He never once complained. Even though he lived in the hospital on and off for months at a time and lost all of his hair he never felt sorry for himself…

…Last year, he was diagnosed with stage 4 pancreatic cancer. Last week he passed away just shy of his 46th birthday.

Jon was a tough son of a bitch and went out swinging.

The original plan was to manage the pancreatic cancer with chemo until Jon died but he didn’t want to just wither away. He called specialists all over the country, finally finding a doctor who would give him an experimental drug that allowed him to stop receiving chemo.

And it actually worked for a while. The cancer spread slowed. Eventually it would stop working but it gave us an extra six months or so…

…Grief is strange. Although you know millions and millions of other people have felt it, it still feels like the most personal of all emotions. I guess it is in some ways depending on the person and how they were lost.

At times, I’ve felt like there’s a black cloud hanging over my head. Other times, it’s as if there is a dull knife stuck in the back of my head. Sometimes it crashes into you all at once like a wave.

But it also forces you to reminisce about the good times. These past few months, it’s almost felt like my life has slowly flashed before my eyes through the lens of all the memories of my brother…

…After his bone marrow transplant, Jon was approached by the Make a Wish Foundation — anything he wanted, within reason.

He could have asked to meet his favorite celebrity or athlete. He could have asked for a room full of video games. He could have asked for a four-wheeler or a jetski or some other fun toy like that.

Instead, Jon requested a two-week all-expenses-paid vacation to Hawaii for our entire family. We got to swim with dolphins, fly in a helicopter, see some volcanoes, play on the beach, and more. They even sent a limo to our house to drive us to the airport.

I didn’t realize it at the time, but it was like Jon instinctively knew our family needed that after what we all went through. I still can’t believe a 12-year-old had the foresight to be so selfless, especially when no one would have blamed him for being as selfish as he wanted.

Jon was wise beyond his years and valued experiences with loved ones more than material possessions…

…As we worked through his financial situation it became abundantly clear he was more prepared for something like this than I ever could have imagined. There was a large life insurance policy. He was holding far too much cash for a person his age.

Jon why do you have so much cash?

Ben, I knew something like this was going to happen. I’ve known it since I was 12 years old.

That bout with cancer changed his entire perception of risk. He’s been working and saving since age 19 because there was always a voice in the back of his head telling him something like this could happen again…

…He also left behind some life advice for his kids that helps explain the kind of guy he was:

Be happy with what you have, you don’t need as much as you think.

Never leave anyone behind.

Life is way better than a screen, go live it.

Our mantra is to go live like Jon. I’m so lucky to have him as part of my life while he was here.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Markel, Meta Platforms (parent of Facebook), Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 16 February 2025:

1. The real threat to American prosperity – Daron Acemoglu

American economic success in the era after the second world war depended on innovation, which in turn relied on strong institutions that encouraged people to invest in new technologies, trusting that their inventiveness would be rewarded. This meant a court system that functioned, so that the fruits of their investments could not be taken away from them by expropriation, corruption or chicanery; a financial system that would enable them to scale up their new technologies; and a competitive environment to ensure that incumbents or rivals couldn’t block their superior offerings. These kinds of institutions matter under all circumstances, but they are especially critical for economies that rely heavily on innovation.

Stability requires that people trust institutions, and institutions become more likely to fail when people think they are failing. This is what explained the sudden meltdown of US economic dynamism…

…Economic growth in the US was rapid for most of the post-1980 era, but about half of the country didn’t benefit much from this. In a pattern unparalleled in the industrialised world, Americans with less than a college degree experienced a real (inflation-adjusted) decline in their wages between 1980 and 2013, while those with postgraduate degrees experienced robust growth…

…Many Americans felt that they no longer had much of a political voice. In surveys, more than 80 per cent started saying that politicians did not care about what people like them thought…

…But perhaps the most important determinant of this dwindling trust in institutions was that the US had become much more polarised, making it increasingly difficult to satisfy the majority of the voters. The flames of grievance were powerfully fanned by social media, which deepened polarisation. This then further reduced trust in democracy and in public institutions. Worse, with intensifying distrust, something essential to democracy — compromise — became more and more challenging.

By the 2010s something unprecedented was happening. Ever since data on this had been collected, an overwhelming majority of Americans saw democracy as the “only game in town” and gave it strong support relative to alternatives such as monarchy, military dictatorship or rule by unelected experts. That began changing, especially among young people, who reported growing scepticism about democracy and much more lukewarm support for these institutions.

The cracks were visible long before Trump was first elected in November 2016. He was in many ways a symptom of those troubled times…

…Turning points are useful to locate because they are symbolic of deeper causes of social change. In hindsight, an obvious turning point came just before Trump’s second inauguration. Biden, who had four years ago made defence of democracy a main agenda item, pre-emptively pardoned his family and a number of politicians and public servants, including former Republican Congresswoman Liz Cheney and the former medical adviser to the president, Anthony Fauci. The optics were clear and ugly: Biden and his camp by this point had so little trust in US institutions that they thought only such pre-emptive pardons could stop Trump’s retribution (and making the reality worse than the optics, it was only the enemies of Trump who were close to Biden that counted)…

…While Trump’s domestic agenda intensified the loss of trust in US institutions and expertise in government, his relations with foreign allies did the same for the so-called rules-based order. Of course, there was some truth to critics’ contention that these rules were designed for America’s benefit and that when they didn’t serve it well, they were bent or broken by US politicians, diplomats and companies. But the world was not ready for Trump’s tariffs, threats and military expansionist rhetoric towards Panama, Greenland and even Canada.

This set the scene for a series of catastrophic governmental failures. With morale gone and key personnel fired, the US state was ill-equipped to deal with emergencies. When new pandemics arrived, the response was haphazard, and unpreparedness cost tens of thousands of lives. The few remaining independent media sources uncovered a glaring and dangerous lack of oversight of critical infrastructure, including nuclear reactors and cyber security.

But the real extent of the damage became clear only with the tech meltdown of 2030. Economists and historians have now shown that a lot of this was the outcome of institutional failures and growing concentration in the industry. After Trump lifted all roadblocks ahead of AI acceleration and cryptocurrency speculation, there was initially a boom in the tech sector. But within a few years the industry had become even more consolidated than before, and both insiders and outsiders came to realise that only companies favoured by the administration could survive…

…By late 2029, many commentators were questioning what was going on in the tech industry, which had invested heavily in AI but had little to show for this in terms of innovation or productivity growth. There was huge enthusiasm and investment in cryptoassets, which were one by one revealed to be scams costing regular Americans billions of dollars. The AI empire had no clothes by this point, because the competitive energy had been sucked out of it. It took a while longer for the market to realise that, but when it did, a massive stock market crash followed.

This is the kind of shock that a dynamic economy can recover from, with new innovators coming in, government experts using fiscal policy and other interventions to prevent the crash from translating into a deep recession, and all sorts of people still believing in their ability to make a difference. But once malaise about US institutions had sunk in and experts were no longer around in the government, the crash became a recession and then a depression.

The depression continued and intensified. Many now understood that institutions needed to be fixed, but after the damage that Biden and Trump had done and the polarisation that had reached even higher peaks, rebuilding them proved difficult. American innovators and scientists started emigrating to Canada and the European Union. Some even went to China.

America’s collapse thus followed Hemingway’s famous line on bankruptcy. It happened gradually, as shared prosperity, high-quality public services and the operation of democratic institutions weakened, and then suddenly, as Americans stopped believing in those institutions.

2. The Drug Industry Is Having Its Own DeepSeek Moment – David Wainer

In 2020, less than 5% of large pharmaceutical transactions worth $50 million or more upfront involved China. By 2024, that number had surged to nearly 30%, according to DealForma. A decade from now, many drugs hitting the U.S. market will have originated in Chinese labs…

…China’s biotech boom mirrors its rise in tech. In both cases, China has moved up the value chain, from manufacturing goods to becoming a more sophisticated hub for innovation, competing in industries once dominated by the U.S. There are several reasons for the industry’s growth. For one, many top scientists trained in the U.S. have returned to China over the past decade, fueling the emergence of biotech hubs around Shanghai. And just as DeepSeek built a formidable chatbot—allegedly on a lean budget with limited access to semiconductors—Chinese biotech companies are also scrappier, capitalizing on a highly skilled, lower-cost workforce that can move faster.

Additionally, companies can conduct clinical trials at a fraction of what they would cost in the U.S., while recent changes in the Chinese regulatory system have streamlined and accelerated the approval process to get a study started. 

For now, much of China’s biotech innovation is incremental rather than groundbreaking. Many companies focus on improving existing drugs—tweaking the chemistry, enhancing efficacy or differentiating them in key ways.

But Chinese innovation is steadily improving and is already starting to disrupt the U.S. drug-development ecosystem…

…Chief executives of large pharmaceutical companies are broadening their horizons. Why spend $10 billion acquiring a U.S. biotech with a mid-stage drug when a similar molecule can be licensed from China for a fraction of the price?…

…In late 2024, after scouring the market for obesity assets—presumably eyeing U.S. companies like Viking Therapeutics, which trades at a market value of around $3.7 billion—Merck chose to license an oral GLP-1 drug from China’s Hansoh Pharma. The deal: $112 million upfront, with potential milestone payments of up to $1.9 billion…

…These “bargain” deals are great for Big Pharma. But for U.S. biotech companies—and their venture-capital backers—they are creating real challenges. Investors increasingly struggle to value early-stage biotechs because it is difficult to predict what competition might emerge from China.

3. All of us could be wrong about DeepSeek and OpenAI – Chin Hui Leong

China’s DeepSeek has unleashed a new wave of AI hype.

But amid the noise, one thing is clear: everyone has an opinion, and no one has the answers….

…When Apple (NASDAQ: AAPL) unveiled its iPhone in 2007, many analysts dismissed its hardware-focused strategy.

Their argument hinged on a familiar pattern: over time, consumer hardware tends to become commoditised. If the iPhone becomes popular, they reasoned, its unique appeal would fade as competitors come in with cheaper imitations.

This wasn’t a baseless concern.

The personal computer (PC) era, the previous dominant computing platform, was marked by fierce price competition among hardware manufacturers. Even Apple’s Macintosh PC had fallen victim to the cutthroat competition in the 1980s and 1990s.

In short, the precedent was clear: hardware eventually becomes a commodity.

However, this time, things would be different.

Today, nearly 18 years later, Apple boasts over 2.35 billion devices in circulation, generating upwards of US$200 billion in annual iPhone revenue. Clearly, the popular smartphone has defied the conventional wisdom of hardware commoditisation.

Therein lies a lesson.

When considering the future of AI, the iPhone’s success serves as a crucial reminder: be wary of preconceived notions…

…Too often, we fall prey to the “Highlander” fallacy, assuming that one side can only win if the other loses.

This zero-sum mindset blinds us from a range of possible future scenarios.

Think about the mobile operating system (OS) market.

On one side, you’ve got Apple’s closed iOS, with 2.35 billion devices, and on the other, Google’s open-source Android, with a massive three billion devices.

Crucially, they’ve each found their own area to thrive in.

Apple continues to dominate in the premium smartphone market, while Android is all about getting Google services out there.

Going back to AI models: can OpenAI replicate this coexistence, thriving alongside open-source models?

Could we see large, proprietary models handling general use cases while smaller, specialised models address niche needs? Could there be a main AI model, featuring a supporting cast of smaller models?

Your guess is as good as mine…

…Do you know who were among the biggest “losers” in the shift from desktop to mobile?

In my book, it may be Microsoft and Nvidia.

Nvidia tried to break into the smartphone market but threw in the towel when it failed to get a foothold. Microsoft, on the other hand, had long held a monopoly in the desktop OS market but failed to extend its dominance to mobile devices.

But are we really going to brand Microsoft and Nvidia as losers, even though they got the short end of the stick in the smartphone arena?

Today, both are at the forefront of the AI revolution, proving that setbacks don’t preclude future triumphs…

…Amid the noise, it’s important to remember that ChatGPT is barely two years old, a stark reminder of the industry’s infancy.

If history teaches us anything, we may want to put our egos aside and accept that there are developments that cannot be known ahead of time.

The AI landscape is still being written.

4. Deep Research and Knowledge Value – Ben Thompson

I found a much more beneficial use case the next day. Before I conduct a Stratechery Interview I do several hours of research on the person I am interviewing, their professional background, the company they work for, etc.; in this case I was talking to Bill McDermott, the Chairman and CEO of ServiceNow, a company I am somewhat familiar with but not intimately so. So, I asked Deep Research for help…

…I found the results eminently useful, although the questions were pretty mid; I did spend some time doing some additional reading of things like earnings reports before conducting the Interview with my own questions. In short, it saved me a fair bit of time and gave me a place to start from, and that alone more than paid for my monthly subscription.

Another compelling example came in researching a friend’s complicated medical issue; I’m not going to share my prompt and results for obvious reasons. What I will note is that this friend has been struggling with this issue for over a year, and has seen multiple doctors and tried several different remedies. Deep Research identified a possible issue in ten minutes that my friend has only just learned about from a specialist last week; while it is still to be determined if this is the answer he is looking for, it is notable that Deep Research may have accomplished in ten minutes what has taken my friend many hours over many months with many medical professionals.

It is the final example, however, that is the most interesting, precisely because it is the question on which Deep Research most egregiously failed. I generated a report about another friend’s industry, asking for the major players, supply chain analysis, customer segments, etc. It was by far my most comprehensive and detailed prompt. And, sure enough, Deep Research came back with a fully fleshed out report answering all of my questions.

It was also completely wrong, but in a really surprising way. The best way to characterize the issue is to go back to that famous Donald Rumsfeld quote:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.

The issue with the report I generated — and once again, I’m not going to share the results, but this time for reasons that are non-obvious — is that it completely missed a major entity in the industry in question. This particular entity is not a well-known brand, but is a major player in the supply chain. It is a significant enough entity that any report about the industry that did not include them is, if you want to be generous, incomplete.

It is, in fact, the fourth categorization that Rumsfeld didn’t mention: “the unknown known.” Anyone who read the report that Deep Research generated would be given the illusion of knowledge, but would not know what they think they know…

…What Deep Research reveals is how much more could be known. I read a lot of things on the Internet, but it’s not as if I will ever come close to reading everything. Moreover, as the amount of slop increases — whether human or AI generated — the difficulty in finding the right stuff to read is only increasing. This is also one problem with Deep Research that is worth pointing out: the worst results are often, paradoxically, for the most popular topics, precisely because those are the topics that are the most likely to be contaminated by slop. The more precise and obscure the topic, the more likely it is that Deep Research will have to find papers and articles that actually cover the topic well…

…There is a good chance that Deep Research, particularly as it evolves, will become the most effective search engine there has ever been; it will find whatever information there is to find about a particular topic and present it in a relevant way. It is the death, in other words, of security through obscurity. Previously we shifted from a world where you had to pay for the news to the news being fed to you; now we will shift from a world where you had to spend hours researching a topic to having a topic reported to you on command.

Unless, of course, the information that matters is not on the Internet. This is why I am not sharing the Deep Research report that provoked this insight: I happen to know some things about the industry in question — which is not related to tech, to be clear — because I have a friend who works in it, and it is suddenly clear to me how much future economic value is wrapped up in information not being public. In this case the entity in question is privately held, so there aren’t stock market filings, public reports, barely even a webpage! And so AI is blind…

…That, by extension, is why AI’s like Deep Research are one of the most powerful arguments yet for prediction markets. Prediction markets had their moment in the sun last fall during the U.S. presidential election, when they were far more optimistic about a Trump victory than polls. However, the potential — in fact, the necessity — of prediction markets is only going to increase with AI. AI’s capability of knowing everything that is public is going to increase the incentive to keep things secret; prediction markets in everything will provide a profit incentive for knowledge to be disseminated, by price if nothing else.

It is also interesting that prediction markets have become associated with crypto, another technology that is poised to come into its own in an AI-dominated world; infinite content generation increases the value of digital scarcity and verification, just as infinite transparency increases the value of secrecy. AI is likely to be the key to tying all of this together: a combination of verifiable information and understandable price movements may be the only way to derive any meaning from the slop that is slowly drowning the Internet.

This is the other reality of AI, and why it is inescapable. Just as the Internet’s transparency and freedom to publish has devolved into torrents of information of questionable veracity, requiring ever more heroic efforts to parse, and undeniable opportunities to thrive by building independent brands — like this site — AI will both be the cause of further pollution of the information ecosystem and, simultaneously, the only way out…

…Secrecy is its own form of friction, the purposeful imposition of scarcity on valuable knowledge. It speaks to what will be valuable in an AI-denominated future: yes, the real world and human-denominated industries will rise in economic value, but so will the tools and infrastructure that both drive original research and discoveries, and the mechanisms to price it. The power of AI, at least on our current trajectory, comes from knowing everything; the (perhaps doomed) response of many will be to build walls, toll gates, and marketplaces to protect and harvest the fruits of their human expeditions.

5. AI and the Mag 7 – Daniel Rasmussen

Last summer, Goldman Sachs was estimating a $1T spend on AI capex in the coming years, and the numbers have only gone up since then, with most of it concentrated in the Mag 7 that dominate the public markets…

…It’s necessary as an investor to at least consider how these bets might go awry…

…The skeptic’s case starts with the possibility that the Mag 7 is suffering from a classic case of “competition neglect,” where “subjects in competitive settings overestimate their own skill and speed in responding to common observable shocks and underestimate the skill and responsiveness of their competitors,” as Robin Greenwood and Samuel Hanson put it in their paper, “Waves in Ship Prices and Investment.” When shipping prices increase, shipping companies all decide to invest in ships—after all, their models are all saying these investments will be profitable at current rates. That investment not only drives up the price of building new ships, it causes a glut of supply once they are built, resulting in poor returns on these pro-cyclical investments, as low as -36%, according to Greenwood and Hanson. Meanwhile, those who invest at the bottom of that cycle—when current shipping prices are low and there’s no one else building at the shipyards—earn returns as high as 24%.

Rather than ships, today’s AI capex “is a euphemism for building physical data centers with land, power, steel and industrial capacity,” as Sequoia Capital’s David Cahn puts it…

…OpenAI, SoftBank, and the federal government’s $500 billion Project Stargate is the culmination of this race to convert tech companies into industrial manufacturers. But even winning this race could be a Pyrrhic victory. Capex at these levels is an asset-heavy business model. Asset-heavy business models historically have lower returns on capital, especially when sunk costs meet increased competition.

In this scenario, perhaps Stargate is the AI equivalent of overinvesting in new ships at the same moment that everyone else is overinvesting in ships, leading to a supply glut, price drops, and poor investment returns…

…We still don’t have many economical use cases for AI. Even in low-compute mode, a single prompt on ChatGPT’s o3 model costs $20 to perform. High-compute mode can cost much more….

…While Anthropic CEO Dario Amodei is confident AI can beat humans at most things in 2-3 years, that doesn’t mean we will all be using AI that way. There’s a difference between what can be automated and what is cost-effective to automate. Daron Acemoglu, Institute Professor at MIT, estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years. An MIT research paper looked at jobs in non-farm businesses and found 36% of tasks in jobs they studied could be automated by AI vision models, but only 8% were economically worth automating.

Scaling laws are an assumption that brute force will get us more and more powerful AI. For AI investors, it’s a playbook to outspend the competition, win the market, and trust that, eventually, more infrastructure and better chips will bring costs down and make more tasks economical to automate. But shooting for scale and achieving high ROI do not usually happen at the same time.

Shortly after Stargate was announced, it was soon overshadowed by bigger news about China’s DeepSeek model. While the exact specs are a subject of debate, DeepSeek shattered the cost-to-performance expectations that investors and the Mag 7 have been working from…

…We’ve only just entered the true product-building era for AI. How many people today think of the internet as a product? The internet is not a single thing but a collection of services and products on common digital infrastructure (e.g., TCP/IP protocol, which was built by DARPA with US taxpayer money and isn’t a business anyone is making money on). Similarly, AI models could, like other commodities, utilities, and infrastructure projects, become a part of everything we use rather than a distinct product. Usage patterns are starting to reflect this: we are using these models less directly and more through other services built on top of them.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 09 February 2025:

1. Robert Litan: An Economist Walks Into a Bar at TEDxKC (Transcript) – Robert Litan

First guy, he approaches the first woman that he sees, offers her a drink. She turns him down. He, then, decides to walk his way down the bar. And, of course, all the women watching this, they see what he’s up to. And they all turn him down…

…He hasn’t learned from this experience, in the real world. So he decides to go to the virtual world. He goes to the Internet and joins Cupid.com and he tries the same technique, and sure enough, with the same result. They all turn him down…

…Cupid.com is in trouble too. And the reason they are is that the women who have joined Cupid.com are being inundated with offers from men for dates. They get turned off, they quit. And if they quit, men quit. Cupid is in trouble. Who are you going to call to solve this problem? The answer is more obvious than Ghostbusters: you call an economist. Don’t laugh, you call economists. In fact, you call two of them.

This is Muriel Niederle of Stanford, and Dan Ariely of Duke. They spend a lot of time studying the problem of artificial scarcity and abundance in the online dating context, which is the reason Cupid called them up. Cupid wanted to know how to fix their problem, and the two economists said they had an idea that was as simple as it was profound. Just put a sharp limit on the number of date offers that men could make to women each month. This is the notion of artificial scarcity: taking what looks like an abundant resource, which is date offers, and artificially constraining it.

And the economists said to Cupid that if you do this, the men will take their offer seriously. They’ll look at more than just the women’s pictures and they’ll actually look at their profiles. And the women will know this, and they’ll be more likely to accept date-proposals. Artificial scarcity helped save Cupid.com, and other dating sites that copied the technique…

…Google collects about $50 billion a year, from advertisers, large and small, seeking placement on that right hand side. They auction off the site. But that’s not how the system started, because when Google was launched, online advertising was in its infancy, and Google, believe it or not, went door to door, advertiser to advertiser, trying to get them to place for an ad next to a search term. Highly laborious, you quickly can see that this is not going to scale, as the number of searches explodes on Google.

And so the founder of Google asked two young engineers, Eric Veach and Salar Kamangar, to come up with an automatic system, that would solve this problem. Well, they were instinctively attracted to auctions. But they were thinking about another problem. That is if they auction off the sites, they fear that the advertisers would bid a very low price, and then incrementally raise their prices just a little bit, and keep the auctions going on forever. And if this happened, and a lot of searches were also going on at the same time, the whole site would crash.

So, as an engineering solution, they came up with this idea: the price for the winning placement would be the second highest price that was bid, plus one penny. This will cut off the auctions, greatly simplify the process, and in the process also solve another problem called “the winner’s curse”. I’m sure that many of you that have participated in auctions may have regretted winning because you felt like you’ve paid too much. Pretty obvious point…

…“You know, those two engineers, they have reinvented what this guy came up with.” This is William Vickrey, an economist at Columbia, who proved mathematically that the second-price auction was the ideal solution to the winner’s curse. And you know what, that won him the Nobel Prize in Economics in 1996.
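
For readers who want to see the mechanism, here is a minimal sketch of the second-price rule described above: the highest bidder wins the placement but pays the second-highest bid plus one penny. This is an illustration of the auction rule only, and is not meant to represent Google’s actual ad auction, which also weighs factors such as ad quality.

```python
# Minimal sketch of a second-price (Vickrey-style) auction: highest bid wins,
# but the winner pays the runner-up's bid plus one cent. Illustrative only.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid in dollars. Returns (winner, price_paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    runner_up_bid = ranked[1][1]
    return winner, round(runner_up_bid + 0.01, 2)

# Hypothetical bids for a single ad slot.
print(second_price_auction({"alpha": 2.50, "beta": 1.75, "gamma": 1.10}))
# ('alpha', 1.76) -- alpha wins but pays beta's bid plus a penny, so there is
# no point in creeping a bid upward a cent at a time, which is the endless
# incremental bidding the Google engineers were trying to avoid.
```

Because the winner’s payment is set by the runner-up’s bid rather than their own, bidders have little reason to shade or endlessly revise their bids, which is the property the talk credits with solving the winner’s curse.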

2. Emergent Layers, Chapter 2: Overserved and Underserved Customers – Alex Danco

Returning to disruption theory, the critical element we’re going to use from that framework is the idea of the overserved customer: the customer who is being served too much by incumbents. In mature industries, where everybody agrees what the scarce resource is and the core constraints are well understood and organized around, we see this happen a lot. As incumbent companies compete with each other for business, and customers are all being served adequately (for the understood job at hand), competition becomes a feature race where products improve or expand at a greater rate than customers’ capacity to use them. There’s a misalignment between what the customer needs and is getting, with that misalignment falling onto the side of “I’m spending way too much of my money or time for this.” Crucially, when customers are overserved for a particular job, it introduces the critical space and oxygen required for a new competitor with some sort of scalable, technological advantage to enter the market at the low end. The nature of over-service creates powerful incentives for incumbents to not engage with disruptive entrants, but rather to retreat upmarket towards higher profit margins…

…For a more recent but still “classic” example, let’s look at Airbnb. Airbnb was able to get off the ground because there was a critical subset of customers in the hospitality industry — initially young people, although not exclusively so — who were overserved by many aspects of the hotel industry. Hotels were serving customers along many axes of performance — comfort, privacy, loyalty reward programs, and so forth — that just weren’t very important to a specific subset of customers who didn’t care too much about all that stuff; they just want a place to stay. This gave Airbnb the critical oxygen necessary to get a foot in the door, and then expand upwards from a dramatically cheaper cost structure than Marriott can possibly compete with. The overserved customer is a very potent and dangerous one: they know what they’re looking for, and they don’t need to be educated when a new entrant comes along with the right proposition. If that new entrant gets a few critical things right, they’re looking at a large group of early adopters that need little prodding, little education and little advance notice. That’s a great basis to start a company.

Let’s now consider another kind of pain: underserved customers. Their pain appears to be more straightforward: they have some fundamental need that isn’t being met. But this situation is trickier than it seems: if a group of customers have a genuine need, then why aren’t companies stepping in to offer solutions? What’s the catch? It could be because the solutions are genuinely too hard, or face technical or feasibility obstacles. It could also be that customers aren’t aware they have the problem. Either way, that’s tough…

…Now let’s put these two types of customer pain together. What would happen if a customer were both overserved and underserved at the same time? Is this possible?

As it turns out, this situation is not only possible, but occurs regularly. And it’s highly volatile. The trick to figuring out how this works requires venturing one step beyond disruption theory, and recasting the job-to-be-done as a stack itself with a hierarchy of low-level to high-level needs…

…We can characterize the initial job where customers are being served as being at level j, where incumbents vie for customer dollars and products will inevitably trend towards over-service. Meanwhile, we can characterize the higher-order job as being at level j+1, which encompasses the customer’s higher-level objectives, and where companies are not, for whatever reason, currently serving anyone…

…Consider Uber: you have a large group of customers (myself included) who are overserved by owning their own vehicle. If your car sits idle & parked more than 95% of the time (which is about average in North America), you are clearly overserved by owning this car! Yet at the same time, that same set of customers is underserved at level j+1, or the reason why they own a car in the first place — “I need to get to specific places at specific times”. You have a schedule to keep, and it’s hard.

Notice that both of these conditions must hold true in order for Uber to work. If customers were not overserved, it would be difficult for them to abandon their current solution. (Consider someone who drives their vehicle for a living, many hours per day. They are significantly less overserved by their vehicle, and quite unlikely to switch to using Uber for the equivalent job.) At the same time, if they weren’t underserved for a higher-level job (get me places at a certain time), then the only way for a new solution to be truly compelling would be dramatically lower price — which makes for a tough business model. This is another thing outside observers get wrong about Uber when they exclaim, “I don’t see how this is cheaper than owning a car!” Well, here’s the thing — Uber doesn’t have to be cheaper than driving, because it’s superior to driving your own vehicle in many ways! You don’t have to worry about parking, insurance, drinking, maintenance, gas, or anything else. The simultaneous condition of being overserved and underserved by existing solutions is what made Uber so compelling, in a way that other ride-sharing services or carpooling didn’t quite get right. Uber works because it’s cheap, but its appeal is because it’s better…

…If customers only check off the “underserved” box, then it seems likely you’re dealing with a problem that’s a. very hard, or b. the customer isn’t aware they have. This isn’t a great position to be in — it’ll be very hard to build an initial solution and attract early adopters.

If they only check off the “overserved” box, then customers know what they want — but it may be that they’re only motivated by price. And that’s also not a great position to be in: you may get lots of adopters really quickly, but find it very difficult to extract any profit from them…

…The particular combination of customers overserved at level j while being underserved at level j+1, when it happens, explains how from time to time we see markets where demand is zero and then, all of a sudden, shoots straight up in a vertical line.

3. Why Housing May Be In for Another Cost Shock Next Year – Tracy Alloway, Joe Weisenthal, and Lee Everett

Lee (04:44):

It’s interesting. I think stress is hitting sort of all sides of the market. You have your bigger, more well established shops that have been managing through this, able to handle the higher rate environment, but have obviously taken a very real valuation hit on their existing portfolios. Like 20% to 30% depending upon the portfolio composition. At the same time you’ve had record demand hitting the sector because cost to buy housing is exceptionally unattainable today. And then on the other side you’re having a very material impact on the supply side and I think that’s what’s really unique. If you think back to September, the 10-year was around a 3.6%, I think, the day Chair Powell cut us by 50 basis points. Well, we’re at almost a 4.6% today and I remember that night you heard reports about developers out at local dinners and they were calling it Fed Day and getting ready to put shovels in the ground.

Joe (05:37):

Drinking champagne and stuff like that.

Lee (05:38):

Exactly. And what you’ve seen instead is increased stress on both the short end and the long end of the curve. That’s given you trouble on the short end, to start new housing, and trouble on the long end to afford longer term for ownership housing…

…Lee (11:29):

Yes, I think frankly we’re about to transition from what has been a very renter friendly market to again a landlord friendly market over the course of the next two to three years. And that’s going to be particularly driven by what we’re seeing on the supply side. We’re going to have over a million units come to market over a two-year period here in ’24 and ’25, but peak supply is hitting in the next six months and if you look at relative time from a) peak supply and then b) to getting to a level of lower supply than you saw last cycle, every major market in the country will be there by the end of 2026.

Joe (12:13):

Be where?

Lee (12:15):

Delivering less housing units than they did on average from ’17 to ’19 in apartment buildings. So you’re going to go below prior cycle supply very quickly. At the same time, we do have exceptionally strong labor markets here and the demand story has been outstanding. So 2024 is going to end the year, depending upon the data provider you use, as the first or third highest year for rental demand ever. 2021 was the prior record. So we’re seeing people form rental households at unprecedented rate in the US and as that supply comes down, you’re going to see that demand struggle to frankly find high quality, well-located assets to move in, and you’re likely to see that relationship flip at that point.

Tracy (13:08):

So the other thing that affects multifamily housing construction other than interest rates has to be just general confidence, I guess, in the direction of the economy, the direction of the world and certainly there’s a lot going on right now. We’re recording this on January 28th and there’s news that the Trump administration is freezing a whole bunch of federal spending. I think it’s something like 20% of federal spending. That includes presumably stuff like Section 8 and other affordable housing measures. Would that be expected to hit multifamily as well?

Lee (13:46):

Yeah, and I think it’s probably easiest to sort of start at the top, right? When you’re building multifamily, you’re generally trying to build to an acceptable return on cost, but frankly what we’re doing is putting an investor’s money together and generating returns for them. Multifamily isn’t built for free and it can’t be in this sort of economic world, and a general rule of thumb is a 6+% return on cost. So on the cost to build, you want to yield over 6% of that to get a building to pencil. That tracks up closer to 7% depending upon the institution. Because you need to build to that yield on cost, you have to have rents that are high enough to generate enough rental revenue to drive that return. So in order to build today, you have to build at exceptionally high rent levels, because of the cost to build, because of the cost of interest rates.

The only way to drop that is to drop the cost and that cost drop typically comes for affordable housing from the federal government, be it HUD grants that are then deployed through the local housing agency, be it LIHTC, be it any sort of an ensemble of ways to cut costs. That’s how you can get to affordable rents on the supply side. And then on the demand side, you can cut rents by literally giving people a rent check, which is what Section 8 is. And that again comes from the federal government via grants given to the local housing agencies to deploy. And if that money dries up, you have immense problems in terms of a) fueling the demand for these people, because you’re cutting rent on the Section 8 side and b) encouraging future construction of affordable apartment buildings…
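To make the return-on-cost arithmetic above concrete, here is a rough back-of-the-envelope sketch. The 6% hurdle is the rule of thumb Lee mentions; every other number (units, rents, expenses, construction cost) is hypothetical.

```python
def yield_on_cost(annual_rent_per_unit, units, operating_expense_ratio, total_development_cost):
    """Stabilized net operating income divided by the total cost to build."""
    gross_rent = annual_rent_per_unit * units
    net_operating_income = gross_rent * (1 - operating_expense_ratio)
    return net_operating_income / total_development_cost

# Hypothetical 200-unit project costing $60m to build, renting at $2,500/month
yoc = yield_on_cost(annual_rent_per_unit=30_000, units=200,
                    operating_expense_ratio=0.40, total_development_cost=60_000_000)
print(f"yield on cost: {yoc:.1%}")  # 6.0%, which just pencils at a 6% hurdle
```

A subsidy that lowers the development cost, such as a HUD grant or LIHTC equity, raises the yield at any given rent level, which is the mechanism Lee describes for getting affordable rents to pencil.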

…Joe (17:47):

Let’s talk about deportation impacts on labor. What are the estimates for what percentage of the multifamily workforce, whether it’s construction or maintenance, whatever else, is undocumented labor?

Lee (18:01):

It’s estimated 20% of construction workers in this country are undocumented labor. I’d venture to guess it’s similar for the whole multifamily industry when you look at staffing and things along those lines, and I think when you look at a combination of deportation of construction workers as well as the sheer amount of labor it’s going to require to rebuild huge swaths of California, I think you could be looking at a massive deficit in labor within the construction space. And when you think about that, that’s going to be your strongest lever that’s going to hit your cost to build, and that’s what’s going to drive up those rents that are necessary: all of this immense pressure you’re going to see in labor costs.

4. Test-Time Search: A Path To AGI – Akash Bajwa

The GPT family of models performed poorly relative to o3 on the ARC benchmark because large models memorise knowledge rather than reasoning processes…

…As an example, Meta intentionally overtrained Llama 3 on 15 trillion tokens to lower inference costs (as they served their billions of users). The model weights become more optimised for common patterns and in-distribution tasks, trading off generalisability to novel tasks.

This architecture combined with ‘internet scale’ data has produced incredible recent advances, but the next leap will come from a new paradigm – instead of outputs, models will be trained on reasoning steps…

…This new vector of scaling will rely on a combination of synthetic and human generated reasoning data. As we’ll see, both will be expensive forms of reinforcement learning (o3’s performance of 87.5% on ARC AGI in high-compute mode cost thousands of $ per task)…

…Synthetic data will be most useful for domains where functional verification is possible, e.g. code, maths and engineering…

…Scaling inference time compute is in line with the Bitter Lesson – there are only 2 techniques that scale indefinitely with compute: learning & search.

DeepMind’s AlphaGo used Monte Carlo Tree Search during test time to attain superhuman status – if stripped of this capability, it drops in Elo from ~5,200 to ~3,000 (top humans are around ~3,800)…

…The exorbitant costs stem from the many, many Chains Of Thought generated as the model searches for the chains that lead to the right answer – all of the other tokens are useless, but cost a lot to generate…

…Functionally verifiable domains are the most amenable to synthetic CoTs because engineering the reward is much easier than in domains where subjectivity is involved…

…Code execution provides an unambiguous, binary reward signal – either the code executes successfully or it fails, creating clearly defined success criteria for training.

In functionally verifiable domains, the correct CoT tokens become training data…
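A toy sketch (my own construction, not from the essay) of what a functionally verifiable reward looks like for code: execute each candidate solution against unit tests, assign a binary reward, and keep only the chains of thought whose final program passes, since those are the ones that become training data.

```python
def binary_code_reward(candidate_source, tests):
    """Return 1 if the candidate code defines solve() and passes every test, else 0."""
    namespace = {}
    try:
        exec(candidate_source, namespace)          # run the model's code
        for inputs, expected in tests:
            if namespace["solve"](*inputs) != expected:
                return 0
        return 1
    except Exception:
        return 0

# Hypothetical task: implement solve(a, b) that returns a + b
tests = [((1, 2), 3), ((-4, 4), 0)]
candidates = {
    "cot_1": "def solve(a, b):\n    return a - b",   # chain that reasoned its way to wrong code
    "cot_2": "def solve(a, b):\n    return a + b",   # chain that got it right
}
kept_for_training = [name for name, src in candidates.items()
                     if binary_code_reward(src, tests) == 1]
print(kept_for_training)  # ['cot_2']
```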

…Over time, this should have a deflationary effect on the inference cost of reasoning models, as we’ve seen with frontier models in the pre-training paradigm…

…As pre-training gains plateau (or become too expensive), we’ve found a new vector of scaling (test time search) that is demonstrating a path to truly general intelligence.

Data acquisition/generation remains the bottleneck on progress, not compute. Microsoft’s announcement of $80bn in capex for 2025 underscores the Street’s underestimation of hyperscaler capex and compute buildout.

The implications of inference scaling run up and down the stack. Instead of the densely interconnected supercomputers of the pre-training paradigm, we will see more distribution of workloads, perhaps some even running locally. How will market share evolve as companies look to optimise test time search workloads – will AI ASICs eat into Nvidia market share?

Instead of prohibitively expensive pre-training runs, enterprises developing their own models may opt to train smaller models with reasoning cores and decide when to scale up test time search for certain economically valuable tasks. The result is the alchemy of capex to opex and fixed costs to variable costs. CIOs will decide which tasks merit more investment and test time search – inevitably, this will still be cheaper than human labour.

5. Don’t Freak Out – Ben Carlson

The common theme across the Apollo missions was the sheer amount of planning involved.  There were months and months of simulations and training exercises to review every possible scenario. They wanted every process to be automatic.

But there was always the risk of an unplanned error, considering they were propelling these giant hunks of metal through space using rocket fuel that would allow them to reach speeds of more than 24,000 miles per hour…

…When Apollo 13 had an explosion mid-flight, it wasn’t something anyone thought could have been even a remote possibility. Astronaut Jack Swigert explained it after the fact like this:

Nobody thought the spacecraft would lose two fuel cells and two oxygen tanks. It couldn’t happen. If somebody had thrown that at us in the simulator, we’d have said, ‘Come on, you’re not being realistic.’

This is why NASA trained the astronauts in one skill more than any other leading up to their space flights — the art of not panicking. The only reason they could turn the Apollo 13 spacecraft around 200,000 miles from earth following an explosion onboard is because the astronauts and everyone on the ground remained levelheaded. No one freaked out.

Or if they were freaking out internally, they didn’t act on those emotions.

In a nutshell, that is successful investing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 02 February 2025:

1. DeepSeek: The View from China – Jordan Schneider, Irene Zhang, Angela Shen, and Yiwen

In this newsletter, we share a translation of insights from a January 26 closed-door session hosted by Shixiang 拾象, a VC spun out from Sequoia China. Attended by dozens of AI researchers, investors, and industry insiders, the event captures how the Chinese AI community is processing the DeepSeek shock…

…The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant…

…In the short-term, everyone will be driven to think about how to make AI more efficient. In the long-run, questions about computing power will remain. Demand for compute remains strong and no company has enough…

…Why did DeepSeek catch up so fast?

Reasoning models require high-quality data and training. For LLMs or multimodal AI, it’s difficult to catch up with a closed source model from scratch. The architecture of pure reasoning models hasn’t changed much, so it’s easier to catch up in reasoning.

One reason R1 caught up quickly was that the task was not particularly difficult. Reinforcement learning only made the model choices more accurate. R1 did not break through the efficiency of Consensus 32, spending 32 times the efficiency, which is equivalent to moving from deep processing to parallelization, which is not pushing the boundaries of intelligence, just making it easier….

…AI is similar to a step function, where the compute requirements for followers have decreased by a factor of 10. Followers have historically had lower compute costs, but explorers still need to train many models. The exploration of new algorithms and architectures will not stop. Behind the step function, there are significant investments by many people, meaning compute investments will continue to advance. Many resources will also be allocated to products. Apart from reasoning, there are other directions that are compute-intensive. While the vast amount of compute resources spent by explorers may not be visible, without such investment, the next “step” might not occur. Additionally, many are dissatisfied with current architectures and RL methods, and progress will continue.

When exploring directions, performance achieved with 10,000 GPUs may not always be significantly better than that of 1,000 GPUs, but there is a threshold somewhere. It’s unlikely that meaningful results can be achieved with only 100 GPUs because the iteration time for each solution would be too long…

…The question of why OpenAI and Anthropic did not do work in DeepSeek’s direction is a question of company-specific focus. OpenAI and Anthropic might have felt that investing their compute towards other areas was more valuable.

One hypothesis for why DeepSeek was successful is that unlike Big Tech firms, DeepSeek did not work on multi-modality and focused exclusively on language. Big Tech firms’ model capabilities aren’t weak, but they have to maintain a low profile and cannot release too often. Currently, multimodality is not very critical, as intelligence primarily comes from language, and multimodality does not contribute significantly to improving intelligence…

…2025 will, first and foremost, see interest in new architectures beyond Transformers. Some initial exploration is already underway, aiming to reduce costs while pushing the boundaries of intelligence. Secondly, the potential of reinforcement learning (RL) has yet to be tapped into completely. On the product side, there is significant interest in agents, though they have yet to see widespread application…

…It is reported that Meta is still in the process of reproducing DeepSeek, but so far, this has not significantly impacted their infrastructure or long-term roadmap. In the long run, beyond exploring the boundaries of the technology, cost efficiency must also be considered. Lowering costs will let us have more fun…

…From the developer’s perspective, models like Claude-3.5-Sonnet have been specifically trained for tool use, making them highly suitable for agent development. In contrast, models like DeepSeek have not yet focused on this area, but the potential for growth with DeepSeek is immense…

…Currently, reinforcement learning (RL) solves problems with standard answers but has not achieved breakthroughs beyond what AlphaZero accomplished. In fact, it is often simpler. Distillation addresses problems with standard answers, and RL methods work effectively when training with such answers. This explains why distillation and RL have made rapid progress in recent years.

Humanity’s demand for intelligence is vastly underestimated. Many critical problems, such as cancer and SpaceX’s heat shield materials, remain unsolved. Existing AI primarily automates tasks, but there are numerous unsolved challenges ahead. Looking forward, the potential for explosive growth is immense, and the advancement of intelligence cannot stop…

…Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.

The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization…

…Open source controls the margins of the whole market. If open source can do 95% of what closed source can do and closed source is too expensive, then open source can be used completely. If the capabilities of open source and closed source do not differ greatly, then this presents a big challenge for closed source…

…AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.

2. Explaining International Valuations –  Daniel Rasmussen

Perhaps the single greatest divergence in equity markets has been the continued outperformance of US versus international equities—and thus the widening of the valuation gap between the US and the rest of the world…

…By far the most significant difference, explaining about half the valuation gap, is the domicile of listing. US-listed stocks are substantially more expensive than internationally listed stocks for no reason other than the place of listing.

It’s particularly interesting that the regression shows having a higher percentage of sales in the US results in cheaper valuations. A key driver of this is that several of the US tech giants most responsible for high US equity valuations have a relatively low percentage of sales in the US (Alphabet, Microsoft, and Tesla at around 50%; Apple, Netflix, Meta, and NVIDIA at around 40%). The big question, then, is why half the valuation gap is explained simply by being listed on US exchanges. Even large internationally listed companies with >40% of their revenue coming from the US, like Toyota, Mitsubishi, Roche or Deutsche Telekom (which owns T-Mobile), trade at steeply discounted multiples relative to US peers.

Were a larger percentage of the valuation gap explained by fundamentals, we’d expect such a gap to persist. But given that the valuation gap is primarily explained simply by the location of listing, we think there’s a strong reason to expect a convergence—and therefore to favor international over US-listed stocks, despite their terrible relative performance over the past decade.

3. The Most Impressive Prediction of All Time – Jeffrey Emanuel

My candidate for the most impressive prediction of all time came from a person who is practically unknown in the West except for a relatively small group of historians and people interested in niche subjects. The person I’m thinking of is named Pyotr Durnovo, and he was an Imperial Russian government official who lived from 1842 to 1915.

We will discuss more about him later and how his life experience may have prepared him to be able to make such an impressive prediction, but the short version of it is that he initially studied to be in the Navy and served there for around a decade, and then became the Director of Police for the Ministry of Internal Affairs for the entire Russian Empire under Tsar Alexander III. Later, he served as the Minister of the Interior under Tsar Nicholas II (the one who was ultimately executed with his family by the Bolsheviks in 1917 during the Russian Revolution).

So what is this prediction he made, anyway, and why is it so impressive? Well, in 1914, six months prior to the outbreak of World War 1, Durnovo wrote a truly remarkable ~7,600-word memorandum for Tsar Nicholas II and his top 2 or 3 ministers, which we know was given to them, since it was found in Nicholas’ papers and later published in 1922 by communist historians after the revolution. If they had only read it carefully and taken its warnings more seriously, the world we live in today might look very different!…

…For one, it predicted an imminent war on the horizon, which he ultimately blamed on the collision course between England and Germany, which were the two greatest industrial powers at the time. This was certainly not some earth shattering or special prediction; a lot of people predicted some kind of big conflict, and it was often said that “war was in the air” at the time…

…It’s how he analyzed the situation, and then used that reasoning to predict the exact groupings of countries that would participate in the conflict and on which side, and how the situation would evolve from there, that is so impressive…

…His predictions about alliances and national behaviors were almost unbelievably specific and ran counter to the conventional wisdom of the time:

  • He predicted that Italy would not side with Germany despite being part of the Triple Alliance, and would instead join the opposing side if victory seemed likely, seeking territory from both Austria and Turkey. This is exactly what happened; Italy joined the Allies in 1915 after negotiating for territorial concessions.
  • He predicted that Romania would remain neutral until it was clear which side would win, then join the victorious side to claim territory. This also came true— Romania entered the war in 1916 on the Allied side after significant Russian successes.
  • Most surprisingly, he predicted that Bulgaria would side against Serbia and by extension against Russia, despite Russia being Bulgaria’s historic liberator from Ottoman rule — a prediction that seemed almost unthinkable to most observers at the time. This came true exactly as he foresaw, with Bulgaria joining the Central Powers in 1915.
  • He correctly predicted that Serbia and Montenegro would side against Austria, while Greece would likely remain neutral until the outcome was more or less predetermined.
  • He predicted unrest among Muslims in the Caucasus and Turkestan (which occurred).
  • He predicted the possibility of Afghanistan moving against Russia (which happened in 1919).
  • He predicted serious complications in Poland (the Polish-Soviet War of 1919-1921).
  • He predicted an uprising in Finland if Sweden joined Germany (Finland did declare independence in 1917).

…If all of that weren’t already so ridiculous to get right, he went way beyond all that to realize that, regardless of who won, the war would lead to “social revolution” in both the defeated AND victorious countries, starting with the losing side and then spreading to the winners. This was perhaps his most extraordinary prediction, as it came true in spectacular fashion:

  • Russia, despite being on the winning side, experienced the Bolshevik Revolution in 1917; we will go into much more detail about these predictions below.
  • Germany, after losing the war, experienced the German Revolution of 1918-1919; Durnovo predicted that unrest and revolution would be specifically tied to economic factors and class interests rather than purely political ones: he outlined how German workers would turn against the agricultural interests that had dominated pre-war German policy once defeat cut off their export markets and industrial employment, and this exact dynamic played out in the German Revolution of 1918-1919.

Now, you might object here that “Well, it’s not that crazy to believe there might be a revolution in a country which suffered massive losses in a catastrophic war; lots of people might have predicted that.” But the thing is, Durnovo went so far beyond merely predicting that there would be a Russian Revolution. He basically predicted every contour of the Revolution, the driving forces behind it, how it impacted different segments of Russian society, and how it would all unfold, step by step!…

…So how was Durnovo able to accomplish this incredible feat of prediction? Obviously, he was a genius of the first order, which is perhaps not so surprising given that he was a close relative of the famous Tolstoy family. But raw IQ is certainly not enough, nor is being well informed and knowledgeable. What kind of man could see so clearly what virtually everyone else missed? He was a complex character whose very contradictions likely enabled his extraordinary insights; he was, at the same time:

  • A conservative police chief who often expressed liberal thoughts in private
  • A supposed reactionary who opposed anti-Semitic measures and defended Jews
  • A cynical operator who nevertheless would help others when he could
  • A man capable of both strict officialdom and surprising gentleness
  • A high official who preferred informal interactions (his subordinates would warn visitors not to address him as “Your Excellency”)

These contradictions suggest someone who wasn’t bound by conventional ideological frameworks or social expectations— a crucial trait for seeing beyond accepted wisdom. He also had a wide range of professional experience that prepared him to see things in a multi-faceted, sophisticated way, as by 1915, he had done the following:

  • Naval officer (9 years of far-sea cruises)
  • Military legal training
  • Assistant Prosecutor in various parts of Russia
  • Director of Police Department for 10 years
  • Assistant Minister of Interior under multiple ministers
  • Minister of Interior
  • Member of State Council

This combination of experiences was extraordinary and atypical to say the least:

  • His naval and legal background gave him insight into the military, maritime trade, and the Russian legal system.
  • His prosecutorial work exposed him to conditions across Russia, not just in the big cities.
  • His police work gave him unparalleled insight into social discontent and the strategies and thinking of professional revolutionaries like Lenin, Stalin, and Trotsky.
  • His ministerial positions showed him the workings (and limitations) of state power.

He also occupied a unique position as both an insider and an outsider: 

  • He was from old nobility but not wealthy or particularly influential
  • He reached high office but was temporarily dismissed in disgrace (a sordid story in which Durnovo had his secret police officers search the private letters of a foreign ambassador — inside an embassy building no less — so they could steal love letters sent by Durnovo’s mistress to the ambassador; when the ambassador complained to Tsar Alexander III, the Tsar was furious, ordering his minister to “remove this swine within twenty-four hours.”)
  • He was a conservative who often disagreed with other conservatives
  • He understood both state power and its limitations

This dual perspective may have freed him from the groupthink that afflicted both conservative and liberal circles.

4. USA, Inc – Michael Batnick

Consider this face blower of a stat from Goldman: “Since 1992, earnings growth in the US has outpaced earnings in non-US developed economies by an annual average of 2.4 percentage points.”

Most of the world is barely earning more than they were prior to the pandemic. The U.S. looks like an unstoppable freight train…

…The one sided performance has driven valuations between us and the rest of the world to record levels. We’ve all seen a version of these charts before…

…BUT! These charts aren’t comparing apples with apples. Goldman notes that only 1% of the U.K. market is in technology companies. Another example they cite is that energy is 5% of S&P 500 earnings, 19% of UK, and just 1% of Japan. We’re not comparing apples with apples.

They did a great job adjusting for differences in sector weights…

…The U.S. still trades at a premium to the rest of the world ex-India, but not as much as the prior chart would have you believe. Before any adjustments, the Eurozone trades at a 39% discount to the U.S. And after the adjustments, that falls to 23%.

5. DeepSeek FAQ – Ben Thompson

Let’s work backwards: what was the V2 model, and why was it important?

The DeepSeek-V2 model introduced two important breakthroughs: DeepSeekMoE and DeepSeekMLA. The “MoE” in DeepSeekMoE refers to “mixture of experts”. Some models, like GPT-3.5, activate the entire model during both training and inference; it turns out, however, that not every part of the model is necessary for the topic at hand. MoE splits the model into multiple “experts” and only activates the ones that are necessary; GPT-4 was a MoE model that was believed to have 16 experts with approximately 110 billion parameters each.

DeepSeekMoE, as implemented in V2, introduced important innovations on this concept, including differentiating between more finely-grained specialized experts, and shared experts with more generalized capabilities. Critically, DeepSeekMoE also introduced new approaches to load-balancing and routing during training; traditionally MoE increased communications overhead in training in exchange for efficient inference, but DeepSeek’s approach made training more efficient as well.
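As a toy illustration (not DeepSeek’s actual implementation, and with made-up dimensions), here is the basic mixture-of-experts idea: a router scores every expert for each token, only the top-k experts actually run, and their outputs are combined using the renormalised gate scores, so most of the parameters stay idle for any given token.

```python
import math, random

NUM_EXPERTS, TOP_K = 8, 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token_vec, router_weights, experts):
    """Route one token: score all experts, activate only the top-k,
    and mix their outputs by the renormalised gate scores."""
    logits = [sum(w * x for w, x in zip(row, token_vec)) for row in router_weights]
    gates = softmax(logits)
    top = sorted(range(NUM_EXPERTS), key=lambda i: gates[i], reverse=True)[:TOP_K]
    norm = sum(gates[i] for i in top)
    output = [0.0] * len(token_vec)
    for i in top:                                  # only these experts do any compute
        expert_out = experts[i](token_vec)
        output = [o + (gates[i] / norm) * e for o, e in zip(output, expert_out)]
    return output, top

# Hypothetical tiny setup: 4-dim tokens; each "expert" is just a scaling function
random.seed(0)
router = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(NUM_EXPERTS)]
experts = [lambda v, s=i + 1: [s * x for x in v] for i in range(NUM_EXPERTS)]
print(moe_layer([0.1, -0.2, 0.3, 0.4], router, experts))
```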

DeepSeekMLA was an even bigger breakthrough. One of the biggest limitations on inference is the sheer amount of memory required: you both need to load the model into memory and also load the entire context window. Context windows are particularly expensive in terms of memory, as every token requires both a key and corresponding value; DeepSeekMLA, or multi-head latent attention, makes it possible to compress the key-value store, dramatically decreasing memory usage during inference.
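Some back-of-the-envelope arithmetic helps show why the key-value cache is the bottleneck and why compressing it matters. All dimensions below are hypothetical, not DeepSeek’s actual configuration; the point is only the ratio between caching full per-head keys and values versus caching one small latent vector per token.

```python
def kv_cache_bytes(layers, tokens, heads, head_dim, bytes_per_value=2):
    """Standard attention: each layer stores a key and a value vector per head per token."""
    return layers * tokens * heads * head_dim * 2 * bytes_per_value

def latent_cache_bytes(layers, tokens, latent_dim, bytes_per_value=2):
    """Latent-attention style: cache one compressed vector per token per layer."""
    return layers * tokens * latent_dim * bytes_per_value

# Hypothetical model: 60 layers, 128k-token context, 128 heads of dim 128, fp16 cache
full = kv_cache_bytes(layers=60, tokens=128_000, heads=128, head_dim=128)
compressed = latent_cache_bytes(layers=60, tokens=128_000, latent_dim=512)
print(f"full KV cache:  {full / 1e9:.0f} GB")        # hundreds of GB
print(f"latent cache:   {compressed / 1e9:.1f} GB")  # single-digit GB
```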

I’m not sure I understood any of that.

The key implications of these breakthroughs — and the part you need to understand — only became apparent with V3, which added a new approach to load balancing (further reducing communications overhead) and multi-token prediction in training (further densifying each training step, again reducing overhead): V3 was shockingly cheap to train. DeepSeek claimed the model training took 2,788 thousand H800 GPU hours, which, at a cost of $2/GPU hour, comes out to a mere $5.576 million.

That seems impossibly low.

DeepSeek is clear that these costs are only for the final training run, and exclude all other expenses; from the V3 paper:

Lastly, we emphasize again the economical training costs of DeepSeek-V3, summarized in Table 1, achieved through our optimized co-design of algorithms, frameworks, and hardware. During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs. Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours. Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M. Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.

So no, you can’t replicate DeepSeek the company for $5.576 million.

I still don’t believe that number.

Actually, the burden of proof is on the doubters, at least once you understand the V3 architecture. Remember that bit about DeepSeekMoE: V3 has 671 billion parameters, but only 37 billion parameters in the active expert are computed per token; this equates to 333.3 billion FLOPs of compute per token. Here I should mention another DeepSeek innovation: while parameters were stored with BF16 or FP32 precision, they were reduced to FP8 precision for calculations; 2048 H800 GPUs have a capacity of 3.97 exaflops, i.e. 3.97 billion billion FLOPS. The training set, meanwhile, consisted of 14.8 trillion tokens; once you do all of the math it becomes apparent that 2.8 million H800 hours is sufficient for training V3. Again, this was just the final run, not the total cost, but it’s a plausible number.
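If you want to sanity-check that claim yourself, here is one common back-of-the-envelope version, using the standard estimate of roughly 6 FLOPs per active parameter per training token (my assumption, not necessarily the exact accounting Thompson has in mind):

```python
active_params = 37e9        # parameters computed per token (from the article)
tokens = 14.8e12            # V3 training set size (from the article)
gpu_hours = 2.788e6         # quoted H800 hours for the final run
h800_fp8_flops = 1.979e15   # assumed dense FP8 peak per H800, in FLOPS

train_flops = 6 * active_params * tokens                 # ~3.3e24 FLOPs
required_per_gpu = train_flops / (gpu_hours * 3600)      # FLOPS each GPU must sustain
utilization = required_per_gpu / h800_fp8_flops

print(f"total training compute: {train_flops:.2e} FLOPs")
print(f"needed per GPU:         {required_per_gpu:.2e} FLOPS")
print(f"implied utilization:    {utilization:.0%}")      # roughly 17% of peak
```

On this crude estimate, the quoted hours are more than enough, which is the point: the number is low but plausible.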

Scale AI CEO Alexandr Wang said they have 50,000 H100s.

I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs, they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.

Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.

Meanwhile, DeepSeek also makes their models available for inference: that requires a whole bunch of GPUs above-and-beyond whatever was used for training…

Is this why all of the Big Tech stock prices are down?

In the long run, model commoditization and cheaper inference — which DeepSeek has also demonstrated — is great for Big Tech. A world where Microsoft gets to provide inference to its customers for a fraction of the cost means that Microsoft has to spend less on data centers and GPUs, or, just as likely, sees dramatically higher usage given that inference is so much cheaper. Another big winner is Amazon: AWS has by-and-large failed to make their own quality model, but that doesn’t matter if there are very high quality open source models that they can serve at far lower costs than expected.

Apple is also a big winner. Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that. Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU (neural processing unit) have access to a shared pool of memory; this means that Apple’s high-end hardware actually has the best consumer chip for inference (Nvidia gaming GPUs max out at 32GB of VRAM, while Apple’s chips go up to 192 GB of RAM).

Meta, meanwhile, is the biggest winner of all. I already laid out last fall how every aspect of Meta’s business benefits from AI; a big barrier to realizing that vision is the cost of inference, which means that dramatically cheaper inference — and dramatically cheaper training, given the need for Meta to stay on the cutting edge — makes that vision much more achievable.

Google, meanwhile, is probably in worse shape: a world of decreased hardware requirements lessens the relative advantage they have from TPUs. More importantly, a world of zero-cost inference increases the viability and likelihood of products that displace search; granted, Google gets lower costs as well, but any change from the status quo is probably a net negative…

...How did DeepSeek make R1?

DeepSeek actually made two models: R1 and R1-Zero. I actually think that R1-Zero is the bigger deal…

…R1-Zero, however, drops the HF part — it’s just reinforcement learning. DeepSeek gave the model a set of math, code, and logic questions, and set two reward functions: one for the right answer, and one for the right format that utilized a thinking process. Moreover, the technique was a simple one: instead of trying to evaluate step-by-step (process supervision), or doing a search of all possible answers (a la AlphaGo), DeepSeek encouraged the model to try several different answers at a time and then graded them according to the two reward functions.

What emerged is a model that developed reasoning and chains-of-thought on its own…
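A highly simplified sketch (my own, not DeepSeek’s code) of the recipe described above: sample several candidate answers, score each with two rewards, one for a correct final answer and one for using the expected thinking format, and prefer the higher-scoring samples when updating the model.

```python
import re

def format_reward(output):
    """1 if the output wraps its reasoning and answer in the expected tags, else 0."""
    return 1.0 if re.search(r"<think>.*</think>\s*<answer>.*</answer>", output, re.S) else 0.0

def answer_reward(output, correct_answer):
    """1 if the text inside <answer> matches the reference answer, else 0."""
    m = re.search(r"<answer>(.*?)</answer>", output, re.S)
    return 1.0 if m and m.group(1).strip() == correct_answer else 0.0

def score_samples(samples, correct_answer):
    """Grade a group of sampled answers; their relative scores drive the policy update."""
    return [format_reward(s) + answer_reward(s, correct_answer) for s in samples]

samples = [
    "<think>2+2 is 5?</think> <answer>5</answer>",  # right format, wrong answer
    "<think>2+2 = 4</think> <answer>4</answer>",    # right format, right answer
    "the answer is 4",                              # no tags, so neither reward fires
]
print(score_samples(samples, "4"))  # [1.0, 2.0, 0.0]
```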

…Here again it seems plausible that DeepSeek benefited from distillation, particularly in terms of training R1. That, though, is itself an important takeaway: we have a situation where AI models are teaching AI models, and where AI models are teaching themselves.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms, Microsoft, Netflix, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 January 2025:

1. Thoughts On A Month With Devin – Hamel Husain, Isaac Flath, and Johno Whitaker

Unlike typical AI assistants, Devin operates through Slack and spins up its own computing environment. When you chat with Devin, you’re talking to an AI that has access to a full computing environment – complete with a web browser, code editor, and shell. It can install dependencies, read documentation, and even preview web applications it creates…

…The experience is designed to feel like chatting with a colleague. You describe what you want, and Devin starts working. Through Slack, you can watch it think through problems, ask for credentials when needed, and share links to completed work. Behind the scenes, it’s running in a Docker container, which gives it the isolation it needs to safely experiment while protecting your systems. Devin also provides a web interface, which allows you to gain access to its environment and watch it work with IDEs, web browsers, and more in real time…

…Our first task was straightforward but real: pull data from a Notion database into Google Sheets. Devin tackled this with surprising competence. It navigated to the Notion API documentation, understood what it needed, and guided me through setting up the necessary credentials in Google Cloud Console. Rather than just dumping API instructions, it walked me through each menu and button click needed – saving what would typically be tedious documentation sleuthing. The whole process took about an hour (but only a few minutes of human interaction). At the end, Devin shared a link to a perfectly formatted Google Sheet containing our data.

The code it produced was a bit verbose, but it worked. This felt like a glimpse into the future – an AI that could handle the “glue code” tasks that consume so much developer time. Johno had similar success using Devin to create a planet tracker for debunking claims about historical positions of Jupiter and Saturn. What made this particularly impressive was that he managed this entirely through his phone, with Devin handling all the heavy lifting of setting up the environment and writing the code…

…Over the course of a month, we systematically documented our attempts across these categories:

  1. Creating new projects from scratch
  2. Performing research tasks
  3. Analyzing & Modifying existing projects

The results were sobering. Out of 20 tasks, we had 14 failures, 3 successes (including our 2 initial ones), and 3 inconclusive results. Even more telling was that we couldn’t discern any pattern to predict which tasks would work. Tasks that seemed similar to our early successes would fail in unexpected ways…

…Working with Devin showed what autonomous AI development aspires to be. The UX is polished – chatting through Slack, watching it work asynchronously, seeing it set up environments and handle dependencies. When it worked, it was impressive.

But that’s the problem – it rarely worked. Out of 20 tasks we attempted, we saw 14 failures, 3 inconclusive results, and just 3 successes. More concerning was our inability to predict which tasks would succeed. Even tasks similar to our early wins would fail in complex, time-consuming ways…

…This reflects a pattern we’ve observed repeatedly in AI tooling. Social media excitement and company valuations have minimal relationship to real-world utility. We’ve found the most reliable signal comes from detailed stories of users shipping products and services. For now, we’re sticking with tools that let us drive the development process while providing AI assistance along the way.

2. Transcript: The Hidden History of Eurodollars, Part 1: Cold War Origins – Joe Weisenthal, Tracy Alloway, Lev Menand, and Josh Younger

Tracy (01:30):
It can be admittedly confusing. So why don’t we just define it right away. So eurodollars are dollar-denominated bank deposits held at foreign banks or overseas branches of US banks. And you can think of them as basically offshore dollars that sit outside the US banking system and kind of away from the Federal Reserve. They’re basically a very special form of money. You could call them shadow money.

Joe (01:57):
And it’s totally gigantic. So it’s almost $10 trillion. And I just find it so interesting, right? Because when I think of dollars, they’re either coming from, you know, the government spends dollars into existence or US bank credit. US banks [have a] license to de facto create dollars or deposits at will. And yet, eurodollars are kind of this weird thing, I guess because they’re not that.

Tracy (02:21):
Yeah, they’re not either of those. And eurodollars didn’t just spring up fully formed out of thin air. They were the result of a series of decisions all aimed at solving particular problems…

…Josh (04:27):
So eurodollars are among the most important financial instruments in the world and they are really the backbone of the global dollar system. But they come from very humble beginnings, very idiosyncratic start. And really it all started in Yugoslavia…

…So in 1945 in November, there’s a communist revolution and the US is miffed in a bunch of ways, but one of them is that the old government owes them money. And so the question is, how are they going to get it? And a few months later, Tito asked for his gold back because the Yugoslav government had $70 million worth of gold in New York. And the Secretary of State, who was George Marshall of the Marshall Plan, he realizes he’s got a bargaining chip, which is the gold. It’s in New York and they don’t get it back until they settle their claims.

Now, even people within the State Department were kind of skeptical of this; the Yugoslavian government is obviously furious. And so are the Russians who, at this point, you know, Tito and Stalin have a falling out eventually a few years later. But at this point, they’re quite closely aligned…

…The Russians get the sense that the US is willing to use gold as a bargaining chip. They’d previously actually been building up dollar balances in New York. There’s this kind of misnomer about the post-war period. There’s this sense that the Russians are extracting all their resources from the US, but they’re actually building up reserves of dollars because the thought is ‘We’re probably going to need to trade with these people. We have a trading company based in the US and they need resources.’ And so they’re building up foreign currency deposits and gold, but in 1947, they realize it’s not going to go well, potentially. And they pull all the gold out. They actually just call banks in New York and say ‘We want our gold back.’ A massive reversal of the policy.

And the question is, where’s it going to go? And so they need dollars because the US dollar is the currency of foreign exchange. If they want to trade with the West, they have to trade in dollars. They need gold because gold is the basis for the monetary system. And so the question is, where can they put gold and dollars in a safe place that’s still on the right side of what was then already known as the iron curtain?

And so it turns out Paris is the ticket. They’ve actually been secretly stockpiling cash and gold in Paris. They put it in briefcases. They would fly people to Paris and put it in the consulate offices. They would just build up piles of cash and gold. And in particular, there’s a bank — BCEN — I won’t try to do it in French. And BCEN is owned by, or run by, a notorious communist sympathizer, who has a very good relationship with the Politburo. And so this is a friendly bank. And so they take on deposit the Soviet money and BCEN’s moniker in the Telex system they used to communicate was “Eurobank.”

And so, eurodollars were initially, in the late forties, just deposits issued by Eurobank, BCEN, generally for the Soviets, although also for the Chinese. And slowly this starts to percolate. There’s another communist-owned bank in London. There’s one in Brussels, which the CIA just describes as run by ‘someone with few scruples,’ I think is the way they put it. And so there’s some friendlies across Europe who are willing to take their money and the eurodollar market begins this way, which is preemptive sanctions evasion, basically…

…And so the first use case of eurodollars is sanctions evasion. The second use is to facilitate cross-Iron Curtain trade, although that’s a pretty small business. And so the third, and much larger business, is cross-border interest rate arbitrage. And that sounds really technical, but what it’s really doing is using foreign exchange markets and derivative markets to source dollars that the UK in particular needs in this post-war environment.

So imagine a eurodollar bank, a euro bank, takes in a eurodollar deposit, which means it gets a dollar in cash — let’s think of a physical bill, that’s an asset. It issues a eurodollar liability. And then, what is it going to do next? Because it needs to do some sort of investing. And what it does is it exchanges that dollar asset for sterling cash, and it invests that sterling cash in some short-term sterling investment — short bills or something like that. And after it does that, it says ‘I want to hedge my foreign exchange risk, because now I have a dollar liability and a sterling asset. So I’m going to use the foreign exchange forward market to agree to sell that sterling back for dollars at some point in the future at a fixed price that we agree on today.’

So that’s the bank’s position. Who’s on the other side of that trade? Let’s say a corporation, a manufacturing entity, they make radios, and that radio production process requires inputs. Those inputs are imported. And so that radio production company needs dollars with which to buy the raw materials that it uses to make the radio that it then sells for dollars in foreign markets. And so, they get those dollars from the eurobank, in exchange for the sterling they have on hand, they go buy all the parts, but they want to make sure that they know how much they’re going to receive in local currency at the end of the production process. When they sell that radio abroad, they don’t want the value of the dollar to go down. So they sell those dollars forward in exchange for sterling. And so they’ve entered into a derivative agreement, which is the opposite of the one that the euro bank has or the euro banking system.

And so then they put together the radio, they sell it abroad, they receive dollar proceeds, they turn those into sterling, which is what they pay their employees in, that’s what they pay for their land and equipment in. And that exchange rate was the one they agreed upon in advance through the foreign exchange forward contract. And so, basically what’s happening is the euro banks are pulling in dollars from abroad, distributing them through the foreign exchange market that’s trading onshore to those that need dollars today, and then providing hedges to those that will receive dollars in the future. And in the case of the euro bank, the dollars they’ll owe in the future, potentially, to their eurodollar deposit holder.
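A stylised numerical sketch (my numbers, not from the episode) of the hedged round trip Josh describes: the euro bank takes in a dollar deposit, swaps into sterling at the spot rate, earns the sterling bill rate, and locks in a forward rate today so it knows exactly how many dollars it will have when the eurodollar deposit comes due.

```python
def hedged_dollar_return(spot_usd_per_gbp, forward_usd_per_gbp, sterling_rate, dollars_in=1_000_000):
    """Dollars -> sterling at spot, earn the sterling bill rate,
    then convert back at the pre-agreed forward rate."""
    sterling = dollars_in / spot_usd_per_gbp
    sterling_at_maturity = sterling * (1 + sterling_rate)
    dollars_out = sterling_at_maturity * forward_usd_per_gbp
    return dollars_out / dollars_in - 1

# Hypothetical 1950s-flavoured numbers: spot $2.80/GBP, one-year forward $2.76/GBP, UK bills at 4%
r = hedged_dollar_return(spot_usd_per_gbp=2.80, forward_usd_per_gbp=2.76, sterling_rate=0.04)
print(f"locked-in dollar return: {r:.2%}")  # about 2.5%
```

As long as that locked-in dollar return exceeds what the bank pays on the eurodollar deposit, the trade works, and the radio maker in the example is simply the natural counterparty on the forward.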

Lev (18:32):
Think about this from the perspective of the City of London coming out of the war and those bankers and the world that they grew up in, which is a world that we’ve completely forgotten, but was the world of sterling dominance before the First World War and the role that the empire played in financing global trade.

What we’re looking at in the 1950s is a group of London-based financial institutions trying to figure out a way to continue their dominance in a global economy that runs on dollars now and not on sterling. And so, the eurodollars are sort of worth the risk to the City of London, and to some extent to UK financial regulators like the Bank of England, because they need to fix their business model for a dollar world, and they want to get in on the dollar world…

…Josh (20:43):
And so this cross-border interest rate arbitrage is really just the way markets distribute the currency according to who needs it and provide the hedges that facilitate the functioning of British corporations as well. It’s what we’d call now like a use case, right? This is like a real underlying use case that doesn’t involve the Soviet Union for dollar deposits issued by non-US banks, which is, you can’t emphasize enough how fundamentally strange that is because if I tried to make dollars by writing it on a piece of paper, I don’t think I’d get very far. But at the time, that’s essentially what these banks are doing.

And in particular London is a more, let’s say, reputable locale, particularly banks that are not known to be communist sympathizers. There’s a little bit of a funny thing about being a communist bank, but we won’t get into that specifically, but these are blue chip banks in London issuing dollar deposits. And that means you can use them for things and you can feel more comfortable…

…Lev (26:54):
Although, just let’s size this a little bit, right? It was a billion dollars in, say, 1960, which is maybe the equivalent of $50 billion today…

…So we have way more to go in terms of the growth of this market subsequent to 1960. It’s still pretty nascent in 1960…

…Josh (31:08):
So the question at this point is, it’s a nascent market, it’s half a Tether, and it’s unclear whether or not it’ll become a big major global actor. We know it eventually becomes that, but at the time, that’s super unclear. But it eventually, and soon, becomes the solution to a big problem. So eurodollars are the solution to a big problem because, in the background of all of this buildup, there’s massive trouble brewing and the whole global edifice of the dollar system is starting to crack.

And the question is, you know, how are we going to save it? Or should we?

3. Emergent Layers, Chapter 1: Scarcity, Abstraction & Abundance – Alex Danco

One foundational principle of the tech world is that as it builds upwards and outwards into the rest of the world, it’s doing so by building on top of these abundant resources and progressively leveraging them. We can think about the world that we know and understand today — with its constraints, and business models and maturing industries that are generally understood by all — as forming a layer, which we’ll call layer i. In time, as certain elements become abstracted and subsequently abundant, others emerge as newly scarce, or in play for new reasons and in new business models. The critical skill for understanding how this works (which is worth practicing!) is being able to work one’s way up and down between stack layers so as to understand when an abundant and scalable element has blossomed at layer i of a stack, and its scarce, non-scalable counterpart has emerged at a new layer — which we’ll call layer i+1…

…Microsoft

The original scarce resource at layer i = PC hardware. In the early days of PCs, manufacturers could compete along many axes of performance — memory, speed, functionality, and so forth — while being sufficiently differentiated from one another. But it was very hard to standardize common functions and applications that people could run across any computer, making it difficult for these use cases to grow rapidly — until Bill Gates and Paul Allen realized, Hey, there isn’t a software industry yet but there’s gonna be, so we should start it. Microsoft abstracted away the capabilities of a computer into software, so now anyone else could write their own software on top of Microsoft’s software without having to worry about the underlying machinery. PCs became an abundantly available commodity, and Microsoft became dominant and mega-profitable. A new scarce resource emerged at layer i+1: the ability to connect these PCs and get them to talk to one another…

…Facebook

Scarce resource at layer i = connections between humans using the internet. The internet was awash in people and content, but authentic human interaction was still relatively scarce and difficult. As such, all of the attempts at connecting people to content and advertising and services were feature-stuffed, spammy, bloated and bad. The critical step forward that Facebook accomplished was abstracting away the “reciprocal friendship” into a functioning social graph. And we’ve seen what’s happened since: Facebook, and social connectivity in general, has exploded and become a newly abundant resource. Facebook became dominant and mega-profitable…

…One critical aspect of this layering is that at each higher level of abstraction, the lever with which one can create value and extract profit becomes successively longer. You can see this by looking at market cap per employee of these dominant companies:

Intel: 106k employees, 55B revenue, 149B mkt cap

Microsoft: 120k employees, 93B revenue, 429B mkt cap

Google / Alphabet: 60k employees 75B revenue, 510B mkt cap

Facebook: 13k employees, 6B revenue, 320B mkt cap…
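
A quick calculation using the figures quoted above (not in the original essay) makes the lengthening lever explicit:

```python
# Market cap per employee, using the figures quoted in the excerpt above.
companies = {
    # name: (employees, revenue in $B, market cap in $B)
    "Intel":     (106_000, 55, 149),
    "Microsoft": (120_000, 93, 429),
    "Alphabet":  (60_000,  75, 510),
    "Facebook":  (13_000,   6, 320),
}

for name, (employees, revenue_b, mkt_cap_b) in companies.items():
    per_employee = mkt_cap_b * 1e9 / employees
    print(f"{name:<10} ~${per_employee / 1e6:.1f}M of market cap per employee")
```

Each step up the stack supports roughly 2.5x to 3x more market cap per employee than the layer below it, which is the “successively longer lever” the author describes.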

…A non-obvious but critical point to appreciate here is that for the first n movers mobilizing around a scarce element, the arrival and eventual dominance of the last mover will be seen as a Black Swan event of sorts. By abstracting away the scarce resource instead of organizing around its scarcity, these companies become the first to be fully playing in the sandbox at level i+1, as opposed to the non-scalable scarcity-governed sandbox at level i…

…The last decade saw plenty of startups go after the transportation market, and I’m sure all of them described themselves as “scalable” in their investor decks. Meanwhile, the whole valley was busy passing on Uber because it was initially just a better way to do a black car service, and few people understood the true scalable potential in abstracting away the driver-rider trust required for UberX. The take home lesson here should be taken to heart: when the first n companies go after an issue, no matter what language they use in their pitch, their business models typically don’t truly venture beyond the constraints at layer i that anybody can see and understand. They’re easier to work through, make more sense to “rational investors”, and require fewer non-linear leaps of thinking to understand. As such, when the last mover emerges at level i+1, they’re a Black Swan event: few people foresaw their opportunity, their impact is enormous, and everybody rationalizes what happened after the fact…

…At level i+1 of the stack, the newly valuable resource is that which emerges as scarce out of the transition from scarcity to abstraction to abundance at layer i.

4. The Default Position: LevFin’s Latest Game Just Got Shut Down…Sort Of – JunkBondInvestor

Serta was no small player. We’re talking about the company behind Serta and Beautyrest—the beds you see in every department store in America. But by 2020, they were in serious trouble. Drowning in debt and sales were tanking.

That’s when a group of savvy lenders saw their opportunity. Already holding a chunk of Serta’s debt, they approached with what would become lawyers’ new favorite playbook.

The deal? A group holding 51% of their term loans would provide new money, but only if they got to exchange their old loans for new “super-senior” debt that jumps to the front of the line. The other 49%? They didn’t even get a phone call.

Here’s a sobering fact: the new super-senior debt was worth nearly full value, while the non-participating lenders were subordinated so deeply that their recovery prospects cratered.

But here’s where they screwed up.

Their loan agreement only allowed “open market purchases.” Serta’s lawyers tried arguing that their private backroom deal counted as “open market” because… well, just because.

The Fifth Circuit wasn’t having any of it. They said what everyone was thinking: A private deal with hand-picked lenders isn’t an “open market” any more than a private club is a public park…

…On the exact same day—I’m not making this up—a New York court looked at pretty much the identical deal from Mitel Networks and said “Sure, go right ahead.”…

…Mitel pulled the exact same move as Serta. They were drowning in debt, so they cut a deal with friendly lenders to jump them to the front of the line. New super-priority debt paper. Everyone else got pushed to the back.

So what made this different from Serta?

Three words. That’s it. Instead of requiring “open market purchases,” Mitel’s agreement just said they could “purchase by way of assignment.” No mention of open markets anywhere.

The New York court basically said: “Look, if you didn’t want the company doing private deals, you should have said so in the contract.” Those excluded lenders who were screaming about their “sacred rights”? The court told them their rights weren’t so sacred after all.

Here’s the brutal truth—the same transaction either flies or dies based entirely on a few words in your documents. If that doesn’t scare the hell out of every lender out there, it should.

5. Tyler Cowen – The #1 Bottleneck to AI progress Is Humans – Dwarkesh Patel and Tyler Cowen

Dwarkesh Patel 00:00:11
Why won’t we have explosive economic growth, 20% plus, because of AI?

Tyler Cowen 00:00:17
It’s very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can’t use AI very well.

Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I’m guessing education is 6 to 7%. The nonprofit sector, I’m not sure the number, but you add it all up, that’s half of the economy right there.

How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you’ll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.

Dwarkesh Patel 00:01:04
The mechanism behind cost disease is that there’s a limited amount of laborers, and if there’s one high productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?

Tyler Cowen 00:01:25
Cost disease is more general than that. Let’s say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.

Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.
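
A toy illustration of the point (ours, not Cowen’s): if output depends on several complementary factors of production, multiplying just one of them, intelligence, has a sharply diminishing effect because the unchanged factors become the binding constraints.

```python
# Toy illustration: output from five complementary factors (equal-weight Cobb-Douglas).
def output(factors, weight=0.2):
    result = 1.0
    for f in factors:
        result *= f ** weight
    return result

baseline = [1.0, 1.0, 1.0, 1.0, 1.0]               # five factors, indexed to 1
more_intelligence = [1000.0, 1.0, 1.0, 1.0, 1.0]   # one factor scaled 1,000x

print(f"Baseline output:                  {output(baseline):.2f}")
print(f"Output with 1,000x on one factor: {output(more_intelligence):.2f}")
# A 1,000x increase in a single input lifts output only ~4x (1000^0.2),
# because the other, unchanged constraints now bind.
```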

Dwarkesh Patel 00:01:57
If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you’d have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?

Tyler Cowen 00:02:11
He and I would agree, I hope. I think I would tell him, “Hey, it’s going to take a long time.” And he’d say, “Hmm, I don’t see it happening yet. I think it’s going to take a long time.” And we’d shake hands and walk off into the sunset. And then I’d eat some of his rice or wheat or whatever, and that would be awesome.

Dwarkesh Patel 00:02:29
But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don’t just eat it away, you could agree with that, right?

Tyler Cowen 00:02:38
I don’t know what the word “could” means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you’d call prediction markets are not forecasting super rapid growth anytime soon…

…Dwarkesh Patel 00:03:13
In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order of magnitude increase in the population, you plug that number in in his model, you get explosive economic growth.

Tyler Cowen 00:03:26
I don’t agree.

Dwarkesh Patel 00:03:27
Why not buy the models?

Tyler Cowen 00:03:28
His model is far too much a one-factor model, right? Population. I don’t think it’s very predictive. We’ve had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.

So it’s really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it’s unclear what’s happened, but it’s also fragile. There’s the perspective of the economist, but also that of the anthropologist, the sociologist.

They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there’s any simple lever you can push on, intelligence or not, that’s going to give you breakaway economic growth.

Dwarkesh Patel 00:04:11
What you just said, where you’re bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you’re going to be bottlenecked by the restaurants…

…Here’s a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.

We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.

Dwarkesh Patel 00:04:53
So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you have 164,000 geniuses. That’s why you have to do semiconductors in Taiwan, because that’s where they’re putting their nominal amount of geniuses. We’re putting ours in finance and tech.

If you look at that framework, we have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn’t going to grow any faster?

Tyler Cowen 00:05:32
I didn’t agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they’re amazingly low. They’re pretty insignificant.

People who are very successful, they’re very smart, but they’re people who have, say, eight or nine areas where they’re, like, a nine on a scale of 1 to 10. They have one area where they’re just like an 11 and a half on a scale of 1 to 10. And then on everything else, they’re an eight to a nine and have a lot of determination.

And that’s what leads to incredible success. And IQ is one of those things, but it’s not actually that important. It’s the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 January 2025:

1. OpenAI o3 Breakthrough High Score on ARC-AGI-Pub – François Chollet

OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.

This is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All intuition about AI capabilities will need to get updated for o3…

…The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (costs <$10k) and therefore qualifies as 1st place on the public leaderboard!

The low-efficiency score of 87.5% is quite expensive, but still shows that performance on novel tasks does improve with increased compute (at least up to this level).

Despite the significant cost per task, these numbers aren’t just the result of applying brute force compute to the benchmark. OpenAI’s new o3 model represents a significant leap forward in AI’s ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.

Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: you could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy. Meanwhile o3 requires $17-20 per task in the low-compute mode. But cost-performance will likely improve quite dramatically over the next few months and years, so you should plan for these capabilities to become competitive with human work within a fairly short timeline.

o3’s improvement over the GPT series proves that architecture is everything. You couldn’t throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 – take the same architecture, train a bigger version on more data – is not enough. Further progress is about new ideas…

…Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible…

…To adapt to novelty, you need two things. First, you need knowledge – a set of reusable functions or programs to draw upon. LLMs have more than enough of that. Second, you need the ability to recombine these functions into a brand new program when facing a new task – a program that models the task at hand. Program synthesis. LLMs have long lacked this feature. The o series of models fixes that.

For now, we can only speculate about the exact specifics of how o3 works. But o3’s core mechanism appears to be natural language program search and execution within token space – at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea – this line of work has been a long time coming.

So while single-generation LLMs struggle with novelty, o3 overcomes this by generating and executing its own programs, where the program itself (the CoT) becomes the artifact of knowledge recombination. Although this is not the only viable approach to test-time knowledge recombination (you could also do test-time training, or search in latent space), it represents the current state-of-the-art as per these new ARC-AGI numbers.

Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of “programs” (in this case, natural language programs – the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason why solving a single ARC-AGI task can end up taking up tens of millions of tokens and cost thousands of dollars is because this search process has to explore an enormous number of paths through program space – including backtracking.
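
Since Chollet stresses that this is speculation about how o3 works, any code can only be a caricature. The sketch below (ours, not from the article) shows a hypothetical best-first search over candidate chains of thought; propose() stands in for a generator LLM and score() for an evaluator model, and neither corresponds to any real API.

```python
import heapq
import random

def propose(cot, n=3):
    """Stand-in for an LLM proposing n candidate next reasoning steps."""
    return [cot + [f"step-{len(cot)}-{i}"] for i in range(n)]

def score(cot):
    """Stand-in for an evaluator model rating how promising a partial chain of thought is."""
    return random.random() + 0.1 * len(cot)

def solves_task(cot):
    """Stand-in for checking whether a completed chain of thought solves the task."""
    return len(cot) >= 5 and random.random() > 0.8

def best_first_cot_search(budget=200):
    # Max-heap via negated scores; each frontier entry is a partial chain of thought.
    frontier = [(-score([]), [])]
    explored = 0
    while frontier and explored < budget:
        _, cot = heapq.heappop(frontier)
        explored += 1
        if solves_task(cot):
            return cot, explored
        for candidate in propose(cot):
            heapq.heappush(frontier, (-score(candidate), candidate))
    return None, explored

solution, n_explored = best_first_cot_search()
print(f"Explored {n_explored} partial chains of thought; solution: {solution}")
```

The expensive part, as the article notes, is that such a search can explore an enormous number of paths through program space, which is consistent with a single task consuming tens of millions of tokens.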

2. Energy Cheat Sheet – Brian Potter

Most energy we consume gets wasted. Of the 93.6 quads (~27,400 TWh) the US consumed in 2023, only around 1/3rd of that went towards producing useful work. The rest was lost due to various inefficiencies, such as heat engine and transmission losses…

…Another obvious fact is that despite the burgeoning construction of renewable energy infrastructure, the majority of our energy still comes from burning hydrocarbons. Petroleum, coal, and natural gas combined are responsible for roughly 82% of total energy consumption in the US.

Related to this fact is that electricity generation is a relatively small fraction of our energy system: roughly ⅓ of energy inputs go towards generating electricity. For residential and commercial consumption, only around half of energy use comes from electricity. For industrial and transportation energy (the two largest sources of consumption), electricity accounts for around 13% and less than 0.1% respectively.

What this chart makes clear, but also sort of abstracts away, is the enormous amount of infrastructure we’ve built for moving around hydrocarbons. The US has close to 1 million oil and natural gas wells, 3 million miles of natural gas pipeline, 145,000 gas stations, and capacity to refine 18.4 million barrels of oil a day.

This is why environmental advocates often focus on electrifying everything: decarbonizing energy infrastructure requires much more than just building low-carbon sources of energy like solar panels and wind turbines — it requires fundamentally reworking how our society moves energy around. It’s also why eliminating roadblocks and bottlenecks to energy infrastructure construction is so important.

We can also dive deeper and look at a sector-by-sector breakdown of energy use. The residential sector uses around 11.5 quads (3370 TWh) of energy, a little over 12% of total US energy consumption…

…One major takeaway here is that most residential energy consumption goes into heating things up: Space heating (5.74 quads), water heating (1.69 quads), and clothes dryers (0.26 quads) together account for ⅔rds of residential energy consumption. You sometimes see air conditioners decried as wasteful by energy-minded environmentalists, but air conditioning is a much smaller share of energy consumption than heating…

…Most transportation energy in the US is consumed in the form of gasoline and diesel fuel, with a relatively small amount of jet fuel. If we look at it by transportation mode, most energy (~78%) is consumed by cars, trucks, and motorcycles…

…The huge amount of energy used by transportation also means that households are using a lot of energy that isn’t captured by the residential energy consumption statistics above. In fact, in a year, the average US household consumes more energy from burning gasoline (~24,000 kilowatt-hours) than what’s used by the entire rest of the house (~22,500 kilowatt-hours).
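
As a quick check on that gasoline figure (ours, not the article’s): gasoline holds roughly 33.7 kWh of energy per US gallon, so ~24,000 kWh works out to a bit over 700 gallons a year per household.

```python
# Back-of-the-envelope check on the household gasoline figure quoted above.
KWH_PER_GALLON_GASOLINE = 33.7   # approximate energy content of a US gallon of gasoline

household_gasoline_kwh = 24_000  # figure quoted in the article
rest_of_house_kwh = 22_500       # figure quoted in the article

gallons_per_year = household_gasoline_kwh / KWH_PER_GALLON_GASOLINE
print(f"~{gallons_per_year:.0f} gallons of gasoline per household per year")
print(f"Gasoline energy vs rest of house: {household_gasoline_kwh / rest_of_house_kwh:.2f}x")
```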

The commercial sector is not that different from the residential sector, with heating air and water using the largest fraction, and cooling and ventilation (i.e., moving air around) also using large fractions. As with residential, its energy consumption is roughly split between electricity and natural gas…

…With industrial energy use, we see a lot of the same patterns that we see in other sectors. One is that utility electricity is a relatively small amount of industrial energy consumption (less than 20%). Most industrial energy comes from burning fuel (mostly natural gas) directly. Once again, we see that heating things up accounts for a huge fraction of energy consumption: roughly half of all manufacturing energy goes into process heating. If we add process heat to residential and commercial air and water heating, we find that roughly 20% of total US energy consumption goes towards heating things up…

…It’s clear that most energy used in the US is ultimately wasted, with only a small fraction being used to perform useful work (moving cars, heating homes, operating electronics, and so on). Moving energy around and changing its form can’t be done perfectly efficiently (thanks in part to the 2nd law of thermodynamics), and all those conversions we require to get energy where it needs to be and in the form we need it whittle away the energy available to get things done…

…The biggest source of losses is probably heat engine inefficiencies. In our hydrocarbon-based energy economy, we often need to transform energy by burning fuel and converting the heat into useful work. There are limits to how efficiently we can transform heat into mechanical work (for more about how heat engines work, see my essay about gas turbines).

The thermal efficiency of an engine is the fraction of heat energy it can transform into useful work. A coal power plant typically operates at around 30 to 40% thermal efficiency. A combined cycle gas turbine will hit closer to 60% thermal efficiency. A gas-powered car, on the other hand, operates at around 25% thermal efficiency. The large fraction of energy lost by heat engines is why some thermal electricity generation plants list their capacity in MWe, the power output in megawatts of electricity…

…The low thermal efficiency of ICE cars and heat engines in general and the high efficiency of electrical equipment (especially things like heat pumps) are the biggest counterweight to the high energy capacity of hydrocarbons. The gas tank on an ICE car technically stores much more energy than a Tesla battery pack but only a small fraction of that gasoline energy can be converted into useful motion. Switching to EVs, even if that electricity is still provided by burning fossil fuels, could save large amounts of energy (and thus carbon emissions), as it could mean switching from a 25% efficient gasoline engine to a 60% efficient combined cycle gas turbine. And of course, with electric vehicles, there’s the possibility of powering them by non-carbon emitting sources of electricity like solar or wind. 
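
A minimal sketch of that last comparison (ours, not the article’s): the engine and turbine efficiencies are the ones quoted above, while the grid and charging losses are assumed round numbers, so treat the result as illustrative rather than measured.

```python
# Useful work per unit of fuel energy: gasoline car vs EV charged from a gas turbine.
fuel_energy = 100.0                        # arbitrary units of hydrocarbon energy at the source

# Gasoline car path (engine efficiency from the article).
ice_efficiency = 0.25
ice_useful_work = fuel_energy * ice_efficiency

# EV path: combined cycle gas turbine (from the article), then assumed downstream losses.
ccgt_efficiency = 0.60
grid_transmission_efficiency = 0.95        # assumed
charging_and_drivetrain_efficiency = 0.85  # assumed
ev_useful_work = (fuel_energy * ccgt_efficiency
                  * grid_transmission_efficiency
                  * charging_and_drivetrain_efficiency)

print(f"Gasoline car useful work: {ice_useful_work:.1f} of {fuel_energy:.0f}")
print(f"EV via gas turbine:       {ev_useful_work:.1f} of {fuel_energy:.0f}")
print(f"Improvement factor:       {ev_useful_work / ice_useful_work:.1f}x")
```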

3. Stocks Are More Expensive Than They Used to Be – Michael Batnick

In January 2018, they wrote an article, CAPE Fear: Why CAPE Naysayers Are Wrong. The article featured yours truly…

…It’s hard to believe seven years have passed since this article. It’s harder to believe that the S&P 500 is up almost 100% since their article came out, and delivered the highest 7-year performance for any CAPE starting at 33x. I did not see this coming. At all.

My whole thing was, yes, valuations are high. But companies are better today and deserve the premium multiple. I was not saying that a high CAPE is bullish. In fact, I ended most of my posts on this topic with the message of, “Expect lower returns.” I’ve never been happier to be wrong.

I want to return to some of the arguments I made, and what the CAPE zealots missed.

To use a long-term average that goes back to the late 1800s is foolish for three reasons. First, we didn’t have CAPE data back in 1929. It was first “discovered” in the late 90s. The discovery of data in financial markets changes the very essence of it. Markets are not governed by the laws of physics. They’re alive. They adapt and evolve and adjust, like a microorganism.

Second, the CAPE ratio has been rising over time since the 1980s. We’ve only visited the long-term average once in the last 25 years, and that was at the bottom of the GFC. If that’s what it takes to return to the long-term average, maybe you should reconsider what an appropriate comp level really is.

Third, and most important, the companies are far better today than they were in the past.
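
For reference, a minimal sketch (ours, not Batnick’s) of how the Shiller CAPE is computed: the current price divided by the ten-year average of inflation-adjusted earnings. The inputs below are made up purely to show the mechanics.

```python
# Minimal sketch of the Shiller CAPE calculation; all inputs are illustrative, not real S&P 500 data.
current_price = 4800.0

# Ten years of nominal earnings per share and the CPI level in each of those years.
nominal_eps = [110, 115, 105, 130, 140, 135, 160, 175, 180, 200]
cpi =         [233, 237, 240, 245, 251, 256, 258, 271, 293, 305]
cpi_today = 310

# Inflation-adjust each year's earnings to today's dollars, then average.
real_eps = [eps * cpi_today / c for eps, c in zip(nominal_eps, cpi)]
avg_real_eps = sum(real_eps) / len(real_eps)

cape = current_price / avg_real_eps
print(f"10-year average real EPS: {avg_real_eps:.1f}")
print(f"CAPE: {cape:.1f}x")
```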

4. AI’s Uneven Arrival – Ben Thompson

What o3 and inference-time scaling point to is something different: AI’s that can actually be given tasks and trusted to complete them. This, by extension, looks a lot more like an independent worker than an assistant — ammunition, rather than a rifle sight. That may seem an odd analogy, but it comes from a talk Keith Rabois gave at Stanford:

So I like this idea of barrels and ammunition. Most companies, once they get into hiring mode…just hire a lot of people, you expect that when you add more people your horsepower or your velocity of shipping things is going to increase. Turns out it doesn’t work that way. When you hire more engineers you don’t get that much more done. You actually sometimes get less done. You hire more designers, you definitely don’t get more done, you get less done in a day.

The reason why is because most great people actually are ammunition. But what you need in your company are barrels. And you can only shoot through the number of unique barrels that you have. That’s how the velocity of your company improves is adding barrels. Then you stock them with ammunition, then you can do a lot. You go from one barrel company, which is mostly how you start, to a two barrel company, suddenly you get twice as many things done in a day, per week, per quarter. If you go to three barrels, great. If you go to four barrels, awesome. Barrels are very difficult to find. But when you have them, give them lots of equity. Promote them, take them to dinner every week, because they are virtually irreplaceable. They are also very culturally specific. So a barrel at one company may not be a barrel at another company because one of the ways, the definition of a barrel is, they can take an idea from conception and take it all the way to shipping and bring people with them. And that’s a very cultural skill set.

The promise of AI generally, and inference-time scaling models in particular, is that they can be ammunition; in this context, the costs — even marginal ones — will in the long run be immaterial compared to the costs of people, particularly once you factor in non-salary costs like coordination and motivation…

…What will become clear once AI ammunition becomes available is just how unsuited most companies are for high precision agents, just as P&G was unsuited for highly-targeted advertising. No matter how well-documented a company’s processes might be, it will become clear that there are massive gaps that were filled through experience and tacit knowledge by the human ammunition.

SaaS companies, meanwhile, are the ad agencies. The ad agencies had value by providing a means for advertisers to scale to all sorts of media across geographies; SaaS companies have value by giving human ammunition software to do their job. Ad agencies, meanwhile, made money by charging a commission on the advertising they bought; SaaS companies make money by charging a per-seat licensing fee. Look again at that S-1 excerpt I opened with:

Our business model focuses on maximizing the lifetime value of a customer relationship. We make significant investments in acquiring new customers and believe that we will be able to achieve a positive return on these investments by retaining customers and expanding the size of our deployments within our customer base over time…

The positive return on investment comes from retaining and increasing seat licenses; those seats, however, are proxies for actually getting work done, just as advertising was just a proxy for actually selling something. Part of what made direct response digital advertising fundamentally different is that it was tied to actually making a sale, as opposed to lifting brand awareness, which is a proxy for the ultimate goal of increasing revenue. To that end, AI — particularly AI’s like o3 that scale with compute — will be priced according to the value of the task they complete; the amount that companies will pay for inference time compute will be a function of how much the task is worth. This is analogous to digital ads that are priced by conversion, not CPM.

The companies that actually leveraged that capability, however, were not, at least for a good long while, the companies that dominated the old advertising paradigm. Facebook became a juggernaut by creating its own customer base, not by being the advertising platform of choice for companies like P&G; meanwhile, TV and the economy built on it stayed relevant far longer than anyone expected. And, by the time TV truly collapsed, both the old guard and digital advertising had evolved to the point that they could work together.

If something similar plays out with AI agents, then the most important AI customers will primarily be new companies, and probably a lot of them will be long tail type entities that take the barrel and ammunition analogy to its logical extreme. Traditional companies, meanwhile, will struggle to incorporate AI (outside of wholesale job replacement a la the mainframe); the true AI takeover of enterprises that retain real world differentiation will likely take years.

None of this is to diminish what is coming with AI; rather, as the saying goes, the future may arrive but be unevenly distributed, and, contrary to what you might think, the larger and more successful a company is the less they may benefit in the short term. Everything that makes a company work today is about harnessing people — and the entire SaaS ecosystem is predicated on monetizing this reality; the entities that will truly leverage AI, however, will not be the ones that replace them, but start without them.

5. Don’t let interest-rate predictions dictate your investment decisions – Chin Hui Leong

A little over a year ago, the US Federal Reserve signalled its intention to cut interest rates three times in 2024. This commentary sparked a flurry of predictions, with market watchers vying to outguess the Fed on the number, timing, and size of these cuts. Goldman Sachs, for instance, boldly predicted five cuts.

We ended up with just three interest-rate cuts in 2024 – a significant miss, to say the least…

…According to Visual Capitalist, four firms – Morgan Stanley, Bank of America, Citigroup and Nomura – pencilled in a one-percentage-point cut for 2024. Credit should be given where it’s due: their forecasts were right.

However, did getting these predictions right matter in the end? As it turns out, not so much.

Morgan Stanley, Bank of America and Citi set 2024’s S&P 500 price targets at 4,500, 5,000 and 5,100 respectively… 

…The S&P 500, of course, closed the year at 5,881…

…Forecasts and expectations may look similar, but they are different. My friend Eugene Ng puts it best: Forecasts rely on knowing when something will occur. Expectations, on the other hand, are the acknowledgement of what’s likely to occur without professing insight into when it will happen.

For example, it’s reasonable to expect the stock market to fall by 10 per cent or more sometime in the future. After all, history has shown that corrections are a common occurrence…

…In my eyes, calmness can be achieved by having the right expectations, and preparing well for any market turbulence even when we don’t know when the market will fall.

If you are prepared, you will have fewer worries. If you worry less, you will stand a better chance of doing better than average. And that’s more than any investor can hope for, whether the forecasts are right or wrong.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Deepmind), Meta Platforms (parent of Facebook), and Tesla. Holdings are subject to change at any time.