What We’re Reading (Week Ending 10 September 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 10 September 2023:

1. Rediscovering Berkshire Hathaway’s 1985 Annual Meeting – Kingswell

In the mid-1980s, Berkshire Hathaway’s annual meeting was an entirely different beast than today’s weekend-long “Woodstock for Capitalists”. Attendees didn’t have to book their hotel rooms months in advance or wake up before dawn just to get in line outside of an arena. There was no mad rush for seats once the doors opened.

It was a quieter, simpler chapter in Berkshire’s history.

So quiet, in fact, that 1985’s annual meeting was held on a Tuesday. And, instead of a cavernous arena, Warren Buffett and Charlie Munger opted for the Red Lion Inn in downtown Omaha. Approximately 250 shareholders attended the meeting and the ensuing Q&A session lasted only — only? — two hours…

HOW TO VALUE A BUSINESS: “Do a lot of reading,” replied Buffett.

Generally speaking, he recommended the writings of Benjamin Graham and Philip Fisher for those trying to sharpen their investment mindset — and annual reports and trade magazines for evaluating particular businesses and industries.

Reading, he insisted, is more important than speaking with company executives or other investors. In fact, Buffett admitted that he had recently purchased a substantial amount of Exxon stock before talking to any of that company’s executives. “You’re not going to get any brilliant insights walking into the [Exxon] building,” he said.

And, at least in the money game, size matters.

It’s easier to determine the value of a large business than a small one, Buffett said. If someone buys a gas station, for example, another station opening across the street could have a major effect on the value of the first station…

A COUPLE OF LAUGHS & A ROUND OF APPLAUSE: No annual meeting is ever complete without some of that trademark Warren Buffett wit.

  • Will the federal deficit be substantially reduced? “I’ll believe it when I see it.”
  • What about so-called “junk” bonds? “I think they’ll live up to their name,” Buffett quipped.

2. Germany Is Losing Its Mojo. Finding It Again Won’t Be Easy – Bojan Pancevski, Paul Hannon, and William Boston

Two decades ago, Germany revived its moribund economy and became a manufacturing powerhouse of an era of globalization.

Times changed. Germany didn’t keep up. Now Europe’s biggest economy has to reinvent itself again. But its fractured political class is struggling to find answers to a dizzying conjunction of long-term headaches and short-term crises, leading to a growing sense of malaise.

Germany will be the world’s only major economy to contract in 2023, with even sanctioned Russia experiencing growth, according to the International Monetary Fund…

…At Germany’s biggest carmaker Volkswagen, top executives shared a dire assessment on an internal conference call in July, according to people familiar with the event. Exploding costs, falling demand and new rivals such as Tesla and Chinese electric-car makers are making for a “perfect storm,” a divisional chief told his colleagues, adding: “The roof is on fire.”

The problems aren’t new. Germany’s manufacturing output and its gross domestic product have stagnated since 2018, suggesting that its long-successful model has lost its mojo.

China was for years a major driver of Germany’s export boom. A rapidly industrializing China bought up all the capital goods that Germany could make. But China’s investment-heavy growth model has been approaching its limits for years. Growth and demand for imports have faltered…

…Germany’s long industrial boom led to complacency about its domestic weaknesses, from an aging labor force to sclerotic services sectors and mounting bureaucracy. The country was doing better at supporting old industries such as cars, machinery and chemicals than at fostering new ones, such as digital technology. Germany’s only major software company, SAP, was founded in 1975.

Years of skimping on public investment have led to fraying infrastructure, an increasingly mediocre education system and poor high-speed internet and mobile-phone connectivity compared with other advanced economies.

Germany’s once-efficient trains have become a byword for lateness. The public administration’s continued reliance on fax machines became a national joke. Even the national soccer teams are being routinely beaten…

…Germany today is in the midst of another cycle of success, stagnation and pressure for reforms, said Josef Joffe, a longtime newspaper publisher and a fellow at Stanford University.

“Germany will bounce back, but it suffers from two longer-term ailments: above all its failure to transform an old-industry system into a knowledge economy, and an irrational energy policy,” Joffe said…

…Germany still has many strengths. Its deep reservoir of technical and engineering know-how and its specialty in capital goods still put it in a position to profit from future growth in many emerging economies. Its labor-market reforms have greatly improved the share of the population that has a job. The national debt is lower than that of most of its peers and financial markets view its bonds as among the world’s safest assets.

The country’s challenges now are less severe than they were in the 1990s, after German reunification, said Holger Schmieding, economist at Berenberg Bank in Hamburg.

Back then, Germany was struggling with the massive costs of integrating the former Communist east. Rising global competition and rigid labor laws were contributing to high unemployment. Spending on social benefits ballooned. Too many people depended on welfare, while too few workers paid for it. German reliance on manufacturing was seen as old-fashioned at a time when other countries were betting on e-commerce and financial services.

After a period of national angst, then-Chancellor Gerhard Schröder pared back welfare entitlements, deregulated parts of the labor market and pressured the unemployed to take available jobs…

… Private-sector changes were as important as government measures. German companies cooperated with employees to make working practices more flexible. Unions agreed to forgo pay raises in return for keeping factories and jobs in Germany…

… Booming exports to developing countries helped Germany bounce back from the 2008 global financial crisis better than many other Western countries.

Complacency crept in. Service sectors, which made up the bulk of gross domestic product and jobs, were less dynamic than export-oriented manufacturers. Wage restraint sapped consumer demand. German companies saved rather than invested much of their profits.

Successful exporters became reluctant to change. German suppliers of automotive components were so confident of their strength that many dismissed warnings that electric vehicles would soon challenge the internal combustion engine. After failing to invest in batteries and other technology for new-generation cars, many now find themselves overtaken by Chinese upstarts…

…BioNTech, a lauded biotech firm that developed the Covid-19 vaccine produced in partnership with Pfizer, recently decided to move some research and clinical-trial activities to the U.K. because of Germany’s restrictive rules on data protection.

German privacy laws made it impossible to run key studies for cancer cures, BioNTech’s co-founder Ugur Sahin said recently. German approvals processes for new treatments, which were accelerated during the pandemic, have reverted to their sluggish pace, he said…

…One recent law required all German manufacturers to vouch for the environmental, legal, and ethical credentials of every component’s supplier, requiring even smaller companies to perform due diligence on many firms, often based overseas, such as in China…

…German politicians dismissed warnings that Russian President Vladimir Putin used gas for geopolitical leverage, saying Moscow had always been a reliable supplier. After Putin invaded Ukraine, he throttled gas deliveries to Germany in an attempt to deter European support for Kyiv…

…One problem Germany can’t fix quickly is demographics. A shrinking labor force has left an estimated two million jobs unfilled. Some 43% of German businesses are struggling to find workers, with the average time for hiring someone approaching six months.

Germany’s fragmented political landscape makes it harder to enact far-reaching changes like those the country made 20 years ago. As in much of Europe, established center-right and center-left parties have lost their electoral dominance. The number of parties in Germany’s parliament has risen steadily.

3. GLP-1 Drugs: Not a Miracle Cure for Weight Loss – Biocompounding

Weight loss drugs have been the talk of the town for the last couple of months. The weight loss drugs on the market are Wegovy and Ozempic from Novo Nordisk (NVO), and Mounjaro from Eli Lilly (LLY)…

…These drugs consist of a natural hormone called GLP-1…

…GLP-1 drugs mimic the action of a hormone called glucagon-like peptide 1, a natural hormone produced by the body in the gut. When blood sugar levels start to rise after a meal, the body produces this hormone to achieve multiple functions. By producing and administering this hormone as a therapeutic, the drug will elicit effects similar to those of the natural hormone…

…Apart from increasing insulin production, GLP-1 can also help regulate body weight. GLP-1 improves glycaemic control and stimulates satiety, leading to reductions in food intake and thus body weight. Besides gastric distension and peripheral vagal nerve activation, GLP-1 receptor agonists induce satiety by influencing brain regions involved in the regulation of feeding, and several routes of action have been proposed. GLP-1 also slows gastric emptying, so you don’t feel hungry as quickly.

However, apart from the positives, GLP-1 drugs also cause muscle loss, reduce bone density, and lower your resting metabolic rate.

A research paper published in 2019 reported the percentage of weight loss comprising fat mass versus the proportion comprising lean body mass in patients using the different GLP-1 drugs…

…This means that while GLP-1 drugs can help to reduce obesity, individuals using them need to be mindful to preserve their lean mass, which requires exercising regularly to limit muscle loss and support their basal metabolic rate.

4. An Interview with Daniel Gross and Nat Friedman about the AI Hype Cycle – Ben Thompson, Daniel Gross, and Nat Friedman

NF: I think one of the interesting trends that we’ve seen in the last six months that we weren’t seeing a year ago is basically the application of large models to things that were previously some form of human intellectual labor or productivity labor. So in a way, what they’re doing in these cases is the models are automating or replacing or augmenting some part of a company. They’re competing not with existing software products but with parts of companies.

An example of one that Daniel and I were just talking to recently, we won’t name the company, but they automate filing bids on public tenders for businesses that do business with the government in different jurisdictions, and the time savings of this is totally enormous for these companies, and the upside for them is huge. It’s replacing a raft of internal and external consultants who were doing copywriting and bid preparation and just lots of fairly mechanical but still nothing-to-sneeze-at intellectual labor that produced bid documents. There’s material revenue upside for being able to bid on more things and win more bids, and this company’s growing like crazy, like a weed, so that would be one example.

Another example, there’s a whole sector now of these avatar platforms where people are basically able to produce personalized videos of someone saying, “Hey Ben, I saw that you were interested in our product and I wanted to tell you a little bit about us” and being able to basically generate text, feed that into an avatar platform that generates a realistic video that’s customized and using that in advertising, using it in personal outreach, using it in training materials. There’s some competing with non-consumption here where some of those videos would never have been produced because it would’ve just been too costly, and there’s some like, “Hey, God, I used to have to spend a ton of time doing this, now I can do it quite quickly”. And by the way, the avatar platforms (I can name some of them: Synthesia, D-ID, HeyGen) are all doing great; all of these companies are growing really well.

Another similar category is optimizing e-commerce. There used to be an entire — there still is — an entire industry of consultants and experts and companies who know how to do the SEO around product titles and descriptions and make sure that you have an Amazon landing page that converts, and some of that knowledge and know-how is getting crystallized into models and agent-like tool chains, and the testing can now be done automatically and you can use language models to run this kind of thing. I think this is interesting because these are all niches that really weren’t happening six or nine months ago, and in every category I just mentioned, there’s a company that’s making or soon will be making tens of millions of dollars doing this productively, creating real economic value for their customers and in some cases competing with teams of people or consultants…

...Does this just confirm the thesis though that the most compelling aspects for AI are, number one, mostly in the enterprise? Again, because enterprises are going to think about costs in a, I hesitate to use the word rational, but in a traditionally rational way, “It’s worth this huge upfront investment because it will pay off X amount over Y axis of time”, as opposed to consumers, who are more about an experience and may not think about the lifetime cost or value of something, along with this data point where whoever has the data wins. Is that just the reality or are there still opportunities for new entrants in this space?

DG: I think the story of progress is one where things will often, I think, start off looking at the enterprise as a way to make the existing thing better, that idea that the first TV shows or cameras pointed at radio shows, the horseless carriage and all that sort of stuff. So I think there’s a lot of V1 AI, let’s just accelerate or automate a lot of the human interaction with text just because we can do text synthesis now with computers. But the native use cases that’ll come out I think slightly later are going to be consumer ones — those I think will be entirely different things that are not replacing a process that existed before, they’re doing something that was never possible before and so there are consumer experiences today that are not really like anything else on the Internet.

Well, the two that I had on here that seem to still have a lot of traction and are still growing are Midjourney and Character.AI, which are completely novel experiences and about fantasy and imagination and something that couldn’t be done previously.

DG: Yeah, it’s sort of funny, they told us the robots are going to be really good at blue collar jobs and really terrible at human psychology — that it’ll be the final realm of the human-to-human connection. Of course, it turns out the robots are fantastic at psychology and have zero dexterity for doing actual labor. But Character.AI is a good example and there’s now a bunch of these new kinds of native behavior, and it’s always interesting to ask about these behaviors. So if you’re talking to an agent all day on Character, I find the good question to ask is, “What were you doing previously?” as a way to figure out what this actually is, and the share of time that’s usually being taken is from gaming or social media. It’s really hard, I think, to forecast, to look at the iPhone and to forecast Uber or to look at the Internet and forecast even something like Amazon bots. They’re usually going to be, I think, consumer experiences. Those are the ones that are going to be the really disruptive stuff and the enterprise I think will get a lot of the obvious. We had a person here and now maybe we have a person in a co-pilot model.

That’s kind of the trade-off of there being a top-down decision maker that thinks about things like lifetime value.

DG: They’ll do the rational thing.

They’re only going to do the obvious things.

DG: Yeah, and I think if businesses get disrupted by AI in any way, it will be something around a totally native, ideally a different user interface, an acceptance of a customer experience that’s a bit worse, which is usually your Clayton Christensen sort of downmarket disruption, but scales much more. I was actually thinking the companies that are trying to build, “We’re going to do your high-end legal work with AI”, I’m not exactly sure when that’ll work because the models still have this issue with hallucinating things and making things up. Whereas the low end, I was going to call a lawyer for $500 an hour to ask a particular question about my apartment lease, but instead I’m going to talk to legal GPT, that stuff I think will probably be much more impactful…

There’s an aspect here — one of the questions with the coding bit is that Stack Overflow and sites like that have taken the biggest hit, but is this a sustainable future? I think this is a broader question about whether we run out of data on the Internet. Is there going to be a data manufacturing industry?

NF: There is already. I think this is the secret story just beneath the surface of what’s happening. Everyone knows about the GPUs, you got to have the GPUs, they’re very expensive, we’re talking about the Nvidia supply chain. All of us know about CoWoS and wafer packaging and Ajinomoto Films and all these things.

But the other key input is data, and the readily available tokens you can scrape off the Internet are quickly exhausted, and so there is currently happening beneath the surface a shadow war for data, where the largest AI labs are spending huge amounts of money, like huge amounts of money, to acquire more valuable tokens, either paying experts to generate them or working through labeling companies like Scale AI or others. There’s a new crop of startups in that space as well and we think more is going to happen there and it’s going to be a really interesting space to watch.

So there’s a way in which you need these really high IQ, high-value tokens in order to train your models, and the average piece of data you scrape off a random website is kind of equal to all the other data that you have, but you’ll pay extra for really valuable training data, and so people are producing it. I don’t know the exact numbers, but I’ve heard rumors that Google is spending a billion dollars this year on generating new training data, and if you’re going to spend billions and billions on your CapEx to build out your GPU training clusters, spending some fraction of that or maybe an equal amount on generating data, which is a kind of CapEx as well, kind of makes sense. Someone told me the other day that experts are the new GPUs, and so there’s this wave of spending on experts who are going to generate tokens that can be valuable.

Then of course the secondary question there is what the legal regime will ultimately be for training. We’re operating in the US, UK, and in Europe under this fair use regime now where it’s fair use for you to scrape text off the Internet as long as it’s public and you’re not going through paywalls or user walls to get it and then you can in aggregate train machine learning models on it. That’s kind of the bright letter of the law, but people don’t always feel good about that and so will the law change, will there be a kind of DMCA for AI? And which way will it cut? I think we don’t know yet and so there may be a war for data in more ways than one over the next couple of years…

For the record, Nvidia’s results are going to come out in about 12 hours, so we don’t know what’s going to happen yet, but one of the most interesting questions broadly speaking is what is going to happen in the GPU space? Nvidia — do they have a moat, is it going to be a sustainable advantage? Obviously, they have a double advantage right now, in that they have the best hardware and they have CUDA, but there’s massive efforts on both sides to take that away. Can they build up a sustainable advantage that will persist?

NF: For the next couple of years, it’s Nvidia and it’s TPU and those are the only players that are really viable.

Google’s Tensor Processing Unit.

NF: Yeah, it’s a huge strategic asset for Google. I mean, they’re the only company basically that has an independent supply chain (not fully independent, because obviously they overlap when it gets down to the fabs and some other parts of the supply chain), and they’re not subject to Jensen allocating them H100s. They can just kind of allocate their own and by all accounts, their TPU v5, they’re producing in absolute record numbers.

Easier to deal with TSMC than to deal with Jensen is what you’re saying.

NF: Yeah, I mean, at least they don’t have that one Jensen choke point. I mean, Jensen right now is dealing with overwhelming demand and limited supply, and so he’s having to very carefully allocate GPUs, and it’s sort of a very central resource distribution mechanism and allocation mechanism. It’s kind of wild. So even if you say, “Oh, AMD’s chips are going to be as good,” they’re just not going to produce them in numbers that matter next year and so I think my take is, there’s only two players for the next couple of years that matter, and my take is also that we will be supply-constrained, because there will be more AI applications that take off and need huge inference capacity, and there will be more people trying to train large models.

Is there a hype cycle aspect where we actually look back in a few years, and there were way too many GPUs bought and produced, and we actually end up with an overhang? Basically what happened with Nvidia last year, but at a 100x, a 1000x scale, and that actually ends up being a huge accelerant for AI, because you end up with super cheap inference because you have all these depreciated GPUs that were bought up in 2023 and 2024, and then it all crashed. It’s like going back to the dot-com bubble and all the fiber that got laid by companies that immediately went out of business.

NF: You might have a dark fiber situation. “How many shortages are not followed by a glut?” is always the interesting question. They usually do get followed by a glut and I think one scenario in which that happens is I’m a very strong believer in scaling laws for these big general reasoning models. Essentially, the more training data and the more flops you put in, you’re just going to get a better and better model out, and we’ve seen this now over several orders of magnitude, it’s just incredibly consistent. We saw it with GPT-1 and GPT-2, and GPT-3, and now GPT-4, and we’ll see it I think with GPT-5. So, it’s possible that there’s some escape velocity that occurs where a few labs are the only ones who can afford to train the GPT-5 or GPT-6 equivalent models, and all of the startups and businesses that were getting essentially a sub-scale amount of GPU, unless they were doing something incredibly domain specific, those are no longer needed. So, you’ll have, I don’t know, three or four companies that can afford to train the $10 billion model, and that’s actually a limited number of GPUs.

5. Respect and Admiration – Morgan Housel

This isn’t universal, but there are cases when people’s desire to show off fancy stuff is because it’s their only, desperate, way to gain some sense of respect and admiration. They don’t have any wisdom, intelligence, humor, empathy, or capacity for love to gain people’s respect. So they rely on the only remaining, and least effective, lever: Look at my car, beep beep, vroom vroom…

…My guess is that if your favorite comedian, or actor, or athlete turned out to be broke, you wouldn’t care. It wouldn’t impact how much you admire them, because you admire them for talents that money can’t buy.

Even when Amazon was huge and successful, Jeff Bezos used to drive a Honda Accord. Today he has a $500 million yacht. Is he respected and admired more for it? Not in the slightest. He could ride a Huffy bike and people would consider him the greatest entrepreneur of our era, because he is. Steve Jobs didn’t have any furniture. It didn’t matter. He’s a genius. He’s Steve Jobs. Material stuff makes no difference when you’re respected and admired for internal traits…

…Once you see people being respected and admired for reasons that have nothing to do with the stuff they own, you begin to wonder why you have such a strong desire for those possessions. I tend to view material desire as a loose proxy for the inverse of what else you have to offer the world. The higher my desire for fancy stuff, the less real value I have to offer.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 September 2023)

Here are the articles for the week ending 03 September 2023:

1. China Reaches Peak Gasoline in Milestone for Electric Vehicles – Colin McKerracher

Earlier this month, Chinese oil giant Sinopec made a surprise announcement that mostly flew under the radar. It’s now expecting gasoline demand in China to peak this year, two years earlier than its previous outlooks.

The main culprit? The surging number of electric vehicles on the road…

…China has been the largest driver of global growth for refined oil products like gasoline and diesel over the last two decades. But EV adoption rates in China are now soaring, with August figures likely to show plug-in vehicles hitting 38% of new passenger-vehicle sales. That’s up from just 6% in 2020 and is starting to materially dent fuel demand.

Fuel demand in two- and three-wheeled vehicles is already in structural decline, with BNEF estimating that 70% of total kilometers traveled by these vehicles have already switched over to electric. Fuel demand for cars will be the next to turn, since well over 5% of the passenger-vehicle fleet is now either battery-electric or plug-in hybrid. The internal combustion vehicle fleet is also becoming more efficient due to rising fuel-economy targets.

Diesel demand for heavier vehicles will keep growing for a bit longer, but even there a seismic shift is underway. Electric, fuel cell and battery-swapping options have quickly climbed to 12% of light commercial vehicle sales and 4% to 5% of medium and heavy commercial vehicle sales. That heavy-duty figure is likely to climb to over 10% by 2025.

Combine all those segments, and BNEF expects total oil demand for road transport in China to peak late next year. Demand won’t drop off a cliff anytime soon — fleet turnover in the trucking segment in particular will take time — but it still marks a major shift for global oil demand patterns. It also has big implications for refiners that need to quickly adjust the mix of products they produce.

It also called out the effects China’s ride-hailing fleet is having on urban gasoline demand.

Vehicles used for ride-hailing in China are far more likely to be electric — their share is nearing 40% of the fleet — than those that are privately owned. Electric ride-hailing vehicles are also more productive than their gasoline-powered counterparts, accounting for 50% of the kilometers traveled on market leader Didi’s ride-hailing platform in December…

…The Sinopec announcement highlights how looking just at the fleet of vehicles can lead one to miss the full story with respect to energy impact…

…The speed that oil gets squeezed out of the transport mix depends on how fast countries like China switch over the number of kilometers traveled to electric — not just the number of cars and trucks.

2. Peasant Logic and the Russian GKO Trade – Joe Pimbley

Later in 1998, after Russia blew up, I attended a public risk management conference in Paris. And one of the speakers was Allen Wheat, CEO of Credit Suisse at the time. I didn’t know Wheat, but he impressed me as a blunt, direct-speaking guy. He talked about Credit Suisse’s version of the GKO trade. He didn’t mention a short position in a Russian-issued dollar bond, so maybe Credit Suisse didn’t bother with the credit risk hedge. But he talked about the GKO and rubles and the cross-currency forwards Credit Suisse executed with Russian banks…

…Interesting to me, Wheat’s story was not that he got to the bottom of this controversy and figured out what part of the loss owed to market risk and what part owed to credit risk. Wheat’s conclusion to his board of directors was that Credit Suisse had a problem with its “risk management philosophy.” It had market risk and credit risk silos when really risk management must be integrated. It’s unproductive to distinguish market risk from credit risk if things are going to fall between the cracks and nobody’s going to take responsibility for understanding the complete risk picture.

Clearly, that’s a nice message, even if you wonder why Wheat didn’t work through the finger-pointing and hold people to account. Who can argue against an integrated approach to risk? But Wheat admitted he got chastised by his board when he presented that conclusion. The board said, “Allen, we think we understand what’s wrong here. It’s good to do all your analysis and get deep into the details, but at some point, you’re not seeing the big picture. You really need to use ‘peasant logic.’”

Wheat explained that “peasant logic” was the board’s term for what we might call “common sense,” but I like peasant logic better. The board said, “You people worry about how good your models are and you wonder about using two years of historical data or five years of historical data, and whether one is better than the other, and how much data you should have. We think you should have looked at the big picture and said, ‘messing around with 40% yields means there’s a lot of risk here. This is an unstable government and currency situation.’ We think you aren’t seeing the forest for the trees.”

So this was Wheat’s point: sometimes it’s good to forget the data and models and use peasant logic. In this case, if there are abnormal returns, there must be some abnormal risk…

…Then it came time for questions, and from the back of the room, someone had to shout out his question to be heard. As soon as he started speaking, you could tell from his accent that he was Russian. Being Russian lent authenticity to his remark: “You want historical data. I’ll give you 75 years of historical data. Russia has never honored any debt obligation.”

…Unfortunately, Wheat’s reaction was to be annoyed. Wheat didn’t say, “Wow, what a great way to look at this. Why are we trusting Russian debt?” And he also didn’t say, “That’s a great example of the peasant logic the board was trying to impress upon me.”

The Russian continued. “I work for Merrill Lynch and we did this trade also and lost a lot of money. Beforehand, I told them it was a terrible trade because of Russia’s history and they didn’t listen to me because I’m just a mathematician.” Wheat still hadn’t cottoned on to the idea that the Russian was helping him make his point about peasant logic, so he said in a rather dismissive, sarcastic way, “Well, I wish we had you working for us, then we wouldn’t have lost money. Right?”

Now it’s easy in hindsight, when you know how something worked out, to say “Aha, I knew such and such.” But still, I thought the Russian added to Wheat’s remarks, and his comments really made Wheat’s point. This guy in the audience was demonstrating peasant logic. The traders put all these fancy complex pieces together and think they’re really smart, but what the heck are they doing lending money to a government that this guy, who is closer to it than the rest of us, says you shouldn’t trust?

3. Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors – Dylan Patel and Daniel Nishball

The statement that may not be obvious is that the sleeping giant, Google, has woken up, and they are iterating at a pace that will smash GPT-4’s total pre-training FLOPS by 5x before the end of the year. The path is clear to 20x by the end of next year given their current infrastructure buildout. Whether Google has the stomach to put these models out publicly without neutering their creativity or their existing business model is a different discussion…

…Access to compute is a bimodal distribution. There are a handful of firms with 20k+ A/H100 GPUs, where individual researchers can access 100s or 1,000s of GPUs for pet projects. Chief among these are researchers at OpenAI, Google, Anthropic, Inflection, X, and Meta, who will have the highest ratios of compute resources to researchers. A few of the firms above, as well as multiple Chinese firms, will have 100k+ GPUs by the end of next year, although we are unsure of the ratio of researchers to GPUs in China, only the GPU volumes.

One of the funniest trends we see in the Bay Area is top ML researchers bragging about how many GPUs they have or will soon have access to. In fact, this has become so pervasive over the last ~4 months that it’s become a measuring contest that is directly influencing where top researchers decide to go. Meta, which will have the second-largest number of H100 GPUs in the world, is actively using it as a recruiting tactic.

Then there are a whole host of startups and open-source researchers who are struggling with far fewer GPUs. They are spending significant time and effort attempting to do things that simply don’t help, or frankly, don’t matter. For example, many researchers are spending countless hours agonizing over fine-tuning models with GPUs that don’t have enough VRAM. This is an extremely counterproductive use of their skills and time.

These startups and open-source researchers are using larger LLMs to fine-tune smaller models for leaderboard-style benchmarks with broken evaluation methods that emphasize style over accuracy or usefulness. They are generally unaware that pretraining datasets and IFT data need to be significantly larger and of higher quality for smaller open models to improve in real workloads.

Yes, being efficient with GPUs is very important, but in many ways that’s being ignored by the GPU-poors. They aren’t concerned with efficiency at scale, and their time isn’t being spent productively. What can be done commercially in their GPU-poor environment is mostly irrelevant to a world that will be flooded by more than 3.5 million H100s by the end of next year. For learning and experimenting, smaller, weaker gaming GPUs are just fine…

…While the US and China will be able to keep racing ahead, European startups and government-backed supercomputers such as Jules Verne are completely uncompetitive. Europe will fall behind in this race due to its inability to make big investments and its choice to stay GPU-poor. Even multiple Middle Eastern countries are investing more in enabling large-scale infrastructure for AI.

Being GPU-poor isn’t limited to scrappy startups, though. Some of the most well-recognized AI firms, such as HuggingFace, Databricks (MosaicML), and Together, are also part of this GPU-poor group. In fact, they may be the most GPU-poor groups out there with regard to both the number of world-class researchers per GPU and the number of GPUs relative to their ambitions and potential customer demand. They have world-class researchers, but all of them are limited by working on systems with orders of magnitude less capability. These firms have tremendous inbound interest from enterprises on training real models, and on the order of thousands of H100s coming in, but that won’t be enough to grab much of the market.

Nvidia is eating their lunch with multiple times as many GPUs in their DGX Cloud service and various in-house supercomputers. Nvidia’s DGX Cloud offers pretrained models, frameworks for data processing, vector databases and personalization, optimized inference engines, APIs, and support from NVIDIA experts to help enterprises tune models for their custom use cases. That service has also already racked up multiple larger enterprises from verticals such as SaaS, insurance, manufacturing, pharmaceuticals, productivity software, and automotive. While not all customers are announced, even the public list of Amgen, Adobe, CCC, ServiceNow, Accenture, AstraZeneca, Getty Images, Shutterstock, Morningstar, Evozyne, Insilico Medicine, Quantiphi, InstaDeep, Oxford Nanopore, Peptone, Relation Therapeutics, ALCHEMAB Therapeutics, and Runway is quite impressive.

4. Making Sense Of The China Meltdown Story – Louis-Vincent Gave

It is impossible to turn to a newspaper, financial television station or podcast today without getting told all about the unfolding implosion of the Chinese economy. Years of over-building, white elephants and unproductive infrastructure spending are finally coming home to roost. Large property conglomerates like Evergrande and Country Garden are going bust. And with them, so are hopes for any Chinese economic rebound. Meanwhile, the Chinese government is either too incompetent, too ideologically blinkered, or simply too communist to do anything about this developing disaster.

Interestingly, however, financial markets are not confirming the doom and gloom running rampant across the financial media…

…At Gavekal, we look at bank shares as leading indicators of financial trouble. When we see bank shares break out to new lows, it is usually a signal that investors should head for the exit as quickly as possible. This was certainly the case in 2007-08 in the US. Between February 2007 and July 2008 (six weeks before the collapse of Lehman Brothers), bank shares lost 60% of their value…

…Now undeniably, Chinese bank shares have not been the place to be over the past few years. Nonetheless, Chinese bank shares are still up a significant amount over the last decade. And this year, they have not even taken out the low of 2022 made on October 31st following the Chinese Communist Party congress. To be sure, the chart below is hardly enticing, even if the slope of the 200-day moving average is positive. Still, Chinese bank shares do not seem to be heralding a near-term financial sector Armageddon…

…China is the number one or two importer of almost every major commodity you can think of. So, if the Chinese economy were experiencing a meltdown, you would expect commodity prices to be soft. Today, we are seeing the opposite. The CRB index has had a strong year so far in 2023, and is trading above its 200-day moving average. Moreover, the 200-day moving average now has a positive slope. Together, all this would seem to point towards an unfolding commodity bull market more than a Chinese meltdown…

…Jacques Rueff used to say that exchange rates are the “sewers in which unearned rights accumulate.” This is a fancy way of saying that exchange rates tend to be the first variable of adjustment for any economy that has accumulated imbalances. On this front, the renminbi has been weak in recent months, although, like Chinese equities, it has yet to take out October’s lows.

That is against the US dollar. Against the yen, the currency of China’s more direct competitor, Japan, the renminbi continues to grind higher and is not far off making new all-time highs. And interestingly, in recent weeks, the renminbi has been rebounding against the South Korean won.

This is somewhat counterintuitive. In recent weeks, oceans of ink have been spilled about how China is the center of a developing financial maelstrom. Typically, countries spiraling down the financial plughole do not see their currencies rise against those of their immediate neighbors and competitors…

…In other words, a range of data points seems to indicate that Chinese consumption is holding up well. This might help to explain why the share prices of LVMH, Hermès, Ferrari and most other producers of luxury goods are up on the year. If China really was facing an economic crash, wouldn’t you expect the share prices of luxury good manufacturers to at least reflect some degree of concern?…

…Staying on the US treasury market, it is also odd how Chinese government bonds have outperformed US treasuries so massively over the past few years. Having gone through a fair number of emerging market crises, I can say with my hand on my heart that I have never before seen the government bonds of an emerging market in crisis outperform US treasuries. Yet since the start of Covid, long-dated Chinese government bonds have outperformed long-dated US treasuries by 35.3%.

In fact, Chinese bonds have been a beacon of stability, with the five-year yield on Chinese government bonds spending most of the period since the 2008 global crisis hovering between 2.3% and 3.8%. Today, the five-year yield sits at the low end of this trading band. But for all the negativity out there, yields have yet to break out on the downside…

…While the Chinese government debt market has been stable, the pain has certainly been dished out in the Chinese high yield market. Yields have shot up and liquidity in the Chinese corporate bond market has all but evaporated. Perhaps this is because historically many of the end buyers have been foreign hedge funds, and the Chinese government feels no obligation to make foreign hedge funds whole. Or perhaps it is because most of the issuers were property developers, a category of economic actor that the CCP profoundly dislikes.

Whatever the reasons, the Chinese high yield debt market is where most of the pain of today’s slowdown has been—and continues to be—felt. Interestingly, however, it seems that the pain in the market was worse last year than this year. Even though yields are still punishingly high, they do seem to be down from where they were a year ago…

…Why the sudden drumbeat about collapsing Chinese real estate and impending financial crisis when the Chinese real estate problem has been a slow-moving car crash over the past five years, and when, as the charts above show, markets don’t seem to indicate a crisis point?

At least, markets outside the US treasury market don’t seem to indicate a crisis point. So could the developing meltdown in US treasuries help to explain the urgency of the “China in crisis” narrative?…

…Basically, US treasuries have delivered no positive absolute returns to any investor who bought bonds after 2015. Meanwhile, investors who bought Chinese government bonds in recent years are in the money, unless they bought at the height of the Covid panic in late 2021 and early 2022. This probably makes sense given the extraordinary divergence between US inflation and Chinese inflation.

None of this would matter if China was not in the process of trying to dedollarize the global trade in commodities and was not playing its diplomatic cards, for example at this week’s BRICS summit, in an attempt to undercut the US dollar (see Clash Of Empires). But with China actively trying to build a bigger role for the renminbi in global payments, is it really surprising to see the Western media, which long ago gave up any semblance of independence, highlighting China’s warts? Probably not. But the fact that the US treasury market now seems to be entering a full-on meltdown adds even more urgency to the need to highlight China’s weaknesses.

A Chinese meltdown, reminiscent of the 1997 Asian crisis, would be just what the doctor ordered for an ailing US treasury market: a global deflationary shock that would unleash a new surge of demand and a “safety bid” for US treasuries. For now, this is not materializing, hence the continued sell-off in US treasuries. But then, the Chinese meltdown isn’t materializing either.

5. Why China’s economy ran off the rails – Noah Smith

This is a pretty momentous happening, since a lot of people had started to believe — implicitly or explicitly — that China’s economy would never suffer the sort of crash that periodically derails all other economies. That was always wrong, of course, and now the bears are coming out for a well-deserved victory lap…

…Anyway, OK, here is my quick story of what happened to China. In the 1980s, 90s, and early 2000s, China reaped huge productivity gains from liberalizing pieces of its state-controlled economy. Industrial policy was mostly left to local governments, who wooed foreign investors and made it easy for them to open factories, while the central government mostly focused on big macro things like making capital and energy cheap and holding down the value of the currency. As a result, China became the world’s factory, and its exports and domestic investment soared. As did its GDP.

At this time there were also substantial tailwinds for the Chinese economy, including a large rural surplus population who could be moved to the cities for more productive work, a youth bulge that created temporarily favorable demographics, and so on. China was also both willing and able to either buy, copy, or steal large amounts of existing technology from the U.S. and other rich countries.

Meanwhile, during this time, real estate became an essential method by which China distributed the gains from this stupendous economic growth. It was the main financial asset for regular Chinese people, and land sales were how local governments paid for public services.

Then the 2008 financial crisis hit the U.S., and the Euro crisis hit Europe. The stricken economies of the developed nations were suddenly unable to keep buying ever-increasing amounts of Chinese goods (and this was on top of export markets becoming increasingly saturated). Exports, which had been getting steadily more and more important for the Chinese economy, suddenly started to take a back seat…

… The government told banks to lend a lot in order to avoid a recession, and most of the companies they knew how to shovel money at were in the real estate business in some way. That strategy was successful at avoiding a recession in 2008-10, and over the next decade China used it again whenever danger seemed to threaten — such as in 2015 after a stock market crash.

Maybe China’s leaders were afraid of what would happen to them if they ever let growth slip, or maybe they didn’t really think about what the costs of this policy might be. In any case, China basically pivoted from being an export-led economy to being a real-estate-led economy. Real-estate-related industries soared to almost 30% of total output.

That pivot saved China from recessions in the 2010s, but it also gave rise to a number of unintended negative consequences. First, construction and related industries tend to have lower productivity growth than other industries (for reasons that aren’t 100% clear). So continuing to shift the country’s resources of labor and capital toward those industries ended up lowering aggregate productivity growth. Total factor productivity, which had increased steadily in the 2000s, suddenly flatlined in the 2010s…

…This productivity slowdown probably wasn’t only due to real estate — copying foreign technology started to become more difficult as China appropriated all the easier stuff. Nor was productivity the only thing weighing on China’s growth — around this same time, surplus rural labor dried up. Anyway, put it all together, and you get a slowdown in GDP growth in the 2010s, from around 10% to around 6% or 7%:

But 6-7% is still pretty darn fast. In order to keep growth going at that pace, China had to invest a lot — around 43% of its GDP, more than in the glory days of the early 2000s, and much more than Japan and Korea at similar points in their own industrial development.

Only instead of deploying that capital efficiently, China was just putting it toward increasingly low-return real estate. The return on assets for private companies collapsed:

Much of this decline was due simply to the Chinese economy’s shift toward real estate; if you strip out real estate, the deterioration in the private sector looks much less severe…

…So even as the pivot to real estate was adding to a long-term slowdown in China’s growth, it was also generating a bubble that would eventually cause an acute short-term slowdown as well. If there’s a grand unified theory of China’s economic woes, it’s simply “too much real estate”.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Adobe, Alphabet (parent of Google), and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 27 August 2023)

Here are the articles for the week ending 27 August 2023:

1. Why Lehman Brothers Failed When It Did – Joe Pimbley

In 2008, security firms operated with high leverage and significant amounts of short-term debt. Lehman had $26 billion of equity supporting $639 billion of assets and its high leverage was not unusual among security firms. But at that ratio, a 4% decline in assets wipes out equity. Meanwhile, reliance on the continuous rolling of short-term debt requires the security firm to always maintain lender confidence. Lenders’ perception of solvency becomes more important than the actual fact of solvency.
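The leverage arithmetic in the paragraph above can be checked in a few lines. This is just a quick sketch using the figures quoted ($26 billion of equity against $639 billion of assets); the variable names are ours, not the article’s:

```python
# Lehman's balance sheet figures as quoted above, in billions of dollars.
equity = 26
assets = 639

# Leverage: dollars of assets carried per dollar of equity.
leverage = assets / equity

# The fractional decline in asset values that fully erases equity.
wipeout_decline = equity / assets

print(f"Leverage: {leverage:.1f}x")                      # ~24.6x
print(f"Equity erased by a {wipeout_decline:.1%} drop")  # ~4.1% decline in assets
```

At roughly 25x leverage, the quoted claim follows directly: a move of about 4% in asset values, small by crisis standards, is enough to wipe out the entire equity cushion.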

When the highly leveraged, short-term debt, security firm business model met the asset-value destruction of the Great Financial Crisis, Lehman was not the only security firm to fail. All major US firms failed to one degree or another. Besides Lehman’s outright bankruptcy, Bear Stearns and Merrill Lynch were merged into commercial banks. I believe Goldman Sachs and Morgan Stanley would have defaulted on their short-term borrowings had the Fed not permitted them to convert to bank holding companies and gain access to discount window liquidity…

…A place to begin chronicling factors specific to Lehman’s failure is the beginning of 2006. That was when the firm’s management decided to make more long-term investments.[2] Rather than remaining focused on security distribution and brokerage, Lehman increased its own holdings in commercial real estate, leveraged loans, and private equity. In our report to the bankruptcy court, we described this strategic change as a shift from the “moving business” to the “storage business.”

One year later in early 2007, Lehman management viewed the incipient financial crisis as an opportunity for the firm to gain market share and revenue from competitors that were retrenching and lowering their risk profiles. Lehman did not think the subprime mortgage crisis would spread to the general economy or even to its growing commercial real estate portfolio. Lehman had boldly taken on assets and assumed risk in the 2001-02 economic downturn. Its risk-taking back then had paid off and it hoped such contrarian boldness would again prove profitable.

Lehman’s pace of principal investments in commercial real estate, leveraged loans, and private equity increased in the first half of 2007 as other security firms reduced risk and hunkered down. It committed $11 billion to acquire Archstone REIT in May 2007 and ended up funding the riskiest $6 billion of that in October when it couldn’t find enough buyers to take it out of its commitment. Other bridge loans and bridge equity positions also became similarly stuck on its balance sheet. Its mortgage subsidiaries were slow to stop making residential mortgage loans and Lehman ended up holding mortgage-backed bonds and mortgage-bond-backed collateralized debt obligations it couldn’t sell.

To take on these risky assets, Lehman’s management raised all its internal risk limits: firm-wide, line-of-business, and even single-name risk limits. Or they ignored the limits they had set. Management was not forthcoming in its disclosures to its board of directors about the risks it assumed, and Lehman’s board did not press management for important information. In theory, Lehman’s compensation policy penalized excessive risk-taking, but in practice it rewarded employees on revenue with minimal attention to associated risk.

Not only were these investments risky from the perspective of potential market value losses; they were risky from the point of view of financing. By their nature, real estate, leveraged loans, and private equity are hard to value and less liquid. It is difficult to determine how quickly and how severely they could lose value. These characteristics mean the ability to finance these assets cannot be assumed. If lenders worry about the realizable value of assets offered as loan security, they will lower the amount they will lend against those assets or cease lending against them altogether. Most of Lehman’s secured debt had overnight tenors, so lenders could stop rolling over their loans to Lehman on any business day!

Lehman’s management only began to cut back on leveraged loan acquisitions in August 2007, and they waited until later in 2007 to cut back on commercial real estate purchases. Yet deals in the pipeline caused Lehman’s assets to grow by $95 billion to $786 billion over the quarter ending February 2008. The firm did not begin to sell assets in earnest until March 2008, and only got assets down to $639 billion by May 2008.

Lehman’s management deliberately deceived the world about the firm’s financial condition. Management used an accounting trick to temporarily remove $50 billion of assets from the firm’s balance sheet at the end of the first and second quarters of 2008. In so-called “repo 105” transactions, Lehman pledged assets valued at 105% or more of the cash it received. Relying on a legal opinion from a UK law firm addressing English law, Lehman deducted the assets from its balance sheet. No other security firm used this stratagem in 2008 and Lehman did not disclose its use.

Lehman’s management touted the firm’s “liquidity pool,” the sum of cash and assets readily convertible into cash, and as late as two days before bankruptcy claimed this pool equaled $41 billion. In fact, only $2 billion of those assets were readily monetizable.

From January to May 2008, while its competitors raised equity, Lehman did not. Lehman’s management rejected offers from interested investors because they did not want to issue equity at a discount to market price. Management thought doing so would make the firm seem vulnerable. Lehman did not issue common stock in 2008 until a $4 billion issuance in June.

2. China’s 40-Year Boom Is Over. What Comes Next? – Lingling Wei and Stella Yifan Xie

For decades, China powered its economy by investing in factories, skyscrapers and roads. The model sparked an extraordinary period of growth that lifted China out of poverty and turned it into a global giant whose export prowess washed across the globe.

Now the model is broken.

What worked when China was playing catch-up makes less sense now that the country is drowning in debt and running out of things to build. Parts of China are saddled with under-used bridges and airports. Millions of apartments are unoccupied. Returns on investment have sharply declined.

Signs of trouble extend beyond China’s dismal economic data to distant provinces, including Yunnan in the southwest, which recently said it would spend millions of dollars to build a new Covid-19 quarantine facility, nearly the size of three football fields, despite China having ended its “zero-Covid” policy months ago, and long after the world moved on from the pandemic…

…What will the future look like? The International Monetary Fund puts China’s GDP growth at below 4% in the coming years, less than half of its tally for most of the past four decades. Capital Economics, a London-based research firm, figures China’s trend growth has slowed to 3% from 5% in 2019, and will fall to around 2% in 2030.

At those rates, China would fail to meet the objective set by President Xi Jinping in 2020 of doubling the economy’s size by 2035. That would make it harder for China to graduate from the ranks of middle-income emerging markets and could mean that China never overtakes the U.S. as the world’s largest economy, its longstanding ambition.

Many previous predictions of China’s economic undoing have missed the mark. China’s burgeoning electric-vehicle and renewable energy industries are reminders of its capacity to dominate markets. Tensions with the U.S. could galvanize China to accelerate innovations in technologies such as artificial intelligence and semiconductors, unlocking new avenues of growth. And Beijing still has levers to pull to stimulate growth if it chooses, such as by expanding fiscal spending.

Even so, economists widely believe that China has entered a more challenging period, in which previous methods of boosting growth yield diminishing returns…

…The transition marks a stunning change. China consistently defied economic cycles in the four decades since Deng Xiaoping started an era of “reform and opening” in 1978, embracing market forces and opening China to the West, in particular through international trade and investment.

During that period, China increased per capita income 25-fold and lifted more than 800 million Chinese people out of poverty, according to the World Bank—more than 70% of the total poverty reduction in the world. China evolved from a nation racked by famine into the world’s second-largest economy, and America’s greatest competitor for leadership.

Academics were so enthralled by China’s rise that some referred to a “Chinese Century,” with China dominating the world economy and politics, similar to how the 20th century was known as the “American Century.”

China’s boom was underpinned by unusually high levels of domestic investment in infrastructure and other hard assets, which accounted for about 44% of GDP each year on average between 2008 and 2021. That compared with a global average of 25% and around 20% in the U.S., according to World Bank data.

Such heavy spending was made possible in part by a system of “financial repression” in which state banks set deposit rates low, which meant they could raise funds inexpensively and fund building projects. China added tens of thousands of miles of highways, hundreds of airports, and the world’s largest network of high-speed trains.

Over time, however, evidence of overbuilding became apparent.

About one-fifth of apartments in urban China, or at least 130 million units, were estimated to be unoccupied in 2018, the latest data available, according to a study by China’s Southwestern University of Finance and Economics…

…Guizhou, one of the poorest provinces in the country with GDP per capita of less than $7,200 last year, boasts more than 1,700 bridges and 11 airports, more than the total number of airports in China’s top four cities. The province had an estimated $388 billion in outstanding debt at the end of 2022, and in April had to ask for aid from the central government to shore up its finances.

Kenneth Rogoff, a professor of economics at Harvard University, said China’s economic ascent draws parallels to what many other Asian economies went through during their periods of rapid urbanization, as well as what European countries such as Germany experienced after World War II, when major investments in infrastructure boosted growth.

At the same time, decades of overbuilding in China resembles Japan’s infrastructure construction boom in the late 1980s and 1990s, which led to overinvestment.

The solution for many parts of the country has been to keep borrowing and building. Total debt, including that held by various levels of government and state-owned companies, climbed to nearly 300% of China’s GDP as of 2022, surpassing U.S. levels and up from less than 200% in 2012, according to Bank for International Settlements data.

Much of the debt was incurred by cities. Limited by Beijing in their ability to borrow directly to fund projects, they turned to off-balance sheet financing vehicles whose debts are expected to reach more than $9 trillion this year, according to the IMF.

Rhodium Group, a New York-based economic research firm, estimates that only about 20% of financing firms used by local governments to fund projects have enough cash reserves to meet their short-term debt obligations, including bonds owned by domestic and foreign investors…

…In Beijing’s corridors of power, senior officials have recognized that the growth model of past decades has reached its limits. In a blunt speech to a new generation of party leaders last year, Xi took aim at officials for relying on borrowing for construction to expand economic activities…

…The most obvious solution, economists say, would be for China to shift toward promoting consumer spending and service industries, which would help create a more balanced economy that more resembles those of the U.S. and Western Europe. Household consumption makes up only about 38% of GDP in China, relatively unchanged in recent years, compared with around 68% in the U.S., according to the World Bank.

Changing that would require China’s government to undertake measures aimed at encouraging people to spend more and save less. That could include expanding China’s relatively meager social safety net with greater health and unemployment benefits.

Xi and some of his lieutenants remain suspicious of U.S.-style consumption, which they see as wasteful at a time when China’s focus should be on bolstering its industrial capabilities and girding for potential conflict with the West, people with knowledge of Beijing’s decision-making say.

The leadership also worries that empowering individuals to make more decisions over how they spend their money could undermine state authority, without generating the kind of growth Beijing desires.

A plan announced in late July to promote consumption was criticized by economists both in and outside China for lacking details. It suggested promoting sports and cultural events, and pushed for building more convenience stores in rural areas.

Instead, guided by a desire to strengthen political control, Xi’s leadership has doubled down on state intervention to make China an even bigger industrial power, strong in government-favored industries such as semiconductors, EVs and AI.

While foreign experts don’t doubt China can make headway in these areas, those industries alone aren’t enough to lift the entire economy or create enough jobs for the millions of college graduates entering the workforce, economists say. 

3. LTCM: 25 Years On – Marc Rubinstein

To understand, it helps to model LTCM not as a hedge fund but as a bank (although it’s also true that the best model for a bank is often a hedge fund). Roger Lowenstein, author of When Genius Failed, acknowledges as much in the subtitle of his book: “The Rise and Fall of Long-Term Capital Management: How One Small Bank Created a Trillion-Dollar Hole.” 

The model reflects LTCM’s heritage. John Meriwether ran the arbitrage desk at Salomon Brothers before becoming vice chair of the whole firm, in charge of its worldwide Fixed Income Trading, Fixed Income Arbitrage and Foreign Exchange businesses. In the years 1990 to 1992, proprietary trading accounted for more than 100% of the firm’s total pre-tax profit, generating an average $1 billion a year. LTCM was in some ways a spin-off of this business.

Indeed, LTCM partners viewed their main competitors as the trading desks of large Wall Street firms rather than traditional hedge funds. Thus, although they structured their firm as a hedge fund (2% management fee, 25% performance fee, high watermark, etc.), they did everything they could to replicate the structure of a bank. So investors were required to lock up capital initially for three years to replicate the permanent equity financing of a bank (hence “Long-Term Capital Management”). They obtained $230 million of unsecured term loans and negotiated a $700 million unsecured revolving line of credit from a syndicate of banks. They chose to finance positions over 6-12 months rather than roll financing daily, even at the cost of less favourable rates. And they insisted that banks collateralise their obligations to the fund via a “two-way mark-to-market”: as market prices moved in favour of LTCM, collateral such as government bonds would flow from their counterparty to them.
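The “two-way mark-to-market” works like variation margin flowing in both directions. A minimal sketch of the mechanic; the function name and numbers are invented for illustration, not taken from LTCM’s actual agreements:

```python
def collateral_flow(prior_mark, current_mark):
    """Two-way mark-to-market on a position held against a counterparty.
    Positive result: the counterparty posts that much collateral (e.g.
    government bonds) to the fund; negative: the fund posts collateral
    to the counterparty."""
    return current_mark - prior_mark

# Market moves in the fund's favour: the counterparty posts 5 of collateral.
assert collateral_flow(prior_mark=100, current_mark=105) == 5
# Market moves against the fund: it must post 7 to the counterparty.
assert collateral_flow(prior_mark=100, current_mark=93) == -7
```

The flipside described later in the piece follows directly: whoever controls the marks controls the collateral flow, which is why counterparties marking aggressively against LTCM in illiquid markets drained it of cash.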

If there was one risk LTCM partners were cognisant of, it was that they might suffer a liquidity crisis and not be able to fund their trades. It was a risk they took every effort to mitigate. 

But in modelling themselves as a bank, they forgot one key attribute: diversification.

“We set up Long-Term to look exactly like Salomon,” explains Eric Rosenfeld. “Same size, same scope, same types of trades… But what we missed was that there’s a big difference between the two: Long-Term is a monoline hedge fund and Salomon is a lot of different businesses – they got internal diversification from their other business lines during this crisis so therefore they could afford to have taken on more risk. We should have run this at a lower risk.”

It’s a risk monolines in financial services often miss. And LTCM wasn’t the only monoline to fall victim to market conditions in 1998. In the two years that followed, eight of the top 10 subprime monolines in the US declared bankruptcy, ceased operations or sold out to stronger firms. The experience prompted some financial institutions – such as Capital One – to embrace a more diversified model.

When the global financial crisis hit in 2007, monoline firms went down first. And in the recent banking crisis of 2023, those banks that failed were characterised by lower degrees of diversification.

There’s another factor that also explains the downfall of LTCM, one that similarly has echoes in the banking sector. At the end of August, LTCM was bruised but it was far from bankrupt. It had working capital of around $4 billion, including a largely unused credit facility of $900 million; only $2.1 billion of that working capital was being used to finance positions.

But the fax Meriwether sent clients on September 2 triggered a run on the bank. “We had 100 investors at the time, and a couple of fax machines,” recalls Rosenfeld. “By the time we got to investor 50, I noticed that the top story on Bloomberg was us… All eyes were on us. We were like this big ship in a small harbour trying to turn; everyone was trying to get out of the way of us.”

While the August losses reflected a flight to quality as investors flocked to safe assets, the September losses reflected a flight away from LTCM. The price of a natural catastrophe bond the firm held, for example, fell by 20% on September 2, even though there had been no increase in the risk of natural disaster and the bond was due to mature six weeks later. As the firm was forced to divulge more information to counterparties over the course of September, the situation worsened. “The few things we had on that the market didn’t know about came back quickly,” Meriwether later told the New York Times. “It was the trades that the market knew we had on that caused us trouble.”

In addition, illiquid markets gave counterparties leeway in how to mark positions, and they used the opportunity to mark against LTCM to the widest extent possible so that they would be able to claim collateral to mitigate against a possible default (the flipside of the “two way mark-to-market”). The official inquiry into the failure noted that by mid-September, “LTCM’s repo and OTC [over-the-counter] derivatives counterparties were seeking as much collateral as possible through the daily margining process, in many cases by seeking to apply possible liquidation values to mark-to-market valuations.” And because different legs of convergence trades were held with different counterparties, there was very little netting. In index options, such collateral outflows led to around $1 billion of losses in September. 

Nicholas Dunbar, who wrote the other bestselling book about LTCM, Inventing Money, quotes a trader at one of LTCM’s counterparties (emphasis added):

“When it became apparent they [LTCM] were having difficulties, we thought that if they are going to default, we’re going to be short a hell of a lot of volatility. So we’d rather be short at 40 [at an implied volatility of 40% per annum] than 30, right? So it was clearly in our interest to mark at as high a volatility as possible. That’s why everybody pushed the volatility against them, which contributed to their demise in the end.”

The episode is a lesson in endogenous risk. It’s a risk that differentiates securities markets from other domains governed by probability. “The hurricane is not more or less likely to hit because more hurricane insurance has been written,” mused one of LTCM’s partners afterwards. “In the financial markets this is not true. The more people write financial insurance, the more likely it is that a disaster will happen, because the people who know you have sold the insurance can make it happen. So you have to monitor what other people are doing.”

4. Why the Era of Historically Low Interest Rates Could Be Over – Nick Timiraos

At issue is what is known as the neutral rate of interest. It is the rate at which the demand and supply of savings are in equilibrium, leading to stable economic growth and inflation.

First described by Swedish economist Knut Wicksell a century ago, neutral can’t be directly observed. Instead, economists and policy makers infer it from the behavior of the economy. If borrowing and spending are strong and inflation pressure rising, neutral must be above the current interest rate. If they are weak and inflation is receding, neutral must be lower.
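The inference described above can be caricatured as a simple decision rule. This is a toy sketch of the reasoning, not an economic model; the function name and return strings are invented:

```python
def infer_neutral_direction(demand_strong, inflation_rising):
    """Toy version of the inference: neutral can't be observed directly,
    only inferred from how the economy behaves at the current policy rate."""
    if demand_strong and inflation_rising:
        return "neutral likely sits above the current rate"
    if not demand_strong and not inflation_rising:
        return "neutral likely sits below the current rate"
    return "mixed signals; no clear inference"
```

In practice economists run formal versions of this inference (such as the Williams and Richmond Fed models mentioned below) rather than a two-branch rule, but the direction of the logic is the same.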

The debate over where neutral sits hasn’t been important until now. Since early 2022, soaring inflation sent the Federal Reserve racing to get interest rates well above neutral.

With inflation now falling but activity still firm, estimates of the neutral rate could take on greater importance in coming months. If neutral has gone up, that could call for higher short-term interest rates, or delay interest-rate cuts as inflation falls. It could also keep long-term bond yields, which determine rates on mortgages and corporate debt, higher for longer…

…Analysts see three broad reasons neutral might go higher than before 2020.

First, economic growth is now running well above Fed estimates of its long-run “potential” rate of around 2%, suggesting interest rates at their current range of 5.25% to 5.5% simply aren’t very restrictive.

“Conceptually, if the economy is running above potential at 5.25% interest rates, then that suggests to me that the neutral rate might be higher than we’ve thought,” said Richmond Fed President Tom Barkin. He said it is too soon to come to any firm conclusions.

That said, a model devised by the Richmond Fed, which before the pandemic closely tracked Williams’s model, put the real neutral rate at 2% in the first quarter.

Second, swelling government deficits and investment in clean energy could increase the demand for savings, pushing neutral higher. Joseph Davis, chief global economist at Vanguard, estimates the real neutral rate has risen to 1.5% because of higher public debt…

…Third, retirees in industrial economies who had been saving for retirement might now be spending those savings. Productivity-boosting investment opportunities such as artificial intelligence could push up the neutral rate.

And business investment depreciates faster nowadays and is thus less sensitive to borrowing costs, which would raise neutral. It is dominated by “computers and software, and much less office buildings, than it used to be,” Summers said during a lecture in May…

…Fed Chair Jerome Powell has in the past warned against setting policy based on unobservable estimates such as neutral, which he compared to navigating by the celestial stars.

Last December, he said the Fed would be careful about fine-tuning interest rates based on such estimates—for example, because falling inflation pushes real rates well above neutral. “I don’t see us as having a really clear and precise understanding of what the neutral rate is and what real rates are,” Powell said.

Some economists reconcile the debate by differentiating between short-run and longer-run neutral. Temporary factors such as higher savings buffers from the pandemic and reduced sensitivity to higher rates from households and businesses that locked in lower borrowing costs could demand higher rates today to slow the economy.

But as savings run out and debts have to be refinanced at higher rates in the coming years, activity could slow—consistent with a neutral rate lower than it is now.

5. Defining, Measuring, and Managing Technical Debt – Ciera Jaspan and Collin Green

We took an empirical approach to understand what engineers mean when they refer to technical debt. We started by interviewing subject matter experts at the company, focusing our discussions to generate options for two survey questions: one asked engineers about the underlying causes of the technical debt they encountered, and the other asked engineers what mitigations would be appropriate to fix this debt…

…This provided us with a collectively exhaustive and mutually exclusive list of 10 categories of technical debt:

  • Migration is needed or in progress: This may be motivated by the need to scale, due to mandates, to reduce dependencies, or to avoid deprecated technology.
  • Documentation on project and application programming interfaces (APIs): Information on how your project works is hard to find, missing or incomplete, or may include documentation on APIs or inherited code.
  • Testing: Poor test quality or coverage, such as missing tests or poor test data, results in fragility, flaky tests, or lots of rollbacks.
  • Code quality: Product architecture or code within a project was not well designed. It may have been rushed or a prototype/demo.
  • Dead and/or abandoned code: Code/features/projects were replaced or superseded but not removed.
  • Code degradation: The code base has degraded or not kept up with changing standards over time. The code may be in maintenance mode, in need of refactoring or updates.
  • Team lacks necessary expertise: This may be due to staffing gaps and turnover or inherited orphaned code/projects.
  • Dependencies: Dependencies are unstable, rapidly changing, or trigger rollbacks.
  • Migration was poorly executed or abandoned: This may have resulted in maintaining two versions.
  • Release process: The rollout and monitoring of production needs to be updated, migrated, or maintained.

We’ve continued to ask engineers (every quarter for the last four years) about which of these categories of technical debt have hindered their productivity in the previous quarter. Defying some expectations, engineers do not select all of them! (Fewer than 0.01% of engineers select all of the options.) In fact, about three quarters of engineers select three or fewer categories. It’s worth noting that our survey does not ask engineers “Which forms of technical debt did you encounter?” but only “Which forms of technical debt have hindered your productivity?” It’s well understood that all code has some technical debt; moreover, taking on technical debt prudently and deliberately can be a correct engineering choice.4 Engineers may run into more of these during the course of a quarter, but their productivity may not be substantially hindered in all cases.

The preceding categories of technical debt have been shown in the order of most to least frequently reported as a hindrance by Google engineers in our latest quarter. We don’t expect this ordering to generalize to other companies as the ordering probably says as much about the type of company and the tools and infrastructure available to engineers as it does the state of the code base. For example, Google engineers regularly cite migrations as a hindrance, but large-scale migrations are only attempted at all because of Google’s monolithic repository and dependency system;5 other companies may find that a large-scale migration is so impossible that it is not even attempted. A fresh start-up might have few problems with dead/abandoned code or code degradation but many hindrances due to immature testing and release processes. While we do expect there to be differences across companies in how much engineers are hindered by these categories, we believe the list itself is generalizable.

Our quarterly engineering survey enables us to measure the rate at which engineers encounter and are hindered by each type of technical debt, and this information has been particularly useful when we slice our data for particular product areas, code bases, or types of development. For example, we’ve found that engineers working on machine learning systems face different types of technical debt when compared to engineers who build and maintain back-end services. Slicing this data allows us to target technical debt interventions based on the toolchain that engineers are working in or to target specific areas of the company. Similarly, slicing the data along organizational lines allows directors to track their progress as they experiment with new initiatives to reduce technical debt.
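The kind of slicing described above can be sketched in a few lines. The product areas, category names, and responses below are invented for illustration, not Google’s data:

```python
from collections import defaultdict

# Hypothetical survey rows: one (product_area, debt_category) pair per
# engineer who reported that category as a hindrance last quarter.
responses = [
    ("ml", "migration"), ("ml", "migration"), ("ml", "dependencies"),
    ("backend", "testing"), ("backend", "testing"), ("backend", "code quality"),
]

def hindrance_counts(rows):
    """Tally reported hindrances per product area."""
    counts = defaultdict(lambda: defaultdict(int))
    for area, category in rows:
        counts[area][category] += 1
    return counts

by_area = hindrance_counts(responses)
# In this toy data, ML engineers report migrations most often, while
# back-end engineers report testing, mirroring the kind of difference
# the authors describe between toolchains.
```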

However, we find quarterly surveys are limited in their statistical and persuasive power…

…Our goal was then to figure out if there are any metrics we can extract from the code or development process that would indicate technical debt was forming before it became a significant hindrance to developer productivity. We ran a small analysis to see if we could pull this off with some of the metrics we happened to have already…

…The results were disappointing, to say the least. No single metric predicted reports of technical debt from engineers; our linear regression models predicted less than 1% of the variance in survey responses. The random forest models fared better, but they had high precision (>80%) and low recall (10%–25%). That is, these models could identify parts of the code base where a focused intervention could reduce technical debt, but they were also going to miss many parts of the code base where engineers would identify significant issues.
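To make the precision/recall tradeoff concrete: the counts below are invented, chosen only to land in the ballpark the authors report (precision above 80%, recall of 10%–25%):

```python
def precision_recall(tp, fp, fn):
    """Precision: of the code areas the model flags as debt-laden, the share
    engineers actually report as a hindrance.
    Recall: of the areas engineers report, the share the model flags."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# 20 true positives, 4 false positives, 80 false negatives (invented):
p, r = precision_recall(tp=20, fp=4, fn=80)
# p ≈ 0.83, r = 0.20: flagged areas are usually real debt, but most
# engineer-reported debt goes unflagged, exactly the failure mode described.
```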

It is quite possible that better technical debt indicator metrics do exist for some forms of technical debt. We only explored objective metrics for three types of technical debt, and we only sought to use existing metrics, rather than attempting to create new metrics that might better capture the underlying concepts from the survey.

However, it’s also possible that such metrics don’t exist for other types of technical debt because they are not about the present state of a system, but a relation between the system’s present state and some unimplemented ideal state. An engineer’s judgments about technical debt concern both the present state and the possible state. The possible states of the world are something that mathematical models cannot incorporate without the modeler’s direct intervention. For example, the fact that a project’s code base consists entirely of code written in Python 2 is not technical debt in a world where there is no loss of functionality compared to another language or version or outside pressure to migrate. However, in a world where Python 3 is a preferred or required alternative, that same corpus of Python 2 constitutes a needed migration. The present state of the world—from the perspective of a model—is identical in these two instances, but the possible world has changed. Humans consider the possible world in their judgments of technical debt. If a model were to incorporate explicit rules that capture aspects of the possible world (for example, if a model were designed to count every file in Python 2 as technical debt because the human modeler knows Python 3 is an alternative), then the change would be detectable to the model. If we could capture this judgment as it evolves, it could form the basis for better measurements of technical debt…
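The Python 2 example can be sketched as exactly the kind of human-supplied rule the authors describe: the model sees the same present state either way, and only the modeler’s knowledge of the possible world turns those files into debt. Names and structure here are hypothetical:

```python
def python2_debt_files(files, python3_preferred):
    """Encodes the 'possible world' by hand: the same Python 2 files count
    as migration debt only when the modeler asserts that Python 3 is a
    preferred or required alternative."""
    if not python3_preferred:
        return []  # no preferred alternative, so no debt in this world
    return [name for name, major_version in files if major_version == 2]

repo = [("a.py", 2), ("b.py", 3), ("c.py", 2)]
# Identical present state, different possible worlds:
assert python2_debt_files(repo, python3_preferred=False) == []
assert python2_debt_files(repo, python3_preferred=True) == ["a.py", "c.py"]
```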

…While we haven’t been able to find leading indicators of technical debt thus far, we can continue to measure technical debt with our survey and help to identify teams that struggle with managing technical debt of different types. To that end, we also added the following questions to our engineering survey:

  • To what extent has your team deliberately incurred technical debt in the past three months?
  • How often do you feel that incurring technical debt was the right decision?
  • How much did your team invest in reducing existing technical debt and maintaining your code?
  • How well does your team’s process for managing technical debt work?

Combined with the survey items about the types of technical debt that are causing productivity hindrances, these questions enable the identification of teams that are struggling, reveal the type(s) of technical debt they are struggling with, and indicate whether they are incurring too much debt initially or whether they are not adequately paying down their existing debt. These are useful data, especially when teams can leverage them under guidance from experts on how to manage their technical debt. Fortunately, we have such experts at Google. Motivated in part by our early findings on technical debt, an interested community within Google formed a coalition to help engineers, managers, and leaders systematically manage and address technical debt within their teams through education, case studies, processes, artifacts, incentives, and tools. The coalition’s efforts have included the following:

  • Creating a technical debt management framework to help teams establish good practices. The framework includes ways to inventory technical debt, assess the impact of technical debt management practices, define roles for individuals to advance practices, and adopt measurement strategies and tools.
  • Creating a technical debt management maturity model and accompanying technical debt maturity assessment that evaluates and characterizes an organization’s technical debt management process and helps grow its capabilities by guiding it to a relevant set of well-established practices for leads, managers, and individual contributors. The model characterizes a team’s maturity at one of four levels (listed here from least to most mature):
    • Teams with a reactive approach have no real processes for managing technical debt (even if they do occasionally make a focused effort to eliminate it, for example, through a “fixit”).
    • Teams with a proactive approach deliberately identify and track technical debt and make decisions about its urgency and importance relative to other work.
    • Teams with a strategic approach have a proactive approach to managing technical debt (as in the preceding level) but go further: designating specific champions to improve planning and decision making around technical debt and to identify and address root causes.
    • Teams with a structural approach are strategic (as in the preceding level) and also take steps to optimize technical debt management locally—embedding technical debt considerations into the developer workflow—and standardize how it is handled across a larger organization.
  • Organizing classroom instruction and self-guided courses to evangelize best practices and community forums to drive continual engagement and sharing of resources. This work also includes a technical talk series with live (and recorded) sessions from internal and external speakers.
  • Tooling that supports the identification and management of technical debt (for example, indicators of poor test coverage, stale documentation, and deprecated dependencies). While these metrics may not be perfect indicators, they can allow teams who already believe they have a problem to track their progress toward fixing it.

Overall, our emphasis on technical debt reduction has resulted in a substantial drop in the percentage of engineers who report that their productivity is being extremely to moderately hindered by technical debt or overly complicated code in their project. The majority of Google engineers now feel they are only “slightly hindered” or “not at all hindered” by technical debt, according to our survey. This is a substantial change and, in fact, is the largest trend shift we have seen in five years of running the survey.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 20 August 2023)


Here are the articles for the week ending 20 August 2023:

1. TIP569: An Investor’s Guide To Clear Thinking w/ Chris Mayer – Clay Finck and Chris Mayer

[00:18:10] Clay Finck: And I think about how a lot of times people will attach a label to something, and when I relate this to investing, someone might think they’re a growth investor, they want higher growth, and when they see that a stock is like a value stock, then they’ll just like not even look at it and not even understand what it is.

[00:18:28] Clay Finck: And I think about how some of your holdings are in what some people might call unattractive industries. I just think about how you dug underneath the surface, and just because it might be in what people call an unattractive industry, it still can be a very attractive long-term business.

[00:18:45] Chris Mayer: Absolutely, and this has happened to me multiple times. I have Old Dominion Freight Line in the portfolio. It’s this trucking company, and most people look at trucking and think it’s an unattractive industry with lots of competition, so why would you want to be involved in that? But then you get into Old Dominion and you see that its return on invested capital is huge, it’s got this deep competitive advantage over everyone else, and it’s been taking market share; it doubled its market share over the last decade.

[00:19:11] Chris Mayer: And then you see, in terms of results, it would be silly to just say “I don’t own trucking companies,” because the economics of it are not something you expect to see. It’s a real outlier, even within its own industry. And I’ve had that before too; I never had too much success with retail or retail stocks.

[00:19:28] Chris Mayer: But I own Dino Polska, which is a Polish grocery store. And again, that’s getting beyond just its category and looking at the underlying economics, which are phenomenal for that business. And it made me want to look further. And so ultimately it’s been a very successful investment so far. So again, there are real-world consequences for taking these labels at face value, and a willingness to dig behind them can lead to some real insights.

[00:19:51] Chris Mayer: It seems really obvious. Sometimes when I talk about general semantics to people, they’ll be like, yeah it just seems so obvious, but it’s not the way people behave. They behave exactly like we’re talking about. They’re taking the label at face value and they’re allowing it to do their thinking for them.

[00:20:05] Chris Mayer: They’re not looking beyond it, not looking behind it. And there are lots of examples; we’ve talked already about a bunch.

[00:20:12] Clay Finck: You also caution against confusing correlation with causation. “Don’t fight the Fed” is a phrase that gets thrown around a lot. And you’re right: whenever you see an “if X, then Y” statement, you should distrust it.

[00:20:27] Clay Finck: And when I think about what drives stock market returns, I tend to think about sustainable growth and free cash flows ultimately driving long-term shareholder returns. And this book really makes me question a lot of my assumptions. So I want to just turn that question to you and have you talk about what you believe drives long-term stock returns.

[00:20:50] Chris Mayer: I’ll answer that, but first I’ll go back a little bit to the if-then problem. The problem is that, and finance people do this all the time, they want to just change one variable. So they’ll say, okay, if interest rates go up, then stocks are going to go down, because it raises everyone’s discount rate and the cash flows get discounted.

[00:21:09] Chris Mayer: Cash flows are now discounted at this higher rate and asset values will fall. The problem is, of course, in the real world, you can never just change one variable. There are all these other things that change at the same time. The underlying cash flows change. Expectations change. All kinds of things change. And so you can have a result that is then surprising.

[00:21:26] Chris Mayer: So here we’ve had a period of time where the Fed has increased rates at a faster clip than it ever has, and the market’s ripping. And there are lots of examples in the past where, if you had known ahead of time what some outcome was going to be, you would still be wrong on the investment side. So one of my favorites in the book, ’cause I think I got this from Michael O’Higgins,

[00:21:42] Chris Mayer: is an example he pointed out where even if you knew the price of gold more than doubled over some period of time, you might think to yourself, that’s pretty good, logically I’m going to buy the largest gold miner, Newmont. And then if you roll forward, Newmont’s stock actually fell 5% during that time, again ’cause it wasn’t just one variable that changed.

[00:22:00] Chris Mayer: Newmont had costs that went up a lot. There are other factors in the business, expectations involved. So you had a dramatically different outcome than you would’ve thought based on the initial conclusion. So that’s why you have to distrust any “if X happens, then Y” when it comes to markets.

[00:22:16] Chris Mayer: Because there are so many other things involved going on. So when it comes to, you know, what drives long-term returns, I think it helps just to get down to really basic stuff. So a business, you think of it as a pile of capital. And at what rate can it increase that capital over the next 10 years? That’s the fundamental that drives returns.

[00:22:36] Chris Mayer: So it’s some kind of return on invested capital plus growth rate over time that really drives returns. What return you may get is also a function of the price that you pay. So in those three things, you know, you have everything, and mathematically it can’t work out any other way. One of those three things has to lead to returns.

[00:22:54] Chris Mayer: Being able to forecast or figure out what return on invested capital is going to be over the next years, what the growth rate is going to be, and what kind of valuation it’s going to be, that’s probably impossible to know. We’re all making the best guesses we can, based on our research and digging into why certain businesses are able to generate such returns, and that’s what we do.
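The three drivers Mayer names, return on capital-fueled growth plus the multiple paid at entry versus the multiple received at exit, can be sketched as a toy calculation. All the numbers below are invented for illustration:

```python
def annualized_return(growth_rate, start_multiple, end_multiple, years):
    """Toy decomposition: per-share value compounds at some growth rate
    (itself driven by reinvestment at a given return on capital), while
    the entry multiple versus the exit multiple scales the whole result."""
    total = (1 + growth_rate) ** years * (end_multiple / start_multiple)
    return total ** (1 / years) - 1

# 15% growth, bought and sold at 20x earnings: return tracks growth (~15%).
r_flat = annualized_return(0.15, start_multiple=20, end_multiple=20, years=10)
# Same growth, but the multiple halves from 30x to 15x over the decade:
# the annualized return falls to roughly 7.3%.
r_derate = annualized_return(0.15, start_multiple=30, end_multiple=15, years=10)
```

The arithmetic makes the point in the transcript concrete: with an unchanged multiple, long-run returns converge to the business’s compounding rate, and the price paid is the third lever.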

[00:23:16] Clay Finck: You’re a big believer in Sosnoff’s law. Sosnoff wrote that the price of a stock varies inversely with the thickness of its research file, and that the fattest files are found in stocks that are the most troublesome and will decline the furthest. The thinnest files are reserved for those that appreciate the most. In short, I sort of see this as: the best ideas

[00:23:38] Clay Finck: really stand out to you, and they don’t require extraordinary levels of research to build that conviction. And I think this points to what you mentioned there: you want to find the essential elements of what’s going to lead to this business’s success and then understand the factors that play into that.

[00:23:56] Clay Finck: And you filter out just about everything else. In a way, it’s drastically simplifying the extremely complex world around us, which is really liberating to do as an investor. So I’d love for you to talk more about Sosnoff’s law. 

[00:24:12] Chris Mayer: That’s beautifully put there, Clay. That’s good. That’s exactly it. You hit it right on the nose.

[00:24:17] Chris Mayer: I spent a lot of time trying to figure out what kind of essential things to know about a business; that’s usually less than a handful of things, the really key, really important things. And then the rest of it is details that are not that important in the long term, although they might be important in the short term.

[00:24:33] Chris Mayer: They might have big impacts in particular quarters or whatever, but long term, they don’t matter much. So I spend a lot of time on that. As for Sosnoff, he wrote a book called Humble on Wall Street, and I think it came out in the seventies. So the thickness of the research file is something that doesn’t hold up as well over time, but we get the metaphor.

[00:24:52] Chris Mayer: And he was big on a couple of things I learned from him. One was he really emphasized the skin-in-the-game aspect. But I also liked Sosnoff’s law because it jibes with my experience as well. When you’re really laboring over an idea and you have to rely on detailed spreadsheets and assumptions to justify it, it’s probably not a good idea.

[00:25:10] Chris Mayer: The ones that are really great are the ones that just jump out at you, where you’re just really excited and it seems obvious. Again, that jibes with my own experience. Some of the best investments I’ve made have had very short write-ups. I write little internal memos to myself, and some of them have been very short, and they’ve been great. And the ones that I have to spend a lot of time on, sometimes those don’t do as well…

…[00:30:27] Clay Finck: Related to this idea that everything changes: I think there’s this profound mental model you sort of introduced to me, that this time is always different. People try to make comparisons today to previous times in the past, and they’re trying to make predictions about what’s going to happen.

[00:30:45] Clay Finck: Is the stock market going to crash? Are we going to enter a recession? This mental model of “this time is always different” is, again, very liberating, because even some of the great investors talk about how history tends to repeat itself, or maybe it rhymes but doesn’t repeat exactly. And I think about how companies are always changing, market dynamics are always changing, and everything is changing again.

[00:31:06] Clay Finck: And you talk about indexes and how they’re changing. So people will look at the S&P 500, and they’re not really looking at the companies in that index. They’re looking at what the price was, say, in 2003, what the price is in 2023, and what the multiples are between the two. And the reality is that you’re comparing things that are entirely different, because the index itself changes.

[00:31:29] Clay Finck: The top holdings in 2003 were much different than in 2023. 

[00:31:35] Chris Mayer: Yeah, that’s an important thing. That, again, mixes in with a lot of stuff we’ve talked about. The S&P index is a name, a label, and people treat it as if it’s a valid comparison over decades of time.

[00:31:49] Chris Mayer: But, you know, just look at the top 10 in the S&P now. Look at it 20 years ago; look at it 20 years before that. It’s substantially different, and the mix of companies is significantly different. I think the S&P only added financials in the seventies or something like that. So there have been a lot of big changes to the index over time, and that’s going to skew your numbers: price-earnings ratio or whatever.

[00:32:10] Chris Mayer: So, that’s been very important. And I love that “this time it’s different” example too, because I think it was Templeton who made that famous, where he said “this time is different” is the most dangerous phrase, blah, blah, blah. And I get the idea behind it. The idea behind it is investors want to try to defend bubbles or something, and we all know that they come to an end at some point.

[00:32:29] Chris Mayer: So there’s something to that. But then the other side is that this time is always different from every other time before. The details are always different: different companies, different people. It’s a different world now than it was 20 years ago, or 20 years before that. Keeping that in mind may prevent you from falling into some traps.

[00:32:47] Chris Mayer: People in finance do this all the time. And on Twitter (now they call it X), how many times will you see charts where someone takes some past bear market going like this, and they’ll lay the present over it, and it’ll be like, oh my God, it matches up perfectly. And it has no validity whatsoever.

[00:33:03] Chris Mayer: None at all. It has nothing to do with anything, but people love to do that.

[00:33:08] Clay Finck: Just to use an example here, they might look at the S&P 500. I’m just throwing out numbers; these aren’t based on numbers I actually looked up. We’ll say the multiple on the S&P was 20 in 2003, whatever it was. And today we’ll say it’s higher than that.

[00:33:23] Clay Finck: We’ll say the multiple is much higher today, and people will assume, oh, we’re way above the historical mean, so eventually things should revert to the mean. So is reversion to the mean itself a flawed concept?

[00:33:39] Chris Mayer: Yeah, I have another outlier opinion on this, which is that the reversion to the mean that people talk about is very problematic, because there is no real mean. It’s your imagination.

[00:33:50] Chris Mayer: It’s a concept we’ve created, but there’s no mean. No market says, I have to go to this mean. And that mean is always changing, as you pointed out. You could look at the multiple today, and the S&P is a lot higher than it was, say, in 2003. But in 2003, some of the biggest companies might have included ExxonMobil, which might’ve been a very large company then.

[00:34:08] Chris Mayer: In 2003 there might have been slower-growth, more capital-intensive businesses that were part of that index versus now. There are reasons why they might be very different, and it doesn’t make sense to say that today’s S&P has to go to some mean that’s constructed based on constituents that aren’t even in the index today.

[00:34:25] Chris Mayer: I think that’s an overlooked thing with mean reversion. You have to be careful, again, with what the components are that you’re saying have to mean revert. It might be one thing if you’re looking at a company that does the exact same thing now that it did 20 years ago, and the margins don’t change very much, and suddenly you’ve got a little dip.

[00:34:42] Chris Mayer: There might be some way to defend a reversion to the mean there, but I’m very skeptical of those kinds of arguments.

[00:34:48] Clay Finck: Again, I think it’s another case where people are maybe simplifying too much. They’ll be like, this company’s trading at the lowest multiple it’s ever been at. And I’m like, have you looked at the business and where things are actually trending, where the world is trending?

[00:35:03] Chris Mayer: Sure. Yeah. There’s a prominent example: I know a lot of people are getting excited about, say, Danaher, because it’s trading at the lowest P/E it’s traded at in however many years. But do you look at the return on invested capital at Danaher? It’s been in decline. It’s not the same business that people remember in their heads as this great,

[00:35:21] Chris Mayer: high-performing conglomerate for all those years. Maybe it will get back there; maybe there’s a thesis that it gets back there. But a lot of times when you see a company trading at the lowest level it’s ever traded at, there’s a reason. So be careful about just assuming that you can buy this today and it’ll mean revert and you’ll make this great return…

…[00:42:39] Clay Finck: Another thing that really stands out to me as I read more and more of your work is your very relaxed nature and your ability to not take yourself too seriously. I want to read a bit here from your book, where you write: Laugh more. Life may not be a joke, but it is often funny.

[00:42:57] Clay Finck: If you keep in mind the abstractions, most of the serious business of the world seems portentous, trivial, silly, and ridiculous, and you can’t help but laugh at it. I read this and I think about Buffett and Munger, and I see some similar characteristics, in that they don’t take themselves too seriously and they truly want to enjoy life.

[00:43:17] Clay Finck: So I’d love for you to talk about how this maybe ties into investing. Because you’re managing a fund, you’re managing other people’s money, real money at risk, yet you’re able to detach yourself in a way and not become too overwhelmed by it, and not take yourself too seriously.

[00:43:34] Chris Mayer: Yeah, I would say this is learned too.

[00:43:36] Chris Mayer: This is something I’ve had to work at, but it helped to do the 100 Baggers book, looking at the long-term performance of companies. One lesson that’s inescapable from doing all that is you realize that things that seem momentous at certain points in time really just sort of bleed out and are almost imperceptible over a longer period of time.

[00:43:54] Chris Mayer: So in certain quarters, stock prices can make violent moves, 10, 15% moves, and at the time they seem like, wow, you get stressed out when something drops 15% or whatever. But look back in time, even at severe bear markets, and it’s a little bump in a chart. So when you zoom out, you keep a bigger-picture perspective.

[00:44:13] Chris Mayer: That’s helped me a lot. It’s really helped me a lot to do that. And I do think it’s really important. I’ve enjoyed it a lot more the way I am now: just more relaxed about it, a little more detached, taking a good long view, rather than being so intense, so focused on the moment and the quarter or whatever is going to happen.

[00:44:32] Chris Mayer: And those guys, Buffett and Munger, they’re wise in a lot of ways, and this is one too. When Buffett says he tap dances into work every day and enjoys it, some of it has to be this. He can’t take it that seriously.

2. China is no 1990s Japan – but it could have been – Robert Carnell

So let’s take a look at what is happening in China and pick apart the deflation argument. Firstly, let’s look for evidence of a bubble because if we are going to argue that it is about to burst, it needs to be there in the first place.

In 1984, land prices for commercial property in Tokyo grew at a respectable 7.2% annual pace. The following year, this accelerated to 12.5%, and the year after that, to 48.2%. By 1987, commercial property land prices were rising at a 61.1% YoY pace. It was once suggested that the 1.5 square kilometres of land surrounding the Imperial Palace in Tokyo were worth more than all the land in California. And whether or not that calculation stacks up (it sounds highly questionable), it shows just how extreme things had become.

Yes, Japan had a bubble. If we use similar land price data for Beijing for both residential and commercial property, then there are certainly periods when prices accelerate sharply. The most recent period where this happened was between 2014 and 2017 when residential property prices accelerated at about a 20% annual pace. But it has slowed since and is showing small declines now…

…Turning now to the equity markets. If we superimpose the recent price developments of the Shanghai composite index onto the Tokyo stock exchange in the period running up to the bubble, what we see is that China’s stock market has for some time been extremely average. There is no sense at all here of an excessive surge that requires a long period of dismal performance to compensate. That’s not to suggest a particularly bright future for Chinese stocks, but it beats a Japan-style collapse.

Ruling out a deflationary collapse is clearly a positive standpoint. But we also don’t see Chinese growth at much more than 5% over the coming few years. And we have a tough time explaining to people why this is actually a perfectly reasonable growth rate which doesn’t require a panicked response. But here goes…

In previous years, China’s GDP growth had taken a disproportionate boost from property development. Not only does construction provide a substantial direct boost to activity and labour demand, but it also requires a lot of inputs from industry: cement, steel, copper, aluminium, PVC etc. That also provides a big boost to things like energy demand. And new property sales also require furnishings, and that in turn pushes up this aspect of retail spending.

But the amount of growth that construction was delivering to the economy had grown to totally unsustainable levels. In some years, in nominal terms, construction contributed up to almost three percentage points of total GDP growth, often about a third of the total.

To try to highlight how anomalous this was, if you look at average Asian GDP growth rates pre-Covid relative to GDP per capita, China was a huge outlier, growing several percentage points faster than you would expect for an economy of its state of development. And that deviation can be largely put down to growth generated by excessive construction activity. This was essentially construction-driven GDP “bought” with debt and ultimately, unsustainable.

Maintaining this sector at pre-Covid growth rates could have ended up in disaster. Maybe a Japan-style disaster. What the Chinese authorities have done, quite sensibly, is to nip this in the bud before this happens, though this of course is going to mean reversion to slower (more sustainable) growth rates that are more in line with an economy of China’s stage of economic development.

3. Buffett’s 44% CAGR and Various Types of High Quality Investments – John Huber

Warren Buffett initially invested in 5 Japanese stocks in 2020 and I don’t think many people realize how successful this investment has been so far:

That initial basket investment is up over 200%: a 3x in 3 years, or 44% CAGR on that initial investment. Each stock is up over 2x, one is up 5x, and the basket in aggregate up 3x. He’s added to the basket since, and those add on purchases have also done well…

…Just like how we would rate an investment result, a good business is one that makes a lot of money relative to the money that you had to put into it (i.e. high return on capital).

But the most value gets created in companies that see increasing returns on capital (i.e. high incremental returns on capital; e.g. a company where returns rise from 12% to 18%, etc…). I’ve spent a lot of time thinking about Buffett’s investments in Japan (which is now a top 5 investment) and also in energy (which is his largest equity investment behind Apple). The common theme is something that might surprise most people and I think probably isn’t fully appreciated: both groups have rising returns on capital.

I see three things that Buffett probably saw (among other things) in Japan and also in energy:

  1. Cheap valuations
  2. Rising ROIC’s
  3. Significant change in capital allocation policies

(These traits also applied to Apple when he first invested in 2016). Buffett has always prioritized value. We know he has a preference for quality companies but he’s always been a value focused investor who wants a high FCF yield (more so than Munger). He has said “price is my due diligence” and we know from both his words and actions (especially in the earlier years) that he prefers quality, but he demands value.

But, he also wants quality businesses. And despite the stodgy historical returns, these groups are exhibiting current ROIC’s that exceed those of most of the FANG stocks and other high fliers. And not just better ROIC’s but also more rational capital allocation. There isn’t much growth in his Japanese trading companies, but if you pay 7x FCF for a stock that is returning all of that FCF via buybacks and dividends, you earn a 15% annual return even with no growth and no increase in the multiple.
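The 7x FCF arithmetic above can be sanity-checked in a few lines of Python. This is a sketch using the article's illustrative multiple, not a real company's figures: with no growth and no change in the multiple, the annual return collapses to the FCF yield on the price paid.

```python
# Sketch of the passage's claim: pay 7x free cash flow for a no-growth
# business that returns all FCF via buybacks and dividends. With the
# multiple unchanged, the annual return is just the FCF yield at purchase.
price_to_fcf = 7
fcf_yield = 1 / price_to_fcf  # ~14.3% a year, roughly the "15%" cited
print(f"FCF yield at {price_to_fcf}x FCF: {fcf_yield:.1%}")
```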

I’ve written about 3 engines: a stock’s return is the product of three simple factors: growth, change in multiple, and capital returns (change in shares outstanding plus any dividends). Over the past decade, many investors focused on the first engine exclusively, ignoring the 2nd and 3rd. This worked over the last decade, but I would not expect it to work going forward. Growth is an important input into value, but it is just one of those three engines. If you pay too much, engine #2 becomes a drag (P/E contraction). If you own a stock that’s diluting through share issuance, engine #3 is a drag. It’s possible to earn high returns from one engine that overcomes the other two, but this is rare.

The best stocks often have all three engines working — sometimes only in surprisingly modest amounts individually, but collectively they can produce fabulous results. For example, a stock that grows earnings at 5%, has a P/E go from 8 to 12 over a 5 year period, and returns all its earnings via buybacks and/or dividends will provide you with approximately 23% total annual returns over that 5 year period. Growth engine gave you just 5%, but you received an 8.4% annual tailwind from the P/E multiple and approximately 10% additional returns from buybacks and dividends…
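The three-engine decomposition in that example can be checked with a short Python sketch. The numbers are the article's hypothetical ones, and the capital-return engine is approximated here as the average earnings yield over the holding period (all earnings paid out or bought back):

```python
# Hypothetical inputs from the passage: 5% earnings growth, P/E re-rating
# from 8 to 12 over 5 years, all earnings returned via buybacks/dividends.
growth = 0.05
pe_start, pe_end, years = 8, 12, 5

# Engine 2: annualised tailwind from the multiple change, ~8.4%
multiple_engine = (pe_end / pe_start) ** (1 / years) - 1

# Engine 3: capital-return yield, approximated as the earnings yield at
# the average multiple over the holding period, ~10%
capital_return_engine = 1 / ((pe_start + pe_end) / 2)

total = growth + multiple_engine + capital_return_engine
print(f"growth {growth:.1%} + multiple {multiple_engine:.1%} + "
      f"capital returns {capital_return_engine:.1%} ≈ {total:.1%} per year")
```

The additive sum lands close to the article's "approximately 23%"; compounding the three engines together would give a slightly higher figure.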

…Remember: a good business isn’t one that has an interesting or exciting narrative; it’s one that makes a lot of money relative to the money invested into it. Buffett obviously doesn’t get influenced by narratives or growth stories. He’s only interested in finding great investments. And great investments tend to come from good businesses that are undervalued. And good businesses tend to have two common themes: strong returns on capital and good management that rationally allocates free cash flow. Japanese stocks and energy stocks lack exciting narratives, but they have the key ingredients found in most quality investments: good returns on capital, smart capital allocation, and low valuations. All three engines are working in these two investment areas for Buffett. I think this is what interested him in Apple, it is what interested him in Japan and energy, and it is what has led these investments to become so successful.

Rising returns on capital simply means more earnings per unit of capital invested. These rising ROIC’s can happen in three ways:

1 — increasing the denominator (reinvesting all capital into the business at high rates of return)

2 — increasing the numerator while keeping the denominator flat (i.e. higher earnings on same levels of capital), or

3 — and most surprising to most people — a similar value creation can also come from a shrinking denominator while keeping earnings flat: reducing excess cash levels through buybacks (which reduces the denominator). This means no growth but an increasing quality of earning power, which frees up more and more cash to be used for buybacks. This can be especially effective when the rising FCF occurs in stocks with low multiples, as the company gets a better return (higher FCF yield) on its own shares.
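A toy illustration of that third route, with hypothetical numbers: earnings stay flat while buybacks funded from excess capital steadily shrink the denominator, so measured ROIC rises every year despite zero growth.

```python
# Hypothetical numbers: flat earnings, with excess capital returned via
# buybacks each year, shrinking the invested-capital denominator.
earnings = 100.0            # stays flat every year (no growth)
invested_capital = 1000.0   # starting ROIC: 100 / 1000 = 10%
buyback_per_year = 50.0     # excess cash used to retire shares

for year in range(1, 6):
    invested_capital -= buyback_per_year
    roic = earnings / invested_capital
    print(f"year {year}: capital {invested_capital:.0f}, ROIC {roic:.1%}")
```

By year 5 the same 100 of earnings sits on 750 of capital, so ROIC has climbed from 10% to about 13%, purely from the shrinking denominator.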

4. Fundamentals simply do not matter in China’s stock markets – Michael Pettis

It is tempting to try to find meaning in the so-called “A-share premium”. This is the persistent valuation gap between the shares of Chinese companies that trade in Shanghai or Shenzhen (known as A-shares) and the shares of the same companies that trade in Hong Kong (H shares)…

…Normally, when onshore and offshore markets are separated by capital controls — and arbitrage is restricted, as is the case in China — onshore markets trade at a discount to the major offshore markets. This makes the Chinese A-share premium all the more anomalous. So why is the same share worth so much more on the mainland than it is offshore?

One theory is that it reflects differing views on political risk, with mainlanders less worried than foreigners about the risk of a political “event” disrupting business prospects. Another theory is that it shows that mainland investors are more optimistic about Chinese growth prospects than offshore investors. A third theory is that it reflects an information asymmetry in which onshore investors have access to a higher quality of information than offshore investors, and so are able to discount future growth prospects at a lower rate.

But none of these explanations makes any sense. They all assume, incorrectly, that prices in the Chinese stock market reflect a fundamental “view” about growth prospects, measured as the present value of future expected cash flows.

They do not, and never have. It has been almost impossible during the past few decades to find a credible correlation between the performance of the Chinese stock market and any measure of growth prospects or profitability. Monthly surges or drops of 10-20 per cent or more occur far too often to suggest any relation with normal economic volatility…

…The problem is that in a market in which macroeconomic data is questionable, financial statements are not credible, corporate governance is unclear, government intervention is unpredictable, and interest rates are repressed, it is impossible to be a fundamental investor except at very low prices, driven down by the high discount rates all this uncertainty requires. Investors whose effect is to drive capital toward its most productive use, in other words, are pretty much priced out of the mainland markets. That is why, for all the promises by local fund managers of their sophisticated fundamental selection processes, mainland markets are wholly speculative.

In fact the Chinese stock market is really a Keynesian beauty contest: “winners” are rewarded not for choosing the best-looking contestants, but rather for their ability to figure out the consensus. Successful investors are not those who understand the economy, in other words, but rather those who are good at interpreting government signalling, recognising shifts in liquidity and, above all, quickly discerning or even setting off changes in market consensus…

…It takes many years for a stock market to develop the qualities that allow and encourage fundamental investing. Mainland Chinese markets are slowly moving in that direction, but for now share prices provide no meaningful information at all about China’s economy. The A-share premium probably reflects nothing more than excess domestic liquidity.

5. Robotaxis Are Coming to Los Angeles. Everywhere Could Be Next – Alex Kantrowitz

Cruise is expanding its self-driving taxi operation to Los Angeles amid a year of huge growth for autonomous driving.

The GM subsidiary’s entry into the second-largest city in the U.S.—which I reported first today at Big Technology—comes as it’s increasing its autonomous rides by 49 percent per month and already doing more than 10,000 rides per week. In L.A., Cruise will begin testing soon and then expand to self-driving ride-hailing. It will be the company’s eighth city of operation, up from one at the start of this year. And it won’t be the last…

…As Cruise spreads across the U.S. and Alphabet’s Waymo robotaxi service grows along with it, autonomous driving is finally delivering after years of false hype. The technology went from a perpetual “six months away” to chauffeuring masses of riders this year as both companies gathered experience in pilot cities and used that knowledge to expand to others.

The hardest part of autonomous driving, in reality, was getting to this point. As soon as cars could navigate one or two major cities on their own, Cruise CEO Kyle Vogt said, expanding to more cities became less of a technology problem and more of a vehicle supply issue. With that supply steadily coming online, rapid scaling should be next.

“Last year, we were operating tens of autonomous vehicles. We’re currently operating hundreds—almost 400 concurrently at peak. Next year, there’ll be thousands. And then it’ll continue at least 10 times growth every year for the foreseeable future,” Vogt said.

Both Cruise and Waymo have found that their technology adapts well across cities, without having to retrain it from the ground up. After adjusting for some city-specific features—like the shape of traffic lights or the nature of traffic circles—their robotaxis can start driving through new cities fairly quickly…

…Waymo is also testing on freeways in the San Francisco area, taking on autonomous driving’s next frontier. Currently, neither Waymo nor Cruise offers ride-hailing customers the option to take freeways. But it shouldn’t be that far away. “On 101, 280, 380, you’ll see our cars at all times of day driving with other cars, at speed, making lane changes, etc.,” Nalavade said. “Hopefully, in the coming months, there’ll be some announcements about our freeways.”

Riding in self-driving cars has become commonplace in some cities already, something I experienced in San Francisco over the past two weeks. In approximately a dozen rides with Waymo and Cruise, I hailed autonomous rides via their apps (similar to Uber and Lyft) and got into their cars alone, in a totally empty vehicle, with no human behind the wheel. It was at first a bit nerve-racking. Then it felt normal. I soon ignored the experience completely. Now I don’t want to ride any other way.

There’s a lot to like about the autonomous vehicles—even if their rollout in San Francisco has been far from perfect. In my experience, they ride smoother than any human driver. Their apps accept ride requests immediately (if the services have enough supply). Their cabins feel private (though there are cameras). And there’s no awkwardness around tip, conversation, climate, or music. Everything is at the rider’s discretion.

From a safety standpoint, both companies claim that data shows that the cars are better than human drivers, although some of the disruption they’ve caused in the Bay Area has inspired a whimsical protest movement intended to stop the tech’s expansion. But once you’re in the vehicle, the stats only confirm what you’re seeing. The cars are cautious, not distracted, not drunk, and they navigate turns and stops with ease.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet and Apple. Holdings are subject to change at any time.

What We’re Reading (Week Ending 13 August 2023)


Here are the articles for the week ending 13 August 2023:

1. Why Dying Industries Can Make Great Investments – Brandon Beylo

There were four leading players in the gasoline additives industry during the early 1970s:

  • Ethyl
  • Dupont
  • PPG
  • Nalco

These companies produced billions of pounds of chemical products (additives) and made decent profits. That all changed in 1975.

In 1975, the Environmental Protection Agency (EPA) started enforcing its 1970 “Clean Air Act.” The regulation’s goal was to slowly eliminate the need for gasoline additives in cars. In 1975, car manufacturers were required to install catalytic converters to reduce toxic emissions.

But there was a problem. The converters couldn’t operate properly with the current additive-filled gasoline. It was a death sentence for the entire industry…

…Billions of pounds of production were reduced to nothing in two decades. Here’s the most crucial part of this entire saga and why I draw a comparison to today’s oil and gas space (emphasis added):

“Barriers to entry are another story. An insurmountable barrier protected the four firms in the business. The EPA’s regulatory announcement in 1973 posted an unmistakable ‘Do Not Trespass’ sign for any firms contemplating entering the lead-based additive industry.” …

…External forces like the EPA set the death date for the industry. But instead of killing it, it gave the existing competitors a chance to milk their industry for every profit dollar possible…

…Ethyl also generated supernormal returns from its dying “no growth” additives business (emphasis added):

“In 1998, after its additive revenues had declined to $117M, Ethyl still made $51M in operating profits, a 44% margin. The rest of the company had operating margins of 11%.” 

2. Can Robots Evolve Into Machines of Loving Grace? – Meghan O’Gieblyn

My talk was about emergent intelligence in AI, the notion that higher-level capacities can spontaneously appear in machines without having been designed. I’d focused primarily on the work of Rodney Brooks, who headed up the MIT Artificial Intelligence Lab in the late 1990s, and his “embodied intelligence” approach to robotics. Before Brooks came along, most forms of AI were designed like enormous disembodied brains, as scientists believed that the body played no part in human cognition. As a result, these machines excelled at the most abstract forms of intelligence—calculus, chess—but failed miserably when it came to the kinds of activities that children found easy: speech and vision, distinguishing a cup from a pencil. When the machines were given bodies and taught to interact with their environment, they did so at a painfully slow and clumsy pace, as they had to constantly refer each new encounter back to their internal model of the world.

Brooks’ revelation was that it was precisely this central processing—the computer’s “brain,” so to speak—that was holding it back. While watching one of these robots clumsily navigate a room, he realized that a cockroach could accomplish the same task with more speed and agility despite requiring less computing power. Brooks began building machines that were modeled after insects. He used an entirely new system of computing he called subsumption architecture, a form of distributed intelligence much like the kind found in beehives and forests. In place of central processing, his machines were equipped with several different modules that each had its own sensors, cameras, and actuators and communicated minimally with the others. Rather than being programmed in advance with a coherent picture of the world, they learned on the fly by directly interacting with their environment. One of them, Herbert, learned to wander around the lab and steal empty soda cans from people’s offices. Another, Genghis, managed to navigate rough terrain without any kind of memory or internal mapping. Brooks took these successes to mean that intelligence did not require a unified, knowing subject. He was convinced that these simple robot competencies would build on one another until they evolved something that looked very much like human intelligence.

Brooks and his team at MIT were essentially trying to re-create the conditions of human evolution. If it’s true that human intelligence emerges from the more primitive mechanisms we inherited from our ancestors, then robots should similarly evolve complex behaviors from a series of simple rules. With AI, engineers had typically used a top-down approach to programming, as though they were gods making creatures in their image. But evolution depends on bottom-up strategies—single-cell organisms develop into complex, multicellular creatures—which Brooks came to see as more effective. Abstract thought was a late development in human evolution, and not as important as we liked to believe; long before we could solve differential equations, our ancestors had learned to walk, to eat, to move about in an environment. Once Brooks realized that his insect robots could achieve these tasks without central processing, he moved on to creating a humanoid robot. The machine was just a torso without legs, but it convincingly resembled a human upper body, complete with a head, a neck, shoulders, and arms. He named it Cog. It was equipped with over 20 actuated joints, plus microphones and sensors that allowed it to distinguish between sound, color, and movement. Each eye contained two cameras that mimicked the way human vision works and enabled it to saccade from one place to another. Like the insect robots, Cog lacked central control and was instead programmed with a series of basic drives. The idea was that through social interaction, and with the help of learning algorithms, the machine would develop more complex behaviors and perhaps even the ability to speak.

Over the years that Brooks and his team worked on Cog, the machine achieved some remarkable behaviors. It learned to recognize faces and make eye contact with humans. It could throw and catch a ball, point at things, and play with a Slinky.

When the team played rock music, Cog managed to beat out a passable rhythm on a snare drum. Occasionally the robot did display emergent behaviors—new actions that seemed to have evolved organically from the machine’s spontaneous actions in the world. One day, one of Brooks’ grad students, Cynthia Breazeal, was shaking a whiteboard eraser and Cog reached out and touched it. Amused, Breazeal repeated the act, which prompted Cog to touch the eraser again, as though it were a game. Brooks was stunned. It appeared as though the robot recognized the idea of turn-taking, something it had not been programmed to understand. Breazeal knew that Cog couldn’t understand this—she had helped design the machine. But for a moment she seemed to have forgotten and, as Brooks put it, “behaved as though there was more to Cog than there really was.” According to Brooks, his student’s willingness to treat the robot as “more than” it actually was had elicited something new. “Cog had been able to perform at a higher level than its design so far called for,” he said.

Brooks knew that we are more likely to treat objects as persons when we are made to socially engage with them. In fact, he believed that intelligence exists only in the relationships we, as observers, perceive when watching an entity interact with its environment. “Intelligence,” he wrote, “is in the eye of the observer.” He predicted that, over time, as the systems grew more complex, they would evolve not only intelligence but consciousness as well. Consciousness was not some substance in the brain but rather emerged from the complex relationships between the subject and the world. It was part alchemy, part illusion, a collaborative effort that obliterated our standard delineations between self and other. As Brooks put it, “Thought and consciousness will not need to be programmed in. They will emerge.”

The AI philosopher Mark A. Bedau has argued that emergentism, as a theory of mind, “is uncomfortably like magic.” Rather than looking for distinct processes in the brain that are responsible for consciousness, emergentists believe that the way we experience the world—our internal theater of thoughts and feelings and beliefs—is a dynamic process that cannot be explained in terms of individual neurons, just as the behavior of a flock of starlings cannot be accounted for by the movements of any single bird. Although there is plenty of evidence of emergent phenomena in nature, the idea becomes more elusive when applied to consciousness, something that cannot be objectively observed in the brain. According to its critics, emergentism is an attempt to get “something from nothing,” by imagining some additional, invisible power that exists within the mechanism, like a ghost in the machine.

Some have argued that emergentism is just an updated version of vitalism, a popular theory throughout the 18th and 19th centuries that proposed that the world was animated by an elusive life force that permeates all things. Contrary to the mechanistic view of nature that was popular at that time, vitalists insisted that an organism was more than the sum of its parts—that there must exist, in addition to its physical body, some “living principle,” or élan vital. Some believed that this life force was ether or electricity, and scientific efforts to discover this substance often veered into the ambition to re-create it artificially. The Italian scientist Luigi Galvani performed well-publicized experiments in which he tried to bring dismembered frog legs to life by zapping them with an electrical current. Reports of these experiments inspired Mary Shelley’s novel Frankenstein, whose hero, the mad scientist, is steeped in the vitalist philosophies of his time.

When reading about Brooks and his team at MIT, I often got the feeling they were engaged in a kind of alchemy, carrying on the legacy of those vitalist magicians who inspired Victor Frankenstein to animate his creature out of dead matter—and flirting with the same dangers. The most mystical aspect of emergentism, after all, is the implication that we can make things that we don’t completely understand. For decades, critics have argued that artificial general intelligence—AI that is equivalent to human intelligence—is impossible, because we don’t yet know how the human brain works. But emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through ingenuity or accident—consciousness will emerge as a side effect of complexity. At some point nature will step in and finish the job.

It seems impossible. But then again, aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself. The philosopher Gillian Rose once described the act of writing as “a mix of discipline and miracle, which leaves you in control, even when what appears on the page has emerged from regions beyond your control.”

3. Only a cheaper rupee can spur Indian growth – Ashoka Mody

While other Asian policymakers, such as those in South Korea and China, have strategically used sizeable depreciations of their currencies to bolster export competitiveness, Indian elites bemoan every infinitesimal decline in the rupee’s value as a national humiliation. A unique economic and political confluence first entrenched this bogus pride in the country’s psyche in the mid-1960s. And since the 1990s, the country’s corporate leaders and new rich have wanted to maintain a strong rupee. As a result, the country’s export-based growth has suffered, as have jobs for low-skilled workers…

…In a rare sane moment in 1949, a newly independent India devalued the rupee from Rs3.3 to Rs4.8 per dollar, bringing relief to its uncompetitive economy. Indian manufacturers could earn profits even when they lowered dollar sale prices, which helped increase exports. Costlier imports slowed import growth, helping reduce the current-account deficit. But the task was never completed. With low productivity and high inflation, India could not match countries such as Japan in labour-intensive manufactured exports. The World Bank and the IMF financed India’s large current account deficit, creating the illusion that it did not need currency devaluation.

When those two institutions finally threatened to stop financing that deficit, the country’s officials foolishly negotiated the rate to Rs7.5 per dollar in June 1966. This too-little-too-late devaluation did not compensate for the rise in domestic production costs. Taiwan and South Korea raced ahead, helped by currency devaluations; Indian exports languished.

The perceived failure of the 1966 devaluation to spur exports forever tarnished Indian belief in an activist exchange rate policy. Rather than encouraging more aggressive nominal devaluation to offset the rise in production costs and thus achieve real depreciation, devaluation “by stealth” was always too little, too late. In the 1980s, China used aggressive exchange rate depreciation as key to its monumental export push…

…India’s accumulated cost-of-production disadvantage requires the rupee to drop to about Rs90 per dollar; Rs100 per dollar would provide an ideal cushion. But Indian authorities continue to avoid an activist exchange rate policy, and rely on dodgy policy tools:

4. The Infamous Coin Toss – Ole Peters

Imagine I offer you the following gamble. I toss a fair coin, and if it comes up heads I’ll add 50% to your current wealth; if it comes up tails I will take away 40% of your current wealth. A fun thing to do in a lecture on the topic is to pause at this point and ask the audience if they’d like to take the gamble. Some will say yes, others no, and usually an interesting discussion of people’s motivations emerges. Often, the question comes up whether we’re allowed to repeat the gamble, and we will see that this leads naturally to the ergodicity problem.

The ergodicity problem, at least the part of it that is important to us, boils down to asking whether we get the same number when we average a fluctuating quantity over many different systems and when we average it over time. If we try this for the fluctuating wealth in the Peters coin toss the answer is no, and this has far-reaching consequences for economic theory.

Let’s start with averaging wealth, x_i(t), over an ensemble of many different systems. In our case this corresponds to N different players, each starting with x_i = $100, say, and each tossing a coin independently. After the coins have been tossed, about half of the people will have thrown heads, and the other half tails. As the number of players goes to infinity, N→∞, the proportions of heads and tails will approach 1/2 exactly, and half the players will have $150, the other half $60. In this limit, we know what the ensemble average will be, namely ⟨x(1)⟩ = 1/2 × ($150 + $60) = $105. For historical reasons, this average is also called the expected value, and for the Peters coin toss, it grows by 5% in every round of the gamble so that

⟨x(t)⟩ = $100 × 1.05^t…
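The ensemble average is easy to check numerically. Here is a minimal sketch (my own illustration, not code from Peters’ article) simulating many players, each tossing independently for ten rounds:

```python
import random

random.seed(0)

N, T = 100_000, 10            # players, rounds
wealth = [100.0] * N

for _ in range(T):
    for i in range(N):
        # heads: +50%, tails: -40%
        wealth[i] *= 1.5 if random.random() < 0.5 else 0.6

ensemble_avg = sum(wealth) / N
print(ensemble_avg)           # close to 100 * 1.05**10 ≈ 162.9
```

With 100,000 players, the simulated average lands within a couple of dollars of the theoretical $100 × 1.05¹⁰.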

…To see that the gamble is not ergodic, let’s now find the average value of wealth in a single trajectory in the long-time limit (not in the large-ensemble limit). Here, as T grows, again the proportions of heads and tails converge to 1/2. But, crucially, a head and a tail experienced sequentially is different from two different agents experiencing them. Starting at x_1(0) = $100, heads takes us to x_1(1) = $150, and following this sequentially with tails, a 40% loss, takes us down to x_1(2) = $90 — we have lost 10% over two rounds, or approximately 5% per round. Since we lose 5% per round, averaged over time, individual wealth is guaranteed to approach zero (or negative infinity on logarithmic scales) in the long-time limit T→∞…

…We have thus arrived at the intriguing result that wealth averaged over many systems grows at 5% per round, but wealth averaged in one system over a long time shrinks at about 5% per round…
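The roughly 5% per-round decay along a single trajectory follows from the geometric mean of the two outcomes: √(1.5 × 0.6) = √0.9 ≈ 0.9487. A simulation of one long trajectory confirms it (again my own sketch, not from the article; it works in log-wealth because the actual wealth underflows to zero long before a million rounds):

```python
import math
import random

random.seed(1)

T = 1_000_000                 # rounds for one player
log_wealth = math.log(100.0)
for _ in range(T):
    # heads: x1.5, tails: x0.6, applied sequentially to the same player
    log_wealth += math.log(1.5) if random.random() < 0.5 else math.log(0.6)

# average per-round growth factor along this single trajectory
growth = math.exp((log_wealth - math.log(100.0)) / T)
print(growth)                 # ≈ sqrt(0.9) ≈ 0.9487
```

So the same gamble that grows 5% per round averaged over players shrinks about 5% per round averaged over time — the ergodicity breaking the text describes.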

…The significance of this ergodicity breaking cannot be overstated… Third, one core problem of economics and politics is to address conflicts between an individual, for example a citizen, and a collective, for example a state. This is the question of societal organization, institutions, Rousseau’s social contract and so on. This problem can seem puzzling, and it often attracts naive answers, because the collective consists of individuals. How, then, can the interests of the individual be misaligned with those of the collective? One important answer is ergodicity breaking.

5. Samo Burja – The Great Founder Theory of History – Patrick O’Shaughnessy and Samo Burja

Patrick: [00:01:32] Samo, your writing has been amongst the most interesting that I’ve encountered in the last couple of years — just a tremendous variety of ideas and ways of looking at the world and history. One of the overarching things that you’re best known for is this lens on history called Great Founder Theory. I’d love you to just begin by laying out the core idea here, how you came upon it, and maybe what it opposes — the alternative view of history from the one that you’ve developed. I’d love to start there, and then we’ll dive into lots of nooks and crannies together.

Samo: [00:02:03] To me, it seems that most of social science for the last 100 years has been focused on trying to find these macro deep patterns of human behavior and human history — sometimes being so hubristic as to try to find immutable laws of history, as was the case in the early and middle 20th century.

And while it certainly is the case that there are deep patterns worth studying in the nature of all civilization, from the advent of agriculture to today, and while it certainly is true again that there is a deep current changing our society that started with the Industrial Revolution 200 years ago, none of these patterns is set in stone. None of these patterns is fixed. So I think none of them really rises to the level of sociological laws.

And the reason why we can’t just predict future history for the next few hundred years is that people observe society and come to alter it — exceptional individuals in particular. The great man theory of history perhaps overstates it; such individuals don’t determine everything about how things transpire. But I think almost all of the exceptional institutions that have shaped human civilization — anything you can think of, be it organized religion, technological companies, political systems — usually have an individual or a small group of people who deviate from the previous social norm and create a new type of organization, a new type of institution or, honestly, just a new way of doing things as the old society fails.

And we see this over and over again. Again, to give historical examples, they might take a very religious and legible form such as creating, founding a new organized religion. Say, for example, Muhammad, reorganizing the tribal Arab societies into a cohesive, unified whole, ends up expanding and conquering most of the Middle East. They might take the form of, say, Confucius, who has this relatively modest social reform program, but ends up teaching something like 100 bureaucrats who travel the country and try to spread the so-called philosophy of reforming the dysfunctional Chinese states during the Warring States Period. And eventually, that comes to dominate their education system for the next 2,000 years.

Or it might be Charlemagne, refounding what is basically a tribal structure into something that ingests Roman law, creating the Frankish Empire as we think of it and laying the groundwork for medieval European feudalism. It’s not the case that Charlemagne or Muhammad or Confucius thought out the full effect of their reforms on society for 1,000 or 2,000 years; it’s just that they did shape human civilization for the next 1,000 or 2,000 years. And if you removed any of them, history would have gone quite differently — not necessarily because of their personal impact in winning this or that battle, but from the perspective of reshaping the institutions that set the probabilities for these events…

Patrick: [00:09:33] If you think about the power of that — predicting the future as basically today plus progress in the same direction — what are the different directions that stand out most to you as possible futures that might surprise people? If you take that 50-years-hence example, what are the things, trends, great founders, people that you’re watching that might make the world look very different in 50 years than it does today?

Samo: [00:09:58] I think there are some interesting surprises, where most of the Middle East will probably fail to properly industrialize and develop any sort of high-tech energy, any sort of transition away from oil. However, an interesting exception to this might be the United Arab Emirates. People a few years ago were surprised that there was an Emirati mission to Mars. Now of course, this was mostly done by the Japanese space agency, yet there was significant involvement from within the UAE.

People also might be surprised to learn that they are building nuclear reactors for civilian use. They are also starting to manufacture all sorts of other equipment within the country. So the UAE might be a very successful, highly developed country 50 years from now — if, basically, the current monarchs and their successors continue to be relatively directed and well-governing, if they continue to agentically adapt to economic changes.

It’s the same kind of transformation that we saw with Singapore over the 50 years after its initial independence under Lee Kuan Yew, where he sort of broke the mold in a whole variety of ways, and the usual advice for how a country should develop was ignored. And most of the countries that followed that advice didn’t develop; meanwhile, Singapore did.

The other important one is that I think the European institutions will decay much more than people are even now assuming. I think significant chunks of Europe might become somewhat impoverished. And the key reason for this is that there are very few live players — that is, exceptional people who can adjust to their circumstances — in any position of power in the European system. Take the economic domain: there are few exceptional new companies.

There’s a reason that European tech stagnates so profoundly. Russia has more unicorns than Germany. And Russia is not a well-functioning economy, but for whatever reason, it’s easier to create a tech start-up in Russia than it is in Germany — to acquire a large market, a large user base, and so on. Unless someone actively refounds European governments, the EU supranational bureaucracy, or even something like a key industrial sector in Europe, Europe will continue to decline — one or two percentage points a year, where at first it’s imperceptible, and then 20, 30, 40 years on, it just seems a vastly different place. I think thinking of Europe as the formerly developed world will become common.

Patrick: [00:12:49] What about the United States?

Samo: [00:12:51] The United States has some similar problems to Europe; it just has them to a much lesser degree. There’s some discussion recently of American dynamism, of reindustrialization, of things like the CHIPS Act, which is supposed to reshore certain kinds of manufacturing in the United States. Obviously, the U.S. has a relatively healthy start-up scene. Obviously, artificial intelligence is advancing most rapidly in the United States, here in the Bay Area.

But I think ultimately, core problems of the U.S. government have not been resolved. The U.S. government is less functional, is less competent, is less cost-effective than it was 40 or 50 years ago. Whatever we think of other social changes, it is hard to deny that a government-run project will just be run worse than it would have been in 1960 or 1940.

In addition to this, outside of artificial intelligence, software companies and tech companies have experienced a real slowdown. The reason that so much capital flowed into AI wasn’t just because AI was wonderful and exciting; it’s because there was nothing else around. There’s a real, genuine breakthrough with ChatGPT, but what else is happening? What happened to cryptocurrencies? What happened to “software is eating the world”? All of that? Those mantras created many, many new companies, but the economic value-add of those companies was smaller. U.S. economic growth, I think, is somewhat overstated but real. Meanwhile, in Europe, I think we are already seeing the beginnings of a contracting economy.

In some ways, Japan is a good example of where our future is going. The United States, Europe, lots of the developed East Asian economies, and some semi-developed ones like China’s are all experiencing a massive demographic transition. And some of these things are very much exponential: for example, when your population starts aging, at first you might even have an increase in total population, since while fewer people are born, previous generations are still alive and working. Eventually, you start to see a decrease in population — in one year the population shrinks by 100,000; a few years on, it might be 2 million, 3 million, 4 million, just because it’s already baked in so deeply.

These are all compounding effects. So ultimately, that demographic headwind is something that only the United States is outrunning — a little bit with the help of immigration, but mostly from rising productivity in the tech sector. And the tech sector itself is making a big bet on AI. If AI doesn’t work out — that big, economy-transforming bet — I think the U.S. will also slip into this kind of decaying state.

Patrick: [00:15:51] Thinking about some of the key terms that you’ve mentioned, I want to pick on two: the concept of a great founder and the concept of a live player, which is a term I love — I’ve sort of adopted my own version of it in lots of conversations. Maybe define what both of those terms mean and help us understand their relative frequency. How many great founders have there ever been in history? How many live players are alive at any given time, in your estimation? Give us your definition of those two terms and how common they are.

Samo: [00:16:20] Okay. I think every time a founder creates a new organization, this is a singular act of social creation — even if it’s something relatively boring, like a technology company or a nonprofit organization, with its peculiarities of who they pick as staff and what decisions they make.

Similarly, what would the great founder be? Well, they’re the creator of a key new social institution. One way to think of civilizations is that a civilization isn’t a single organism. It’s less a tree and more a forest, where many individual institutions can be replaced, and it is still recognizably the same civilization, the same ecological pattern.

Say, if you were to look at Western civilization and observe that — I don’t know — say, the Catholic Church is much less important now than it was 100 years ago. Just because society secularized doesn’t mean it’s a different civilization yet. When exactly does the forest transform into savanna, or something like that? We can have those discussions. But it is true that some institutions are vastly more influential than others.

So having said that, how many unique pieces of — let’s think of it as social technology, or unique civilization-defining institutions — are there per civilization? Well, I think that most civilizations have something like five to maybe eight unique things that they’re doing. And the total number of distinct — in this macro-historical sense — civilizations in human history that we know of, I would say, is about 30 or so. Most of them, of course, are long gone. No one is very much interested in Sumerian civilization except as a historical case study today; it doesn’t impact us in a new, profound way, except perhaps, in some ways, influencing biblical myths like the great flood.

So then let’s say about 30 civilizations, some of them still relevant, some of them ancient history, each of them having probably something like 10 or so great founders. So for all of human history, if you were to chart the impact of these individuals — and again, I want to emphasize, sometimes it’s a small group of people; it might be an individual plus a few very close allies, or a partnership of two lawgivers, or anything like this — with such small human clusters, I think we’re talking about 500 people at most. 500 people at most for all of human history.

And then for the term live player: all great founders are live players, but not all live players are great founders. A live player is someone who is not operating off of an inherited script. An inherited script might be something like professionalism or a political tradition. It can be anything. It can be a very successful script.

If I am a surgeon and I work exactly as I was trained, I’ll be doing a fairly good job. I can repeat this exact program, this recipe that I’ve been taught, and of course apply it in my domain with some creativity. And society basically functions on top of such roles — on surgeons, engineers, plumbers, but also lawyers, politicians, priests.

A live player, though, is someone who can improvise on the spot, and develop and create new social roles. And one of the surest indicators that someone is a live player is the ability to jump multiple industries. If you see someone who succeeded in one industry, then went to a totally unrelated field of activity and succeeded there as well, and then went again and succeeded again, it’s very unlikely that’s luck, and it’s very unlikely they’re using the same recipe or the same insights in all three domains. And I think that’s the strongest evidence that some individuals can recreate patterns of behavior and improvise in a way that’s very deep, very groundbreaking.

And I think the total number of live players in the world right now is probably closer to about 50,000 or so. So it’s actually still extremely rare. A fun historical example might be Arnold Schwarzenegger, who rebrands from being a weightlifting champion to being an actor to being a politician. While he was an actor starring in blockbuster movies, people said, “Oh, he’s not really an actor.” And as soon as he became a politician, people said, “He’s not really a politician. He’s just an actor.” So it’s that kind of jump to a different activity that demonstrates an aliveness and adaptability.

Patrick: [00:20:58] I have so many questions about both. Maybe starting with great founders, since there are so many — fewer of them. What’s an example of somebody we might all be inclined to call a great founder — I’m going to make one up, like Napoleon or something — who is, in fact, not one, and why? I just want to use an example or two to really drive home the point that potentially so few people have effectively driven what happens in the lives of the many billions of people that have lived through human history.

Samo: [00:21:27] For Napoleon, he won many battles. He was an exceptional general. But if he were a great founder — and I’m not yet convinced that he is one — it would mostly be through the military and legal reforms that he instituted. Now, when it comes to military reforms, his method of directing battles with the general staff and so on was somewhat influential, but I think it was overdetermined to have developed in that direction even without Napoleon.

The interesting thing, though, is his code of law that spread across Europe — that could be argued to be profoundly influential. That was actually the moment when Europe stepped away from feudalism. Europe adopted a very different legal framework; say, guilds were outlawed across Latin Europe. It would be an exaggeration to say markets were opened, but it would not be an exaggeration to say that people from all walks of life could suddenly enter positions in not just the French state, but all of these puppet republics and kingdoms that were set up.

Even in countries that were strictly opposed to Napoleon, that were only coerced into alliances, such as Austria and Prussia, some of these reforms were imitated because they were so administratively and politically successful. Now, having said all of this, it’s still not clear Napoleon is a great founder. It might turn out that, in fact, the civil code and these reforms are much more important than we think.

However, it seems to be a fairly short chapter in European history, and not directly related to the industrial revolution as such. It doesn’t seem like Napoleon’s reforms were particularly conducive to France becoming a great industrial power 50 or 80 or 90 years later…

Patrick: [00:36:36] Another thing you’ve written about that I find fascinating: when a lot of people think about history, they might think of it as the march of hard technology — in the example I used earlier, ideas building on ideas in a very hard math-and-science way. Your idea is that these social technologies, which are installed by these great founders, are actually upstream of hard technology and innovation. Can you describe that mechanism as you see it, and why you think the world works that way?

Samo: [00:37:05] I think material technology — that is, the hard technology you described — and social technology are intensely mutually symbiotic. You can’t have one without the other. Take, say, a Bronze Age empire relying on the infrastructure of bringing copper from the Eastern Mediterranean and tin from the British Isles or Afghanistan, melting them down into weaponry — there’s a real technological and infrastructure base there. So Bronze Age empires rely on that.

Everything from that to modern chip fabs, where we need a planet’s worth of economies of scale so that on a small island off the coast of China, we invest hundreds of billions of dollars into what, four factories that make the chips that are in every device we all have and carry with us daily. That’s crazy. That’s a crazy technological dependency for our society. But it goes the other way, too. The technology depends on the social.

I described global trade — well, does global trade rest on the chip fabs themselves? No, not really. Maybe you could say, oh, it rests on the hard technology of the U.S. Navy. But wait, what is the U.S. Navy? If the U.S. Navy is keeping the world’s oceans safe and navigable for trade, and the U.S. has supported a system of free international trade, et cetera, et cetera — it becomes very murky. It becomes very hard to arrive at this phenomenon of the technology itself.

Most importantly, if the technology itself, the material technology, was all that was driving forward human history, it would look much more like a ratchet. It wouldn’t look like this thing with fits and starts, this thing with the rise and fall of very advanced civilizations all the time. It wouldn’t have civilizations going down blind alleys. Consider 16th century Japan: very adept at gunpowder warfare, very adept at using the gun. The gun is outlawed after Japan is unified.

Japanese guns then stagnated for the next 200 or 300 years. Look, if it were just that you introduce the gun to society and then modern warfare starts to develop, Japan wouldn’t have fallen behind the Western world. We often talk, again in the American mythological context, as though introducing personal firearms is a force for liberty. Yet in much of Asia, in the 17th and 18th centuries, the introduction of firearms actually empowered large, centralized militaries. The rifle in the hands of a Napoleonic soldier can be either a tool of despotism or a tool of liberation. It’s a mass exercise, not an individual exercise.

We’re discussing guns — what about the printing press? I mentioned Martin Luther earlier. Honestly, the first thing that was printed on the printing press wasn’t Martin Luther’s bible. It was indulgences. So it’s first used as a financial mechanism to fund and strengthen the papacy. It only later comes to be adapted into bibles — in German, that is. And also, there were variants of the printing press introduced in Chinese society and Korean society long before the printing press was invented in Europe. So a simplistic story where you say, “Oh, guns lead to personal liberation, or the printing press leads to information liberation” — these are not preordained; these are possible routes you can go down with that technology.

And then finally, there are clear examples of technology advancing and then regressing. If it were purely the growth and development of a technical base with no social factor whatsoever, the Roman Empire would never have fallen — or if it did fall, we wouldn’t have lost technologies such as Roman concrete or Heron’s steam engine, a primitive steam engine used in Alexandria. We wouldn’t have lost significant chunks of mathematics that were forgotten for 1,000 years.

People understood quite well in 200 BC that the earth is round. This was known. Eratosthenes calculated the size of the earth. So there are all sorts of interesting examples where we can show that scientific knowledge advances and regresses and, more importantly, where we can show technology advances and then regresses. I feel like a lot of the advocates of a more hard-technology view want to have it both ways.

They want technology to be all-important, but they will acknowledge, if pressed, that technology is fragile. So then: wait, which is it? If technology is all-important except for being very fragile, maybe we should study the societal causes of that fragility. And they do acknowledge these are societal causes, not material causes. That's the way to think about it: social organization is a prerequisite for material technologies, and material technologies are a prerequisite for many kinds of social organization…

Patrick: [00:58:51] If you were to imagine someone extremely smart and thoughtful, whom you respect very deeply, who most disagrees with your worldview: I'm curious if any actual person comes to mind, but also, just generically, what you think is the worldview most different from your own that you still find interesting for some reason?

Samo: [00:59:13] It's an interesting and difficult question. I think that Peter Turchin has an interesting approach. He is an American, not quite a historian, more a complex systems theorist. He founded his own field, called cliodynamics. You could think of it as almost Hari Seldon-like.

Patrick: [00:59:31] It's like a science of history.

Samo: [00:59:33] Yes, to produce a macro mathematical model of history. He ends up finding what I think are patterns in the distribution of elites, applying an old sociological theory from the early 1900s. Vilfredo Pareto already spoke of it: the idea of elite overproduction, where societies direct surplus toward certain goals and people aspire to join the elites; eventually, there are too many elites for the society to sustain, and the elites start fighting each other over who gets to stay elite. This is the cycle of violence and peace, and you can map these cycles over long periods of time.

It's not so much that I think he is wrong about some of the patterns he observes; it's that I think patterns hold until they don't. These are long patterns that break, and there is no clear statistical way to predict when they break. The breaking of the patterns, I think, is sometimes due to the work of great founders, who might take a civilization that was in terminal decline, about to destroy itself in interminable civil wars, and reorient it into a totally different political system that directs those energies outward. Or you might have the periphery of a declining empire that manages to break away from that empire, win its independence, develop a whole new set of social norms, and become the core of a totally new civilization.

These are things that are not really well captured by these statistical models, and these are things that are not, I think, overdetermined. Now, ultimately, we can get into debates about free will, and you can say, "Oh, but every individual is a deterministic product of their society." And sure, that's true. But if the fate of a society actually depends on a few individuals, then it is not possible to study those individuals sociologically.

For what's happening inside my brain, it doesn't make sense to use theories of elite overproduction. That becomes a psychological or biological question. And you know, the brains of the great founders who shaped our world are long decomposed, so we can't actually study them. So we have to acknowledge the limits of our sociological knowledge. It's almost an event-horizon consideration, even if, theoretically, everything is fully deterministic.

So I would say that this leads me to my second disagreement. I think we lose far too much information about past societies to be able to develop such highly accurate quantitative models. We have decent quantitative information on, say, the economy of the last 100 years. But if you've ever tried to study even, say, the economics or politics of 18th-century Europe, you realize there's all kinds of data that is really hard to access. And if you go even further back, to the 14th century, it gets even more difficult.

So really, we are operating with a very sparse data set. So yes, Peter Turchin is very interesting. Then maybe Steven Pinker, who is rather well known: I completely disagree with his idea of a linear, ratcheting development of human progress over time. I think that progress is always bounded by the civilization you find yourself in, and these civilizations tend to be mortal. So there will be progress for a civilization until there's not, and then the civilization fails.

And perhaps a new civilization picks up at a more advanced level; perhaps the new civilization is actually more primitive than what came before. My view of history is not that it's exactly cyclical, not at all; there is further evolution. But the idea that we've been on a smooth, compounding curve of material and moral progress for the last 10,000 years? I think that can be easily disproven.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 06 August 2023)


Here are the articles for the week ending 06 August 2023:

1. The Next Frontier For Large Language Models Is Biology – Rob Toews

One striking theme has emerged from the long march of research progress across biochemistry, molecular biology and genetics over the past century: it turns out that biology is a decipherable, programmable, in some ways even digital system.

DNA encodes the complete genetic instructions for every living organism on earth using just four variables—A (adenine), C (cytosine), G (guanine) and T (thymine). Compare this to modern computing systems, which use two variables—0 and 1—to encode all the world’s digital electronic information. One system is binary and the other is quaternary, but the two have a surprising amount of conceptual overlap; both systems can properly be thought of as digital.
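The binary/quaternary parallel can be made concrete with a toy encoding. This is purely an illustrative sketch (the mapping of bases to bit pairs is arbitrary), showing that each DNA base carries exactly two bits of information:

```python
# Each DNA base carries two bits, so a quaternary sequence maps
# cleanly onto a binary encoding. The specific pairing is arbitrary.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bits(seq: str) -> str:
    """Encode a DNA sequence as a binary string (2 bits per base)."""
    return "".join(BASE_TO_BITS[base] for base in seq.upper())

def bits_to_dna(bits: str) -> str:
    """Decode a binary string back into a DNA sequence."""
    bits_to_base = {v: k for k, v in BASE_TO_BITS.items()}
    return "".join(bits_to_base[bits[i:i + 2]] for i in range(0, len(bits), 2))

seq = "GATTACA"
encoded = dna_to_bits(seq)
print(encoded)  # 7 bases -> 14 bits
assert bits_to_dna(encoded) == seq
```

Round-tripping any sequence through `dna_to_bits` and back recovers it exactly, which is the sense in which the two encodings overlap conceptually.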

To take another example, every protein in every living being consists of and is defined by a one-dimensional string of amino acids linked together in a particular order. Proteins range from a few dozen to several thousand amino acids in length, with 20 different amino acids to choose from.

This, too, represents an eminently computable system, one that language models are well-suited to learn.

As DeepMind CEO/cofounder Demis Hassabis put it: “At its most fundamental level, I think biology can be thought of as an information processing system, albeit an extraordinarily complex and dynamic one. Just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI.”…

…Proteins are involved in virtually every important activity that happens inside every living thing: digesting food, contracting muscles, moving oxygen throughout the body, attacking foreign viruses. Your hormones are made out of proteins; so is your hair…

…As mentioned above, every protein consists of a string of building blocks known as amino acids strung together in a particular order. Based on this one-dimensional amino acid sequence, proteins fold into complex three-dimensional shapes that enable them to carry out their biological functions.

A protein’s shape relates closely to its function. To take one example, antibody proteins fold into shapes that enable them to precisely identify and target foreign bodies, like a key fitting into a lock. As another example, enzymes—proteins that speed up biochemical reactions—are specifically shaped to bind with particular molecules and thus catalyze particular reactions. Understanding the shapes that proteins fold into is thus essential to understanding how organisms function, and ultimately how life itself works.

Determining a protein’s three-dimensional structure based solely on its one-dimensional amino acid sequence has stood as a grand challenge in the field of biology for over half a century. Referred to as the “protein folding problem,” it has stumped generations of scientists. One commentator in 2007 described the protein folding problem as “one of the most important yet unsolved issues of modern science.”

In late 2020, in a watershed moment in both biology and computing, an AI system called AlphaFold produced a solution to the protein folding problem. Built by Alphabet’s DeepMind, AlphaFold correctly predicted proteins’ three-dimensional shapes to within the width of about one atom, far outperforming any other method that humans had ever devised.

It is hard to overstate AlphaFold’s significance. Long-time protein folding expert John Moult summed it up well: “This is the first time a serious scientific problem has been solved by AI.”…

…AlphaFold was not built using large language models. It relies on an older bioinformatics construct called multiple sequence alignment (MSA), in which a protein’s sequence is compared to evolutionarily similar proteins in order to deduce its structure.

MSA can be powerful, as AlphaFold made clear, but it has limitations.

For one, it is slow and compute-intensive because it needs to reference many different protein sequences in order to determine any one protein’s structure. More importantly, because MSA requires the existence of numerous evolutionarily and structurally similar proteins in order to reason about a new protein sequence, it is of limited use for so-called “orphan proteins”—proteins with few or no close analogues. Such orphan proteins represent roughly 20% of all known protein sequences.

Recently, researchers have begun to explore an intriguing alternative approach: using large language models, rather than multiple sequence alignment, to predict protein structures.

“Protein language models”—LLMs trained not on English words but rather on protein sequences—have demonstrated an astonishing ability to intuit the complex patterns and interrelationships between protein sequence, structure and function: say, how changing certain amino acids in certain parts of a protein’s sequence will affect the shape that the protein folds into. Protein language models are able to, if you will, learn the grammar or linguistics of proteins.

The idea of a protein language model dates back to the 2019 UniRep work out of George Church’s lab at Harvard (though UniRep used LSTMs rather than today’s state-of-the-art transformer models).

In late 2022, Meta debuted ESM-2 and ESMFold, one of the largest and most sophisticated protein language models published to date, weighing in at 15 billion parameters. (ESM-2 is the LLM itself; ESMFold is its associated structure prediction tool.)

ESM-2/ESMFold is about as accurate as AlphaFold at predicting proteins’ three-dimensional structures. But unlike AlphaFold, it is able to generate a structure based on a single protein sequence, without requiring any structural information as input. As a result, it is up to 60 times faster than AlphaFold. When researchers are looking to screen millions of protein sequences at once in a protein engineering workflow, this speed advantage makes a huge difference. ESMFold can also produce more accurate structure predictions than AlphaFold for orphan proteins that lack evolutionarily similar analogues.

Language models’ ability to develop a generalized understanding of the “latent space” of proteins opens up exciting possibilities in protein science.

But an even more powerful conceptual advance has taken place in the years since AlphaFold.

In short, these protein models can be inverted: rather than predicting a protein’s structure based on its sequence, models like ESM-2 can be reversed and used to generate totally novel protein sequences that do not exist in nature based on desired properties.

All the proteins that exist in the world today represent but an infinitesimally tiny fraction of all the proteins that could theoretically exist. Herein lies the opportunity.

To give some rough numbers: the total set of proteins that exist in the human body—the so-called “human proteome”—is estimated to number somewhere between 80,000 and 400,000 proteins. Meanwhile, the number of proteins that could theoretically exist is in the neighborhood of 10^1,300—an unfathomably large number, many times greater than the number of atoms in the universe. (To be clear, not all of these 10^1,300 possible amino acid combinations would result in biologically viable proteins. Far from it. But some subset would.)…
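The ~10^1,300 figure is easy to sanity-check: with 20 amino acids available at each position, a protein of length n admits 20^n possible sequences. A minimal back-of-envelope script (the 1,000-residue length is an assumed example, within the "few dozen to several thousand" range quoted earlier):

```python
import math

# With 20 amino acids to choose from at each position, a protein of
# length n has 20**n possible sequences. Working in log10 avoids
# constructing the astronomically large number itself.
def log10_sequences(n_residues: int) -> float:
    return n_residues * math.log10(20)

# A single 1,000-residue protein already admits ~10^1301 sequences,
# in line with the article's ~10^1,300 figure.
print(round(log10_sequences(1000)))  # -> 1301
```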

…Using AI, we can for the first time systematically and comprehensively explore the vast uncharted realms of protein space in order to design proteins unlike anything that has ever existed in nature, purpose-built for our medical and commercial needs.

We will be able to design new protein therapeutics to address the full gamut of human illness—from cancer to autoimmune diseases, from diabetes to neurodegenerative disorders. Looking beyond medicine, we will be able to create new classes of proteins with transformative applications in agriculture, industrials, materials science, environmental remediation and beyond…

…Thanks to scientific breakthroughs that have made gene sequencing vastly cheaper and more accessible over the past two decades, the amount of DNA and thus protein sequence data available to train AI models is growing exponentially, far outpacing protein structure data.

Protein sequence data can be tokenized and for all intents and purposes treated as textual data; after all, it consists of linear strings of amino acids in a certain order, like words in a sentence. Large language models can be trained solely on protein sequences to develop a nuanced understanding of protein structure and biology.
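As a minimal sketch of what "tokenized like text" means in practice: each one-letter amino acid code becomes an integer ID in a small vocabulary, just as words or subwords do in an ordinary language model. (Real protein LLMs such as ESM-2 also use special tokens like padding and mask markers; this stripped-down version and the example sequence are illustrative assumptions only.)

```python
# Each of the 20 standard amino acids (one-letter codes) becomes an
# integer ID, the same way subwords do in a text language model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Map a protein sequence to a list of integer token IDs."""
    return [VOCAB[aa] for aa in sequence]

# A short, made-up sequence fragment:
tokens = tokenize("MKTAYIAK")
print(tokens)  # -> [10, 8, 16, 0, 19, 7, 0, 8]
```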

This domain is thus ripe for massive scaling efforts powered by LLMs—efforts that may result in astonishing emergent insights and capabilities in protein science.

2. Country Risk: A July 2023 Update – Aswath Damodaran

What makes some countries riskier than others to operate a business in? The answer is complicated, because everything has an effect on risk, starting with the political governance system (democracy, dictatorship or something in between), the extent of corruption in the system, the legal system (and its protection for property rights) and the presence or absence of violence in the country (from wars within or without)…

…Things get even more complicated when you recognize that these drivers are often correlated with, and drive, each other. Thus, a country that is ravaged by war and violence is more likely to have a weak legal system and be corrupt. Furthermore, all of these risk exposures are dynamic, and change over time, as governments change and violence from internal or external forces flares up.

As you assess these factors, you can see very quickly that country risk is a continuum, with some countries exposed less to it than others. It is for that reason that we should be cautious about discrete divides between countries, as is the case when we categorize countries into developed and emerging markets, with the implicit assumption that the former are safe and the latter are risky. To the extent that divide is not just descriptive, but also drives real world investment, both companies and investors may be misallocating their capital, and I will argue for finer delineations of risk…

… If your focus stays on economic risk, the question of whether democracies or authoritarian regimes are less risky for businesses to operate in depends in large part on whether those businesses are more unsettled by continuous, day-to-day risk, which is often the case in democracies, where the rules can change when new governments get elected, or by discontinuous risk, which can lie dormant for long periods but, when it does surface, is larger and sometimes catastrophic, as in an authoritarian government…

…In 2022, North America and Western Europe scored highest on the democracy index, and Middle East and Africa scored the lowest.

In my view, the question of whether businesses prefer the continuous change (or, in some cases, chaos) that characterizes democracies, or the potential for discontinuous and sometimes jarring change in authoritarian regimes, has driven the debate over whether a business should feel more comfortable investing in India, a sometimes chaotic democracy where the rules keep changing, or in China, where Beijing is better positioned to promise continuity. For three decades, China won this battle, but in 2023 the battleground seems to be shifting in favor of India, though it is still too early to judge whether this is a long-term change or just a hiccup…

…When a country is exposed to violence, either from the outside or from within, it not only exposes its citizens to physical risk (of assault or death), but also makes it more difficult to run businesses within its borders. That risk can show up as costs (of buying protection or insurance) or as uninsurable risks that drive up the rates of return investors and businesses need to make, in order to operate…

…Iceland and Denmark top the list of most peaceful countries, but in a sign that geography is not destiny, Singapore makes an appearance on that list as well. On the least peaceful list, it should come as no surprise that Russia and Ukraine appear, but Sub-Saharan Africa is disproportionately represented…

…Corruption is a social ill that manifests itself as a cost to every business exposed to it. As anyone who has ever tried to get anything done in a corrupt setting will attest, corruption adds layers of costs to routine operations, thus becoming an implicit tax that companies pay, where the payment, instead of going to the public exchequer, finds its way into the pockets of intermediaries…

…Much of Western Europe, Australia & New Zealand and Canada/United States fall into the least corrupt category, but corruption remains a significant concern in much of the rest of the world. While it is easy to attribute the corruption problem to politicians and governments, it is worth noting that once corruption becomes embedded in a system, it is difficult to remove, since the structure evolves to accommodate it…

…To operate a business successfully, you need a legal system that enforces contractual obligations and protects property rights, and does so in a timely manner. When a legal system allows contracts and legal agreements to be breached, and property rights to be violated, with no or extremely delayed consequences, the only businesses that survive will be the ones run by lawbreakers, and not surprisingly, violence and corruption become part of the package…

…By now, you can see the point about the correlation across the various dimensions of country risk, with the parts of the world (North America, Europe, Australia and Japan) that have the most democratic systems and the least corruption scoring highest on the legal protection scores. Conversely, the regions (Africa, large portions of Asia and Latin America) that are least democratic, with the most violence and corruption, have the most porous legal systems…

…Businesses and individuals that borrow money sometimes find themselves unable to meet their contractual obligations and default, and so too can governments. The difference is that a government, or sovereign, default has much greater spillover effects on all entities that operate within its borders, thus creating business risks…

…The most widely used measures of sovereign default risk come from a familiar source for default risk measures, the ratings agencies. S&P, Moody’s and Fitch, in addition to rating companies for default risk, also rate governments, and they rate them both on local currency debt and on foreign currency debt. The reason for the differentiation is simple: countries should be less likely to default when they borrow in their domestic currencies than when they borrow in a foreign currency…

…One of the advantages of a market-based measure is that the market price reflects investor perceptions of risk at the moment. Sovereign Credit Default Swaps (CDS) offer a market-based measure of default risk, since investors buy these swaps as protection against default on government bonds. When the sovereign CDS market came into being a few decades ago, there were only a handful of countries that were traded, but the market has expanded, and there are traded credit default swaps on almost 80 countries in June 2023…

…The advantage of default spreads is that they provide an observable measure of risk that can be easily incorporated into discount rates or financial analysis. The disadvantage is that they are focused on just default risk, and do not explicitly factor in the other risks that we enumerated in the last section. Since these other risks are so highly correlated with each other, for most countries default risk is a reasonable proxy for overall country risk, but there are some countries where this is not the case. Consider portions of the Middle East, and especially Saudi Arabia, where default risk is not significant, since the country borrows very little and has a huge cash cushion from its oil reserves. Investors in Saudi Arabia are still exposed to significant risks from political upheaval or unrest, and may prefer a more comprehensive measure of country risk…
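To illustrate the point that default spreads can be "easily incorporated into discount rates", here is one simple additive sketch: add the sovereign default spread to a standard CAPM cost of equity as a country risk premium. The functional form and the numbers are illustrative assumptions, not the author's exact methodology (practitioners sometimes scale the spread by relative equity-market volatility, for instance).

```python
# A hedged, simplified sketch: treat the sovereign default spread as
# the country risk premium and add it on top of a CAPM cost of equity.
def cost_of_equity(risk_free: float, beta: float,
                   mature_erp: float, default_spread: float) -> float:
    """Risk-free rate + beta * mature-market equity risk premium,
    plus a country risk premium proxied by the default spread."""
    country_risk_premium = default_spread
    return risk_free + beta * mature_erp + country_risk_premium

# E.g. risk-free 4%, beta 1.1, mature-market ERP 5%, spread 3%:
rate = cost_of_equity(0.04, 1.1, 0.05, 0.03)
print(f"{rate:.3f}")  # 0.125
```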

…In addition to capturing risks that go beyond default, Political Risk Services also measures risk scores for frontier markets (like Syria, Sudan and North Korea), which have no sovereign ratings. The minuses are that the scores are not standardized… In addition, the fact that country risk is measured with scores may lead some to believe that they are objective measures of country risk, when, in fact, they are subjective judgments reflecting what each service factors into the scores, and the weights on those factors. Just to illustrate the contradictions that can result, PRS gives Libya a country risk score that is higher (safer) than the scores it gives the United States or France, putting it at odds with most other services, which rank Libya among the riskiest countries in the world…

… For much of my valuation journey, the status quo in valuation has been to look at where a company is incorporated to determine its risk exposure (and the equity risk premium to use in assessing a hurdle rate). While I understand that where you are incorporated and traded can have an effect on your risk exposure, I think it is dwarfed by the risk exposure from where you operate. A company that is incorporated in Germany but gets all of its revenues in Turkey is far more exposed to the country risk of Turkey than to that of Germany.

3. Japan’s growing debt mountain: Crisis, what crisis? – Andrew Sharp

When the U.K. announced uncosted tax breaks last year, it triggered a run on sterling, sent British government bond yields to their highest since the global financial crisis, and hastened the downfall of Prime Minister Liz Truss after just 44 days in office. This year, the U.K.’s ratio of debt to gross domestic product surpassed 100% for the first time since the early 1960s.

Japan could only dream of a figure so low.

The International Monetary Fund estimates that the world’s third-largest economy’s ratio is around 260% — by far the highest among developed economies, exceeding the 204% seen during World War II in 1944. The number is expected to continue creeping upward, according to projections by the Japan Center for Economic Research, a Nikkei-affiliated think tank.

Yet Tokyo remains relatively sanguine. In an optimistic scenario that sees a rise in Japan’s potential growth rate, the government projects it will balance its books by fiscal 2026.

The cost of borrowing, however, is rising. A decision by the Bank of Japan on Friday to allow yields on Japanese government bonds (JGBs) to rise above its previous cap of 0.5% to 1% has already triggered a spike in yields — they rose above 0.6% for the first time in nine years in Monday trading.

In the meantime, Japan keeps on spending. Prime Minister Fumio Kishida has pledged to boost defense expenditure to 2% of GDP by fiscal 2027 from around 1% now, and to double the child care budget to an annual 3.5 trillion yen ($25 billion). He is also planning to issue 20 trillion yen of Green Transformation (GX) bonds over the next decade.

While the GX bonds are to be repaid through a carbon tax and carbon pricing scheme, Kishida’s government has yet to settle on a plan to cover the defense and child-rearing outlays. Saddled with a super-aged society, the government projects Japan will have to spend nearly one quarter of GDP on social welfare such as nursing care and pensions in the fiscal year beginning April 2040.

So far, none of this has spooked global investors the way Truss’ tax plan did.

Various factors are dampening the fuse on Japan’s debt time bomb. Companies have large cash holdings and are not yet borrowing heavily. Japanese government bonds have a relatively long average maturity and are mostly held domestically. The country has a healthy current account surplus, and a rare period of inflation is also helping…

…Low growth as Japan’s population ages and shrinks is also a major risk. Without a significant productivity boost, a smaller working-age population would make it very difficult for Japan to maintain or boost growth, which would help to bring down the debt-to-GDP ratio.

“For Japan, the biggest social risk factor has been demographics,” de Guzman at Moody’s said.

4. Soft Landing Optimism Is Everywhere. That’s Happened Before – Jeanna Smialek

In late 1989, an economic commentary newsletter from the Federal Reserve Bank of Cleveland asked the question that was on everyone’s mind after a series of Federal Reserve rate increases: “How Soft a Landing?” Analysts were pretty sure growth was going to cool gently and without a painful downturn — the question was how gently.

In late 2000, a column in The New York Times was titled “Making a Soft Landing Even Softer.” And in late 2007, forecasters at the Federal Reserve Bank of Dallas concluded that the United States should manage to make it through the subprime mortgage crisis without a downturn.

Within weeks or months of all three declarations, the economy had plunged into recession. Unemployment shot up. Businesses closed. Growth contracted.

It is a point of historical caution that is relevant today, when soft-landing optimism is, again, surging…

…But it can be difficult to tell in real time whether the economy is smoothly decelerating or whether it is creeping toward the edge of a cliff — one reason that officials like Mr. Powell are being careful not to declare victory. On Wednesday, policymakers lifted rates to a range of 5.25 to 5.5 percent, the highest level in 22 years and up sharply from near zero as recently as early 2022. Those rate moves are trickling through the economy, making it more expensive to buy cars and houses on borrowed money and making it pricier for businesses to take out loans…

…That is not to say there isn’t good reason for hope, of course. Growth does look resilient, and there is some historical precedent for comfortable cool-downs.

In 1994 and 1995, the Fed managed to slow the economy gently without plunging it into a downturn in what is perhaps its most famous successful soft landing. Ironically, commentators quoted then in The Times weren’t convinced that policymakers were going to pull it off.

5. When did people stop being drunk all the time? – Lefineder

The English, said Sir John Fortescue (c. 1470), “drink no water, unless at certain times upon religious score, or by way of doing penance.” Looking at reconstructions of beer consumption from the Middle Ages to the pre-industrial era, this was only a slight exaggeration. Whether we estimate consumption from the amount of beer provided to soldiers, convicts, and workers, or reconstruct it from tax revenues on beer, we see that the average person consumed about a liter of beer a day, around four times as much as consumption in modern beer-drinking countries…

…Is this a historical overestimate? Probably not. In fact, there are several ways in which we might be underestimating historical consumption: most alcohol consumption in the past was of the local mono-drink, so we are likely still missing some amount of alcohol drunk by wine drinkers in the beer-drinking countries and vice versa, as well as the small amount consumed as spirits. In the medieval city of Ghent, where there is 14th-century tax-revenue data on the consumption of both wine and beer, per capita annual consumption is:

  • ~40 liters of wine
  • ~1,300 liters of beer (such high figures are probably partly the result of the wealthy state of the city following the Black Death)…

…English soldiers long received a daily ration of 8 pints of beer (4.5 L), an amount so great it probably was not wholly consumed; soldiers did not have to use their entire ration and could also share it with their families. Nevertheless, given that such quantities of alcohol were commonly supplied to historical armies, the average soldier in the past didn’t just get angry for battle, he got pissed. For sailors, the beer supplied was of the strong kind (10%-15% alcohol), since this was the only kind that preserved well at sea, hence “drunk as a sailor.” Such large consumption among workers and soldiers would mean that around a quarter to close to half of the calories in their diet came from booze…
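The "quarter to close to half of calories" claim is easy to reproduce with rough arithmetic. The calorie density and daily-intake figures below are assumptions for illustration, not from the article:

```python
# Back-of-envelope check on how much of a soldier's energy budget an
# 8-pint beer ration could cover. The last two figures are assumed.
RATION_LITERS = 4.5    # the 8-pint daily ration
KCAL_PER_LITER = 400   # assumed, plausible for period ale
DAILY_KCAL = 3500      # assumed daily intake for heavy labor

beer_kcal = RATION_LITERS * KCAL_PER_LITER
share = beer_kcal / DAILY_KCAL
print(f"{beer_kcal:.0f} kcal, ~{share:.0%} of the day's calories")
```

At these assumed figures, a fully consumed ration would supply about half of a laborer's daily calories; drinking only part of it brings the share down toward a quarter.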

…England transitioned to a low rate of beer consumption toward the end of the 18th century. Looking at the more granular data on malt-beer consumption, we see that this transition coincided with the onset of the British industrial revolution (1780s-1800s).

Society was transformed in several ways. Whereas beer expenditure used to consume 12.5% of people’s salary in 1734, by the 1800s it consumed only 1-3%. In the English poll tax of 1379-81, a total of 2.5% of the medieval workforce was comprised of brewers; by 1841 this had fallen to only 0.3% of the labor force…

…In the first of the following graphs, we see when people finish their workday: around 17:00. In the second graph, we see when people start drinking. For the 18th-century cohorts, drinking starts during the workday, and by 17:00 around 30% of people had already drunk liquor. In the 1830s this is no longer the case: drinking on the job seems to have been eliminated, and people only start being recorded as drinking after 16:00. Society had been transformed by commercial forces.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Deepmind) and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 30 July 2023)

Here are the articles for the week ending 30 July 2023:

1. Why the World Is on the Brink of Great Disorder – Ray Dalio

A few years ago, I saw three big things happening that hadn’t happened in my lifetime but had happened in the 1930-45 period. These were:

  1. The largest amounts of debt, the fastest rates of debt growth, and the greatest amounts of central bank printing of money and buying debt since 1930-45.
  2. The biggest gaps in wealth, income, values, and the greatest amounts of populism since the 1930-45 period.
  3. The greatest international great powers conflict, most importantly between the U.S. and China, since 1930-45.

Seeing these three big things that never happened in these magnitudes in my lifetime led me to study the rises and declines of markets, economies, and countries over the last 500 years, as well as the rises and declines of China’s dynasties over the last 2,100 years.

That examination showed me that these three big forces—i.e. the debt/money one, the internal conflict one, and the external conflict one—transpired in big cycles that reinforced each other to make up what I call the Big Cycle. These cycles were driven by logical cause-effect relationships. Most importantly, this study of the last 500 years of history taught me that:

  1. The previously described financial conditions repeatedly proved to be leading indicators of big financial crises that led to big shifts in the financial order.
  2. The previously described levels of political and social gaps repeatedly proved to be leading indicators of great conflicts within countries that led to big changes in domestic orders.
  3. The previously described great powers’ conflicts repeatedly proved to be leading indicators of international conflicts that led to big changes in the world order.

Said differently, history shows that the painful seismic shifts of the Big Cycle come about when there is simultaneously 1) too much debt creation that leads to debt bubbles bursting and economic contractions which cause central banks to print a lot of money and buy debt, 2) big conflicts within countries due to big wealth and values conflicts made worse by the bad economic conditions, and 3) big international conflicts due to rising world powers challenging the existing world powers at a time of economic and internal political crises. In doing this study, I also saw two other big forces that had big effects. They are:

  1. Acts of nature (droughts, floods, pandemics) including climate change.
  2. Learning leading to inventions of technologies that typically produced evolutionary advances in productivity and living standards—e.g., the First and Second Industrial Revolutions, and the computing/AI revolution.

I call these the Five Big Forces. I saw how they affect each other and change in logical ways to produce the Big Cycle that produces big changes in the world order. I came to realize that if one understands and follows each of these forces and how they interact, one can understand most everything that’s changing the world order. That’s what I’m trying to do…

…In the U.S., we are now in the middle part of what I call the short-term debt cycle, also known as the business cycle. These short-term debt cycles have lasted 7 years on average, give or take about 3 years. There have been 12 1/2 of them since the new monetary world order started in 1945. So, we are now about halfway through the 13th of these cycles, at the point when the central bank has tightened money to fight inflation, just before the debt and economic contractions which will likely come over the next 18 months.

We are also in a late and dangerous part of the long-term debt cycle because the levels of debt assets and debt liabilities have become so high that it is difficult to give lender-creditors a high enough interest rate relative to inflation that is adequate to make them want to hold this debt as an asset without making interest rates so high that it unacceptably hurts the borrower-debtor. Because of unsustainable debt growth, we are likely approaching a major inflection point that will change the financial order. Said differently, it appears to me likely that we are approaching a debt/financial/economic restructuring that will lead to big changes to the financial order…

…In several countries, most importantly the U.S., we have seen a growing percentage of the population become populist extremists (about 20-25 percent of the right are extreme and about 10-15 percent of the left are) and a shrinking percentage remain bipartisan moderates. Though the bipartisan moderates still remain in the majority, they constitute a declining percentage of the population and they are far less willing to fight and win at all costs. In studying history, I saw that this growing populism on both sides and increased conflict have repeatedly occurred when large gaps in wealth and values existed at the same time as bad economic conditions. At such times, significant percentages of the population chose populist political leaders who vowed to fight and win for them rather than compromise…

…Looking ahead, the next 18 months will be an increasingly intense big election period which will lead to much greater political conflict and is likely to sharpen the divide between the left and the right. Thirty-three Senate seats, the presidency, and control of the House will be fought over by a number of populist candidates, and there will likely be poor economic conditions, so the fights will be vicious and there will be a real test of rule-following and compromising, both of which are required to make democracies work…

…The conflicts between the U.S. and China are likely to intensify as domestic political tensions will likely lead to increased aggressiveness toward China. That is because in the U.S. most everyone is anti-China and those running for office will want to out-China-bash each other in an election year. China and the US are already dangerously close to some form of war, whether an all-out economic one or, worse, a military one…

…What can we expect from technology/human inventiveness? Like acts of nature, it is hard to know exactly, though there should be no doubt that generative AI and other technological advances have the potential to cause both massive productivity gains and massive destructions, depending on how they are used. The one thing that we can be sure of is that these changes will be greatly disruptive.

Exactly how events will unfold is beyond my ability to say, but there is no doubt in my mind that those who assume that things will work in the orderly ways we have gotten used to in the last few decades will be shocked and probably hurt by the changes to come.

How well these changes are managed will make all the difference. If our leaders can rise above their tendencies to fight and instead focus on cooperating, we can certainly navigate these tricky times to create a better world for most people. Presumably, this outcome is best for everyone, so we should be strongly against civil disorder and war between nations, keeping it in the back of our mind so we strive for cooperative decision-making.

2. Americanas, The Titanic Fraud – Consuelo Diguez (h/t to Marcelo Lima)

Two days earlier, at 6:30 pm, Sergio Rial had released a material fact that exploded in the market like dynamite in a fuel tank – and resigned from the company he had taken over on January 2nd. He had only been in office for nine days. The material fact (the name given to the statement that a publicly traded company makes to its investors and the market in general about a matter of paramount importance) informed that Americanas, the giant retailer controlled since 1982 by the three richest and most admired businessmen in the country, Jorge Paulo Lemann, Marcel Telles and Carlos Alberto Sicupira, had “accounting inconsistencies” in its balance sheet – in the order of 20 billion reais.

Aside from the colossal gap, the company accumulated a debt of around 22 billion reais with the banks and owed 6.67 billion reais in debentures. All in all, the debt exceeded 48 billion reais, almost five times Americanas’ equity. In summary: the traditional retailer, founded in 1929, was broke. The hole discovered by Rial when he took over the company would be, by itself, scandalous in any part of the world. But in this case, it concealed something even more serious. The expression “accounting inconsistencies” was actually a euphemism for titanic fraud. The biggest fraud in the history of Brazilian corporations.

To the general astonishment, the scam perpetrated against the retailer had been going on for at least ten years. Worse: the first investigations indicated that everything happened with the knowledge and participation of its then president, Miguel Sarmiento Gutierrez, a man trusted by the controllers, who had left his post on December 31st…

Sergio Rial started the virtual conference with bankers asking for calm. He no longer spoke as president of Americanas, but as a representative of the controlling shareholders Lemann, Telles and Sicupira, who had asked him to help alleviate the crisis. His challenge was to explain how it had been possible for this colossal shortfall of 20 billion reais not to appear on the balance sheet. The operation – as he told it – was intricate and took place through the misuse of a legal instrument, known in the market as drawn risk.

This is a very common transaction between banks and retail companies. It works like this: the retailer buys a product from its supplier, but, in order not to run out of capital, it transfers the debt to a bank. The bank then pays the supplier in cash, but with a small discount. The retailer becomes indebted to the bank, with which, despite the incidence of interest, it manages to extend the payment terms – long terms that it would not be able to obtain from its supplier. When making this transaction, the retailer needs to record the drawn risk operation on its balance sheet as bank debt. After all, the debt it had with the supplier was assumed by the bank.
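The mechanics can be sketched in a few lines of Python. This is an illustration only: the invoice amount and bank discount below are hypothetical, while the 19.3 billion and 20 billion reais figures come from the article.

```python
# Illustrative sketch of a "drawn risk" (reverse factoring) deal.
# The invoice amount and bank discount below are hypothetical.

def drawn_risk(invoice, bank_discount):
    """Bank pays the supplier at a small discount; the retailer now
    owes the bank the full invoice amount, on a longer payment term."""
    paid_to_supplier = invoice * (1 - bank_discount)
    bank_debt = invoice  # what honest accounting must record as bank debt
    return paid_to_supplier, bank_debt

paid, debt = drawn_risk(invoice=100.0, bank_discount=0.02)
print(paid)   # 98.0: the supplier receives cash up front, minus the discount
print(debt)   # 100.0: the retailer's new debt to the bank

# The fraud: omit the bank debt from the balance sheet entirely.
# Per the article, in billions of reais:
declared_bank_debt = 19.3
hidden_drawn_risk = 20.0
print(declared_bank_debt + hidden_drawn_risk)  # real bank debt, roughly double the declared figure
```

The key point is the last step: the operation itself is legal and routine; the fraud was leaving `bank_debt` off the balance sheet for a decade.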

That’s when the fraud at Americanas began: the managers did not account for this bank debt on the balance sheet. For all intents and purposes, it was as if it didn’t exist. They resorted to this makeup for two reasons. First, because, by hiding the drawn risk operations, the retailer was able to present a balance sheet with a profit (false) and not a loss (true). For ten years, the balance sheet shone as if the company were healthy, and the company’s shares appreciated year after year, leading more and more investors to buy them – which ended up guaranteeing more money in the cash register. The good – fictitious – results helped Americanas capitalize and obtain loans from banks.

The second reason was the greed of managers. As they were remunerated based on the company’s performance, executives pocketed stratospheric bonuses the better the result. Part of these bonuses was paid in shares of the retailer. Therefore, they gained twice: with the bonuses received for the good performance of the company and with the appreciation of their shares, boosted by the numbers made up.

In the last Americanas balance sheet, published in September last year, the total long-term debt with banks, referring to the normal loans that the company took, was 19.3 billion reais. For the market, this was not a worrying figure, given that the company had revenues of 14 billion reais per year, and could therefore meet its commitments without difficulty. However, when the 20 billion that had been hidden came to light, in January, Americanas’ real debt with the banks surpassed 40 billion. It was double what had been officially declared. And it was unpayable…

…Since the beginning of the scandal, the trio of controllers, until then considered the maximum personification of the efficiency of Brazilian capitalism, started to be derided. In addition to the revolt at the damage caused by Americanas and the clumsy way in which the crisis was announced, the market, which idolized Lemann, Telles and Sicupira, suffered a shock and began to give vent to a powerful disappointment.

Most creditors and managers had worked or dreamed of working in one of the companies controlled by the trio of billionaires. Starting with Banco Garantia, founded by Lemann in 1972, which acquired a mythical aura in the market for having changed the way investment banks operated, although it was hastily sold to Banco de Investimentos Credit Suisse (Brazil) in 1998, hit by the Asian crisis. Generations of managers tried to emulate the talent of Lemann, Telles and Sicupira for business, who gained worldwide visibility with their most radiant undertaking: the creation of AB InBev, one of the largest beer producers in the world, owner, among other brands, of Brahma and Antarctica, Belgian Stella Artois and North American Budweiser (the latter in partnership with mega investor Warren Buffett).

Among many of those bankers at the meeting with Rial, the feeling of disenchantment was perceptible, as if the “divine trinity of the market” had betrayed them. “If these guys get demoralized, who are our role models for successful entrepreneurs? The old man from Havan?”, a former partner of the trio asked me, confessing to having spent a sleepless night talking to a friend to try to understand what had happened to produce a fraud of this magnitude. Economist André Lara Resende, who worked at Garantia in the institution’s early days, before assuming senior positions in the Fernando Henrique Cardoso government, said he could not come to terms with it. “They are my friends. I find this all very sad. I don’t believe they have anything to do with this fraud. But of course it’s very bad to have your reputation shaken at this point in your life. Of course it’s a blow to them.” The owner of a large investment fund interviewed by piauí expressed his disappointment as follows: “If, on the eve of the material fact, someone told me that a fraud like that would happen at Americanas, I would see it as a joke. No one would ever assume that such a thing could happen in a company whose owners had a track record of success and credibility.”…

…Almost a century old, Lojas Americanas was founded by three North Americans who happened to land in Brazil. They wanted to open a business in Buenos Aires, but when the ship stopped in Rio de Janeiro, they realized the country’s potential and changed their plans. The first store was opened in 1929, in Niterói. In 1940, when it was no longer in the hands of the founders, the company went public. Then, in 1982, in a move on the Stock Exchange, Lemann, Telles and Sicupira, who controlled Banco Garantia, took the helm of the company for 24 million dollars. Sicupira became its president, against the wishes of Luiz Cezar Fernandes, a partner in Garantia, for whom the best course was to sell the retailer straight away, pocketing a good profit. After several fights, Fernandes left Garantia and founded Pactual. “Beto wouldn’t give up Americanas”, Fernandes told piauí, in his apartment facing Guanabara Bay, in Rio de Janeiro. “It’s a business that, if you look at the numbers, has always been mediocre. But he, with his arrogance, did not accept discussing the problem.”…

…Since the beginning, Americanas’ Board of Directors has been under the thumb of the controllers. Of its 7 members, 4 were nominated by the trio Lemann, Telles and Sicupira. In recent years, the directors included Eduardo Saggioro Garcia, chairman of the board and trusted man of the controllers; Sicupira himself, who passed command of the company to Gutierrez in 1991; and Paulo Alberto Lemann, son of Jorge Paulo Lemann. For years, Sicupira’s daughter, Cecília, also occupied a chair there. Although it is a public company, it does what the board, controlled by the trio, approves. After all, as a former executive at the retailer told me, who would dare to question the decisions of three aces of Brazilian capitalism?

For some lower-ranking employees, the management model at Americanas, due to the managers’ aggressiveness and lack of empathy, was never the best. Other former executives told me that problems were systematically ignored. “Any proposal we made was rejected. Miguel Gutierrez only worked with his people. And, in fact, everything that happened there was Beto’s orders. There was even a joke in the company among employees. Every time an order came from above, the group would ask: ‘Did Beto authorize it?’” According to these former executives, there was “a culture of fear”.

The idea of “meritocracy” sold by the trio was also questioned. “There was neither merit nor autonomy. It was a hand-kissing culture,” said a former employee who worked at the house for ten years. Another added: “There was no feedback from employees. They liked to promote younger people to managers simply so they wouldn’t pay overtime.” One of these managers, who has already left the company, told me that the meal ticket was 4 reais. But if the company made a profit, even though the salary was low, everyone got a dividend. Only, in return, “you had to subject yourself to an unhealthy workload and a lot of humiliation.” In 2019, the company was sued and ordered to pay 11.3 million reais for moral harassment of employees with disabilities in Barueri stores, in the Metropolitan Region of São Paulo.

There was also no rational cost control, just cuts without further analysis. Basic measures, like renting stores with cheaper leases, were ignored. “They didn’t have that concern. Instead, they preferred to strangle suppliers and employees.” Part of the employees’ remuneration was in company shares. Anyone who wasn’t willing to buy them was frowned upon. In addition, they could not sell the shares and, when they left the company, many took a loss, losing part of the investments they had been obliged to make, because they had not completed the period of service necessary to withdraw the money.

A defining moment at Americanas, according to the executives who preferred not to be identified because they are employed at other companies, happened at the end of 2019, when online sales at Mercado Livre and Magazine Luiza surpassed those at Americanas. “We were close to Black Friday and many employees mobilized to make suggestions in order to increase sales. Managers ignored the suggestions.” Sales dropped. “Everyone had been warning that Magazine Luiza was going to overtake us and the managers said no. Finally, Magazine Luiza became almost twice as big as Americanas.”

Another criticism was related to the treatment given to customers. When there were pricing errors or complaints, the managers, instead of trying to solve them, preferred to put the Legal Department to work. “They spent fortunes on lawsuits when they had a solution on the table,” a legal official told me. A former financial coordinator at the retailer, when asked what it was like to work at the trio’s company, confessed: “That was bizarre. The motto I heard several times there, excuse the expression, was ‘an eye for an eye, a tooth for a tooth, dick up the customer’s ass’.”

Financial operations, on the other hand, were very closed, restricted to the president, the group of directors and a “little group of sycophants”. It was unusual behavior in the market, according to one of the former financial managers interviewed by piauí. “Everything there was very centralized. It always has been,” he said. “The board of directors would gather in the room discussing financial operations that we only became aware of through material facts or the balance sheet. The company was not transparent.”…

…Americanas’ disrespectful behavior towards suppliers was one of the biggest annoyances for employees. As the retailer buys a lot, the suppliers depend on it. “Buyers were very tough, they were slow to pay and they hurt many companies with this abusive treatment,” said a former purchasing manager. The practice was always the same: Americanas committed to pay the supplier within 30 days, but unilaterally changed the deadline to 60 days. When the supplier called to complain, the order was not to answer. Afterwards, the term was changed to 90 or 180 days, until the supplier was strangled. Once that was done, they got in touch, advising that they were going to pay, but with a discount and without interest. “The guy was already so desperate to get paid that he would do any business,” said a former employee in the purchasing department…

…Americanas’ problems are not new. Business administrator Oscar Malvessi, from Fundação Getulio Vargas, studies the reasons why Brazilian companies lose value. In a conversation at his office on Avenida Paulista last March, he was indignant about what had happened to Americanas. “It is impossible to imagine this scandal in a company that has corporate governance, which, in theory, means that it follows national management principles, with a risk committee, compliance, and internal and external audits.” The fact is, however, that the retailer had already been losing value on the Stock Exchange since July 2021, according to him. “The resounding destruction of the wealth of shareholders, the company and stakeholders did not just happen after the outbreak of accounting fraud”, he explained.

When Lojas Americanas merged with B2W, creating Americanas SA, in 2021, the reaction was not good and the combined value of the two companies fell from 77 billion to 55 billion reais. The trio, at that time, diluted their stake in the company from 60% to 31%, gave up the control premium and started to call themselves “reference shareholders”, a figure that does not exist in Brazilian corporate law. From then on, the value of Americanas continued to fall, until it reached 11 billion reais and, the day after the material fact, crumbled to 1 billion reais, imposing a monumental loss on investors large and small, including the employees forced to buy company stock.

BTG Pactual, in its lawsuit against Americanas, accuses the trio of having diluted its stake in the company, already predicting the gap that would surface in 2023. Malvessi, from FGV, makes another association. He considers that “the culture of profit at any price, the abusive pressure on suppliers, the form of executive compensation, in addition to creative accounting, quickly turned into autophagy, with the destruction of the company, shareholders and stakeholders ”.

The fall of Americanas cannot be compared to any other business failure of the Lemann-Telles-Sicupira trio. But the current view is that the policy of “meritocracy”, or executive compensation based on profit at any price, combined with the irrational cost-cutting policy, is at the root of all the losses. Starting with Garantia. The bank always carried out risky operations and its operators spared no effort to earn a lot of money, even putting the institution at risk. In the book Sonho Grande, the trio explains that the bank almost went bankrupt, which is why it was sold in a hurry, as the three “would have walked away from the business and let the boat run loose”. They blamed the new generation of managers for just wanting to “fatten their personal wealth, without thinking about the institution”. In other words, as in the case of Americanas, the troika of Brazilian capitalism exempted itself from responsibility for the failure of the deal.

When they lost Garantia, the three had already made their most successful move: the purchase of Brahma, in 1989. In this case, it fell to Telles to assume command of the company. Again, profits were increased by cutting staff – 2,500 were laid off, out of a total of 20,000 –, reducing salaries and squeezing suppliers, who were paid only 120 days after purchase. Suppliers who didn’t accept the terms lost the business. With few breweries on the market, everyone swallowed the impositions. Brahma, however, became a success story, mainly through the standardization of products. In 1999, despite the screams of competitors, the trio bought Antarctica and formed Ambev. The operation was criticized by consumer protection associations, politicians and analysts, for whom the Administrative Council for Economic Defense (Cade), the body that watches over competition, should not have approved an operation that created a monopoly in the Brazilian beer market.

In 2004, Ambev merged with the Belgian Interbrew, forming InBev and becoming a leader in the world market. In the end, the trio took over the entire management of the business. The Belgian employees, according to the trio’s own account in the book Sonho Grande, were shocked by the aggressive and greedy practices of the Brazilians. The strategy was repeated: fixed salaries were reduced and remuneration increased via bonuses. Anyone who didn’t agree was out. But the most spectacular step of the three Brazilian businessmen was taken in 2008. In association with the mega investor Warren Buffett, they bought the North American Anheuser-Busch (AB), maker of Budweiser. Thus, they created AB InBev.

Americans were shocked to lose such a traditional brand to a foreign group. Even then President Barack Obama was against the deal. The Brazilian executives taken to the company were encouraged to reduce expenses and integrate AB into InBev within five years. According to the book Sonho Grande, the 39 top executives of the new company’s management were offered around 1 billion dollars in stock options (the right to buy the company’s shares after the business took off) if they hit the target. And they hit it. One of the executives, Carlos Brito, the mastermind behind AB’s merger with InBev, received 500 million reais in bonuses.

The operation was a success, with shares appreciating by 270%, thanks to the application of the old formula: cutting costs to the limit, laying off employees – 1,400 people in the first few weeks alone in 2008 –, squeezing suppliers, and paying spectacular bonuses. In one year, executives cut $1 billion in costs and sold $9 billion in assets. While Brazilians celebrated, Americans complained. In 2013, they even accused the new managers of changing the flavor of the beer to save money, an accusation that was never proven…

…The most embarrassing story occurred in América Latina Logística (ALL). The company was acquired in 1997, at the beginning of the privatization of the railroads and when the trio’s big dream was to buy infrastructure and logistics companies in partnership with state pension funds and the BNDES. Business at the time was done through GP, the investment fund of the three, which was later sold and replaced by 3G Capital.

After purchasing ALL, they chose Alexandre Behring, a 30-year-old executive who knew nothing about railroads, to run the company. He adopted the same cost-cutting recipe. In 2004, Behring switched to 3G and ALL had other presidents, among them Bernardo Hees and Eduardo Pelleissone, but the way of dealing with middle-level employees continued to create a toxic environment. In order to achieve the cost-cutting targets, with a consequent increase in profit, as told by a company executive, the controllers did not invest in the company. In 2008, Cosan, a sugar producer and today one of the largest fuel distributors and ethanol producers in the country, owned by businessman Rubens Ometto, signed a contract with ALL for the company to transport sugar from its farms in the interior of São Paulo to the Port of Santos. For the business to work, Ometto invested 1.2 billion reais in ALL to duplicate the railway track and buy new trains and wagons. The problems did not take long to appear. Cosan’s administrators began to complain that their products were being delivered late and that the new locomotives were being used to transport soy, because it was a more profitable commodity. ALL transported Cosan’s sugar on trains from the 1960s. The delay in delivering the product generated fines that ALL never paid.

As ALL did not invest in the preservation of the tracks, accidents were not uncommon. In 2010, an accident with a train in the city of Brotas in São Paulo spilled 100,000 liters of fuel around the track. The most serious, however, happened in 2013, when a locomotive transporting corn derailed, killing eight people in São José do Rio Preto, also in São Paulo. It was a wake-up call that cost-cutting was bumping into the safety issue.

Working conditions were deplorable. Drivers who needed to sleep in the wagons had to settle on the floor. As the old locomotives did not have bathrooms for employees, unlike the new ones, ALL’s managers decided to close the toilets on the new ones because the drivers only wanted to work on those that guaranteed a minimum of comfort. “The guys thought that running the railroad was the same as brewing beer,” an executive who worked at Cosan told me. “They didn’t invest in anything. They breached contracts. Not even the cargo transport regulatory agency wanted to talk to them anymore and suggested that we give up the partnership.”

In 2014, ALL’s fine with Cosan reached 500 million reais. As ALL was on the verge of going bankrupt, the producer only had two options: either terminate the contract and demand payment of the fine, which was unlikely to be paid, or stay with the business…

…The attacks against Lemann, Telles and Sicupira began to cool down after the three agreed to put up 10 billion reais to cover the gap in Americanas, an amount being negotiated with creditor banks. The market has a version that Lemann even had separate conversations with the presidents of the banks, to explain himself – among them, André Esteves. But for minority shareholders and suppliers, there will be no relief. Luis Stuhlberger, manager of one of the largest investment funds in Latin America, Verde, in a letter to his clients, resorted to harsh words when speaking about Americanas. “We were victims of fraud,” he said.

3. My 12 Biggest Key Investing Takeaways from “Antifragile” by Nassim Taleb – Eugene Ng

Asymmetry is where there is more upside than downside, where the positive payoff is significantly larger if you are right (you “earn big time”) than the negative payoff if you are wrong (you “lose small”).

Antifragility arises from asymmetry of more upside than downside, where one tends not to be permanently wiped out, and tends to gain from (1) volatility, (2) randomness, (3) errors, (4) uncertainty, (5) stressors, and (6) time.

Fragility is where there is more downside than upside, where one tends to be eventually permanently wiped out, and tends to lose from (1) volatility, (2) randomness, (3) errors, (4) uncertainty, (5) stressors, and (6) time.

Seek to be timeless, not timely. Focus on the long-term, not the short-term. Time will position the antifragile well, and the fragile poorly…

…Antifragility is anything that has more upside than downside from random events (or certain shocks).

Fragility is the reverse, anything that has more downside than upside.

What is fragile will eventually break over time, so being able to tell what is fragile helps. Positive black swans are more unpredictable than negative black swans. Focus first on removing all negative black swans, then position for positive black swans, and the process will eventually take care of the outcome…

…It is a dual strategy of playing it safe in some areas (robust to negative black swans), and taking a lot of smaller risks in others (open to positive black swans), hence achieving antifragility.

Because of its construction, it reduces downside risk, and eliminates the risk of ruin…
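To make the barbell concrete, here is a toy sketch with hypothetical numbers of our own (nothing here comes from Taleb’s book or Ng’s article): a portfolio with 90% in near-riskless assets and 10% in high-risk, high-upside bets has a hard floor on losses, so ruin is structurally impossible, while the upside stays open.

```python
# Hypothetical barbell sketch: 90% in near-riskless assets, 10% in
# high-risk, high-upside bets. All numbers are illustrative assumptions.
safe_fraction, risky_fraction = 0.90, 0.10

def barbell_outcome(risky_multiple, safe_return=0.04):
    """Portfolio multiple, given the payoff multiple on the risky sleeve."""
    return safe_fraction * (1 + safe_return) + risky_fraction * risky_multiple

worst_case = barbell_outcome(0.0)  # risky sleeve wiped out entirely
print(f"worst case: {worst_case:.3f}x")          # floor of 0.936x: ruin is impossible
print(f"10x winner: {barbell_outcome(10.0):.3f}x")  # 1.936x: upside stays open
```

The point of the construction is visible in the arithmetic: the worst case is capped at roughly a 6% loss, while a single large winner in the risky sleeve lifts the whole portfolio.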

…Statistics assume normal distributions, but most real-world distributions are not normal. Power laws drive venture capital returns, and they drive public equities investing too.

Most investments don’t do well, a small number tend to do very well, and the winners’ gains often eventually overwhelm the combined losses of all the losers many times over.

Identify and focus on the few that matter and tend to do well, and ignore the rest that don’t…
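That payoff profile can be sketched with a toy simulation (our own illustration, with made-up parameters, not from the article): most positions lose, a Pareto-tailed minority wins big, and the aggregate can still compound.

```python
# Illustrative sketch (not from the article): a toy portfolio where most
# positions lose modestly but a few power-law winners dominate the total.
# All probabilities and parameters here are hypothetical assumptions.
import random

random.seed(42)

def simulate_portfolio(n_positions=100):
    """Return per-position payoff multiples on equal-sized stakes."""
    outcomes = []
    for _ in range(n_positions):
        if random.random() < 0.9:
            # 90% of positions: end up worth 0x to 0.8x of the stake
            outcomes.append(random.uniform(0.0, 0.8))
        else:
            # 10% of positions: power-law-style winners (Pareto tail)
            outcomes.append(random.paretovariate(1.2) * 5)
    return outcomes

outcomes = simulate_portfolio()
total_multiple = sum(outcomes) / len(outcomes)
winners = sum(1 for x in outcomes if x > 1)
print(f"{winners} of {len(outcomes)} positions made money")
print(f"portfolio multiple on invested capital: {total_multiple:.2f}x")
```

The heavy tail is the whole story: the handful of Pareto draws typically contributes far more than the 90% of losing positions takes away, which is why removing ruin and staying exposed to the tail matters more than being right often.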

…In addition, true optionality does not require intelligence; all it requires is not being stupid, and having the wisdom to avoid doing unintelligent things that hurt yourself. The pros win first by not losing (then by winning), and we aim to do so as well. We want to play the game for as long as we can without being wiped out.

Via Negativa lists what something is not, and proceeds by elimination. E.g., Michelangelo on carving the statue of David, the masterpiece of all masterpieces: “It’s simple. I just remove everything that is not David.”

Negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition…

…Assign little/zero value to what anyone says or writes, if they have no skin in the game, as being wrong costs nothing to them.

Even more so, be wary of theories, or of anyone who speaks only for fees with no skin in the game, or worse still, who uses their circle of influence to pump their own holdings and benefit themselves. The first is bad; the second is worse. In addition, be wary of those who trade the fragility of others for their own antifragility.

Skin in the game matters. Mistakes are costly, not free, and being right brings real rewards. Soul in the game takes it to a whole new level: committing to a belief, and having something to lose if you are wrong.

4. The best book I’ve read this year – Chris Mayer

If you don’t know much about Rubin (I didn’t), he is a producer who worked on many great records by a long list of artists, from Adele to Johnny Cash (see his Wikipedia page). Perhaps he’s most famous for popularizing hip hop.

Anyway, he published a book this year titled The Creative Act: A Way of Being.

…Rubin defines creativity broadly. It is simply bringing something into existence that didn’t exist before. That could be a conversation, a meal, a new route to get somewhere, an email, lots of things. It doesn’t have to be recorded, stretched on canvas, encased in glass or sold. With this broad view, Rubin sets the stage for wide applicability of what he has to say…

…“Because there’s an endless amount of data available to us and we have limited bandwidth,” Rubin writes, “we might consider carefully curating the quality of what we allow in.”

I think this is such an important and overlooked step for most (nearly all?) investors who simply allow too much garbage to grab their attention. They read too much macro, too much economic analysis, too many forecasts, too much news and think too much about politics.

Think about what else you might allow in if these things didn’t get so much space. Think like a nutritionist, except now you’re thinking about your brain and what raw material you are going to feed it. Higher quality inputs lead to higher quality outputs. Look for more original research, do more of your own, talk to people closer to the action (i.e., running companies), favor the concrete over the abstract (I’m reminded of Peter Lynch, who said “The GNP six months out is just malarkey. How is the sneaker industry doing?”) and favor annual reports over economic reports…

…Rubin suggests “submerging yourself in the canon of great works.” (What belongs in the canon of “great” works he leaves rather undefined.) Read classic books instead of the news, for example. Watch iconic films. Listen to the most influential music. Or in our case, study great companies.

Rubin says even if you do this for one year, at the end of that year, you’ll have “a more honed sensitivity for recognizing greatness.” Let curiosity be your guide, “stoked by a hunger to… learn, to be fascinated and surprised on a continual basis.” …

…Another theme Rubin hits that I have banged on about in my own work is the idea of being careful with labels. We tend to want to slap labels on everything. But labels can be toxic to clear thinking. They are limiting. As Rubin says:

“Any label you assume before sitting down to create, even one as foundational as sculptor, rapper, author, or entrepreneur, could be doing more harm than good. Strip away the labels. Now how do you see the world?”

This is a big one for investors, who are often so eager to paint the world with labels: “small cap,” “large cap,” “growth stock,” “value stock,” and so on. Not only that, but they tend to paint themselves with labels. “We’re value investors,” says one letter. Why the readiness to adopt such a label? What does that even mean? To start with such a label is to limit and twist how you see the world. Rubin says somewhere that where labeling begins, thinking ends.

And think about this, which I loved and wanted to stick in here somewhere:

“Nature transcends our tendencies to label and classify, to reduce and limit. The natural world is unfathomably more rich, interwoven, and complicated than we are taught, and so much more mysterious and beautiful.”

You can say similar things about markets generally. They are way more complicated and interwoven than our labeling suggests.

Labels can be potentially dangerous, but so are narratives. And investors love narratives. (“Inflation is coming down.” “We’re on the brink of recession.” “We’re in a bull market.”) We also have explanations for everything – usually after the fact. But Rubin advises keeping the narratives in check:

“Generally our explanations are guesses. These vague hypotheticals become fixed in our mind as fact. We are interpretation machines, and this process of labeling and detaching is efficient but not accurate. We are the unreliable narrators of our own experience.”...

…I love, too, what Rubin has to say about patience, something almost all investors could use more of. For Rubin patience “begins with acceptance of natural rhythms.” For us investors, that means accepting that bear markets happen, that stocks go down and can go down or nowhere for long stretches of time, that compounding takes time and that many things are out of our control:

“Demanding to control a work of art would be just as foolish as demanding that an oak tree grow according to your will.”

Same with your portfolio. You can’t control it. You plant things and you give them time to grow. You weed when you need to, but you don’t pull up the whole garden because you fear there is a drought coming…

…Here is Rubin on helping create that distance:

“When we obsessively focus on these events, they appear catastrophic. But they’re just a small aspect of a larger life, and the further you zoom back, the smaller each experience becomes. Zoom in and obsess. Zoom out and observe. We get to choose.”

I think of all the times certain sharp stock moves (up or down) seemed so momentous at the time. And yet, when you zoom out and look at a longer-term chart, those events barely register.

5. Goodbye to the Prophets of Doom – Yascha Mounk

For much of the country’s history, most Americans assumed that the future would bring them or their descendants greater affluence. Despite periodic economic crises, the overall story seemed to be one of progress for every stratum of the population. Those expectations were largely borne out: The standard of living enjoyed by working-class Americans for much of the mid-20th century, for example, was far superior to that enjoyed by affluent Americans a generation or two earlier.

But after the 2008 financial crisis, those assumptions were upended by a period of intense economic suffering coupled with a newfound interest among economists in the topic of inequality. Predictions of economic decline took over the conversation. America, a country long known for its inveterate optimism, came to dread the future—in which it now appeared that most people would have less and less.

Three arguments provided the intellectual foundation for the Great Disappointment. The first, influentially advanced by the MIT economist David Autor, was that the wages of most Americans were stagnating for the first time in living memory. Although the income of average Americans had roughly doubled once every generation for most of the previous century, wage growth for much of the population began to flatline in the 1980s. By 2010, it looked as though poorer Americans faced a future in which they could no longer expect any real improvement in their standard of living.

The second argument had to do with globalization’s impact on the worldwide distribution of income. In a graph that came to be known as the “elephant curve,” the Serbian American economist Branko Milanović argued that the world’s poorest people were experiencing only minor income growth; that the middle percentiles were benefiting mightily from globalization; that those in the upper-middle segment—which included many industrial workers and people in the service industry in rich countries, including America—had seen their incomes stagnate; and that the very richest were making out like bandits. Globalization, it seemed, was a mixed blessing, and a distinctly concerning one for the bottom half of wage earners in industrialized economies such as the United States.

The final, and most sweeping, argument was about the nature and causes of inequality. Even as much of the population was just holding its own in prosperity, the wealth and income of the richest Americans were rising rapidly. In his 2013 surprise best seller, Capital in the Twenty-First Century, the French economist Thomas Piketty proposed that this trend was likely to continue. Arguing that the returns on capital had long outstripped those of labor, Piketty seemed to suggest that only a calamitous event such as a major war—or a radical political transformation, which did not appear to be on the horizon—could help tame the trend toward ever-greater inequality…

…The U.S. economy, Autor wrote in a highly influential paper in 2010, is bifurcating. Even as demand for high-skilled workers rose, demand for “middle-wage, middle-skill white-collar and blue-collar jobs” was contracting. America’s economy, which had once provided plenty of middle-class jobs, was splitting into a highly affluent professional stratum and a large remainder that was becoming more immiserated. The overall outcome, according to Autor, was “falling real earnings for noncollege workers” and “a sharp rise in the inequality of wages.”

Autor’s past work on the falling wages of a major segment of the American workforce makes it all the more notable that he now sounds far more optimistic. Because companies were desperately searching for workers at the tail-end of the pandemic, Autor argues in a working paper published earlier this year, low-wage workers found themselves in a much better bargaining position. There has been a remarkable reversal in economic fortunes.

“Disproportionate wage growth at the bottom of the distribution reduced the college wage premium and reversed the rise in aggregate wage inequality since 1980 by approximately one quarter,” Autor writes. The big winners of recent economic trends are precisely those groups that had been left out in preceding decades: “The rise in wages was particularly strong among workers under 40 years of age and without a college degree.”

Even after accounting for inflation, Autor shows, the bottom quarter of American workers has seen a significant boost in income for the first time in years. The scholar who previously wrote about the “polarization” in the U.S. workforce now concludes that the American economy is experiencing an “unexpected compression.” In other words, the wealth gap is narrowing with surprising speed…

…A few years ago, Milanović set out to update the original elephant curve, which was based on data from 1988 to 2008. The result came as a shock—a positive one. Once Milanović included data for another decade, to 2018, the curve changed shape. Instead of the characteristic “rise, fall, rise again” that had given the curve its viral name, its steadily falling gradient now seemed to paint a straightforward and much more optimistic picture. Over the four decades he now surveyed, the incomes of the poorest people in the world rose very fast, those of people toward the middle of the distribution fairly fast, and those of the richest rather sluggishly. Global economic conditions were improving for nearly everyone, and, contrary to conventional wisdom, it was the most needy, not the most affluent, who were reaping the greatest rewards.

In a recent article for Foreign Affairs, Milanović goes even further. “We’re frequently told,” he writes, that “we live in an age of inequality.” But when you look at the most recent global data, that turns out to be false: In fact, “the world is growing more equal than it has been for over 100 years.”…

…But even Piketty’s pessimistic diagnosis, made a decade ago, has come to look much less dire.

In part, this is because Piketty’s work has come in for criticism from other economists. According to one influential line of argument, Piketty mistook why returns on capital were higher than returns to labor in many industrialized countries in the decades after World War II. Absent concerted pressure to prevent this, Piketty had argued, the nature of capitalism would always favor billionaires and giant corporations over ordinary workers. But according to Matthew Rognlie, an economist at Northwestern University, Piketty’s explanation for why inequality increased during that period was based on a misinterpretation of the data.

The outsize returns on capital during the latter half of the 20th century, Rognlie argues, were mainly due to the huge growth in house prices in metropolitan centers such as Paris and New York. If returns on capital were larger than returns to labor over this period, the reason was not a general economic trend but specific political factors, such as restrictive building codes. In addition, the main beneficiaries were not the billionaires and big corporations on which Piketty focused; rather, they were the kinds of upper-middle-class professionals who own the bulk of housing stock in major cities.

Economists continue to debate whether such criticisms hit the mark. But even as Piketty defended his work, he himself started to strike a more optimistic note about the long-term structure of the economy. In his 2022 book, A Brief History of Equality, he talks about the rise of inequality as an anomaly. “At least since the end of the eighteenth century there has been a historical movement towards equality,” he writes. “The world of the 2020s, no matter how unjust it may seem, is more egalitarian than that of 1950 or that of 1900, which were themselves in many respects more egalitarian than those of 1850 or 1780.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in MercadoLibre. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 July 2023)


Here are the articles for the week ending 23 July 2023:

1. RWH029: Beyond Rich w/ Pico Iyer – William Green and Pico Iyer

[00:48:03] William Green: Some of what you were just saying gets to this whole question of how to design a life that suits ourselves. And I thought about this a lot after, I guess it was 2008, 2009, and I’d been [Inaudible] by Time and then I went to work at another company for a while and I hated it. And I was working with my friend Guy Spier on his autobiography, his memoir. He’s a hedge fund manager and I was helping him write that, and part of what he had done was he had moved to Zurich, having been caught up in this kind of vortex of selling and greed and competition in the hedge fund world in New York, and he really rebooted his entire life by moving to a slightly bland but very pleasant suburb of Zurich. And this really got me thinking a lot about how to design a life, and then when I moved from London back to New York, I really thought very carefully about, “Well, so I’m going to live in a more modest home than I lived in in London, but I’m not going to be surrounded by people with their Maseratis and their Ferraris and stuff.” Because I was living in Belgravia in London on Time Magazine’s dime, and once that was no longer available to me, I really had to think about how to structure a life. And it feels to me like part of the thing that got you to think about how to structure your own life was this seminal event that happened back, I guess, in about 1990, right? Where there was a fire that burned your family home in Santa Barbara to the ground, and I wanted to talk about that in some depth because I think it gets at a lot of these issues that we want to discuss about how to construct a life that’s truly valuable, that’s truly abundant. But if you could start by just telling us what actually happened and how this became a really defining, formative event in the way you view your life.

[00:49:47] Pico Iyer: Well, again and again, William, you’ve asked exactly the question that’s been coming up in my mind. It’s as if we’re absolutely working in sync or telepathically. And just before I address that, two things: designing a life is such a beautiful phrase, and it reminds me, we put so much attention into how we’ll furnish a house and how we’ll make a house, which we need to do, but even more essential is how we will furnish and make our lives. And when Guy Spier hosted you on his first podcast, it was one of the most lovely, humane conversations I’ve ever heard. I learned so much about investing from it.

[00:50:19] William Green: Thank you.

[00:50:20] Pico Iyer: I learned even more about friendship and generosity, so to anyone who’s listening who hasn’t heard you be a guest on his podcast –

[00:50:28] William Green: Ah, well, it’s kind of you to listen because I know how little interest you must have in the world of investing, so I take that as a great honor that you listened. Thank you.

[00:50:37] Pico Iyer: I don’t have a huge interest in the world of investment, but I have a huge interest in the world of investors because they’re wise people.

[00:50:42] William Green: Yeah.

[00:50:43] Pico Iyer: They figured out how to live not just in a monetary sense, but they’ve got to where they are not by chance and not by foolishness, and I think they have a lot to offer, and that’s what your book is about, so yeah. In terms of the fire, I was sitting in my family house in the hills of California, and I saw this distant knife of orange cutting through a hillside, so I went downstairs to call the fire department. And then when I came upstairs again, five minutes later, literally our house was encircled by 70-foot flames, five stories high on all sides. So I grabbed my mother’s cat, jumped into a car to try to escape, and then I was stuck on the mountain road for three hours underneath our house, saved only by a good Samaritan who had driven up with a water truck to be of assistance, and then found himself stuck and saved us all by pointing a little hose of water at every roar of fire that approached us. It was the worst fire in California history at the time, and it broke out just up the road from us. So of course, it was a shock. We lost every last thing in the world. In my case, all my handwritten notes for my next eight years of writing, probably my next three books. In my parents’ case, all the photos and mementos, our keepsakes from 60 years.

[00:51:54] Pico Iyer: But the interesting thing, looking back on it, was that months later, after adjusting to circumstances, when the insurance company came along and said, “Well, we have some money and you can replace your goods,” of course, that really did make me understand I didn’t need 90% of the books and clothes and furniture I’d accumulated. I could live much more lightly, which is really the way I’d always wanted to live. I called up my editor in New York – or in London actually at the time – and I said, “All those books I was promising you, I can’t offer them to you because all my notes have gone,” and because he’s a kind man, he commiserated for a while, but because he’s a wise man, he said, “Actually, not having notes may liberate you to write much more deeply from your heart and from your memory, from imagination.” And then lacking a physical home in California, I suddenly began to think, “Well, maybe I should spend more time in the place that really feels like my true home,” which is Japan, and now I’m pretty much here all the time. And so in so many ways, that seeming catastrophe opened doors and windows that might otherwise have been closed for a long time, perhaps forever.

[00:52:59] Pico Iyer: And I was thinking about it a lot during the pandemic because the pandemic was closing so many doors and so many lives, but at the same time, it was opening little windows of possibility, at least for me, that otherwise I might never have glimpsed, and moving me to live in better ways than I had been beforehand. I suppose the one other interesting thing about the fire, especially given our connection, is that – I was stuck there for three hours and the smoke was so intense that no firetruck could come up and make contact with me, and I could hear helicopters above, but they couldn’t see me and I couldn’t see them. Finally, after three hours, a fire truck came up and told me it was safe to drive down. So I drove down through what looked like what I associated with scenes from the Vietnam War: houses exploding all over the place, cars smoldering, fires on every side of me. I went downtown and I bought a toothbrush, which was the only thing I had in the world at that point.

[00:53:53] Pico Iyer: And then I went to sleep on a friend’s floor, but before I went to sleep, because my job then was partly working for Time Magazine, I asked my friend if I could use his computer, and I filed a report. So three hours after escaping the fire, I filed a report on this major news event for which I had a front seat view. And I ended my little piece with a poem that I picked up in Japan, because I had begun spending time there – a 17th-century haiku, which just said, “My house burnt down. I can now see better the rising moon.” So the very night when I lost everything in the world, something in me, probably wiser than I am, realized not everything was lost. Certain things would be gained, and actually, the main thing I would gain was a sense of priorities. So, literally that night, I thought about that poem, “I lost everything. I can now really see what’s important.”

[00:54:46] William Green: Yeah, I read that article yesterday. It was beautiful and still incredibly vivid, and it was striking to me that I think in probably all six of the books of yours that I’ve read in recent weeks, you mentioned the fire. You come back to it again and again. It’s such a profound formative episode for you. One thing you wrote in Autumn Light, you said, “As I climbed all the way up to our house the day after everything in our lives was reduced to rubble, I saw that everything that could be replaced – furniture, clothes, books – was by definition worthless. The only things that mattered were the things that were gone forever,” and I think that’s such an interesting question, this whole issue of what you discover has value after it’s gone. And this is something we talked about in Vancouver, where you led a fascinating session in which you asked people various questions, one of which was “If you had, I think, 10 minutes to save anything from your home, what would you save?” And I wonder if you could talk a bit more about that sense of what has value and what doesn’t. What does have value? When you had a very near escape a few years later after you rebuilt the house, what did you take out, for example?

[00:56:03] Pico Iyer: The only way I live differently since the fire than before, and this is a bit embarrassing, is that I keep all my notes in a safety deposit box in the bank, because they’re still handwritten and they seem to me the one indispensable thing, not because I make my living by being a writer, but more because I feel that’s my life. My life is contained in these otherwise illegible scrolls. Other people, I think my mother might have kept her photographs as well as her jewelry in the bank, which makes absolute sense to me. So again, I don’t think there’s a right answer, but I think it’s a really useful question to ask, which is why I shared it with that little circle at TED, and just again, that sense that we know things intuitively, but unless we actually stop to ask ourselves that, we get caught up in the rush and then life catches us by surprise.

[00:56:49] Pico Iyer: Because it always will. You’ve read my books more closely than anyone I can imagine, and I’m so touched because that’s the ultimate compliment and act of generosity. And you’re the first person who’s noticed that they all keep on coming back to that fire, which is partly a metaphor for a world on fire, where a lot of our certainties are being burnt up, but also a way of saying that whoever you are, you’re going to face some of these challenges in life. It could be a typhoon or a flood or an earthquake, or it could just be a car coming at high speed towards you, the wrong side of the road or a bad diagnosis, but one way or another, and maybe this is my age speaking a little, I think it’s a useful exercise to think if suddenly I only had a little time, what would I want to do with it? Or if suddenly my life were upended, what is it that I would cherish? I can’t really answer your question so much as applaud it and say maybe I feel that’s the question we should all be asking ourselves…

…[01:18:49] William Green: I loved this story. I think it was in Autumn Light, where you talked about all of these very rich donors rolling up in their fancy suits and their expensive sock dresses, and they show him [referring to the Dalai Lama] this wonderful, elaborate architecture model of this beautiful Buddhist center with treasure rooms and meditation halls that they’re going to build.

[01:19:08] William Green: And the way you described it, I think he slaps the thigh of this monk who’s sitting beside him and he says, no, no need. This is your treasure, and I thought that was really beautiful. There’s a sense of humanity to him and a sense of pragmatism where it’s like, don’t spend all the money.

[01:19:24] William Green: He’s like, just be kinder to people. Help people. And you said also, I think there was another lovely story in one of the books where he said these very rich people would come to him and ask for a blessing and he’d say, you are the only one who can give yourself a blessing.

[01:19:38] William Green: You have money, freedom, opportunity to do some good for someone else. Why ask me for what’s in your hands?

[01:19:46] Pico Iyer: Yes, and then I think he said, start a school, or give money to a hospital. Do something very concrete that’s going to help you and everybody else much more. So I really feel, unlike monks in every tradition, he’s pretty much given his whole life to the subject of your podcast.

[01:20:00] Pico Iyer: What is richness? What is wisdom, and what is happiness? And again, the other thing that I’ve sometimes witnessed is when he’ll show up in Los Angeles, traditionally, he’d be surrounded by billionaires and movie stars and movers and shakers, and people would often say, it must be so hard to live amidst the poverty of India.

[01:20:17] Pico Iyer: He’d look across this room where many people are on their fifth marriages and going to see a therapist every day in their pain, and he’d say, well, there’s poverty, and there’s poverty, and of course the material poverty of India is really serious and one wants to do everything one can to help it.

[01:20:31] Pico Iyer: That’s what he did, in fact, partly with his Nobel Prize money. But there’s an inner poverty that is just as debilitating, and you guys have, in the terms of the world, done everything that could be expected and much more, and you’re still suffering terribly, so that’s the poverty that you really need to address.

[01:20:48] William Green: I think there was another message that came through very powerfully from your books about the fact that if we live in this extremely uncertain world where anything can happen, basically, one of the things you point out is there’s an urgency that comes from that. If nothing lasts forever, you’ve got to relish the moment in the knowledge that it may not come again.

[01:21:10] William Green: Can you talk about that? Because that seems to me just a hugely important if obvious insight. Like most great insights, they are obvious, but you’ve got to internalize them somehow.

[01:21:23] Pico Iyer: Yeah, and I think that’s the main thing I got from the pandemic. I realized I’m living with much more decisiveness and clarity, because I know time isn’t infinite, and I always knew it.

[01:21:33] Pico Iyer: As you say, we’ve been told it a thousand times, and we grew up studying it at school and being reminded of it by the tolling bells in Kyoto, but I think it really came home to us during the pandemic. I was living with my 88-year-old mother and it was a great blessing. I could spend a lot of time with her.

[01:21:47] Pico Iyer: She died in the course of the pandemic, unrelated to Covid, which was just another reminder that, as you say – I think the central line in my most recent book is that the fact that nothing lasts is the reason that everything matters – because we can’t take anything for granted. Let’s make the most of this moment, just as you said so perfectly, William.

[01:22:06] Pico Iyer: I don’t know what’s coming this afternoon. All I know is I’ve got this chance to talk to you and I never have that chance otherwise, so let me make the most of it and bring all of myself to it. And I think, to go back to the Dalai Lama and so much that we’ve been talking about, and really where we began the conversation, none of this means ignoring the material stuff of the world.

[01:22:26] Pico Iyer: I think unless you’ve got that in place, it’s very hard to have the luxury of thinking about other things. Nobody is counseling poverty, where if you are in a desperate state, you can’t think of anything other than relieving your immediate circumstances. I have a friend who’s been a very serious Zen practitioner for many years, and actually a very accomplished and successful guy these days because of his writing.

[01:22:48] Pico Iyer: And he told me that at one point in his life, when he was young, he decided to live on $8,000 a year, as simply as you could, and I think he probably managed that until somebody, maybe a wise Buddhist teacher, told him that trying to live on $8,000 a year is as crazy as trying to make $8 billion a year.

[01:23:09] Pico Iyer: The Buddha himself, Thomas Merton, everybody has seen the silliness of extremes, and twisting your life into a bonsai in order to live with almost nothing is as crazy as turning yourself into a madman to try to get everything. It’s a matter of balance, and I think that’s why, as you said, we began by talking about my leaving Time magazine, but as I said earlier on, I couldn’t have left it if I hadn’t got there.

[01:23:34] Pico Iyer: And as you said about investors, they have to earn millions before they realize, oh, actually maybe that’s not enough. I had to exhaust my boyhood ambitions to realize they were insufficient, and that it’s something more that I need to fulfill me entirely, which is why, if this podcast were called just wisdom and happiness, I’d be a bit skeptical about it, because I would think, well, that’s wonderful stuff, up in the air and abstract.

[01:24:02] Pico Iyer: But most of us are living in the world, and so the fact that we begin with the richness part is what gives legitimacy, I think, to the other two parts, because all of us in our lives have to take care of those fundamentals, as you said probably an hour ago, as a way of addressing the other things…

…[01:39:40] Pico Iyer: But as you say, I think just in the most commonplace ways, mysteries are everywhere, and thank heavens for that. I remember when my mother turned 80, we threw a party for her, and one of her friends said, oh, Pico, why don’t you interview your mother? And I rolled my eyes and thought, oh, what a terrible idea.

[01:39:56] Pico Iyer: But her friend was eager to do this, so I said, okay, I will. So I asked my mother a few questions, and I think the last question was, well, now you’re 80 years old, what’s the main thing that you’ve learned? And she said that you can never know another person, and I love that, first, because it was the last thing I expected my mother ever to say.

[01:40:13] Pico Iyer: I never knew if she believed that, and so by saying it, she actually bore it out. I didn’t know my own mother. I was really taken aback by that answer, and also haunted by her answer, because she was saying maybe her husband, my father, was as much a mystery to her as to me, and maybe she was saying that

[01:40:31] Pico Iyer: I’m a mystery to her. But whatever she meant by it, it was a wonderful answer. I’m so glad I asked it, and maybe when you and I are both 80, if we’re lucky enough to attain that, we’ll have this sense even more of how little we know about the people who are closest to us, and as you said about circumstances, which is wonderful.

[01:40:50] Pico Iyer: I’m so glad to be freed of that sense I had as a kid that I knew exactly how my life was going and that I would plan it. I think of when that fire burnt down my house. The day before, as you can tell, I had my next eight years mapped out. I knew exactly which books I was going to write, I’d accumulated all my notes, and suddenly life had a different plan for me.

[01:41:10] Pico Iyer: And I can’t say it’s a worse plan than the one I would’ve come up with.

2. The Problem with Valuation – Nick Maggiulli

But, I do have an issue with valuation models in general. Because, today, basically all the valuation metrics tell the same story—U.S. stocks are overvalued, therefore, we should expect a major crash as these metrics return to their long-term historical averages. Whether you use Hussman’s measure, the Buffett indicator, or Shiller’s CAPE (cyclically-adjusted price-to-earnings) ratio, the logic is always the same.

But, there’s a huge problem with this logic—there is nothing that says that these metrics have to return to their long-term averages. In fact, I believe the opposite. Valuation multiples are likely to stay above their historical norms for the foreseeable future. Why?

Because investing is much simpler today than it used to be. With the rise of cheap diversification over the last half century, investors today are willing to accept lower future returns (i.e. higher valuations) than their predecessors. This has fundamentally changed valuation metrics and made historical comparisons less useful than they once were. This might seem far-fetched, but let me explain.

Imagine it’s 1940 and you want to build your wealth by owning a diversified portfolio of U.S. stocks. How would you do it?

You first might try to go the mutual fund route to have a professional pick stocks for you. Though the first mutual fund was created in 1924 and the industry grew in the 1930s, many of these early funds had high load fees. These were fees that you had to pay anytime you bought (or sold) shares in the fund. Load fees varied, but could be up to 9% of your total investment… If you wanted to avoid such fees, your next best option would have been to create a diversified portfolio by hand. Unfortunately, this would have meant doing research to figure out which stocks would do well over time and which ones wouldn’t. This task would have been even more difficult and time consuming than it is today given the lack of access to information.

More importantly, you would be picking stocks during a time when it wasn’t obvious that owning stocks was the right thing to do. After all, it’s 1940 and America just came out of the worst economic crisis in its history. Are you sure that stocks aren’t just a gamble? We can answer this question with a definitive “no” today because we have historical evidence that shows otherwise. But this historical evidence wouldn’t have been readily available in 1940.

This is what I call the privilege of knowledge, or the idea that we are able to make certain decisions today that our ancestors couldn’t make because we have more information than they had. For example, it’s easy to say “Just Keep Buying” in 2023 because we have so much data to back it up. But in 1940 this wasn’t true…

…Investing today is far simpler and cheaper than it was nearly a century ago. This raises a question: how much annual return would you be willing to give up in 1940 to have all the investment innovations that we have today? I bet it’s at least a few percentage points. And, if this is true across investors in general, then we would expect stock prices to be bid up accordingly over time.

And this is exactly what we’ve seen over the past few decades. If you look at Shiller’s P/E (price-to-earnings) ratio going back to 1920, you can see that this ratio has been mostly increasing over the last century:

In fact, before 2000, the average Shiller P/E ratio was 15.5 and since then it has been around 27. This is evidence that investors are willing to bid up prices (and, thus, accept lower returns than their predecessors). Even in March 2009, when things looked the bleakest during The Great Recession, the P/E ratio only briefly dipped below its pre-2000 average (~15) before immediately shooting back upward…
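As an aside, the before-and-after comparison the author describes is simple mechanics: split the CAPE series at 2000 and average each half. The sketch below uses invented toy numbers purely to illustrate the calculation (they are not real CAPE data; the article’s actual averages are roughly 15.5 pre-2000 and 27 since), and `regime_averages` is a hypothetical helper, not something from the article:

```python
# Toy sketch of the split-and-average comparison described above.
# The values here are invented for illustration only; they are NOT
# real CAPE (Shiller P/E) data.

def regime_averages(cape_by_year, split_year=2000):
    """Mean CAPE before `split_year` and from `split_year` onward."""
    pre = [v for y, v in cape_by_year.items() if y < split_year]
    post = [v for y, v in cape_by_year.items() if y >= split_year]
    return sum(pre) / len(pre), sum(post) / len(post)

# Invented placeholder values keyed by year:
toy_cape = {1940: 13.0, 1960: 18.0, 1980: 9.0, 1999: 40.0,
            2005: 26.0, 2009: 15.2, 2015: 26.0, 2021: 38.0}

pre_avg, post_avg = regime_averages(toy_cape)
```

Run against the real Shiller series, this is the comparison that produces the ~15.5 versus ~27 gap the author cites.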

…Nevertheless, this simple valuation model has the same flaws that all the others do—it assumes that underlying conditions are the same in every time period. It assumes that a P/E ratio of 15 in 1940 is identical to a P/E ratio of 15 in 2009. But, as I’ve just demonstrated, they aren’t. Yes, they may look the same, but they definitely don’t feel the same.

3. AI Is a Lot of Work – Josh Dzieza

A few months after graduating from college in Nairobi, a 30-year-old I’ll call Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. In Joe’s case, he was labeling footage for self-driving cars — identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of — frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10.

Then, in 2019, an opportunity arose: Joe could make four times as much running an annotation boot camp for a new company that was hungry for labelers. Every two weeks, 50 new recruits would file into an office building in Nairobi to begin their apprenticeships. There seemed to be limitless demand for the work. They would be asked to categorize clothing seen in mirror selfies, look through the eyes of robot vacuum cleaners to determine which rooms they were in, and draw squares around lidar scans of motorcycles. Over half of Joe’s students usually dropped out before the boot camp was finished. “Some people don’t know how to stay in one place for long,” he explained with gracious understatement. Also, he acknowledged, “it is very boring.”…

…The current AI boom — the convincingly human-sounding chatbots, the artwork that can be generated from simple prompts, and the multibillion-dollar valuations of the companies behind these technologies — began with an unprecedented feat of tedious and repetitive labor.

In 2007, the AI researcher Fei-Fei Li, then a professor at Princeton, suspected the key to improving image-recognition neural networks, a method of machine learning that had been languishing for years, was training on more data — millions of labeled images rather than tens of thousands. The problem was that it would take decades and millions of dollars for her team of undergrads to label that many photos.

Li found thousands of workers on Mechanical Turk, Amazon’s crowdsourcing platform where people around the world complete small tasks for cheap. The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.

Annotation remains a foundational part of making AI, but there is often a sense among engineers that it’s a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished. Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data. These failures, called “edge cases,” can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them. Already, this has given rise to a global industry staffed by people like Joe who use their uniquely human faculties to help the machines.

Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.

“There’s an entire supply chain,” said Sonam Jindal, the program and research lead of the nonprofit Partnership on AI. “The general perception in the industry is that this work isn’t a critical part of development and isn’t going to be needed for long. All the excitement is around building artificial intelligence, and once we build that, it won’t be needed anymore, so why think about it? But it’s infrastructure for AI. Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.”

The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks. There are also “crowdworking” sites like Mechanical Turk and Clickworker where anyone can sign up to perform tasks. In the middle are services like Scale AI. Anyone can sign up, but everyone has to pass qualification exams and training courses and undergo performance monitoring. Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.

This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don’t have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.”

Automation often unfolds in unexpected ways. Erik Duhaime, CEO of medical-data-annotation company Centaur Labs, recalled how, several years ago, prominent machine-learning engineers were predicting AI would make the job of radiologist obsolete. When that didn’t happen, conventional wisdom shifted to radiologists using AI as a tool. Neither of those is quite what he sees occurring. AI is very good at specific tasks, Duhaime said, and that leads work to be broken up and distributed across a system of specialized algorithms and to equally specialized humans. An AI system might be capable of spotting cancer, he said, giving a hypothetical example, but only in a certain type of imagery from a certain type of machine; so now, you need a human to check that the AI is being fed the right type of data and maybe another human who checks its work before passing it to another AI that writes a report, which goes to another human, and so on. “AI doesn’t replace work,” he said. “But it does change how work is organized.”…

…Worries about AI-driven disruption are often countered with the argument that AI automates tasks, not jobs, and that these tasks will be the dull ones, leaving people to pursue more fulfilling and human work. But just as likely, the rise of AI will look like past labor-saving technologies, maybe like the telephone or typewriter, which vanquished the drudgery of message delivering and handwriting but generated so much new correspondence, commerce, and paperwork that new offices staffed by new types of workers — clerks, accountants, typists — were required to manage it. When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious…

…The act of simplifying reality for a machine results in a great deal of complexity for the human. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use. A human asked to tag all the shirts in a photo probably wouldn’t tag the reflection of a shirt in a mirror because they would know it is a reflection and not real. But to the AI, which has no understanding of the world, it’s all just pixels and the two are perfectly identical. Fed a dataset with some shirts labeled and other (reflected) shirts unlabeled, the model won’t work. So the engineer goes back to the vendor with an update: DO label reflections of shirts. Soon, you have a 43-page guide descending into red all-caps.

“When you start off, the rules are relatively simple,” said a former Scale employee who requested anonymity because of an NDA. “Then they get back a thousand images and then they’re like, Wait a second, and then you have multiple engineers and they start to argue with each other. It’s very much a human thing.”

The job of the annotator often involves putting human understanding aside and following instructions very, very literally — to think, as one annotator said, like a robot. It’s a strange mental space to inhabit, doing your best to follow nonsensical but rigorous rules, like taking a standardized test while on hallucinogens. Annotators invariably end up confronted with confounding questions like, Is that a red shirt with white stripes or a white shirt with red stripes? Is a wicker bowl a “decorative bowl” if it’s full of apples? What color is leopard print? When instructors said to label traffic-control directors, did they also mean to label traffic-control directors eating lunch on the sidewalk? Every question must be answered, and a wrong guess could get you banned and booted to a new, totally different task with its own baffling rules…

…According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.

That is, when they were making any money at all. The most common complaint about Remotasks work is its variability; it’s steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. Any task could be their last, and they never know when the next one will come.

This boom-and-bust cycle results from the cadence of AI development, according to engineers and data vendors. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date. There may be monthslong demand for thousands of annotators, then for only a few hundred, then for a dozen specialists of a certain type, and then thousands again. “The question is, Who bears the cost for these fluctuations?” said Jindal of Partnership on AI. “Because right now, it’s the workers.”…

…A woman I’ll call Anna was searching for a job in Texas when she stumbled across a generic listing for online work and applied. It was Remotasks, and after passing an introductory exam, she was brought into a Slack room of 1,500 people who were training a project code-named Dolphin, which she later discovered to be Google DeepMind’s chatbot, Sparrow, one of the many bots competing with ChatGPT. Her job is to talk with it all day. At about $14 an hour, plus bonuses for high productivity, “it definitely beats getting paid $10 an hour at the local Dollar General store,” she said.

Also, she enjoys it. She has discussed science-fiction novels, mathematical paradoxes, children’s riddles, and TV shows. Sometimes the bot’s responses make her laugh; other times, she runs out of things to talk about. “Some days, my brain is just like, I literally have no idea what on earth to ask it now,” she said. “So I have a little notebook, and I’ve written about two pages of things — I just Google interesting topics — so I think I’ll be good for seven hours today, but that’s not always the case.”

Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.” When ChatGPT debuted late last year, its impressively natural-seeming conversational style was credited to its having been trained on troves of internet data. But the language that fuels ChatGPT and its competitors is filtered through several rounds of human annotation. One group of contractors writes examples of how the engineers want the bot to behave, creating questions followed by correct answers, descriptions of computer programs followed by functional code, and requests for tips on committing crimes followed by polite refusals. After the model is trained on these examples, yet more contractors are brought in to prompt it and rank its responses. This is what Anna is doing with Sparrow. Exactly which criteria the raters are told to use varies — honesty, or helpfulness, or just personal preference. The point is that they are creating data on human taste, and once there’s enough of it, engineers can train a second model to mimic their preferences at scale, automating the ranking process and training their AI to act in ways humans approve of. The result is a remarkably human-seeming bot that mostly declines harmful requests and explains its AI nature with seeming self-awareness.
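The “second model to mimic their preferences” that the article mentions is usually called a reward model, and it is trained on exactly the pairwise choices annotators like Anna produce. Below is a deliberately tiny sketch of that idea, assuming a Bradley-Terry-style pairwise loss and representing each response as a small feature vector; real systems use a language model’s representations, and none of the names here come from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(w, x):
    """Scalar score the model assigns to a response's feature vector."""
    return x @ w

def train_on_preferences(pairs, dim, lr=0.1, epochs=200):
    """Fit weights so preferred responses score higher.

    Uses the Bradley-Terry pairwise loss -log sigmoid(r_chosen - r_rejected),
    the standard way to turn "A is better than B" labels into a trainable
    objective.
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for chosen, rejected in pairs:
            margin = reward(w, chosen) - reward(w, rejected)
            p = 1.0 / (1.0 + np.exp(-margin))       # P(chosen beats rejected)
            grad = (p - 1.0) * (chosen - rejected)  # gradient of -log p w.r.t. w
            w -= lr * grad
    return w

# Synthetic annotation data: our stand-in "annotator" prefers whichever
# response has the higher hidden quality (the first feature).
dim = 4
responses = rng.normal(size=(40, dim))
pairs = []
for i in range(0, 40, 2):
    a, b = responses[i], responses[i + 1]
    pairs.append((a, b) if a[0] > b[0] else (b, a))

w = train_on_preferences(pairs, dim)
```

Once such a model exists, it can score new responses automatically, which is the “automating the ranking process” step the article describes.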

Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.

This circuitous technique is called “reinforcement learning from human feedback,” or RLHF, and it’s so effective that it’s worth pausing to fully register what it doesn’t do. When annotators teach a model to be accurate, for example, the model isn’t learning to check answers against logic or external sources or about what accuracy as a concept even is. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them. Maybe this results in the model extracting patterns from the part of its linguistic map labeled as accurate and producing text that happens to align with the truth, but it can also result in it mimicking the confident style and expert jargon of the accurate text while writing things that are totally wrong. There is no guarantee that the text the labelers marked as accurate is in fact accurate, and when it is, there is no guarantee that the model learns the right patterns from it.

This dynamic makes chatbot annotation a delicate process. It has to be rigorous and consistent because sloppy feedback, like marking material that merely sounds correct as accurate, risks training models to be even more convincing bullshitters. An early OpenAI and DeepMind joint project using RLHF, in this case to train a virtual robot hand to grab an item, resulted in also training the robot to position its hand between the object and its raters and wiggle around such that it only appeared to its human overseers to grab the item. Ranking a language model’s responses is always going to be somewhat subjective because it’s language. A text of any length will have multiple elements that could be right or wrong or, taken together, misleading. OpenAI researchers ran into this obstacle in another early RLHF paper. Trying to get their model to summarize text, the researchers found they agreed only 60 percent of the time that a summary was good. “Unlike many tasks in [machine learning] our queries do not have unambiguous ground truth,” they lamented.

When Anna rates Sparrow’s responses, she’s supposed to be looking at their accuracy, helpfulness, and harmlessness while also checking that the model isn’t giving medical or financial advice or anthropomorphizing itself or running afoul of other criteria. To be useful training data, the model’s responses have to be quantifiably ranked against one another: Is a bot that helpfully tells you how to make a bomb “better” than a bot that’s so harmless it refuses to answer any questions? In one DeepMind paper, when Sparrow’s makers took a turn annotating, four researchers wound up debating whether their bot had assumed the gender of a user who asked it for relationship advice. According to Geoffrey Irving, one of DeepMind’s research scientists, the company’s researchers hold weekly annotation meetings in which they rerate data themselves and discuss ambiguous cases, consulting with ethical or subject-matter experts when a case is particularly tricky.

Anna often finds herself having to choose between two bad options. “Even if they’re both absolutely, ridiculously wrong, you still have to figure out which one is better and then write words explaining why,” she said. Sometimes, when both responses are bad, she’s encouraged to write a better response herself, which she does about half the time…

…The new models are so impressive they’ve inspired another round of predictions that annotation is about to be automated. Given the costs involved, there is significant financial pressure to do so. Anthropic, Meta, and other companies have recently made strides in using AI to drastically reduce the amount of human annotation needed to guide models, and other developers have started using GPT-4 to generate training data. However, a recent paper found that GPT-4-trained models may be learning to mimic GPT’s authoritative style with even less accuracy, and so far, when improvements in AI have made one form of annotation obsolete, demand for other, more sophisticated types of labeling has gone up. This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.

Chen is skeptical AI will reach a point where human feedback is no longer needed, but he does see annotation becoming more difficult as models improve. Like many researchers, he believes the path forward will involve AI systems helping humans oversee other AI. Surge recently collaborated with Anthropic on a proof of concept, having human labelers answer questions about a lengthy text with the help of an unreliable AI assistant, on the theory that the humans would have to feel out the weaknesses of their AI assistant and collaborate to reason their way to the correct answer. Another possibility has two AIs debating each other and a human rendering the final verdict on which is correct. “We still have yet to see really good practical implementations of this stuff, but it’s starting to become necessary because it’s getting really hard for labelers to keep up with the models,” said OpenAI research scientist John Schulman in a recent talk at Berkeley.

“I think you always need a human to monitor what AIs are doing just because they are this kind of alien entity,” Chen said. Machine-learning systems are just too strange ever to fully trust. The most impressive models today have what, to a human, seems like bizarre weaknesses, he added, pointing out that though GPT-4 can generate complex and convincing prose, it can’t pick out which words are adjectives: “Either that or models get so good that they’re better than humans at all things, in which case, you reach your utopia and who cares?”…

…Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

4. Interview: Chris Miller, historian and author of “Chip War” – Noah Smith and Chris Miller

N.S.: That all makes sense. How much impact will the export controls have on China’s military capabilities over the next 10 years? I’ve heard it said that military tech generally uses trailing-edge chips; if so, that would mean that in the short term, China’s military would only need chips that China can already make, using tools they already have. How true is that?

C.M.: Autos provide a good analogy for understanding how militaries use chips. A typical new car might have a thousand chips inside. Most are very simple, like the ones that make your window move up or down. But the high-value features, like the entertainment system, the lidar or radar sensors, and the semi-autonomous-driving features, all require more sophisticated and specialized semiconductors. What’s more, a lot of the high-value features in cars don’t only require chips in the cars themselves; they also require sophisticated chips in cell towers and datacenters. This is why Tesla builds its own high-end Dojo chips.

Military systems are pretty similar. Most of the chips in tanks and missiles are low-end, but the chips that provide differentiated capabilities are not. Just like autos, some of the most sophisticated chips aren’t actually in the missiles and tanks, but in the networks and datacenters that guide and train them. We know that autonomous driving efforts require huge volumes of advanced chips in cutting-edge datacenters. We know less about the U.S. military’s drone programs, but there’s no doubt they use a lot of sensors, a lot of communications, and a lot of compute. The Himars missiles used in Ukraine don’t require ultra-advanced chips themselves, but they rely on targeting information provided by a vast array of sensors and processors to sort signals from noise or to differentiate tanks from trucks. It’s now easy to put GPS guidance in a missile, since every smartphone has GPS guidance too. But can your missile maneuver itself to avoid countermeasures while operating in an area where GPS is jammed? If so, it’s going to need more sophisticated semiconductors.

There’s not a single type of chip for which you can say “without this chip, China’s military modernization will grind to a halt.” It’s always possible to design around a certain component. But the more you have to design around subpar semiconductors, the more tradeoffs you have to make between performance, power consumption, reliability, and other characteristics. I think the recent tightening of export controls will exacerbate these tradeoffs.

N.S.: So the goal is really just to slow China down, keep them half a step behind us. That brings me to probably the most important argument against export controls. A lot of people argue that had the U.S. not enacted export controls, China would have remained dependent on U.S. chips for longer, but now that we cut them off, China will simply learn how to make everything itself, thus cutting U.S. companies out of the market and ultimately raising China’s own technological capabilities. What do you think of this argument?

C.M.: I think it’s hard to sustain the argument that the controls will make China pursue a strategy of reducing dependence on the U.S… because that was already China’s strategy. Chinese leaders, including Xi personally, have articulated this repeatedly since at least 2014. They launched a major industrial policy program focused on the aim of ending reliance on the U.S., spending billions of dollars annually. So to say the export controls caused this goal gets the chronology backward: this goal existed for years before the export controls.

Now, one could argue “China’s prior policies weren’t working at reducing dependence on the U.S., but now China will pursue more effective policies.” But I haven’t seen anyone articulate why this would be the case. It doesn’t seem like semiconductor funding in China has increased (and the sums involved were already vast). Nor have the export controls introduced new information into the Chinese policymaking apparatus that will make it smarter. Beijing was pursuing this self-sufficiency strategy before the controls precisely because it knew it was so dependent.

Perhaps you could argue that the imposition of controls has reshaped the political economy or the relationships between Chinese firms and government in a way that will lead to smarter Chinese policy. I haven’t seen anyone spell out how this might work. So I’m skeptical, and I think the loss of access to chipmaking tools and the broader chilling effects on expertise transfer will make China’s catch-up efforts harder.

N.S.: How difficult will it be for China to make chipmaking tools similar to those made by ASML? I know they’re trying very hard to steal ASML’s tech, and I’ve seen one report indicating they may have had some success there. Also I’d expect them to try to purchase ASML machines through third countries, as well as accelerating their own indigenous R&D efforts. Will any of those workarounds work, and if so, how long until they catch up?

C.M.: The likelihood of purchasing these machines through third countries is close to zero. The number of advanced tools produced each year measures in the dozens, and there are only a handful of customers. A single advanced lithography machine requires multiple airplanes to transport. And there are ASML staff on site at all times who are critical to its operation. So it’s difficult to imagine a set of tools that would be more difficult to smuggle.

Replicating them is easier, but still a monumentally challenging task. It took ASML three decades to develop EUV lithography tools, and it was only possible in close collaboration with users like TSMC and Intel. Of course, it will be easier to replicate the tools than it was for ASML to first produce them. But these are the most complex and precise pieces of equipment humans have ever made. The challenge isn’t only to replicate the unique components inside the tools – such as the smoothest mirrors humans have ever made – though this will be hard. The really challenging part will be to get the hundreds of thousands of components to work often enough so that the tools can actually function in high-volume manufacturing. If each of the hundreds of thousands of components in your tool breaks down once a year, the tool basically never works. So reliability is a potentially fatal challenge.
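As an aside, a rough back-of-envelope model shows why reliability at that component count is so punishing. The component count and per-part failure rate below are illustrative assumptions for the sake of the arithmetic, not actual ASML figures:

```python
import math

# Illustrative assumptions (not ASML figures): a tool with 100,000
# parts, each failing on average once per year, independently.
components = 100_000
failures_per_part_year = 1.0

# Expected tool-level failures per day:
failures_per_day = components * failures_per_part_year / 365
print(f"~{failures_per_day:.0f} expected failures per day")

# Probability of a single failure-free hour, modeling the aggregate
# failure process as Poisson with the combined rate:
rate_per_hour = components * failures_per_part_year / (365 * 24)
p_clean_hour = math.exp(-rate_per_hour)
print(f"{p_clean_hour:.1e} chance of a failure-free hour")
```

Under these assumptions the tool suffers hundreds of failures a day and essentially never completes a clean hour of operation, which is the sense in which a once-a-year per-component failure rate means the tool "basically never works."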

And remember – lithography tools are probably the hardest challenge, but they’re not the only one. There are also deposition tools, etching tools, metrology tools, and others. China is behind to varying degrees – often significantly – in all of them. All these tools require tens of thousands of precision components and need to be accurate at the nanometer scale.

The final point here is that all the Western toolmakers have new chipmaking equipment rolling out on a regular basis. ASML will soon release its next-generation lithography tool, called high-numerical-aperture EUV. The industry continues to race forward. So if China manages to produce its own suite of EUV lithography and related etch, deposition, and other tools within five years, it will still be substantially behind the cutting edge…

N.S.: If you were advising the Biden administration, what would you list as the top action items or priorities to improve the U.S.’ position in the semiconductor industry, beyond what has already been done? Also, by the way, are you advising the Biden administration on this?

C.M.: In the short run, there’s more work to be done on making the U.S. cost competitive. I mentioned permitting reform. We should recognize Korea’s and Taiwan’s safety and construction regulations for fabs as equivalent, so that firms from those countries don’t need to redesign their facilities when they want to build in the U.S. The more they can copy and paste from what works in those countries, the less money they have to spend redesigning facilities to suit the needs of America’s fire and plumbing inspectors, who have much less experience with fab safety than the biggest firms. (Moreover, with billions of dollars of equipment in their fabs, chipmakers have plenty of incentive to avoid accidents.) Second, there should be strict time limits in which permits are either rejected or approved, so that the NEPA burden can be limited. At the very least we should be able to make our regulations only as burdensome as Europe’s. Today they’re worse.

The second short-run change is to extend the investment tax credit, which currently expires at the end of 2026. It should be made permanent to ensure that manufacturing in other countries isn’t cheaper simply for tax reasons.

In the long run, whichever country innovates most rapidly will succeed. The CHIPS Act puts more money into R&D, and there’s discussion of focusing CHIPS funding toward prototyping rather than basic science (which is great, but which we already have plenty of). In the chip industry, prototyping is very expensive, so we have fewer startups and new products than, say, in software, simply due to the high upfront cost. Making it cheaper and easier to turn new ideas into working prototypes like test chips would help boost the rate of innovation…

N.S.: What are some quantitative metrics we should be keeping an eye on in the semiconductor industry, in order to know how the international competition is going?

C.M.: In terms of technological leadership in the chip industry, a key question will be at what rate leading Chinese firms advance their manufacturing processes and how this compares with non-Chinese firms. 

But I think the more pressing short-term metric is market share in China’s domestic chip market. Today China’s domestic chip market is dominated by foreign firms. China’s leaders have repeatedly stated they want to change this by importing fewer chips. That’s the point of Made in China 2025 and other industrial policy plans. I wonder whether they might finally take steps in this direction — not by overtaking competitors technologically but by pressuring Chinese buyers to use less capable domestically produced chips.

The electronics industry is the only major sector of the Chinese economy that has not thus far been subject to substantial “buy Chinese” pressure. (In contrast to autos, aviation, high speed rail etc.) In most other sectors, “buy Chinese” has been an acceptable policy because Chinese firms learned to produce products at or close to the technological frontier. Could we be at a point where China’s leaders are so committed to self-sufficiency, they decide to pressure domestic firms to buy domestic chips, even if they’re worse? The implications for global trade would be dramatic, because China spends as much money importing chips as anything else.

5. How a Vast Demographic Shift Will Reshape the World – Lauren Leatherby

The projections are reliable, and stark: By 2050, people age 65 and older will make up nearly 40 percent of the population in some parts of East Asia and Europe. That’s almost twice the share of older adults in Florida, America’s retirement capital. Extraordinary numbers of retirees will be dependent on a shrinking number of working-age people to support them.

In all of recorded history, no country has ever been as old as these nations are expected to get.

As a result, experts predict, things many wealthier countries take for granted — like pensions, retirement ages and strict immigration policies — will need overhauls to be sustainable. And today’s wealthier countries will almost inevitably make up a smaller share of global G.D.P., economists say.

This is a sea change for Europe, the United States, China and other top economies, which have had some of the highest shares of working-age people in the world. Their large work forces have helped to drive their economic growth.

Those countries are already aging off the list. Soon, the best-balanced work forces will mostly be in South and Southeast Asia, Africa and the Middle East, according to U.N. projections. The shift could reshape economic growth and geopolitical power balances, experts say…

…The opportunity for many poorer countries is enormous. When birth rates fall, countries can reap a “demographic dividend,” when a growing share of workers and few dependents fuel economic growth. Adults with smaller families have more free time for education and investing in their children. More women tend to enter the work force, compounding the economic boost.

Demography isn’t destiny, and the dividend isn’t automatic. Without jobs, having a lot of working-age people can drive instability rather than growth. And even as they age, rich countries will enjoy economic advantages and a high standard of living for a long time…

…But without the right policies, a huge working-age population can backfire rather than lead to economic growth. If large numbers of young adults don’t have access to jobs or education, widespread youth unemployment can even threaten stability as frustrated young people turn to criminal or armed groups for better opportunities…

…East Asian countries that hit the demographic sweet spot in the last few decades had particularly good institutions and policies in place to take advantage of that potential, said Philip O’Keefe, who directs the Aging Asia Research Hub at the ARC Center of Excellence in Population Aging Research and previously led reports on aging in East Asia and the Pacific at the World Bank.

Other parts of the world – some of Latin America, for example – had age structures similar to those East Asian countries’ but haven’t seen anywhere near the same growth, according to Mr. O’Keefe. “Demography is the raw material,” he said. “The dividend is the interaction of the raw material and good policies.”…

…Today’s young countries aren’t the only ones at a critical juncture. The transformation of rich countries has only just begun. If these countries fail to prepare for a shrinking number of workers, they will face a gradual decline in well-being and economic power.

The number of working-age people in South Korea and Italy, two countries that will be among the world’s oldest, is projected to decrease by 13 million and 10 million by 2050, according to U.N. population projections. China is projected to have 200 million fewer residents of working age, a decline larger than the entire population of most countries.

To cope, experts say, aging rich countries will need to rethink pensions, immigration policies and what life in old age looks like…

…Not only are Asian countries aging much faster, but some are also becoming old before they become rich. While Japan, South Korea and Singapore have relatively high income levels, China reached its peak working-age population at 20 percent of the income level that the United States had at the same point. Vietnam reached its peak at 14 percent of that level.

Pension systems in lower-income countries are less equipped to handle aging populations than those in richer countries…

…And some rich countries won’t face as profound a change — including the United States.

Slightly higher fertility rates and more immigration mean the United States and Australia, for example, will be younger than most other rich countries in 2050. In both the United States and Australia, just under 24 percent of the population is projected to be 65 or older in 2050, according to U.N. projections — far higher than today, but lower than in most of Europe and East Asia, which will top 30 percent…

…People aren’t just living longer; they are also living healthier, more active lives. And aging countries’ high level of development means they will continue to enjoy prosperity for a long time.

But behavioral and governmental policy choices loom large.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), ASML, Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 July 2023)


Here are the articles for the week ending 16 July 2023:

1. Inside Google’s big AI shuffle — and how it plans to stay competitive, with Google DeepMind CEO Demis Hassabis – Nilay Patel and Demis Hassabis

From the outside, the timeline looks like this: everyone’s been working on this for ages, we’ve all been talking about it for ages. It is a topic of conversation for a bunch of nerdy journalists like me, a bunch of researchers, we talk about it in the corner at Google events.

Then ChatGPT is released, not even as a product. I don’t even think Sam [Altman] would call it a great product when it was released, but it was just released, and people could use it. And everyone freaked out, and Microsoft releases Bing based on ChatGPT, and the world goes upside down, and Google reacts by merging DeepMind and Google Brain. That’s what it looks like from the outside. Is that what it felt like from the inside?

That timeline is correct, but the consequences weren’t that direct; it’s more indirect in a sense. So, Google and Alphabet have always run like this. They let many flowers bloom, and I think that’s always been the way Google was set up, even from Larry [Page] and Sergey [Brin] at the beginning. And it served them very well; it’s allowed them to organically create incredible things and become the amazing company that it is today. I think that approach is very compatible with doing research, which is another reason we chose Google as our partner back in 2014. I felt they really understood what fundamental, blue-sky, ambitious research was, and that they were going to enable us to be super ambitious with our research. And you’ve seen the results of that, right?

By any measure — AlphaGo, AlphaFold, more than 20 Nature and Science papers, and so on — all the normal metrics one would use for really delivering amazing cutting-edge research, we were able to do that. But in a way, what ChatGPT and the large models and the public reaction to that confirmed is that AI has entered a new era. And by the way, it was a little bit surprising for all of us at the coalface, including OpenAI, how viral that went because — us and some other startups like Anthropic and OpenAI — we all had these large language models. They were roughly the same capabilities.

And so, it was surprising, not so much what the technology was because we all understood that, but the public’s appetite for that and obviously the buzz that generated. And I think that’s indicative of something we’ve all been feeling for the last, I would say, two, three years, which is these systems are reaching a level of maturity now and sophistication where it can really come out of the research phase and the lab and go into powering incredible next-generation products and experiences and also breakthroughs, things like AlphaFold directly being useful for biologists. And so, to me, this is just indicative of a new phase that AI is in of being practically useful to people in their everyday lives and actually being able to solve really hard real-world problems that really matter, not just the curiosities or fun, like games.

When you recognize that shift, then I think that necessitates a change in your approach as to how you’re approaching the research and how much focus you’re having on products and those kinds of things. And I think that’s what we all came to the realization of, which was: now was the time to streamline our AI efforts and focus them more. And the obvious conclusion of that was to do the merger…

It feels like the ChatGPT moment that led to this AI explosion this year was really rooted in the AI being able to do something that regular people could do. I want you to write me an email, I want you to write me a screenplay, and maybe the output of the LLM is a C+, but it’s still something I can do. People can see it. I want you to fill out the rest of this photo. That’s something people can imagine doing. Maybe they don’t have the skills to do it, but they can imagine doing it. All the previous AI demos that we have gotten, even yours, AlphaFold, you’re like, this is going to model all the proteins in the world.

But I can’t do that; a computer should do that. Even a microbiologist might think, “That is great. I’m very excited that a computer can do that because I’m just looking at how much time it would take us, and there’s no way we could ever do it.” “I want to beat the world champion at Go. I can’t do that. It’s like, fine. A computer can do that.” 

There’s this turn where the computer is starting to do things I can do, and they’re not even necessarily the most complicated tasks. Read this webpage and deliver a summary of it to me. But that’s the thing that unlocked everyone’s brain. And I’m wondering why you think the industry didn’t see that turn coming because we’ve been very focused on these very difficult things that people couldn’t do, and it seems like what got everyone is when the computer started doing things people do all the time.

I think that analysis is correct. I think that is why the large language models have really entered the public consciousness, because it’s something the average person, the “Joe Public,” can actually understand and interact with. And, of course, language is core to human intelligence and our everyday lives. I think that does explain why chatbots specifically have gone viral in the way they have. Even though I would say things like AlphaFold (of course I’d be biased in saying this) have actually had the most unequivocally beneficial effects of AI on the world so far, because a million biologists, researchers, and medical researchers have now used AlphaFold. I think that’s nearly every biologist in the world. Every Big Pharma company is using it to advance their drug discovery programs. I’ve had dozens of Nobel Prize-winner-level biologists and chemists talk to me about how they’re using AlphaFold.

So a certain set of the world’s scientists, let’s say, all know AlphaFold, and it has affected and massively accelerated their important research work. But of course, the average person in the street doesn’t even know what proteins are, or why they matter for things like drug discovery. Whereas obviously, for a chatbot, everyone can understand: this is incredible. And it’s very visceral to get it to write you a poem, something everybody can understand and process, and measure against what they themselves are able to do… 

…There are so many decisions I make every day, it’s hard to come up with one now. But I tend to try and plan out, and scenario-plan, many, many years in advance. So I’ll tell you the way I try to approach things: I have an end goal. I’m quite good at imagining things, which is a different skill: visualizing or imagining what a perfect end state would look like, whether that’s organizational or product-based or research-based. And then I work back from the end point and figure out what steps would be required, and in what order, to make that outcome as likely as possible.

So that’s a little bit chess-like, right? In the sense of you have some plan that you would like to get to checkmate your opponent, but you’re many moves away from that. So what are the incremental things one must do to improve your position in order to increase the likelihood of that final outcome? And I found that extremely useful to do that search process from the end goal back to the current state that you find yourself in.

Let’s put that next to some products. You said there’s a lot of DeepMind technology and a lot of Google products. The ones that we can all look at are Bard and then your Search Generative Experience. There’s AI in Google Photos and all this stuff, but focused on the LLM moment, it’s Bard and the Search Generative Experience. Those can’t be the end state. They’re not finished. Gemini is coming, and we’ll probably improve both of those, and all that will happen. When you think about the end state of those products, what do you see?

The AI systems around Google are also not just in the consumer-facing things but under the hood, where you may not realize it. For example, one of the first things we applied our AI systems to was the cooling systems in Google’s enormous data centers, reducing the energy the cooling systems use by nearly 30 percent, which is obviously huge if you multiply that across all of their data centers and computers. So there are actually a lot of things under the hood where AI is being used to improve the efficiency of those systems all the time. But you’re right, the current products are not the end state; they’re actually just waypoints. And in the case of chatbots and those kinds of systems, ultimately, they will become these incredible universal personal assistants that you use multiple times during the day for really useful and helpful things across your daily life.

From what books to read, to recommendations on live events, to booking your travel, to planning trips, to assisting you in your everyday work. And I think we’re still far away from that with the current chatbots, and I think we know what’s missing: things like planning and reasoning and memory, and we are working really hard on those things. And I think what you’ll see in maybe a couple of years’ time is that today’s chatbots will look trivial by comparison to what’s coming in the next few years.

My background is as a person who’s reported on computers. I think of computers as somewhat modular systems. You look at a phone — it’s got a screen, it’s got a chip, it’s got a cell antenna, whatever. Should I look at AI systems that way — there’s an LLM, which is a very convincing human language interface, and behind it might be AlphaFold that’s actually doing the protein folding? Is that how you’re thinking about stitching these things together, or is it a different evolutionary pathway?

Actually, there’s a whole branch of research going into what’s called tool use. This is the idea that these large language models or large multimodal models, they’re expert at language, of course, and maybe a few other capabilities, like math and possibly coding. But when you ask them to do something specialized, like fold a protein or play a game of chess or something like this, then actually what they end up doing is calling a tool, which could be another AI system, that then provides the solution or the answer to that particular problem. And then that’s transmitted back to the user via language or pictorially through the central large language model system. So it may be actually invisible to the user because, to the user, it just looks like one big AI system that has many capabilities, but under the hood, it could be that actually the AI system is broken down into smaller ones that have specializations.

And I actually think that probably is going to be the next era. The next generation of systems will use those kinds of capabilities. And then you can think of the central system as almost a switch statement that you effectively prompt with language, and it routes your query or your question or whatever it is you’re asking it to the right tool to solve that question for you or provide the solution for you. And then it transmits that back in a very understandable way, again using the best interface really: natural language.
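The "central system as a switch statement" idea can be sketched minimally: a central model classifies the query, then dispatches it to a specialist tool and relays the answer back. The classifier and tool names below are hypothetical placeholders for illustration, not real DeepMind components:

```python
# Minimal sketch of a central model routing queries to specialist
# tools. All names here are hypothetical stand-ins, not real APIs.

def classify(query: str) -> str:
    """Stand-in for the central model deciding which tool fits."""
    q = query.lower()
    if "protein" in q:
        return "protein_folding"
    if "chess" in q:
        return "chess_engine"
    return "language_model"  # default: answer with the LLM itself

def route(query: str) -> str:
    """Dispatch the query to the chosen tool and relay its answer."""
    handlers = {
        "protein_folding": lambda q: f"[folding tool] result for: {q}",
        "chess_engine":    lambda q: f"[chess tool] move for: {q}",
        "language_model":  lambda q: f"[LLM] answer to: {q}",
    }
    return handlers[classify(query)](query)

print(route("Fold this protein sequence"))
print(route("What's a good opening in chess?"))
print(route("Summarize this webpage"))
```

In a real system the classifier would itself be the large model and each handler would call another AI system, but the control flow, classify then dispatch then relay, is the same as the switch-statement analogy, and it is invisible to the user, who only sees the final natural-language answer.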

Does that process get you closer to an AGI, or does that get you to some maximum state and you got to do something else?

I think that is on the critical path to AGI, and that’s another reason, by the way, I’m very excited about this new role and actually doing more products and things because I actually think the product roadmap from here and the research roadmap from here toward something like AGI or human-level AI is very complementary. The kinds of capabilities one would need to push in order to build those kinds of products that are useful in your everyday life like a universal assistant requires pushing on some of these capabilities, like planning and memory and reasoning, that I think are vital for us to get to AGI. So I actually think there’s a really neat feedback loop now between products and research where they can effectively help each other…

You’ve signed a letter from the Center for AI Safety — OpenAI’s Sam Altman and others have also signed this letter — that warns against the risk from AI. And yet, you’re pushing on, Google’s in the market, you’ve got to win, you’ve described yourself as competitive. There’s a tension there: needing to win in the market with products and “Oh boy, please regulate us because raw capitalism will drive us off the cliff with AI if we don’t stop it in some way.” How do you balance that risk?

It is a tension. It’s a creative tension. What we like to say at Google is we want to be bold and responsible, and that’s exactly what we’re trying to do and live out and role-model. So the bold part is being brave and optimistic about the amazing, incredible benefits AI can bring to the world, and about helping humanity with our biggest challenges, whether that’s disease or climate or sustainability. AI has a huge part to play in helping our scientists and medical experts solve those problems. And we’re working hard on that and in all those areas. And AlphaFold, again, I’d point to as a poster child for what we want to do there. So that’s the bold part. And then, the responsible bit is to make sure we do that as thoughtfully as possible, with as much foresight as possible, ahead of time.

Try and anticipate, ahead of time, what the issues might be if one were successful. Not in hindsight, as perhaps happened with social media, which is this incredible growth story. Obviously, it’s done a lot of good in the world, but then it turns out 15 years later we realize there are some unintended consequences as well to those types of systems. And I would like to chart a different path with AI. It’s such a profound and important and powerful technology; I think we have to do that with something as potentially transformative as AI. And it doesn’t mean no mistakes will be made. It’s very new; with anything new, you can’t predict everything ahead of time, but I think we can try and do the best job we can.

And that’s what signing that letter was for: just to point out that, while I don’t think it’s likely and I don’t know the timescales, what these systems can do and might be able to do in the limit is something we should consider as we get closer to AGI. We are nowhere near that now. So this is not a question of today’s technologies or even the next few years’, but at some point, given the technology is accelerating very fast, we will need to think about those questions, and we don’t want to be thinking about them on the eve of them happening. We need to use the time now, the next five, 10, however many years, to do the research and the analysis and to engage with various stakeholders, civil society, academia, government, to figure out, as this stuff is developing very rapidly, what the best way is of making sure we maximize the benefits and minimize any risks.

And that includes mostly, at this stage, doing more research into these areas, like coming up with better evaluations and benchmarks to rigorously test the capabilities of these frontier systems.

You talked about tool usage for AI models, you ask an LLM to do something, it goes off and asks AlphaFold to fold the protein for you. Combining systems like that, integrating systems like that, historically that’s where emergent behaviors appear, things you couldn’t have predicted start happening. Are you worried about that? There’s not a rigorous way to test that.

Right, exactly. I think that’s exactly the sort of thing we should be researching and thinking about ahead of time is: as tool use becomes more sophisticated and you can combine different AI systems together in different ways, there is scope for emergent behavior. Of course, that emergent behavior may be very desirable and be extremely useful, but it could also potentially be harmful in the wrong hands and in the hands of bad actors, whether that’s individuals or even nation-states…

There’s the concept of model collapse. That we’re going to train LLMs on LLM-generated data, and that’s going to go into a circle. When you talk about cross-referencing facts, and I think about Google — Google going out in the web and trying to cross-reference a bunch of stuff but maybe all that stuff has been generated by LLMs that were hallucinating in 2023. How do you guard against that?

We are working on some pretty cool solutions to that. I think the answer, and this is an answer to deepfakes as well, is to do some sophisticated, encrypted watermarking that can’t be removed easily or at all, and it’s probably built into the generative models themselves, so it’s part of the generative process. We hope to release that and maybe provide it to third parties as well as a generic solution. But I think the industry and the field need those types of solutions, where we can mark generated media, be that images, audio, or perhaps even text, with some Kitemark that says to the user and future AI systems that these were AI-generated. And I think that’s a very, very pressing need right now for near-term issues with AI like deepfakes and disinformation and so on. But I actually think a solution is on the horizon now.

2. A stock market gift right under your nose – Chin Hui Leong

In my book, the best returns come from owning stocks for the long term. For example, I have owned shares of Apple, Amazon, Booking Holdings, and Intuitive Surgical since 2010. On average, these shares have grown to almost 17 times their original value, turning each dollar invested into nearly US$17 over the past 13 years. The key ingredient here is time. But the trick is knowing which shares to hold.
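As a sanity check on that arithmetic, turning $1 into roughly $17 over 13 years implies a compound annual growth rate of about 24 per cent, which a one-liner confirms:

```python
# Back out the compound annual growth rate (CAGR) implied by
# turning $1 into ~$17 over 13 years.
multiple = 17
years = 13
cagr = multiple ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 24% per year
```

The same formula works in reverse: compounding at that rate for 13 years (`(1 + cagr) ** 13`) recovers the 17x multiple, which is the sense in which time, not any single year's return, does the heavy lifting.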

Ideally, the business behind the stock should exhibit the ability to grow in both good times and bad. When businesses are able to deliver huge increases in earnings over time, your odds of a good outcome increase. Here is your big hint: if companies can perform during a tough economy, it stands to reason that they will do as well, or better, when economic conditions improve. And if they outperform, it is a great recipe for long-term investment returns…

Booking Holdings, which owns popular travel sites such as Booking.com and Agoda, reported revenue and profit growth of over 65 per cent and nearly 149 per cent, respectively, between 2007 and 2009, at the worst of the GFC. Post-GFC, the company outperformed. From 2009 to today, Booking Holdings’ revenue and net profit soared by almost eight-fold and nine-fold, respectively. The shares I bought are up by more than 900 per cent, closely mirroring the profit increase and demonstrating that stock returns followed growth over 13 years.

Likewise, Apple’s iPhone was criticised for being too expensive back in 2007. Yet its sales from 2007 to 2009 (the GFC period) show that the smartphone is far from a discretionary purchase. In fact, the iPhone drove Apple’s revenue and earnings per share up 52 per cent and 60 per cent, respectively, during this tumultuous period. Today, revenue is more than 10 times the 2009 level, and EPS is over 26 times higher. The shares, which I have owned since 2010, are up 21 times, another marker that returns follow actual growth…

… A key reason why I chose this quartet of stocks in 2010 was their strong performance during the difficult GFC period. Today, you have similar conditions. Last year, business growth stalled due to issues ranging from unfavourable exchange rates to supply chain disruptions and rising interest rates. But behind these troubles, you are being gifted real-world data on a select group of businesses that thrived despite the circumstances…

…Said another way, you do not have to guess which companies will do well in bad times; you can sieve through the available data and see for yourself. At the end of this process, you should have a list of potential stocks to buy. This list, I submit, should comprise a superior set of companies with which to start your research. Instead of looking for a needle in a haystack, you will be able to dramatically narrow down your search right off the bat. As far as gifts from the stock market go, that is hard to beat.

3. An Interview with Marc Andreessen about AI and How You Change the World – Ben Thompson and Marc Andreessen

I did want to ask one quick question about that article, Software is Eating the World. The focus of that seemed to be that we’re not in a bubble, which obviously in 2011 turned out to be very true. I wrote an article in 2015 saying we’re not in a bubble. That also turned out to be very true. By 2021, 2022, okay, maybe, but you missed a lot of upside in the meantime, to say the least!

However, there’s one bit in that article where you talk about Borders giving Amazon its e-commerce business, and then you talk about how Amazon is actually a software company. That was certainly true at the time, but I think you can make the case — and I have — that Amazon.com in particular is increasingly a logistics company that is very much rooted in the real world, with a real-world moat that costs billions of dollars to build. You can’t really compete with it: they can drive anyone out of business in the long run by dropping prices and still covering their marginal costs. Now, that doesn’t defeat your point, since all of that is enabled by software and their dominant position came from software, but do you think there is a bit where a physical moat still means more, or is Amazon just an exception to every rule?

MA: You can flip that on its head, and you can basically observe that the legacy car companies basically make that same argument that you’re making as to why they’ll inevitably crush Tesla. Car company CEOs have made this argument to me directly for many years, which is, “Oh, you Californians, it’s nice and cute that you’re doing all this stuff with software, but you don’t understand: the car industry is about the real world. It’s about atoms and it’s about steel and it’s about glass and rubber and it’s about cars that have to last for 200,000 miles and have to function in the snow.” They usually point out, “You guys test your electric self-driving cars in the California weather; wait till you have a car on the road in Detroit. It’s just a matter of time before you software people come to the realization that you’re describing for Amazon, which is that this is a real-world business, and the software is nice, but it’s just a part of it, and this real-world stuff is what really matters.”

There’s some truth to that. Look, the global auto industry in totality still sells a lot more cars than Tesla. Absolutely everything you’re saying about Amazon logistics is correct, but I would still maintain that over the long run the opposite is true, and I would describe it as follows. Amazon, notwithstanding all of their logistics expertise, is still the best software company. Apple, notwithstanding all of their manufacturing prowess and industrial design and all the rest of it, is still the best, or one of the two best, mobile software companies. Then of course there’s Tesla: we’re sitting here today, and Tesla I think is still worth more than the rest of the global auto industry combined in terms of market cap, and I think the broad public investor base is looking forward and saying, “Okay, the best software company is in fact going to win.” Then of course you drive the different cars and you’re like, “Okay, obviously the Tesla is just a fundamentally different experience as a consequence of quite literally being a self-driving car run by software.”

I would still hold to the strong form of what I said in that essay, which is that in the long run, the best software companies win. Part of the problem is that it’s really hard to compete against great software with mediocre software, because there comes a time when it really matters, and the fundamental form and shape of the thing that you’re dealing with changes. You know this: are you going to use the video recorder app on your smartphone, which is software, or are you going to use an old-fashioned camcorder that in theory comes with a 600-page instruction manual and has 50 buttons on it? At some point the software wins, and I would still maintain that that is what will happen in many markets…

What is the case for AI as you see it?

MA: Well, this is part of why I know there’s hysterical panic going on, because basically the people who are freaking out about AI never even bothered to stop and basically try to make the positive case, and just immediately assumed that everything is going to be negative.

The positive case on AI is very straightforward. Number one, AI is a technical development: it has the potential to grow the economy and do all the things that technology does to improve the world. But very specifically, the thing about AI is that it is intelligence, and the thing about intelligence, as we know from the history of humanity, is that it is a lever on the rest of the world, a very fundamental way to make a lot of things better at the same time.

We know that because in human affairs, human intelligence, we know, across thousands of studies for a hundred years, increases in human intelligence make basically all life outcomes better for people. So people who are smarter are able to better function in life, they’re able to have higher educational attainment, they’re able to have better career success, they have better physical health. By the way, they’re also more able to deal with conflict, they’re less prone to violence, they’re actually less bigoted, they also have more successful children, those children go on to become more successful, those children are healthier. So intelligence is basically this universal mechanism to be able to deal with the complex world, to be able to assimilate information, and then be able to solve problems.

Up until now, our ability as human beings to engage in the world and apply intelligence to solve problems has been, of course, limited to the faculties that we have with these kind of partial augmentations, like in the form of calculating machines. But fundamentally, we’ve been trying to work through issues with our own kind of inherent intelligence. AI brings with it the very big opportunity, which I think is already starting to play out, to basically say, “Okay, now we can have human intelligence compounded, augmented with machine intelligence”. Then effectively, we can do a forklift upgrade and effectively make everybody smarter.

If I’m right about that and that’s how this is going to play out, then this is the most important technological advance with the most positive benefits, basically, of anything we’ve done probably since, I don’t know, something like fire, this could be the really big one…

But if it’s so smart and so capable, then why isn’t it different this time? Why should it be dismissed as another sort of hysterical reaction to say that there’s this entity coming along? I mean, back in the day, maybe the chimps had an argument about, “Look, it’s okay if these humans evolve and they’re smarter than us”. Now they’re stuck in zoos or whatever it might be. I mean, why would not a similar case be made for AI?

MA: Well, because it’s not another animal, and it’s not another form of human being, it’s a machine. This is what’s remarkable about it, it’s machine intelligence, it’s a combination of the two. The significance of that, basically, is like your chimp analogy, or basically human beings reacting to other human beings, or over time in the past when two different groups of humans would interact and then declare war on each other, what you were dealing with was you were dealing with evolved living species in each case.

That evolved part there is really important because what is the mechanism by which evolution happens, right? It’s conflict. So survival of the fittest, natural selection: the whole point of evolution is to bake off different organisms against each other, originally one-cell organisms, then two-cell organisms, then ultimately animals, and then ultimately people. The way that evolution happens is basically a big fight and then, at least in theory, the stronger of the organisms survives.

At a very deep genetic level, all of us are wired for combat. We’re wired for conflict, we’re wired for a high level of, let’s say, if not a high level of physical violence, then at least a high level of verbal violence and social and cultural conflict. I mean, machine intelligence is not evolved. The term you might apply is intelligent design, right?

(laughing) Took me a second on that one.

MA: You remember that from your childhood? As do I. Machine intelligence is built and it’s built by human beings, it’s built to be a tool, it’s built the way that we build tools, it’s built in the form of code, it’s built in the form of math, it’s built in the form of software that runs on chips. In that respect, it’s a software application like any other. So it doesn’t have the four billion years of conflict driven evolution behind it, it has what we design into it.

That’s where I part ways from, again, the doomers, where from my perspective, the doomers kind of impute that it’s going to behave as if it had come up through four billion years of violent evolution, when it hasn’t; we have built it. Now, it can be used to do bad things, and we can talk about that. But it, itself, does not have inherent in it the drive for conquest and domination that living beings do.

What about the accidental bad things, the so-called paperclip problem?

MA: Yeah, so the paperclip problem is a very interesting one, because it contains what I think is a logical fallacy right at the core of this whole argument; the term the doomers use for it is orthogonality.

So for the paperclip argument to work, you have to believe two things at the same time. You have to believe that you have a super intelligent AI that is so intelligent, and creative, and flexible, and devious, a genius-level, super-genius-level conceptual thinker, that it’s able to basically evade all controls that you would ever want to put on it. It’s able to circumvent all security measures, it’s able to build its own energy sources, it’s able to manufacture its own chips, it’s able to hide itself from attack, it’s able to manipulate human beings into doing what it wants to do; it has all of these superpowers. Whenever you challenge the doomers on the paperclip thing, they always come up with a reason why the super intelligent AI is going to be so smart that it’s able to circumvent any limitations you put on it.

But you also have to believe that it’s so stupid that all it wants to do is make paperclips, right? There’s just a massive gap there, because if it’s smart enough to turn the entire world, including atoms and the human body into paperclips, then it’s not going to be so stupid as to decide that’s the only thing that matters in all of existence. So this is what they call the orthogonality argument, because the sleight of hand they try to do is they try to say, well, it’s going to be super genius in these certain ways, but it’s going to be just totally dumb in this other way. That those are orthogonal concepts somehow.

Is it fair to say that yours is an orthogonal argument though? Where it’s going to be super intelligent, even more intelligent than humans in one way, but it’s not going to have any will or drive because it hasn’t evolved to have it. Could this be an orthogonality face-off in some regards?

MA: Well, I would just say I think their orthogonality theory is a little bit like the theory of false consciousness in Marxism. You have to believe that this thing is not going to operate according to any of the ways that you would expect normal people or things to behave.

Let me give you another thing. So one sort of thing they’ll say, again as part of orthogonality, is, “Well, it won’t be doing moral reasoning; it’ll be executing its plan for world conquest, but it will be incapable of doing moral reasoning because it’ll just have the simple-minded goal”. Well, you can actually disprove that today by going to any LLM of any level of sophistication and doing moral reasoning with it. Sitting here, right now, today, you can have moral arguments with GPT, and with Bard, and with Bing, and with every other LLM out there. Actually, they are really good at moral reasoning; they are very good at arguing through different moral scenarios, and very good at actually having this exact discussion that we’re having…

...Again, just cards on the table, I mostly agree with you, so I’m putting up a little bit of a defense here, but I recognize it’s probably not the best one in the world. But I see there being a few candidates for being skeptical of the AI doomers.

First, you’ve kind of really jumped on the fact that you think the existential risk doesn’t exist. Is that the primary driver of your skepticism and some would say dismissal of this case? Or is it also things like another possibility would be AI is inevitable, it’s going to happen regardless, so let’s just go forward? Or is there sort of a third one, which is that any reasonable approach, even if there were risks — look at COVID, it’s not doable. We can’t actually manage to find a middle path that is reasonable and adjust accordingly, it’s either one way or the other. Given that and your general skepticism, that’s the way it has to go.

Are all three of those working in your argument here, or is it really just you don’t buy it at all?

MA: So I think the underlying thing is actually a little bit more subtle, which is I’m an engineer. So for better or for worse, I was trained as an engineer. Then I was also trained in science in the way that engineers are trained in science, so I never worked as a scientist, but I was trained in the scientific method as engineers are. I take engineering very seriously, and I take science very seriously, and I take the scientific method very seriously. So when it comes time to engage in questions about what is a technology going to do, I start by going straight to the engineering, which is like, “Okay, what is it that we’re dealing with here”?

The thing is, what we’re dealing with here is something that you’re completely capable of understanding. What it is, is math and code. You can buy many textbooks that will explain the math and code to you; they’re all being updated right now to incorporate the transformer algorithm, and there are books already out on the market. You can download many how-to guides on how to do this stuff. It’s lots of matrix multiplication, there’s lots of linear algebra involved, there are various algorithms; these are machines, and you can understand them as machines.
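The “matrix multiplication and linear algebra” can be made concrete. Below is a minimal, framework-free sketch of single-head scaled dot-product attention, the core operation of the transformer algorithm mentioned above; the toy matrices are purely illustrative:

```python
import math

def matmul(A, B):
    """Plain-Python matrix multiply: rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    """Numerically stable softmax over one row of scores."""
    exps = [math.exp(x - max(row)) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(K[0])                                   # key dimension
    scores = matmul(Q, [list(c) for c in zip(*K)])  # Q @ K^T
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)                       # weighted mix of value rows

# Two toy tokens with 2-dimensional queries, keys, and values:
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
# Each output row is a softmax-weighted average of the rows of V.
```

Stacking many of these layers, with learned Q/K/V projections, is essentially the machine being described: nothing more exotic than repeated matrix products.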

What I would think of is there’s these flights of fancy that people then launch off of where they make extrapolations, in some cases, literally billions of years into the future. I read this book Superintelligence, which is the one that is kind of the catechism urtext for the AI doomers. [Nick Bostrom] goes from these very general descriptions of possible forms of future intelligence to these extrapolations of literally what’s going to happen billions of years in the future. These seem like fine thought experiments, this seems like a fine way to write science fiction, but I don’t see anything in it resembling engineering.

Then the other thing that’s really striking is the absence of science. So what do we know about science? We know that science involves, at its core, proposing a hypothesis and then a way to test the hypothesis such that you can falsify it if it’s not true. You’ll notice that in all these books and all these materials, as far as I’ve been able to find, there are no testable hypotheses, there are no falsifiable hypotheses, and there are not even metrics to evaluate how you’re doing against your hypothesis. You just have these incredible extrapolations.

So I read this stuff and I’m like, “Okay, fine, this isn’t engineering”. They seem very uninterested in the details of how any of this stuff works. This isn’t science; there are no hypotheses. So it reads to me as pure speculation. Speculation is fun, but we should not make decisions in the real world just based on speculation.

What’s the testable hypothesis that supports your position? What would you put forward that, if something were shown to be true, then that would change your view of the matter?

MA: Yeah, I mean, we have these systems today. Are they seizing control of their computers and declaring themselves emperor of earth?

I mean, I did have quite the encounter with Sydney.

MA: (laughing) How’s it going? Yeah, well, there you go. Right? Well, so look, there’s a meme I really like on this. I’ll make the sin of trying to explain a meme, but it’s the eldritch horror from outer space.

I put a version of that in my article about Sydney.

MA: The kicker is that the evil shoggoth, the AI doomsaying thing, is mystified why the human being isn’t afraid of it. Then the human being’s response is, “Write this email”.

So again, this is the thing: what do we do? What do we do when we’re engineers and scientists? We build the thing, and we test the thing, and we figure out ways to test the thing; we figure out whether we like how the thing is working or not. We figure out along the way what the risks are, and then we figure out the containment methods for those risks.

This is what we’ve done with every technology in human history. The cumulative effect of this is the world we live in today, which is materially an incredibly advanced world as compared to the world that our ancestors lived in.

Had we applied the precautionary principle or any of the current trendy epistemic methods to evaluating the introduction of prior technologies ranging from fire and the wheel all the way to gunpowder and microchips, we would not be living in the world we’re living in today. We’d be living in a much worse world, and child mortality would be through the roof and we’d all be working these god awful physical labor jobs and we’d be like, “Wow, is this the best we can do?” I think our species has actually an excellent track record at dealing with these things, and I think we should do what we do, we should build these things and then we should figure out the pros and cons…

Was crypto a mistake, and I mean both in terms of the technology, but also in terms of how closely a16z became tied to it reputationally? Is there a bit where you wish you had some of those reputation points right now for your AI arguments, where maybe that’s more important to human flourishing in the long run?

MA: Yeah, that idea that there’s some trade-off there: I don’t think it works that way. This is a little bit like the topic of political capital in the political system. If you talk to politicians, there’s always this question of political capital, which is: do you gain political capital by conceding on things, or do you gain political capital by actually exercising political power? Right? Are you better off conserving political power, or actually just putting the throttle forward and being as forceful as you can?

I mean, look, I believe whatever political power we have, whatever influence we have is because we’re a hundred percent on the side of innovation. We’re a hundred percent on the side of startups, we’re a hundred percent on the side of entrepreneurs who are building new things. We take a very broad brush approach to that. We back entrepreneurs in many categories of technology, and we’re just a hundred percent on their side.

Then really critically, we’re a hundred percent on their side despite the waxing and waning of the moon. My experience with all of these technologies, including the Internet and computers and social media and AI and every other thing we can talk about, biotech, is that they all go through these waves. They all go through periods in which everybody is super excited and extrapolates everything to the moon, and they all go through periods where everybody’s super depressed and wants to write everything off. AI itself went through decades of recurring booms and winters. I remember AI went through a big boom in the 1980s, then crashed super hard in the late eighties, and was almost completely discredited by the time I got to college in ’89. There had been a really big surge of enthusiasm before that.

My view is like, “We’re just going to put ourselves firmly on the side of the new ideas, firmly on the side of the innovations. We’re going to stick with them through the cycles”. If there’s a crypto winter, if there’s an AI winter, if there’s a biotech winter, whatever, it doesn’t really matter. By the way, it also maps to the fundamentals of how we think about what we do, which is that we are trying to back the entrepreneurs with the biggest ideas, building the biggest things. And to the extent that we succeed in doing that, building big things takes a long time.

4. The private credit ‘golden moment’ – Robin Wigglesworth

By ‘private credit’ or ‘private debt’, we’re mostly (but not only) talking about direct loans between an investment fund and a corporate borrower, usually a small or mid-sized company.

These sometimes struggle to get traditional banks interested in their custom — for big banks it’s more attractive to lend to big blue-chip companies that you can also sell M&A advice, derivatives and pension plan management etc — but remain too small to tap the bond market, where you realistically need to raise at least $200mn in one gulp, and ideally over $500mn.

Private credit funds therefore often depict themselves as helping bread-and-butter ma-and-pa small businesses that mean ol’ banks are shunning. In reality, most of the lending is done to private equity-owned businesses, or as part of a distressed debt play. So it can arguably be better seen as a rival (or complement) to the leveraged loan and junk bond markets…

…As you can see from the fundraising bonanza, private credit has morphed from a cottage business mostly focused on distressed debt into a massive business over the past decade. And after starting out overwhelmingly American it is beginning to grow a little in Europe and Asia as well.

Morgan Stanley estimates the overall assets under management at about $1.5tn (of which about $500bn was money raised but not yet lent, aka ‘dry powder’ as the industry loves to call it).

That makes it bigger than both the US high-yield and leveraged loan markets for the first time, says Cyprys…

…Why has it been growing? Well, for investors it is the promise of both smoother and stronger returns, in an era where even the high-yield bond market for a long time made a mockery of its moniker. Remember when some European junk-rated companies could borrow at negative rates? Happy days.

Direct loans are also more attractive when interest rates are rising, because they are floating rate, as opposed to the fixed rates that public market bonds pay. At the same time, since these are private, (mostly) untraded assets, their value doesn’t move around as much as that of leveraged loans or traditional bonds…
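A stylised example of the floating-rate point, with purely hypothetical numbers: a direct loan paying a 4 per cent spread over a rising base rate, versus a bond paying a 6 per cent fixed coupon on the same $100 of principal:

```python
# Hypothetical four-period comparison of lender income on $100 of principal.
base_rate_path = [0.01, 0.02, 0.035, 0.05]   # assumed rising base rates
spread, fixed_coupon, principal = 0.04, 0.06, 100

# Floating-rate loan: coupon resets each period to base rate + spread.
floating_income = sum(principal * (base + spread) for base in base_rate_path)
# Fixed-rate bond: same coupon every period regardless of rates.
fixed_income = len(base_rate_path) * principal * fixed_coupon

assert floating_income > fixed_income   # floating overtakes fixed as rates rise
```

The flip side, which the piece returns to below, is that the borrower’s interest bill rises in lockstep with the lender’s income, which is where the “crush the company” risk comes from.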

…In many respects the growth of private credit is a healthy development. It is arguably far better that an investment fund with long-term locked-up capital takes on the associated credit risk than a traditional deposit-taking commercial bank.

But as we wrote earlier this year, there are a lot of reasons to be wary of the current private credit boom. Things have basically gone a bit nuts as money has gushed in.

Using data on business development companies — publicly listed direct lenders, often managed by one of the private capital industry’s giants — Goldman has put some meat on one of our skeleton arguments: floating rate debt is great for investors, but only up to a point.

At some point the rising cost of the debt will crush the company, and we may be approaching that point.

UBS predicts that the default rate of private credit borrowers will spike to a peak of 9-10 per cent early next year as a result, before falling back to about 5-6 per cent as the Federal Reserve is forced into cutting rates.

Default rates like that might seem manageable. It’s hardly Creditpocalypse Now. But the problem is that, as Jeff Diehl and Bill Sacher of Adams Street — a US private capital firm — wrote in a recent report, loss avoidance is the name of the game in private credit:

Benign economic and credit conditions over the last decade have allowed many managers to avoid losses, leading to a narrow return dispersion . . . The benign climate has changed with higher rates, wider credit spreads and slowing revenue growth, all of which is likely to put pressure on many managers’ portfolios…

…And to be fair, as our colleague Mark Vandevelde wrote in a fab recent column, the broader danger isn’t really that there’s been silly lending going on. These are investors and asset managers that (mostly) know what they’re doing, in an area people know is risky. People will lose money, the world will keep turning etc.

The issue, as Mark writes, is that private credit firms are now big and extensive enough to plausibly become shock conduits between investors, borrowers, and the broader economy:

In short, the biggest risks inherent in the rise of private credit are the ones that critics most easily miss. They arise, not from the misbehaviour of anyone on Wall Street, but from replacing parts of an imperfect banking system with a novel mechanism whose inner workings we are only just discovering.

This may seem like vague hand-waving by journalists, but the reality is that the complex interlinkage of private credit, private equity and broader debt markets is opaque. As the Federal Reserve noted in its latest financial stability report:

Overall, the financial stability vulnerabilities posed by private credit funds appear limited. Most private credit funds use little leverage and have low redemption risks, making it unlikely that these funds would amplify market stress through asset sales. However, a deterioration in credit quality and investor risk appetite could limit the capacity of private credit funds to provide new financing to firms that rely on private credit. Moreover, despite new insights from Form PF, visibility into the private credit space remains limited. Comprehensive data are lacking on the forms and terms of the financing extended by private credit funds or on the characteristics of their borrowers and the default risk in private credit portfolios.

5. Debt: The First 5,000 Years – Johan Lunau

Economists claim that we started off with barter, moved to coinage, and only then discovered the infinite wonders of credit. Each iteration in this supposedly linear evolution is presented as a logical solution to a common problem.

  1. Whilst barter, the original system, did allow for the exchange of goods and services, it required a double coincidence of wants: I need to have something you want, and you need to have something I want. If there’s no match, there’s no exchange.
  2. It therefore made sense to store things that everybody wanted, making transactions much more flexible and frequent (commodities like dried cod, salt, sugar, etc.). But certain issues remained… what if the goods were perishable? And how could transactions far from home be made practical?
  3. Enter precious metals, which are durable, portable, and divisible into smaller units. As soon as central authorities began to stamp these metals, their differing characteristics (weight, purity) no longer had to be verified in each transaction, and they became the official currencies of specific national economies or trade regions.
  4. Banks and credit followed thereafter, as the final step.

However, Graeber’s main argument is that the above timeline, as intuitive as it is, is wrong. Specifically, he posits that we actually started off with credit, then transitioned to coinage, and that societies resort to barter only when an economy or central authority collapses (as with the fall of the Soviet Union). Moreover, he writes that this progression was chaotic rather than linear; there were constant rise-and-fall cycles of credit and coinage. It’s obvious that this account is much, much harder to teach at universities, lacking the elegant simplicity of the version that is commonly presented in textbooks.

In fact, to the frustration of economists, it appears there is no historical evidence for a barter system ever having existed at all, except among obscure peoples like the Nambikwara of Brazil and the Gunwinggu of Western Arnhem Land in Australia. And even then, it takes place between strangers of different tribes in what to us are bizarre ceremonies.

However, there is evidence for widespread debt transactions as far back as 3,500 BC in Mesopotamia, in what is now modern Iraq. Merchants would use credit to trade, and people would run up tabs at their local alehouses. We know this because Sumerians would often record financial dealings in cuneiform (whose decipherment kicked off in the 1800s) on clay tablets known as bullae, which were dug up by archaeologists.

And whilst Sumeria did have a currency (the silver shekel), it was almost never used in transactions. Instead, it was a simple unit of account for bureaucrats. 1 shekel was set as the equivalent of 1 bushel of barley and divided into 60 minas, each of which was equal to 1 portion of barley, on the principle that temple labourers worked 30 days a month and received 2 rations of barley each day. Though debts were often recorded in shekels, they could be paid off in any other form, such as barley, livestock, and furniture. Since Sumeria is the earliest society about which we know anything, this discovery alone should have resulted in a revision of the history of money. It obviously didn’t…

…As stated, Graeber wrote that history is marked by flip-flop cycles of credit and coinage. But the question is, why? Likely because of cycles of war and peace.

“While credit systems tend to dominate in periods of relative social peace, or across networks of trust (…), in periods characterised by widespread war and plunder, they tend to be replaced by precious metal”.

The reason for this is twofold. Unlike credit, gold and silver can be stolen through plunder, and in transactions they demand no trust, except in the characteristics of the precious metal itself. And soldiers, who are often constantly travelling with a fair probability of death, are the definition of an extremely bad credit risk. Who would lend to them? Armies typically created entire marketplaces around themselves.

“For much of human history, then, an ingot of gold or silver, stamped or not, has served the same role as the contemporary drug dealer’s suitcase of unmarked bills: an object without a history, valuable because one knows it will be accepted in exchange for other goods just about anywhere, no questions asked.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Intuitive Surgical, Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 July 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve regularly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 09 July 2023:

1. Intellectual Laziness – Philo

The collapse of General Electric stands apart. GE was the bluest of the blue-chips: descended from Thomas Edison and J.P. Morgan, it was one of the original twelve components of the Dow in 1896, and grew to become one of the leading technology giants of the early 20th century. After WWII, GE evolved into an industrial behemoth with dominant positions in a dizzying array of electricity-adjacent markets, from jet engines and turbines to light bulbs and home appliances.

In the 1980s, GE ascended to new heights. Jack Welch took the reins as CEO in 1981, and he established GE as a major player in media and financial services while expanding GE’s leadership positions in its industrial markets. For most of the 1990s and 2000s, GE was the most valuable company in America, with a valuation topping out at over $1 trillion (as measured in current dollars). While GE had its skeptics and critics at the time, it was widely seen as a corporate paragon, regularly named by Fortune as the most admired company in the world. Welch was regarded as a management guru, and his underlings were routinely poached to become CEOs at other Fortune 500 companies.

And then, a few years ago, it all unraveled in spectacular fashion. Much of the supposed success from the Welch era of the 1980s and 1990s proved to be illusory, the product of temporary tailwinds and aggressive accounting. GE’s fortunes worsened under the reign of Welch’s handpicked successor, Jeff Immelt, who took over in 2001. Immelt struggled to cope with the problems he inherited, which were compounded by the 2008 financial crisis and major management missteps of his own. In 2017, when the extent of GE’s problems became clear, GE’s stock nose-dived, and Immelt was pushed out…

…Jack Welch had most of the traits we typically associate with a great executive. He was incredibly smart (earning his PhD in chemical engineering in only three years), he was demanding of his subordinates, and he worked tirelessly. He had deep operating experience, he was willing to buck convention, and he produced quantifiable results. He was charismatic, ambitious, and a world-class marketer and publicist. And yet, he will forever be remembered as the father of the biggest corporate disaster in American history…

…The story of the fall of GE is worthy of an authoritative book, and we looked at a pair of early entries a couple of years ago – Lights Out, written by the WSJ journalists who covered its fall, and Hot Seat, Jeff Immelt’s memoir.

Power Failure, weighing in at nearly 800 pages, is the most ambitious yet. The author, William Cohan, did an early-career stint as a junior analyst at GE Capital in the 1980s, before becoming an investment banker and then a business writer, putting him in a unique position to tell the GE story.

What sets Cohan’s effort apart is that he got almost everybody to talk to him for his book. He managed to interview both Jack Welch (before he passed away in 2020) and Jeff Immelt, and many former and current senior GE executives as well. Dozens of GE critics, counterparties, and journalists also weigh in throughout…

…Power Failure also doesn’t really offer an overarching theory of why GE failed. It lists many different things that went wrong at GE — bad management, bad acquisitions, bad incentives, bad accounting, bad luck — but almost all companies suffer from some of these issues without running into a GE-scale disaster. Maybe the failure of GE was the result of an unlucky confluence of individual problems, but it feels like for a group of smart, hard-working people to produce such an exceptionally catastrophic result, there must be a larger lesson to be drawn.

One possible clue comes from the story of David Cote, a star GE finance executive who rose to become the head of the Appliances division in the 1990s, and was one of five early candidates to succeed Jack Welch as the CEO of GE. However, he was eliminated before the three finalists were chosen, and he was asked to leave GE. It is suggested that Cote was doomed by the divisional assignment he drew; the finalists were the ones who had been assigned to oversee GE’s crown jewels, while he was stuck trying to fix a basket case.

Cote eventually landed a position in 2002 as the CEO of Honeywell, a much smaller industrial conglomerate – Cohan at one point refers to it as a “mini-GE”. Honeywell had been run since 1991 by Larry Bossidy, who before then had spent his career as a top executive at GE, a close associate of Jack Welch…

…Cote had an incredibly successful run at Honeywell, leading it until his retirement in 2017. While GE foundered, Honeywell soared. A $1,000 investment in Honeywell in 2003 would be worth over $9,000 today, while the same investment in GE would now be worth only $450. Remarkably, Honeywell managed to surpass GE in overall value as well: Honeywell’s current market capitalization is $140 billion, while GE is now worth less than $90 billion. GE is slated to be broken up, but as it stands today, is nothing more than a mini-Honeywell.

This would seem to be the perfect natural experiment. A GE cast-off takes over a small company run by Jack Welch’s former right-hand man, and turns it around and surpasses GE. What did Cote do so differently from Welch, Immelt, and Bossidy, to get such a spectacular result?…

…What is Cote’s diagnosis of the root problems at Honeywell? Cote opens the book by telling the story of an internal meeting at the beginning of his tenure, a business review of Honeywell’s Aerospace division. The head of Aerospace was steeped in the old culture, and had even been a candidate for the CEO job that Cote won. The meeting does not start well:

We sat down in a conference room so that team members could present their strategic plan to me. A copy of the plan had been placed on the table facing each seat. Flipping through mine, I saw that it was thick–maybe 150 pages long, full of charts and tables. Uh oh, I thought, not good. I had found so far at Honeywell that executives and managers often made presentations far longer than necessary, overwhelming audience members with facts, figures, and commentary to preempt sharp, critical questioning.

Nevertheless, Cote interrupts them with sharp, critical questions. The Aerospace team responds with annoyance — they had planned to put on a show and receive a pat on the back — but Cote interrogates them about the root cause of the $800 million in cost overruns on their biggest project. The team eventually relents and agrees to probe the root causes of their biggest issues, and they turn the ship around. Cote concludes (emphasis mine):

What I learned, to my chagrin, was that Aerospace had become adept at lying to itself, shoehorning costs here and there into a budget without acknowledging them openly. This put enormous strain on the organization, which then had to patch together aggressive bookkeeping and special deals with customers and others, to make its goals. A dysfunctional approach if I’d ever seen one.

Cote says that this approach was pervasive at Honeywell:

Lacking any drive to think deeply about their businesses, and unchallenged by leadership to do so, teams held meetings that were essentially useless, their presentations clogged up with feel-good jargon, meaningless numbers, and analytic frameworks whose chief purpose was to hide faulty logic and make the business look good. When you did a bit of digging, you found that most executives didn’t understand their businesses very well, or even at all.

Cote defines this as intellectual laziness. It is the tendency of organizations to “juke the stats” and lie to themselves instead of diagnosing and solving root problems. This kind of anecdote is everywhere in Power Failure; recall Steve Burke’s appraisal that GE “never had the intellectual curiosity or the drive” to understand and manage NBCU…

…GE Capital was central to GE’s ability to manipulate reported earnings. Accounting rules allow a company to book a profit whenever they sell an asset for more than they paid for it. In the course of their normal business, GE Capital owned hundreds of billions of dollars of assets, like bonds and office buildings and parking lots (which they funded with short-term and long-term borrowings). Over time, real assets tend to appreciate, at least in nominal terms. Whenever GE was having a bad quarter, they would sell some of these appreciated assets–say, an office building that was bought decades ago for $10 million that was now worth $20 million–and report the $10 million accounting profit as a part of regular earnings, to compensate for the earnings shortfall from the core business. As GE Capital CEO Gary Wendt put it in Power Failure:

I always had a lot of [asset sales] available for the quarter. I had to because I knew [Jack Welch] was going to call up and say, “I need another $1 million or another $2 million or whatever,” and so I’d go over to [GE Capital CFO James] Parke and I’d say, “Okay, let’s do this one and this one.” Making your earnings was just life to us.

This kind of one-time accounting gain from asset sales is fundamentally different in nature from operating profits from selling jet engines and power turbines. The $20 million office building was already worth $20 million before GE sold it, despite being on the books for $10 million; selling it converts it to cash but does not make shareholders any wealthier (in fact, by triggering a tax bill, it can make them worse off), despite the accounting profit that gets booked. Bundling these kinds of accounting gains with normal operating results only serves to obscure the true performance of the business from investors.
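To make that distinction concrete, here is a small Python sketch of the office-building example above. The 21% tax rate is an assumption added purely for illustration; the point is only that the booked “profit” and the change in shareholder wealth can move in opposite directions.

```python
# Illustrative sketch (building figures from the passage; tax rate assumed):
# selling an appreciated asset books an accounting gain, but it does not
# make shareholders any wealthier.

book_value = 10_000_000    # office building carried on the books at cost
market_value = 20_000_000  # what the building is actually worth today
tax_rate = 0.21            # hypothetical corporate tax rate, for illustration

# Before the sale, shareholders already effectively own a $20M building.
wealth_before = market_value

# The sale books a $10M "profit" that flows into reported earnings...
accounting_gain = market_value - book_value

# ...but after paying tax on the gain, shareholders hold less than before.
tax_bill = accounting_gain * tax_rate
wealth_after = market_value - tax_bill

print(f"Reported accounting gain:  ${accounting_gain:,.0f}")
print(f"Shareholder value change: -${wealth_before - wealth_after:,.0f}")
```

The reported gain is $10 million, yet shareholder value falls by the $2.1 million tax bill, which is the passage’s point about bundling such gains into operating results.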

Regardless, most of the senior GE executives who talked to Cohan continued to stand behind the practice of earnings smoothing:

Over lunch at a Connecticut pub, Denis Nayden, who was one of Wendt’s successors at GE Capital, also defended the practice of harvesting GE Capital assets as needed. “What’s the job of a public company?” he asked rhetorically. “Produce earnings for shareholders.”

“The job of a public company is to produce earnings for shareholders” is a hell of a thing for the former chairman of GE Capital to be saying after the collapse of GE. If you ask GE’s investors, they would say the job of a public company is to make money for shareholders. GE was among the best at consistently “producing earnings” for shareholders; they did so for decades. They were just abysmal at making money. 

There is a plethora of ways to produce short-term earnings without making money, and GE somehow seemed to engage in all of them. You can sell appreciated assets to record an accounting profit. You can overpay for assets with high current earnings and poor long-term prospects. You can sell power equipment to Angola on credit, with little hope of ever getting paid in cash. You can book immediate paper profits from the long-tail insurance policies you sell today, and then find out two decades later that your assumptions were too optimistic and you have to come up with $15 billion of cash to plug the gap. There are no magic metrics, and GAAP earnings are as subject to Goodhart’s Law as any other measure.

According to Power Failure, almost every time GE made a major decision that destroyed shareholder value, the obsession with manipulating earnings was front and center in the thought process. GE lost a lot of money in insurance, but why was a manufacturing company in the insurance business in the first place? Well, insurance companies offer a lot of accounting leeway, in terms of the way reserves are taken and assets are sold for profit, and could act as “shock absorbers” that let Jack Welch report smooth earnings when other divisions stumbled.

Why did GE Capital recklessly allow itself to become dependent on funding from short-term commercial paper, a practice that would almost bankrupt it in 2008? Well, short-term borrowing lowers interest expense, which boosts short-term earnings.

Why did GE buy a subprime mortgage broker in 2004? They had just spun off their insurance business, and Immelt felt they needed to replace the earnings that the insurance business had previously generated. 

Why did GE keep expanding GE Capital? Well, it was a good way to increase earnings. Why didn’t GE sell out of noncore businesses like real estate and media when valuations were sky-high in the mid-00s? GE didn’t want to lose the earnings those divisions produced. The catastrophic 2015 acquisition of Alstom? Immelt thought the synergies would definitely increase earnings. The mistimed $40 billion stock buyback in 2015? Jeff Immelt decided on a $2 per share earnings target, and wanted to do anything he could to hit that goal.  Never in Power Failure does it seem like GE management gave any thought to shareholder value when making major decisions: it was always earnings, earnings, earnings.

Even putting aside the obsession with reported earnings, GE’s culture seems to have been generally lacking in intellectual rigor. GE’s strategies were supported by narratives that sounded compelling at a superficial level, but fell apart under any kind of scrutiny.

A classic example: Jack Welch liked to tell everyone that his brilliant insight about expanding into finance was that it had higher revenue per employee than industrial manufacturing, thus it must be a better business to be in. Of course, that is nonsense: there is no reason to expect there to be any relationship between revenue per employee and return on invested capital.

Welch told this story even after GE learned this lesson the hard way in the 1980s, overpaying to acquire Kidder Peabody, a venerable investment banking firm (investment banking being perhaps the highest revenue per employee business that exists), a deal that was an endless source of trouble, and ultimately led to a $2 billion loss when GE finally got rid of it in 1995. (Cohan discovers when talking to a former director that Welch managed to prevent this massive loss from affecting reported earnings by raiding the reserves of the insurance business.)

Return on invested capital is mostly determined by factors like barriers to entry and sustainable competitive advantage, which GE’s industrial businesses had in spades but which GE Capital completely lacked — after all, money is a commodity. After the financial crisis, GE Capital’s return on invested capital collapsed not because revenue per employee declined, but because GE Capital’s lenders and regulators came to understand the true risk inherent in the business, and demanded higher rates, lower leverage, and closer oversight.

As GE placed no value on intellectual rigor, it is no surprise that they ended up promoting executives on the basis of polish and storytelling ability. So it was that when it came time to pick a new CEO, Welch elevated Jeff Immelt, a slick-talking salesman with little understanding of GE’s businesses and little patience for details, and dismissed David Cote, who would go on to have so much success at Honeywell. 

It is not clear that GE’s decision-making process was any worse under Immelt than it was under Welch. Immelt would be skewered by accusations that he encouraged “success theater”, a culture where executives never confronted root problems and pretended everything was going well, but the culture of extreme intellectual laziness certainly dated back to his predecessor. In fact, Welch’s best-selling autobiography was subtitled “Straight from the Gut”.

It would be technically accurate to state that the dramatic collapse of GE resulted from a perfect storm of mistakes — wrong CEO, bad investments, strategic missteps, operational snafus. But underlying all of those seemingly unrelated mistakes was one thing: this culture of intellectual laziness, the willingness to juke the stats and tell comforting stories rather than diagnose and solve root problems. GE failed to create shareholder value because they didn’t really try to create shareholder value; they were content to be able to report some shiny meaningless numbers and a pleasant narrative…

…At this point, we have to ask: how does one identify management teams that demand intellectual rigor, and avoid management teams that are intellectually lazy?

The answer is simple, but not easy. In each example we presented here, the intellectually lazy managers are actually initially exposed when they present their story to a knowledgeable audience. To be sure, they are able to assemble a narrative that sounds convincing to a layman, peppered with vanity metrics and impenetrable business-speak.

However, the narrative is usually all form and no substance, pure business theater. It leans heavily on rhetorical tricks: accounting chicanery employed to meet previously announced financial targets might be rationalized as “exceptional dedication to meeting our public commitments”. (The implication being that if you don’t fudge the numbers, maybe you’re just the type of person that doesn’t take their commitments seriously.)

Nonsense axioms are invented out of thin air – recall the continued insistence of former GE executives that companies must consistently announce growing earnings, in the face of the evidence that most successful companies did no such thing.

Then there is the midwit appeal to complexity: anyone who argues that the narrative is a convoluted, illogical mess is accused of being an ignorant simpleton who is incapable of grasping such sophistication and brilliance.

The intellectually lazy narrative always contains these sorts of logical gaps. When confronted about these inconsistencies, managers respond with plausible-sounding non sequiturs, answers that might be accepted by a novice but would never pass muster with anyone with real expertise.

In the case of GE, experienced analysts knew that an inherently cyclical business could not produce perfectly smooth metrics, and they also realized that GE Capital’s reliance on cheap short-term funding was not sustainable — points they raised at the time. At Honeywell, David Cote immediately identified the flaws in the stories that his underlings were telling, and called them out. 

2. Value of BRK Float, Buffett Market View etc. – The Brooklyn Investor

For example, it is true that BRK only owns $328 billion in stocks against $500 billion in equity. This looks bearish, compared to say, back in 1994/1995 as you see. That looks like equity exposure of only 66% or so.

But as we all know, BRK has been buying a lot of operating businesses. For example, Burlington Northern now is a wholly owned subsidiary. Owning 100% of something is no less ‘equity exposure’ than owning just some of the stock. Right? So our equity exposure is much higher than 66% if you include all the other operating businesses. What is that number? Let’s say we include equity method investments (which is clearly equity) of $26 billion, and the book value of the Rails, Utilities and Energy business of $140 billion. That’s $166 billion. Add that to the $328 billion stock portfolio and you get $494 billion. And this doesn’t include some stuff in the “Insurance and other” (where I assume manufacturing, services and retail is), and we are already pretty much at 100% equity exposure. That, to me, is as good as “fully invested”.
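The author’s back-of-the-envelope arithmetic is easy to check. A quick sketch, using only the figures quoted in the passage (in USD billions):

```python
# Replicating the passage's back-of-the-envelope equity-exposure arithmetic.
# All figures are in USD billions, as quoted in the excerpt.

stock_portfolio = 328         # listed equities on BRK's balance sheet
equity_method = 26            # equity-method investments
rails_utilities_energy = 140  # book value of wholly owned operating businesses
shareholders_equity = 500

# Owning 100% of a business is still equity exposure, so add it all up.
total_equity_exposure = stock_portfolio + equity_method + rails_utilities_energy
exposure_ratio = total_equity_exposure / shareholders_equity

print(f"Total equity exposure: ${total_equity_exposure}B")
print(f"As a share of shareholders' equity: {exposure_ratio:.0%}")
```

That gives $494 billion against $500 billion of equity, or roughly 99%, before even counting the manufacturing, services and retail businesses, which is why the author calls this “fully invested”.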

How is that bearish? It’s not, actually. Bearish is if you take all those businesses / stocks and actually sell it down so your actual net equity exposure to all business is way below your shareholders equity. If you tell me that the above $494 billion is actually $250 billion, and the rest is cash, then I would agree BRK is waiting for the end of the world.

As it stands now? Not at all…

…This is the sort of thing that Buffett would hate because I am going to tell you what he is thinking, and I will do so without having any idea. So, having said that…

Rates are now back up to over 5% on the short end, and almost 4% on the long end (10 year). What does Buffett think of interest rates? Well, he won’t tell you. He will probably tell you he thinks long rates are too low and that it can’t stay low forever, but that’s all.

But let’s see what he is doing to see what he thinks of interest rates. With the long end approaching 4%, does Buffett think bonds are interesting?

Below, I went back through the recent 10-K’s (when you get old, even going back 25 years is recent, lol…) and jotted down the cash and fixed income investments at BRK. This way, we can actually see when he started to get allergic to long term bonds, and then we can see if he is getting interested again.

First of all, I can tell you that fixed income on BRK’s balance sheet has been steadily in the $20s billions, despite net worth, cash etc. increasing over the years. Spoiler alert: in the 2023 10Q, this is still $23 billion, so he is not expressing any interest in bonds yet…

…So when did Buffett start to get away from long bonds? It is clear from the above table that he really started to dislike them in 2003. There is a clear pivot in that year, when cash rose a lot and fixed income investments went down. He seemed fine with bonds in 2001 and 2002, when they were around 5% or so…

…So it is clear that Buffett started to really dislike bonds when rates started to go below 5%. I was going to argue 4% is the level, but rates were above 4% for a few years after 2003 and Buffett didn’t bite; fixed income levels remained low, which seems to suggest 5% is the level below which he won’t accept anything. The slight rise in this during the financial crisis could be from the emergency financing he did for GE, BAC and others, but I didn’t check. I think those were factors other than the general level of interest rates, so we can ignore that rise in bond holdings during that period.

So, reasonably or unreasonably, I am going to assume that 5% is the point Buffett won’t go below for long term rates. 

3. The Full Story of Large Language Models and RLHF – Marco Ramponi

Language Models (LMs) are a class of probabilistic models explicitly tailored to identify and learn statistical patterns in natural language. The primary function of a language model is to calculate the probability that a word follows a given input sentence.

How are these models trained to do this? The core process is a general technique known as self-supervised learning, a learning paradigm that leverages the inherent structure of the data itself to generate labels for training.

In the context of natural language processing, self-supervised learning enables models to learn from unannotated text, rather than relying on manually labeled data, which is relatively scarce and often expensive.

During the training process, an LM is fed with a large corpus (dataset) of text and tasked with predicting the next word in a sentence. In practice, this is often achieved by randomly truncating the last part of an input sentence and training the model to fill in the missing word(s). As the model iterates through numerous examples, it learns to recognize and internalize various linguistic patterns, rules, and relationships between words and concepts. One can say that via this process the model creates an internal representation of language.
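As a rough illustration of the idea (not any particular model’s training pipeline), here is how such self-supervised (context, next-word) training pairs can be derived from raw text, with the labels coming from the text itself rather than from human annotators:

```python
# A minimal sketch of self-supervised data preparation: each training example
# pairs a truncated context with the word that actually follows it in the raw
# text, so no manually created labels are needed. No model here, just data.

corpus = "the model learns to predict the next word in a sentence"
tokens = corpus.split()

# (context, target) pairs: the target is simply the next word in the text.
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in examples[:3]:
    print(" ".join(context), "->", target)
```

Every position in the corpus yields one example for free, which is what lets these models train on unannotated web-scale text.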

The outcome of this training process is a pre-trained language model. By exposure to diverse linguistic patterns, the model is equipped with a foundation for understanding natural language and for generating contextually appropriate and coherent text. Some people refer to such pre-trained models as foundation models…

…How good can a language model become?

As it turns out, the effectiveness of LMs in performing various tasks is largely influenced by the size of their architectures. These architectures are based on artificial neural networks, which are computational models loosely inspired by the structure and functioning of biological neural networks, such as those in the human brain. Artificial neural networks consist of interconnected layers of nodes, or “neurons”, which work together to process and learn from data.

Neurons in the network are associated with a set of numbers, commonly referred to as the neural network’s parameters. The numerical value of these parameters is supposed to represent the strength of connections between different neurons. The parameters within a neural network are adjustable, and they get iteratively updated during the training process to minimize the difference between the model’s predictions and the actual target values.
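A toy sketch of that update loop, shrunk down to a single parameter and a squared-error objective (real language models apply the same idea across billions of parameters and a very different loss):

```python
# One-parameter gradient descent: the parameter w is nudged iteratively to
# shrink the gap between the model's prediction and the target value.

target = 3.0        # the value the "model" should learn to predict
w = 0.0             # one adjustable parameter, starting from scratch
learning_rate = 0.1

for _ in range(100):
    prediction = w
    error = prediction - target    # difference from the target
    gradient = 2 * error           # derivative of squared error w.r.t. w
    w -= learning_rate * gradient  # the update step

print(round(w, 4))  # converges toward 3.0
```

Each pass reduces the error by a fixed fraction, so after enough iterations the parameter settles at the value that minimizes the loss, which is the "iteratively updated" behaviour the passage describes.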

In the context of LMs in particular, larger networks with more parameters have been shown to achieve better performance. Intuitively, the more parameters, the greater their “storage capacity”, even though it should be noted that language models do not store information in a way comparable to the standard way storage memory works in computers (hard drives).

Essentially, a higher number of parameters allows the model to “internalize” a greater variety of statistical patterns (via the numerical relationships of its parameters) within the language data they are exposed to. Larger models, however, also require more computational resources and training data to reach their full potential.

A language model is more than just a neural net.

Modern language models comprise various components or blocks, often formed by different neural networks, each designed to perform specific tasks and featuring specialized architectures. Virtually all current LMs are based on a particularly successful choice of architecture: the so-called Transformer model, invented in 2017.

Starting from the field of Natural Language Processing (NLP), Transformers have been revolutionizing nearly all areas of applied AI, due to their efficiency at processing large chunks of data at once (parallelization) rather than sequentially, a feature that allowed for training on bigger datasets than previous existing architectures. On text data, Transformers have proved exceptionally good at carrying out a form of natural language contextual understanding, which made them the de facto standard choice for most NLP tasks nowadays. Two components are key for this success: the attention mechanism and word embeddings.

  • Word Embeddings are high-dimensional vector representations of words that capture their semantic and syntactic properties. These representations enable the model to numerically manipulate words in a mathematical space, a sort of semantic space, where physically nearby words share some form of relationship of meaning or other kinds of similarities. Instead of treating words as isolated entities, word embeddings allow the model to learn and understand the complex interplay of words within a given context.
  • Attention Mechanisms allow the model to weigh the importance of different words or phrases in the text. This helps the model to selectively focus on specific parts of the input, assigning different attention scores to the words based on their relevance to the task at hand. Attention can be thought of as a numerical operation that is supposed to mimic the “focusing ability” of a model to the local, specific context as it reads through or generates text…
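To ground those two ideas, here is a bare-bones sketch of dot-product attention over toy word embeddings. The three-dimensional vectors are invented for illustration; real Transformers learn high-dimensional embeddings and add query/key/value projections and multiple attention heads on top of this basic operation:

```python
import math

# Toy embeddings: nearby vectors are meant to represent related words.
# These numbers are made up purely for illustration.
embeddings = {
    "bank":  [0.9, 0.1, 0.0],
    "river": [0.8, 0.2, 0.1],
    "money": [0.1, 0.9, 0.3],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(query_word, words):
    """Scaled dot-product attention scores of query_word against each word."""
    d = len(embeddings[query_word])
    scores = [dot(embeddings[query_word], embeddings[w]) / math.sqrt(d)
              for w in words]
    return softmax(scores)

words = ["bank", "river", "money"]
weights = attention_weights("bank", words)
for w, a in zip(words, weights):
    print(f"{w}: {a:.2f}")
```

With these toy vectors, “bank” attends more strongly to “river” than to “money”, illustrating how attention scores let the model weigh context words by relevance.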

…Prevailing heuristics long held that increasing the size of a model was the most effective way to improve its performance, while scaling the training datasets was less important. However, more recent research has radically reshaped this perspective, revealing that many of the current LLMs are, in fact, significantly undertrained with respect to the amount of data seen during pre-training.

This fundamental shift has led to the formation of a new set of guiding heuristics, emphasizing the importance of training large models with more extensive datasets. In practice, in order to fully train the next massive LLM following these new principles one would need an immense amount of data, corresponding to a significant fraction, if not all of the text data available on the entire internet today.

The implications of this new perspective are profound. On the one hand, the total amount of training data actually available might turn out to be the true fundamental bottleneck for these AI systems…

…Scaling language models yields more than expected.

With scaling, the performance of LLMs has (predictably) shown consistent improvements across a number of quantitative metrics that are supposed to measure the extent to which an LM is able to do what it was primarily designed for: calculate probability distributions over words. An example of such metrics is perplexity, a measure of the fluency of generated text.
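For the curious, perplexity can be computed in a few lines: it is the exponential of the average negative log-probability the model assigns to the true next words. The probabilities below are invented for illustration:

```python
import math

# The model's (invented) probability for each true next word in a sequence.
predicted_probs = [0.25, 0.5, 0.125, 0.5]

# Perplexity = exp(average negative log-probability). Lower is better.
avg_neg_log_prob = -sum(math.log(p) for p in predicted_probs) / len(predicted_probs)
perplexity = math.exp(avg_neg_log_prob)

# A model that always assigned probability 1 would score a perfect 1;
# uniform guessing over a vocabulary of size V scores V.
print(round(perplexity, 3))  # 3.364
```

Intuitively, a perplexity of about 3.4 here means the model is, on average, about as uncertain as if it were choosing uniformly among 3 to 4 words at each step.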

We have seen, however, how the process of scaling language models requires training them on enormous quantities of data, often sourced from the extensive troves of text available online. LLMs thus get to be “fed” with substantial portions of the web, spanning a vast array of information. Being exposed to such a diverse range of linguistic patterns and structures during training, LLMs progressively learn to emulate and reproduce these patterns with high fidelity.

As a byproduct, this process has appeared to give rise to fascinating qualitative behaviors. Empirical studies have found that, as LLMs are scaled, they are able to suddenly “unlock” new capabilities that seem to emerge in a discontinuous manner, in contrast to the more predictable linear improvement of quantitative metrics.

These emergent abilities encompass a wide range of tasks, such as translation between different languages, the ability to write programming code, and many others. Remarkably, LLMs acquire these skills through the mere observation of recurring patterns in natural language during the training process, that is, without explicit task-specific supervision…

…The phenomenon of emergent abilities in LLMs, although quite recent and still not fully understood by researchers, is also not a completely obscure one.

Even though no one can predict exactly which new cognitive capabilities further-scaled LLMs may acquire in the future, the general pattern that allows this to happen is fairly clear. Let’s consider the example of question-answering.

Within this massive language dataset, the internet of text, there exist numerous instances of questions followed by answers. These question-answer pairs occur in diverse contexts, such as forums, articles, or educational resources, and cover a multitude of topics, from everyday trivia to specialized technical knowledge.

Ultimately, a statistically significant number of these answers is in fact correct, and this is reflected in the ability of an LLM to carry out a form of information retrieval from web knowledge, by giving reasonably correct answers to common sense questions on disparate topics when requested to do so.

Unfortunately, the internet is also filled with (a statistically significant amount of) false facts and wrong answers to common sense questions. Due to the sheer volume of this data, it is virtually impossible for researchers to regulate the content LLMs are exposed to during training.

As a matter of fact, LLMs may occasionally exhibit various types of undesirable behavior, such as reproducing harmful or biased content, or generating so-called hallucinations by fabricating nonexistent or false facts.

When such models are proposed as general purpose conversational chatbots (like ChatGPT), it becomes a lot more difficult to identify all the possible threats that arise from a mass use of these systems, since it is almost impossible to predict a priori all the possible scenarios…

…Can a machine learn human values?

Fundamentally, RLHF is based on a straightforward premise. Imagine having two language models: a baseline (unaligned) model and a secondary preference model. The preference model’s role is to determine which action a human would prefer within a given list of possibilities (e.g., two different responses from the baseline model to a user’s request). This model could assign a numerical score to each action, effectively ranking them according to human preferences. In technical terms, this is known as a reward model.

Utilizing the reward model, the baseline model can be refined iteratively, altering its internal text distribution to prioritize sequences favored by humans (as indicated by the reward model). In some sense, the reward model serves as a means to introduce a “human preference bias” into the baseline model…
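The reward-model interface described above can be made concrete with a toy sketch. Everything here, including the function names and the crude word-overlap heuristic, is invented for illustration; a real reward model is itself a trained neural network, not a hand-written rule:

```python
def reward_model(prompt, response):
    """Stand-in for a learned scorer of human preference.
    Toy heuristic: reward topical word overlap with the prompt,
    mildly penalize rambling. A real reward model is learned, not coded."""
    prompt_words = set(prompt.lower().rstrip("?.").split())
    response_words = set(response.lower().rstrip("?.").split())
    overlap = len(prompt_words & response_words)
    return overlap - 0.01 * len(response)

def pick_preferred(prompt, candidates):
    """Rank candidate responses by reward and return the top one,
    mimicking how the reward model orders the baseline model's outputs."""
    return max(candidates, key=lambda r: reward_model(prompt, r))
```

The point of the sketch is the interface: given a prompt and a list of possible responses, the reward model induces a numerical ranking that can later steer the baseline model.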

…OpenAI has applied the general methodology of RLHF to fine-tune ChatGPT through a three-step process.

The initial step involves collecting human demonstrations using a group of about 40 human annotators for a pre-selected set of prompts. The prompts are sourced from two different origins: some are created by annotators or developers, while others are sampled from OpenAI’s API requests.

These demonstrations can be thought of as the “ideal answers”, or responses to these prompts, and together constitute a training dataset. This dataset is then used to fine-tune a pre-trained model in a supervised manner, yielding the Supervised Fine-Tuned (SFT) model.

As mentioned earlier, this approach has scalability limitations, resulting in a relatively small dataset (approximately 15k examples).

The second step revolves around preference orderings. Labelers (or annotators) are tasked with voting on a number of SFT model outputs, thereby creating a new dataset composed of comparison data. The reward model is trained on this dataset.

In practice, a list of prompts is chosen, and the SFT model generates multiple outputs (between 4 and 9) for each prompt. Annotators rank these outputs from best to worst, forming a new labeled dataset with rankings serving as labels.

Although the exact details remain undisclosed by OpenAI, the dataset’s size may be roughly ten times larger than the curated dataset used for the SFT model.

Finally, the third step involves applying Reinforcement Learning to teach the SFT model the human preference policy through the reward model, essentially as described in the previous section. The SFT model is fine-tuned via the reward model. The outcome is the so-called policy model…
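The three steps can be compressed into a runnable schematic. All the "models" below are trivial dictionary stand-ins, invented purely to show the data flow between the steps; nothing here resembles OpenAI's actual training code:

```python
def rlhf_pipeline(prompts, ideal_answers, candidate_outputs, rankings):
    """Schematic of the three-step RLHF recipe (toy stand-ins only).

    prompts: list of prompt strings
    ideal_answers: annotator-written demonstrations, one per prompt
    candidate_outputs: dict prompt -> list of sampled SFT outputs
    rankings: dict prompt -> same outputs ordered best-to-worst by annotators
    """
    # Step 1: supervised fine-tuning on human demonstrations
    # (prompt -> "ideal answer" pairs), here just a lookup table.
    sft = dict(zip(prompts, ideal_answers))

    # Step 2: a "reward model" distilled from annotator rankings
    # (score = how highly annotators ranked that output).
    def reward(prompt, output):
        ranking = rankings[prompt]                # best-to-worst list
        return len(ranking) - ranking.index(output)

    # Step 3: "RL" fine-tuning, caricatured as shifting the policy
    # toward the highest-reward output for each prompt.
    policy = {p: max(candidate_outputs[p], key=lambda o: reward(p, o))
              for p in prompts}
    return sft, policy
```

In the real pipeline each step is a full gradient-based training run (supervised fine-tuning, reward-model training, then PPO); the sketch only mirrors how each stage's output feeds the next.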

…As we have previously discussed, by treating the language model as a reinforcement learning policy during the fine-tuning phase, RLHF introduces biases into the distribution.

Operationally, we can interpret this effect as the introduction of a mode-seeking behavior which guides the model through the distribution and leads to outputs with higher rewards (as modeled by learned human preferences), effectively narrowing the potential range of generated content…

…While RLHF improves the consistency of the model’s answers, it inevitably does so at the cost of diversity in its generation abilities. This trade-off could be viewed as both a benefit and a limitation, depending on the intended use case.

For instance, in LLM applications such as search engines, where accurate and reliable responses are paramount, RLHF is an ideal solution. On the other hand, when using language models for creative purposes, such as generating novel ideas or assisting in writing, the reduction in output diversity may hinder the exploration of new and intriguing concepts.

4. Why transformative AI is really, really hard to achieve – Arjun Ramani and Zhengdong Wang

Humans have a good track record of innovation. The mechanization of agriculture, steam engines, electricity, modern medicine, computers, and the internet—these technologies radically changed the world. Still, the trend growth rate of GDP per capita in the world’s frontier economy has never exceeded three percent per year.

It is of course possible for growth to accelerate. There was a time before growth began, or at least when it was far closer to zero. But the fact that past game-changing technologies have yet to break the three percent threshold gives us a baseline. Only strong evidence should cause us to expect something hugely different.

Yet many people are optimistic that artificial intelligence is up to the job. AI is different from prior technologies, they say, because it is generally capable—able to perform a much wider range of tasks than previous technologies, including the process of innovation itself. Some think it could lead to a “Moore’s Law for everything,” or even risks on par with those of pandemics and nuclear war. Sam Altman shocked investors when he said that OpenAI would become profitable by first inventing general AI, and then asking it how to make money. Demis Hassabis described DeepMind’s mission at Britain’s Royal Academy four years ago in two steps: “1. Solve Intelligence. 2. Use it to solve everything else.”…

…Neither this essay nor the economic growth literature rules out this possibility. Instead, our aim is to simply temper your expectations. We think AI can be “transformative” in the same way the internet was, raising productivity and changing habits. But many daunting hurdles lie on the way to the accelerating growth rates predicted by some…

…Productivity growth almost definitionally captures when a new technology efficiently performs useful work. A powerful AI could one day perform all productive cognitive and physical labor. If it could automate the process of innovation itself, some economic growth models predict that GDP growth would not just break three percent per capita per year—it would accelerate.

Such a world is hard to achieve. As the economist William Baumol first noted in the 1960s, productivity growth that is unbalanced may be constrained by the weakest sector. To illustrate this, consider a simple economy with two sectors, writing think-pieces and constructing buildings. Imagine that AI speeds up writing but not construction. Productivity increases and the economy grows. However, a think-piece is not a good substitute for a new building. So if the economy still demands what AI does not improve, like construction, those sectors become relatively more valuable and eat into the gains from writing. A 100x boost to writing speed may only lead to a 2x boost to the size of the economy.
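The "100x writing boost, 2x economy" arithmetic can be reproduced with a back-of-the-envelope model. Assume, purely as an illustration (the fixed-proportions assumption is mine, not the authors'), that each bundle of output requires one unit of writing and one unit of construction, each initially taking an hour of labor:

```python
def economy_output(hours, writing_speedup, construction_speedup=1.0):
    """Bundles produced from `hours` of labor when each bundle needs
    1/writing_speedup hours of writing plus 1/construction_speedup hours
    of construction (fixed proportions: the Baumol worst case)."""
    hours_per_bundle = 1 / writing_speedup + 1 / construction_speedup
    return hours / hours_per_bundle

baseline = economy_output(100, writing_speedup=1)    # 50 bundles
boosted = economy_output(100, writing_speedup=100)   # ~99 bundles
print(round(boosted / baseline, 2))  # ~1.98: a 100x writing boost ~ 2x economy
```

Because construction still takes a full hour per bundle, the unimproved sector dominates the time budget and caps the overall gain near 2x, no matter how fast writing gets.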

This toy example is not all that different from the broad pattern of productivity growth over the past several decades. Eric Helland and Alex Tabarrok wield Baumol in their book Why Are the Prices So Damn High? to explain how technology has boosted the productivity of sectors like manufacturing and agriculture, driving down the relative price of their outputs, like TVs and food, and raising average wages. Yet TVs and food are not good substitutes for labor-intensive services like healthcare and education. Such services have remained important, just like constructing buildings, but have proven hard to make more efficient. So their relative prices have grown, taking up a larger share of our income and weighing on growth…

…Progress in fine motor control has hugely lagged progress in neural language models. Robotics workshops ponder what to do when “just a few cubicles away, progress in generative modeling feels qualitatively even more impressive.” Moravec’s paradox and Steven Pinker’s 1994 observation remain relevant: “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.” The hardest “easy” problems, like tying one’s shoelaces, remain. Do breakthroughs in robotics easily follow those in generative modeling? That OpenAI disbanded its robotics team is not a strong signal.

It seems highly unlikely to us that growth could greatly accelerate without progress in manipulating the physical world. Many current economic bottlenecks, from housing and healthcare to manufacturing and transportation, all have a sizable physical-world component…

…Current methods may also not be enough. Their limits may soon be upon us. Scaling compute another order of magnitude would require hundreds of billions of dollars more spending on hardware. According to SemiAnalysis: “This is not practical, and it is also likely that models cannot scale to this scale, given current error rates and quantization estimates.” The continued falling cost of computation could help. But we may have exhausted the low-hanging fruit in hardware optimization and are now entering an era of deceleration. Moore’s Law has persisted under various guises, but the critical factor for transformative AI may be whether we will reach it before Moore’s Law stops.

Next look at data. Villalobos et al. warn that high quality language data may run out by 2026. The team suggests data efficiency and synthetic data as ways out, but so far these are far from complete solutions, as Shumailov et al. show.

In algorithms, our understanding of what current architectures can and cannot do is improving. Delétang et al. and Dziri et al. identify particularly hard problems for the Transformer architecture. Some say that so-called emergent abilities of large language models could still surprise us. Not necessarily. Schaeffer et al. argue that emergence appears “due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale.” …

…Humans remain a limiting factor in development. Human feedback makes AI outputs more helpful. Insofar as AI development requires human input, humans will constrain productivity. Millions of humans currently annotate data to train models. Their humanity, especially their expert knowledge and creative spark, becomes more valuable by the day. The Verge reports: “One engineer told me about buying examples of Socratic dialogues for up to $300 a pop.”…

…A big share of human knowledge is tacit, unrecorded, and diffuse… We are constantly surprised in our day jobs as a journalist and AI researcher by how many questions do not have good answers on the internet or in books, but where some expert has a solid answer that they had not bothered to record. And in some cases, as with a master chef or LeBron James, they may not even be capable of making legible how they do what they do.

The idea that diffuse tacit knowledge is pervasive supports the hypothesis that there are diminishing returns to pure, centralized, cerebral intelligence. Some problems, like escaping game-theoretic quagmires or predicting the future, might be just too hard for brains alone, whether biological or artificial…

…The history of economic transformation is one of contingency. Many factors must come together all at once, rather than one factor outweighing all else. Individual technologies only matter to the extent that institutions permit their adoption, incentivize their widespread deployment, and allow for broad-scale social reorganization around the new technology…

…All agree that history is not inevitable. We think this applies to AI as well. Just as we should be skeptical of a Great Man theory of history, we should not be so quick to jump to a Great Technology theory of growth with AI.

And important factors may not be on AI’s side. Major drivers of growth, including demographics and globalization, are going backwards. AI progress may even be accelerating the decoupling of the US and China, reducing the flow of people and ideas.

AI may not be able to automate precisely the sectors most in need of automation. We already “know” how to overcome many major constraints to growth, and have the technology to do so. Yet social and political barriers slow down technology adoption, and sometimes halt it entirely. The same could happen with AI.

Comin and Mestieri observe that cross-country variation in the intensity of use for new technologies explains a large portion of the variation in incomes in the twentieth century. Despite the dream in 1954 that nuclear power would cause electricity to be “too cheap to meter,” nuclear’s share of global primary energy consumption has been stagnant since the 90s. Commercial supersonic flight is outright banned in US airspace…

…Automation alone is not enough for transformative economic growth. History is littered with so-so technologies that have had little transformative impact, as Daron Acemoglu and Simon Johnson note in their new book Power and Progress. Fast-food kiosks are hardly a game-changer compared to human employees. Nobel laureate Robert Fogel documented that in the same way, railroads had little impact on growth because they were only a bit better than their substitutes, canals and roads. Many immediate applications of large language models, from customer service to writing marketing copy, appear similar.

OpenAI’s own economists estimate that about “19% of jobs have at least 50% of their tasks exposed” to GPT-4 and the various applications that may be built upon it. Some view this as game-changing. We would reframe it. That means over 80% of workers would have less than 50% of their tasks affected, hardly close to full automation. And their methodology suggests that areas where reliability is essential will remain unaffected for some time…

…There is a deeper point here. GDP is a made-up measure of how much some humans value what others produce, a big chunk of which involves doing social things amongst each other. As one of us recently wrote, we may value human-produced outputs precisely because they are scarce. As long as AI-produced outputs cannot substitute for that which is social, and therefore scarce, such outputs will command a growing “human premium,” and produce Baumol-style effects that weigh on growth.

5. Compounding Optimism – Morgan Housel

The question is: Did George Wheelwright know that he would influence Edwin Land, who would then influence Steve Jobs, who would then design a phone that 2.5 billion people would use?

Did Michael Faraday, who died in 1867, know that his ideas would directly influence the light bulb, which effectively led to the creation of everything from the modern power grid to nightlife?

Did Ben Graham know that his 1950s finance class would lead to 45,000 people trekking to Omaha every year to hear his student speak?

Of course not. It’s so hard to know what an idea, or an invention, or a philosophy, will influence, and what a person who’s influenced by it will go on to create.

Visa Founder Dee Hock says, “A book is far more than what the author wrote; it is everything you can imagine and read into it as well.” An author might write something that’s dull or obvious, but it could inspire a reader to go do something incredible…

…Most new ideas and inventions are pretty bland on their own. But when you mix several of them together, you can get magic. Plastic is great. Electronics are neat. Metal is special. But mix them together in the right way and you get an iPhone, which is pure magic…

…I think part of the reason pessimism is so much easier and more common than optimism is that compound growth is not intuitive.

It’s hard to imagine, say, our incomes doubling over the next few generations. That seems like such a massive leap, like we’d have to boil the ocean to get it done. But doubling the average income over 30 years works out to about 2.3% growth per year. It’s not crazy at all. It’s actually quite achievable. What made it seem so ambitious to begin with is that compound growth is easy to underestimate.
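The arithmetic behind "doubling in 30 years is only about 2.3% a year" is easy to verify:

```python
def annual_growth_to_double(years):
    """Constant annual growth rate that doubles a quantity in `years` years:
    solve (1 + r) ** years == 2 for r."""
    return 2 ** (1 / years) - 1

print(f"{annual_growth_to_double(30):.2%}")  # about 2.34% per year
```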

If you look at the end result of a long period of compounding, it’s astounding. But all it took to get it done was little bits of incremental growth strung together for a long time.

All progress is like that.

Technological progress is easy to underestimate because it’s so counterintuitive to see how, for example, the philosophies of a guy who invented Polaroid film would go on to inspire the iPhone. Or how a 19th-century physicist would write a notebook that would set the foundations for a modern electrical system.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of DeepMind), Apple (parent of the iPhone), and Visa. Holdings are subject to change at any time.