What We’re Reading (Week Ending 16 July 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 16 July 2023:

1. Inside Google’s big AI shuffle — and how it plans to stay competitive, with Google DeepMind CEO Demis Hassabis – Nilay Patel and Demis Hassabis

From the outside, the timeline looks like this: everyone’s been working on this for ages, we’ve all been talking about it for ages. It is a topic of conversation for a bunch of nerdy journalists like me, a bunch of researchers, we talk about it in the corner at Google events.

Then ChatGPT is released, not even as a product. I don’t even think Sam [Altman] would call it a great product when it was released, but it was just released, and people could use it. And everyone freaked out, and Microsoft releases Bing based on ChatGPT, and the world goes upside down, and Google reacts by merging DeepMind and Google Brain. That’s what it looks like from the outside. Is that what it felt like from the inside?

That timeline is correct, but it’s not these direct consequences; it’s more indirect in a sense. So, Google and Alphabet have always run like this. They let many flowers bloom, and I think that’s always been the way that Larry [Page] and Sergey [Brin] set up Google from the beginning. And it served them very well, and it’s allowed them to organically create incredible things and become the amazing company that it is today. On the research side, I think it’s very compatible with doing research, which is another reason we chose Google as our partners back in 2014. I felt they really understood what fundamental, blue-sky, ambitious research was, and they were going to enable us to be super ambitious with our research. And you’ve seen the results of that, right?

By any measure — AlphaGo, AlphaFold, more than 20 Nature and Science papers, and so on — by all the normal metrics one would use, we were able to deliver really amazing cutting-edge research. But in a way, what ChatGPT and the large models and the public reaction to that confirmed is that AI has entered a new era. And by the way, it was a little bit surprising for all of us at the coalface, including OpenAI, how viral that went because — us and some other startups like Anthropic and OpenAI — we all had these large language models. They were roughly the same capabilities.

And so, it was surprising, not so much what the technology was because we all understood that, but the public’s appetite for that and obviously the buzz that generated. And I think that’s indicative of something we’ve all been feeling for the last, I would say, two, three years, which is these systems are reaching a level of maturity now and sophistication where it can really come out of the research phase and the lab and go into powering incredible next-generation products and experiences and also breakthroughs, things like AlphaFold directly being useful for biologists. And so, to me, this is just indicative of a new phase that AI is in of being practically useful to people in their everyday lives and actually being able to solve really hard real-world problems that really matter, not just the curiosities or fun, like games.

When you recognize that shift, then I think that necessitates a change in your approach as to how you’re approaching the research and how much focus you’re having on products and those kinds of things. And I think that’s what we all came to the realization of, which was: now was the time to streamline our AI efforts and focus them more. And the obvious conclusion of that was to do the merger…

It feels like the ChatGPT moment that led to this AI explosion this year was really rooted in the AI being able to do something that regular people could do. I want you to write me an email, I want you to write me a screenplay, and maybe the output of the LLM is a C+, but it’s still something I can do. People can see it. I want you to fill out the rest of this photo. That’s something people can imagine doing. Maybe they don’t have the skills to do it, but they can imagine doing it. All the previous AI demos that we have gotten, even yours, AlphaFold, you’re like, this is going to model all the proteins in the world.

But I can’t do that; a computer should do that. Even a microbiologist might think, “That is great. I’m very excited that a computer can do that because I’m just looking at how much time it would take us, and there’s no way we could ever do it.” “I want to beat the world champion at Go. I can’t do that. It’s like, fine. A computer can do that.” 

There’s this turn where the computer is starting to do things I can do, and they’re not even necessarily the most complicated tasks. Read this webpage and deliver a summary of it to me. But that’s the thing that unlocked everyone’s brain. And I’m wondering why you think the industry didn’t see that turn coming because we’ve been very focused on these very difficult things that people couldn’t do, and it seems like what got everyone is when the computer started doing things people do all the time.

I think that analysis is correct. I think that is why the large language models have really entered the public consciousness, because it’s something the average person, the “Joe Public,” can actually understand and interact with. And, of course, language is core to human intelligence and our everyday lives. I think that does explain why chatbots specifically have gone viral in the way they have. Even though I would say things like AlphaFold — of course I’d be biased in saying this — have actually had the most unequivocally beneficial effects of any AI on the world so far, because a million biologists, researchers, and medical researchers have now used AlphaFold, which I think is nearly every biologist in the world. Every Big Pharma company is using it to advance their drug discovery programs. I’ve had dozens of Nobel Prize-winner-level biologists and chemists talk to me about how they’re using AlphaFold.

So a certain set of all the world’s scientists, let’s say, they all know AlphaFold, and it’s affected and massively accelerated their important research work. But of course, the average person in the street doesn’t even know what proteins are, or why those things matter for something like drug discovery. Whereas obviously, for a chatbot, everyone can understand: this is incredible. And it’s very visceral to get it to write you a poem or something that everybody can understand and process and measure against what they themselves are able to do…

…There are so many decisions I make every day, it’s hard to come up with one now. But I tend to try and plan out and scenario-plan many, many years in advance. So I’ll tell you the way I try to approach things: I have an end goal. I’m quite good at imagining things, so that’s a different skill, visualizing or imagining what a perfect end state would look like, whether that’s organizational or product-based or research-based. And then, I work back from the end point and figure out what steps would be required, and in what order, to make that outcome as likely as possible.

So that’s a little bit chess-like, right? In the sense of you have some plan that you would like to get to checkmate your opponent, but you’re many moves away from that. So what are the incremental things one must do to improve your position in order to increase the likelihood of that final outcome? And I found that extremely useful to do that search process from the end goal back to the current state that you find yourself in.

Let’s put that next to some products. You said there’s a lot of DeepMind technology in a lot of Google products. The ones that we can all look at are Bard and then your Search Generative Experience. There’s AI in Google Photos and all this stuff, but focused on the LLM moment, it’s Bard and the Search Generative Experience. Those can’t be the end state. They’re not finished. Gemini is coming, and we’ll probably improve both of those, and all that will happen. When you think about the end state of those products, what do you see?

The AI systems around Google are not just in the consumer-facing things but also under the hood, where you may not realize it. For example, one of the very first things we applied our AI systems to was the cooling systems in Google’s enormous data centers, reducing the energy the cooling systems use by nearly 30 percent, which is obviously huge if you multiply that by all of the data centers and computers they have. So there are actually a lot of places under the hood where AI is being used to improve the efficiency of those systems all the time. But you’re right, the current products are not the end state; they’re actually just waypoints. And in the case of chatbots and those kinds of systems, ultimately, they will become these incredible universal personal assistants that you use multiple times during the day for really useful and helpful things across your daily life.

From what books to read to recommendations on maybe live events and things like that to booking your travel to planning trips for you to assisting you in your everyday work. And I think we’re still far away from that with the current chatbots, and I think we know what’s missing: things like planning and reasoning and memory, and we are working really hard on those things. And I think what you’ll see in maybe a couple of years’ time is today’s chatbots will look trivial by comparison to I think what’s coming in the next few years.

My background is as a person who’s reported on computers. I think of computers as somewhat modular systems. You look at a phone — it’s got a screen, it’s got a chip, it’s got a cell antenna, whatever. Should I look at AI systems that way — there’s an LLM, which is a very convincing human language interface, and behind it might be AlphaFold that’s actually doing the protein folding? Is that how you’re thinking about stitching these things together, or is it a different evolutionary pathway?

Actually, there’s a whole branch of research going into what’s called tool use. This is the idea that these large language models or large multimodal models, they’re expert at language, of course, and maybe a few other capabilities, like math and possibly coding. But when you ask them to do something specialized, like fold a protein or play a game of chess or something like this, then actually what they end up doing is calling a tool, which could be another AI system, that then provides the solution or the answer to that particular problem. And then that’s transmitted back to the user via language or pictorially through the central large language model system. So it may be actually invisible to the user because, to the user, it just looks like one big AI system that has many capabilities, but under the hood, it could be that actually the AI system is broken down into smaller ones that have specializations.

And I actually think that probably is going to be the next era. The next generation of systems will use those kinds of capabilities. And then you can think of the central system as almost a switch statement that you effectively prompt with language, and it routes your query, or your question, or whatever it is you’re asking, to the right tool to solve that question for you or provide the solution for you, and then transmits that back in a very understandable way — again, through the best interface really: natural language.
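In code, the routing pattern Hassabis describes might look something like the toy sketch below. To be clear, this is a hedged illustration, not DeepMind’s architecture; the tool names and the keyword matching are hypothetical stand-ins for what a real system would do with learned, language-driven routing.

```python
# Toy sketch of a central model routing queries to specialist tools.
# The tools and the keyword matching here are hypothetical stand-ins.
from typing import Callable

def fold_protein(query: str) -> str:
    # Stand-in for an AlphaFold-like specialist system.
    return "predicted structure for: " + query

def play_chess(query: str) -> str:
    # Stand-in for a chess engine.
    return "suggested move for: " + query

def answer_in_language(query: str) -> str:
    # The central language model answering directly.
    return "plain-language answer to: " + query

TOOLS: dict[str, Callable[[str], str]] = {
    "protein": fold_protein,
    "chess": play_chess,
}

def route(query: str) -> str:
    """Dispatch the query to a specialist tool, then wrap the result in language."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            result = tool(query)
            # The central system translates the specialist output back
            # into natural language for the user.
            return f"Using the {keyword} tool: {result}"
    return answer_in_language(query)

print(route("Fold this protein sequence: MKTAYIAK"))
print(route("What should I read this weekend?"))
```

In a real system the “switch statement” would itself be the large model deciding which tool to call, but the shape of the dispatch is the same.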

Does that process get you closer to an AGI, or does that get you to some maximum state and you got to do something else?

I think that is on the critical path to AGI, and that’s another reason, by the way, I’m very excited about this new role and actually doing more products and things because I actually think the product roadmap from here and the research roadmap from here toward something like AGI or human-level AI is very complementary. The kinds of capabilities one would need to push in order to build those kinds of products that are useful in your everyday life like a universal assistant requires pushing on some of these capabilities, like planning and memory and reasoning, that I think are vital for us to get to AGI. So I actually think there’s a really neat feedback loop now between products and research where they can effectively help each other…

You’ve signed a letter from the Center for AI Safety — OpenAI’s Sam Altman and others have also signed this letter — that warns against the risk from AI. And yet, you’re pushing on, Google’s in the market, you’ve got to win, you’ve described yourself as competitive. There’s a tension there: needing to win in the market with products and “Oh boy, please regulate us because raw capitalism will drive us off the cliff with AI if we don’t stop it in some way.” How do you balance that risk?

It is a tension. It’s a creative tension. What we like to say at Google is we want to be bold and responsible, and that’s exactly what we’re trying to do and live out and role model. So the bold part is being brave and optimistic about the amazing, incredible benefits AI can bring to the world and to help humanity with our biggest challenges, whether that’s disease or climate or sustainability. AI has a huge part to play in helping our scientists and medical experts solve those problems, and we’re working hard on all those areas. And AlphaFold, again, I’d point to as a poster child for what we want to do there. So that’s the bold part. And then, the responsible bit is to make sure we do that as thoughtfully as possible, with as much foresight as possible, ahead of time.

Try and anticipate ahead of time what the issues might be if one were successful. Not in hindsight. Perhaps this happened with social media, for example, which was this incredible growth story. Obviously, it’s done a lot of good in the world, but then it turns out 15 years later that there are some unintended consequences as well to those types of systems. And I would like to chart a different path with AI. I think it’s such a profound and important and powerful technology that we have to do that with something as potentially transformative as AI. And it doesn’t mean no mistakes will be made. It’s very new; with anything new, you can’t predict everything ahead of time, but I think we can try and do the best job we can.

And that’s what signing that letter was for: to point out that, while I don’t think it’s likely, and I don’t know on what timescales, the limit of what these systems can do and might be able to do as we get closer to AGI is something we should consider too. We are nowhere near that now. So this is not a question of today’s technologies or even the next few years’, but at some point, given the technology’s accelerating very fast, we will need to think about those questions, and we don’t want to be thinking about them on the eve of them happening. We need to use the time now, the next five, 10, whatever it is, years, to do the research and the analysis, and to engage with various stakeholders, civil society, academia, government, to figure out, as this stuff is developing very rapidly, the best way of making sure we maximize the benefits and minimize any risks.

And that includes mostly, at this stage, doing more research into these areas, like coming up with better evaluations and benchmarks to rigorously test the capabilities of these frontier systems.

You talked about tool usage for AI models, you ask an LLM to do something, it goes off and asks AlphaFold to fold the protein for you. Combining systems like that, integrating systems like that, historically that’s where emergent behaviors appear, things you couldn’t have predicted start happening. Are you worried about that? There’s not a rigorous way to test that.

Right, exactly. I think that’s exactly the sort of thing we should be researching and thinking about ahead of time is: as tool use becomes more sophisticated and you can combine different AI systems together in different ways, there is scope for emergent behavior. Of course, that emergent behavior may be very desirable and be extremely useful, but it could also potentially be harmful in the wrong hands and in the hands of bad actors, whether that’s individuals or even nation-states…

There’s the concept of model collapse: that we’re going to train LLMs on LLM-generated data, and that’s going to go in a circle. When you talk about cross-referencing facts, I think about Google — Google going out on the web and trying to cross-reference a bunch of stuff, but maybe all that stuff has been generated by LLMs that were hallucinating in 2023. How do you guard against that?

We are working on some pretty cool solutions to that. I think the answer, and this is an answer to deepfakes as well, is to do some sophisticated, encrypted watermarking that can’t be removed easily or at all, and it’s probably built into the generative models themselves, so it’s part of the generative process. We hope to release that, and maybe provide it to third parties as well as a generic solution. But I think the industry and the field need those types of solutions, where we can mark generated media, be that images, audio, perhaps even text, with some Kitemark that says to the user and future AI systems that these were AI-generated. And I think that’s a very, very pressing need right now for near-term issues with AI like deepfakes and disinformation and so on. But I actually think a solution is on the horizon now.
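Hassabis doesn’t spell out the mechanism, but one published idea for text watermarking — a “green list” scheme in the spirit of Kirchenbauer et al. (2023), and not necessarily what DeepMind built — shows how a mark can live inside the generative process itself: a keyed hash of the previous token biases which words the model prefers, and a detector holding the key counts how often those preferred words appear. A minimal sketch, with a made-up vocabulary and key:

```python
# Toy "green list" text watermark. VOCAB and SECRET_KEY are made up.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]
SECRET_KEY = b"demo-key"  # hypothetical key shared by generator and detector

def green_list(prev_token: str) -> set:
    # The key plus the previous token deterministically picks half the vocabulary.
    seed = hashlib.sha256(SECRET_KEY + prev_token.encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))

def generate(n_tokens: int) -> list:
    out = ["the"]
    for _ in range(n_tokens):
        # A real model would softly boost green-token probabilities;
        # this toy version simply samples from the green list.
        out.append(random.choice(sorted(green_list(out[-1]))))
    return out

def detect(tokens: list) -> float:
    # Fraction of tokens drawn from their green list; ~0.5 means unmarked.
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

print(detect(generate(50)))                 # close to 1.0: watermarked
print(detect(random.choices(VOCAB, k=50)))  # close to 0.5: unmarked
```

The statistical nature of the mark is what makes it hard to strip without rewriting the text wholesale.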

2. A stock market gift right under your nose – Chin Hui Leong

In my book, the best returns come from owning stocks for the long term. For example, I have owned shares of Apple, Amazon, Booking Holdings, and Intuitive Surgical since 2010. On average, these shares have grown to almost 17 times their original value, turning each dollar invested into nearly US$17 over the past 13 years. The key ingredient here is time. But the trick is knowing which shares to hold.
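As a rough check of the arithmetic behind that claim, a 17x multiple over 13 years implies a compound annual growth rate of about 24 per cent (a back-of-envelope sketch using the round figures quoted):

```python
# Implied compound annual growth rate from the figures quoted above.
multiple, years = 17, 13
cagr = multiple ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 24.4% per year
```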

Ideally, the business behind the stock should exhibit the ability to grow in both good times and bad. When businesses are able to deliver huge increases in earnings over time, your odds of a good outcome increase. Here is your big hint: if companies can perform during a tough economy, it stands to reason that they will do as well, or better, when economic conditions improve. And if they outperform, it is a great recipe for long-term investment returns…

Booking Holdings, which owns popular travel sites such as Booking.com and Agoda, reported revenue and profit growth of over 65 per cent and nearly 149 per cent, respectively, between 2007 and 2009, at the worst of the GFC. Post-GFC, the company outperformed. From 2009 to today, Booking Holdings’ revenue and net profit soared almost eight-fold and nine-fold, respectively. The shares I bought are up by more than 900 per cent, closely mirroring the profit increase and demonstrating that stock returns followed growth over 13 years.

Likewise, Apple’s iPhone was criticised for being too expensive back in 2007. Yet its sales from 2007 to 2009 (the GFC period) show that the smartphone is far from a discretionary purchase. In fact, the iPhone drove Apple’s revenue and earnings per share up 52 per cent and 60 per cent, respectively, during this tumultuous period. Today, revenue is more than 10 times the 2009 level, and EPS is over 26 times higher. The shares I have owned since 2010 are up 21 times, another marker that returns follow actual growth…

… A key reason why I chose this quartet of stocks in 2010 is their strong performance during the difficult GFC period. Today, you have similar conditions. Last year, business growth stalled due to issues ranging from unfavourable exchange rates to supply chain disruptions and rising interest rates. But behind these troubles, you are being gifted real-world data on a select group of businesses that thrived despite the circumstances…

…Said another way, you do not have to guess which companies will do well in bad times; you can sieve through the available data and see for yourself. At the end of this process, you should have a list of potential stocks to buy. This list, I submit, should comprise a superior set of companies with which to start your research. Instead of looking for a needle in a haystack, you will be able to dramatically narrow down your search right off the bat. As far as gifts from the stock market go, that is hard to beat.

3. An Interview with Marc Andreessen about AI and How You Change the World – Ben Thompson and Marc Andreessen

I did want to ask one quick question about that article, Software is Eating the World. The focus of that seemed to be that we’re not in a bubble, which obviously in 2011 turned out to be very true. I wrote an article in 2015 saying we’re not in a bubble. That also turned out to be very true. By 2021, 2022, okay, maybe, but you missed a lot of upside in the meantime, to say the least!

However, there’s one bit in that article where you talk about Borders giving Amazon its e-commerce business, and then you talk about how Amazon is actually a software company. That was certainly true at the time, but I think you can make the case — and I have — that Amazon.com in particular is increasingly a logistics company that is very much rooted in the real world, with a moat that costs billions of dollars to build. It’s a real-world moat you can’t really compete with: they can drive anyone out of business in the long run by dropping prices and covering their marginal costs. Now, that doesn’t defeat your point; all of that is enabled by software, and their dominant position came from software. But do you think there is a bit where a physical moat still means more, or is Amazon just an exception to every rule?

MA: You can flip that on its head, and you can basically observe that the legacy car companies make the same argument you’re making as to why they’ll inevitably crush Tesla. Car company CEOs have made this argument to me directly for many years, which is, “Oh, you Californians, it’s nice and cute that you’re doing all this stuff with software, but you don’t understand: the car industry is about the real world. It’s about atoms and it’s about steel and it’s about glass and rubber and it’s about cars that have to last for 200,000 miles and have to function in the snow.” They usually point out, “You guys test your electric self-driving cars in the California weather; wait till you have a car on the road in Detroit. It’s just a matter of time before you software people come to the realization that you’re describing for Amazon, which is that this is a real-world business and the software is nice, but it’s just a part of it, and this real-world stuff is what really matters.”

There’s some truth to that. Look, the global auto industry in totality still sells a lot more cars than Tesla. Absolutely everything you’re saying about Amazon logistics is correct, but I would still maintain that over the long run the opposite is true, and I would describe it as follows. Amazon, notwithstanding all of their logistics expertise and throw weight, they’re still the best software company. Apple, notwithstanding all of their manufacturing prowess and industrial design and all the rest of it, they’re still the best or one of the two best mobile software companies. Then of course Tesla: we’re sitting here today, and Tesla I think today is still worth more than the rest of the global auto industry combined in terms of market cap, and I think the broad public investor base is looking forward and saying, “Okay, the best software company is in fact going to win.” Then of course you drive the different cars and you’re like, “Okay, obviously the Tesla is just a fundamentally different experience as a consequence of quite literally being a self-driving car run by software.”

I would still hold to the strong form of what I said in that essay, which is that in the long run, the best software companies win. And then it’s just really hard. Part of the problem is, it’s really hard to compete against great software with mediocre software, because there comes a time when it really matters and the fundamental form and shape of the thing that you’re dealing with fundamentally changes. You know this: are you going to use the video recorder app on your smartphone, which is software, or are you going to use an old-fashioned camcorder that in theory comes with a 600-page instruction manual and has 50 buttons on it? At some point the software wins, and I would still maintain that that is what will happen in many markets…

What is the case for AI as you see it?

MA: Well, this is part of why I know there’s hysterical panic going on: the people who are freaking out about AI never even bothered to stop and try to make the positive case, and just immediately assumed that everything is going to be negative.

The positive case on AI is very straightforward. Number one, AI is just a technical development. It has the potential to grow the economy and do all the things that technology does to improve the world. But very specifically, the thing about AI is that it is intelligence. And the thing about intelligence, and we know this from the history of humanity, is that intelligence is a lever on the rest of the world, a very fundamental way to make a lot of things better at the same time.

We know that because in human affairs, human intelligence, we know, across thousands of studies for a hundred years, increases in human intelligence make basically all life outcomes better for people. So people who are smarter are able to better function in life, they’re able to have higher educational attainment, they’re able to have better career success, they have better physical health. By the way, they’re also more able to deal with conflict, they’re less prone to violence, they’re actually less bigoted, they also have more successful children, those children go on to become more successful, those children are healthier. So intelligence is basically this universal mechanism to be able to deal with the complex world, to be able to assimilate information, and then be able to solve problems.

Up until now, our ability as human beings to engage in the world and apply intelligence to solve problems has been, of course, limited to the faculties that we have, with these kind of partial augmentations in the form of calculating machines. But fundamentally, we’ve been trying to work through issues with our own inherent intelligence. AI brings with it the very big opportunity, which I think is already starting to play out, to basically say, “Okay, now we can have human intelligence compounded, augmented with machine intelligence”. Then we can do a forklift upgrade and effectively make everybody smarter.

If I’m right about that and that’s how this is going to play out, then this is the most important technological advance with the most positive benefits, basically, of anything we’ve done probably since, I don’t know, something like fire, this could be the really big one…

But if it’s so smart and so capable, then why isn’t it different this time? Why should it be dismissed as another sort of hysterical reaction to say that there’s this entity coming along? I mean, back in the day, maybe the chimps had an argument about, “Look, it’s okay if these humans evolve and they’re smarter than us”. Now they’re stuck in zoos or whatever it might be. I mean, why would not a similar case be made for AI?

MA: Well, because it’s not another animal, and it’s not another form of human being, it’s a machine. This is what’s remarkable about it, it’s machine intelligence, it’s a combination of the two. The significance of that, basically, is like your chimp analogy, or basically human beings reacting to other human beings, or over time in the past when two different groups of humans would interact and then declare war on each other, what you were dealing with was you were dealing with evolved living species in each case.

That evolved part there is really important because what is the mechanism by which evolution happens, right? It’s conflict. So survival of the fittest, natural selection: the whole point of evolution is to kind of bake off different organisms against each other, originally one-cell organisms, then two-cell organisms, then ultimately animals, and then ultimately people. The way that evolution happens is basically a big fight and then, at least in theory, the stronger of the organisms survives.

At a very deep genetic level, all of us are wired for combat. We’re wired for conflict, we’re wired for a high level of, let’s say, if not a high level of physical violence, then at least a high level of verbal violence and social and cultural conflict. I mean, machine intelligence is not evolved. The term you might apply is intelligent design, right?

(laughing) Took me a second on that one.

MA: You remember that from your childhood? As do I. Machine intelligence is built and it’s built by human beings, it’s built to be a tool, it’s built the way that we build tools, it’s built in the form of code, it’s built in the form of math, it’s built in the form of software that runs on chips. In that respect, it’s a software application like any other. So it doesn’t have the four billion years of conflict driven evolution behind it, it has what we design into it.

That’s where I part ways from, again, the doomers, where from my perspective, the doomers kind of impute that it’s going to behave as if it had come up through four billion years of violent evolution when it hasn’t, like we have built it. Now, it can be used to do bad things and we can talk about that. But it, itself, does not have inherent in it the drive for kind of conquest and domination that living beings do.

What about the accidental bad things, the so-called paperclip problem?

MA: Yeah, so the paperclip problem is a very interesting one because it contains what I think is a logical fallacy right at the core of this whole argument — the term that the doomers use for it is orthogonality.

So for the paperclip argument to work, you have to believe two things at the same time. You have to believe that you have a superintelligent AI that is so intelligent, creative, flexible, and devious, such a genius-level conceptual thinker, that it’s able to basically evade all controls that you would ever want to put on it. It’s able to circumvent all security measures, it’s able to build itself its own energy sources, it’s able to manufacture its own chips, it’s able to hide itself from attack, it’s able to manipulate human beings into doing what it wants to do; it has all of these superpowers. Whenever you challenge the doomers on the paperclip thing, they always come up with a reason why the superintelligent AI is going to be so smart that it’s going to be able to circumvent any limitations you put on it.

But you also have to believe that it’s so stupid that all it wants to do is make paperclips, right? There’s just a massive gap there, because if it’s smart enough to turn the entire world, including atoms and the human body into paperclips, then it’s not going to be so stupid as to decide that’s the only thing that matters in all of existence. So this is what they call the orthogonality argument, because the sleight of hand they try to do is they try to say, well, it’s going to be super genius in these certain ways, but it’s going to be just totally dumb in this other way. That those are orthogonal concepts somehow.

Is it fair to say that yours is an orthogonal argument though? Where it’s going to be super intelligent, even more intelligent than humans in one way, but it’s not going to have any will or drive because it hasn’t evolved to have it. Could this be an orthogonality face-off in some regards?

MA: Well, I would just say I think their orthogonality theory is a little bit like the theory of false consciousness and Marxism. It’s just like you have to believe that this thing is not going to be operating according to any of the ways that you would expect normal people or things to behave.

Let me give you another thing. So a sort of thing they’ll say, again, that’s part of orthogonality, is they’ll say, “Well, it won’t be doing moral reasoning, it’ll be executing its plan for world conquest, but it will be incapable of doing moral reasoning because it’ll just have the simple-minded goal”. Well, you can actually disprove that today by going to any LLM of any level of sophistication and doing moral reasoning with it. Sitting here, right now, today, you can have moral arguments with GPT, and with Bard, and with Bing, and with every other LLM out there. Actually, they are really good at moral reasoning, they are very good at arguing through different moral scenarios, they’re very good at actually having this exact discussion that we’re having…

...Again, just cards on the table, I mostly agree with you, so I’m putting up a little bit of a defense here, but I recognize it’s probably not the best one in the world. But I see there being a few candidates for being skeptical of the AI doomers.

First, you’ve kind of really jumped on the fact that you think the existential risk doesn’t exist. Is that the primary driver of your skepticism and some would say dismissal of this case? Or is it also things like another possibility would be AI is inevitable, it’s going to happen regardless, so let’s just go forward? Or is there sort of a third one, which is that any reasonable approach, even if there were risks — look at COVID, it’s not doable. We can’t actually manage to find a middle path that is reasonable and adjust accordingly, it’s either one way or the other. Given that and your general skepticism, that’s the way it has to go.

Are all three of those working in your argument here, or is it really just you don’t buy it at all?

MA: So I think the underlying thing is actually a little bit more subtle, which is I’m an engineer. So for better or for worse, I was trained as an engineer. Then I was also trained in science in the way that engineers are trained in science, so I never worked as a scientist, but I was trained in the scientific method as engineers are. I take engineering very seriously, and I take science very seriously, and I take the scientific method very seriously. So when it comes time to engage in questions about what is a technology going to do, I start by going straight to the engineering, which is like, “Okay, what is it that we’re dealing with here”?

The thing is, what we’re dealing with here is something that you’re completely capable of understanding, because what it is, is math and code. You can buy many textbooks that will explain the math and code to you; they’re all being updated right now to incorporate the transformer algorithm, and there are books already out on the market. You can download many how-to guides on how to do this stuff. It’s lots of matrix multiplication, there’s lots of linear algebra involved, there are various algorithms. These are machines, and you can understand them as machines.

What I would think of is there’s these flights of fancy that people then launch off of where they make extrapolations, in some cases, literally billions of years into the future. I read this book Superintelligence, which is the one that is kind of the catechism urtext for the AI doomers. [Nick Bostrom] goes from these very general descriptions of possible forms of future intelligence to these extrapolations of literally what’s going to happen billions of years in the future. These seem like fine thought experiments, this seems like a fine way to write science fiction, but I don’t see anything in it resembling engineering.

Then also the other thing really striking is there’s an absence of science. So what do we know about science? We know that science involves at its core the proposing of a hypothesis and then a way to test the hypothesis such that you can falsify it if it’s not true. You’ll notice that in all these books and all these materials, as far as I’ve been able to find, there are no testable hypotheses, there are no falsifiable hypotheses, there are not even metrics to be able to evaluate how you’re doing against your hypothesis. You just have basically these incredible extrapolations.

So I read this stuff and I’m like, “Okay, fine, this isn’t engineering”. They seem very uninterested in the details of how any of this stuff works. This isn’t science: there are no hypotheses, so it reads to me as pure speculation. Speculation is fun, but we should not make decisions in the real world based purely on speculation.

What’s the testable hypothesis that supports your position? What would you put forward that, if something were shown to be true, then that would change your view of the matter?

MA: Yeah, I mean, we have these systems today. Are they seizing control of their computers and declaring themselves emperor of earth?

I mean, I did have quite the encounter with Sydney.

MA: (laughing) How’s it going? Yeah, well, there you go. Right? Well, so look, there is a meme I really like on this, and I’ll make the sin of trying to explain a meme, but it’s the eldritch horror from outer space.

I put a version of that in my article about Sydney.

MA: The kicker is that the evil shoggoth, the AI doomsaying thing, is mystified why the human being isn’t afraid of it. Then the human being’s response is, “Write this email”.

So again, this is the thing — what do we do? What do we do when we’re engineers and scientists? We build the thing, and we test the thing, and we figure out ways to test the thing, we figure out do we like how the thing is working or not? We figure out along the way what are the risks, then we figure out the containment methods for the risk.

This is what we’ve done with every technology in human history. The cumulative effect of this is the world we live in today, which is materially an incredibly advanced world as compared to the world that our ancestors lived in.

Had we applied the precautionary principle or any of the current trendy epistemic methods to evaluating the introduction of prior technologies ranging from fire and the wheel all the way to gunpowder and microchips, we would not be living in the world we’re living in today. We’d be living in a much worse world, and child mortality would be through the roof and we’d all be working these god awful physical labor jobs and we’d be like, “Wow, is this the best we can do?” I think our species has actually an excellent track record at dealing with these things, and I think we should do what we do, we should build these things and then we should figure out the pros and cons…

Was crypto a mistake, and I mean both in terms of the technology, but also in terms of how closely a16z became tied to it reputationally? Is there a bit where you wish you had some of those reputation points right now for your AI arguments, where maybe that’s more important to human flourishing in the long run?

MA: Yeah, I don’t think that, so that idea that there’s some trade off there, I don’t think it works that way. This is a little bit like the topic of political capital in the political system, and there’s always this question if you talk to politicians, there’s always this question of political capital, which is do you gain political capital by basically conceding on things, or do you gain political capital by actually exercising political power? Right? Are you better off basically conserving political power or actually just putting the throttle forward and being as forceful as you can?

I mean, look, I believe whatever political power we have, whatever influence we have is because we’re a hundred percent on the side of innovation. We’re a hundred percent on the side of startups, we’re a hundred percent on the side of entrepreneurs who are building new things. We take a very broad brush approach to that. We back entrepreneurs in many categories of technology, and we’re just a hundred percent on their side.

Then really critically, we’re a hundred percent on their side despite the waxing and waning of the moon. My experience with all of these technologies, including the Internet and computers and social media and AI and biotech and every other thing we can talk about, is that they all go through these waves. They all go through periods in which everybody is super excited and extrapolates everything to the moon, and they all go through periods where everybody’s super depressed and wants to write everything off. AI itself went through decades of recurring booms and winters. I remember AI went through a big boom in the 1980s, then crashed super hard in the late eighties, and was almost completely discredited by the time I got to college in ’89. There had been a really big surge of enthusiasm before that.

My view is like, “We’re just going to put ourselves firmly on the side of the new ideas, firmly on the side of the innovations. We’re going to stick with them through the cycles”. If there’s a crypto winter, if there’s an AI winter, if there’s a biotech winter, whatever, it doesn’t really matter. By the way, it also maps to the fundamentals of how we think about what we do, which is that we are trying to back the entrepreneurs with the biggest ideas, building the biggest things. And to the extent that we succeed in doing that, building big things takes a long time.

4. The private credit ‘golden moment’ – Robin Wigglesworth

By ‘private credit’ or ‘private debt’, we’re mostly (but not only) talking about direct loans between an investment fund and a corporate borrower, usually a small or mid-sized company.

These sometimes struggle to get traditional banks interested in their custom — for big banks it’s more attractive to lend to big blue-chip companies to which you can also sell M&A advice, derivatives, pension plan management, etc — but remain too small to tap the bond market, where you realistically need to raise at least $200mn in one gulp, and ideally over $500mn.

Private credit funds therefore often depict themselves as helping bread-and-butter ma-and-pa small businesses that mean ol’ banks are shunning. In reality, most of the lending is done to private equity-owned businesses, or as part of a distressed debt play. So it can arguably be better seen as a rival (or complement) to the leveraged loan and junk bond markets…

…As you can see from the fundraising bonanza, private credit has morphed from a cottage business mostly focused on distressed debt into a massive business over the past decade. And after starting out overwhelmingly American, it is beginning to grow a little in Europe and Asia as well.

Morgan Stanley estimates the overall assets under management at about $1.5tn (of which about $500bn was money raised but not yet lent, aka ‘dry powder’ as the industry loves to call it).

That makes it bigger than both the US high-yield and leveraged loan markets for the first time, says Cyprys…

…Why has it been growing? Well, for investors it is the promise of both smoother and stronger returns, in an era where even the high-yield bond market for a long time made a mockery of its moniker. Remember when some European junk-rated companies could borrow at negative rates? Happy days.

Direct loans are also more attractive when interest rates are rising, because they are floating rate, as opposed to the fixed rates that public market bonds pay. At the same time, since these are private, (mostly) untraded assets, their value doesn’t move around as much as that of leveraged loans or traditional bonds…

…In many respects the growth of private credit is a healthy development. It is arguably far better that an investment fund with long-term locked-up capital takes on the associated credit risk than a traditional deposit-taking commercial bank.

But as we wrote earlier this year, there are a lot of reasons to be wary of the current private credit boom. Things have basically gone a bit nuts as money has gushed in.

Using data on business development companies — publicly listed direct lenders, often managed by one of the private capital industry’s giants — Goldman has put some meat on one of our skeleton arguments: floating rate debt is great for investors, but only up to a point.

At some point the rising cost of the debt will crush the company, and we may be approaching that point.
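To make the mechanics concrete, here is a toy illustration (all figures invented) of how a floating-rate borrower’s interest coverage erodes as the base rate climbs:

```python
# Invented example: interest coverage on a floating-rate loan as rates rise.
debt = 100.0    # $mn of floating-rate debt
ebit = 12.0     # $mn of operating earnings
spread = 0.05   # lender's margin over the base rate

for base_rate in (0.00, 0.02, 0.04, 0.05):
    interest = debt * (base_rate + spread)
    coverage = ebit / interest
    print(f"base {base_rate:.0%}: interest ${interest:.1f}mn, coverage {coverage:.1f}x")
# Coverage falls from 2.4x at a zero base rate to 1.2x at a 5% base rate.
```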

UBS predicts that the default rate of private credit borrowers will spike to a peak of 9-10 per cent early next year as a result, before falling back to about 5-6 per cent as the Federal Reserve is forced into cutting rates.

Default rates like that might seem manageable. It’s hardly Creditpocalypse Now. But the problem is that, as Jeff Diehl and Bill Sacher of Adams Street — a US private capital firm — wrote in a recent report, loss avoidance is the name of the game in private credit:

Benign economic and credit conditions over the last decade have allowed many managers to avoid losses, leading to a narrow return dispersion… The benign climate has changed with higher rates, wider credit spreads and slowing revenue growth, all of which is likely to put pressure on many managers’ portfolios…

…And to be fair, as our colleague Mark Vandevelde wrote in a fab recent column, the broader danger isn’t really that there’s been silly lending going on. These are investors and asset managers that (mostly) know what they’re doing, in an area people know is risky. People will lose money, the world will keep turning etc.

The issue, as Mark writes, is that private credit firms are now big and extensive enough to plausibly become shock conduits between investors, borrowers, and the broader economy:

In short, the biggest risks inherent in the rise of private credit are the ones that critics most easily miss. They arise, not from the misbehaviour of anyone on Wall Street, but from replacing parts of an imperfect banking system with a novel mechanism whose inner workings we are only just discovering.

This may seem like vague hand-waving by journalists, but the reality is that the complex interlinkage of private credit, private equity and broader debt markets is opaque. As the Federal Reserve noted in its latest financial stability report:

Overall, the financial stability vulnerabilities posed by private credit funds appear limited. Most private credit funds use little leverage and have low redemption risks, making it unlikely that these funds would amplify market stress through asset sales. However, a deterioration in credit quality and investor risk appetite could limit the capacity of private credit funds to provide new financing to firms that rely on private credit. Moreover, despite new insights from Form PF, visibility into the private credit space remains limited. Comprehensive data are lacking on the forms and terms of the financing extended by private credit funds or on the characteristics of their borrowers and the default risk in private credit portfolios.

5. Debt: The First 5,000 Years – Johan Lunau

Economists claim that we started off with barter, moved to coinage, and only then discovered the infinite wonders of credit. Each iteration in this supposedly linear evolution is presented as a logical solution to a common problem.

  1. Whilst barter, the original system, did allow for the exchange of goods and services, it required a double coincidence of wants: I need to have something you want, and you need to have something I want. If there’s no match, there’s no exchange.
  2. It therefore made sense to store things that everybody wanted, making transactions much more flexible and frequent (commodities like dried cod, salt, sugar, etc.). But certain issues remained… what if the goods were perishable? And how could transactions far from home be made practical?
  3. Enter precious metals, which are durable, portable, and divisible into smaller units. As soon as central authorities began to stamp these metals, their differing characteristics (weight, purity) were standardised away, and they became the official currencies of specific national economies or trade regions.
  4. Banks and credit followed thereafter, as the final step.

However, Graeber’s main argument is that the above timeline, intuitive as it is, is wrong. Specifically, he posits that we actually started off with credit, then transitioned to coinage, and that people resort to barter only when an economy or central authority collapses (as with the fall of the Soviet Union). Moreover, he writes that this progression was chaotic rather than linear; there were constant rise-and-fall cycles of credit and coinage. It’s obvious that this account is much, much harder to teach at universities, lacking the elegant simplicity of the version that is commonly presented in textbooks.

In fact, to the frustration of economists, it appears there is no historical evidence for a barter system ever having existed at all, except among obscure peoples like the Nambikwara of Brazil and the Gunwinggu of Western Arnhem Land in Australia. And even then, it takes place between strangers of different tribes in what to us are bizarre ceremonies.

However, there is evidence for widespread debt transactions as far back as 3,500 BC in Mesopotamia, in what is now Iraq. Merchants would use credit to trade, and people would run up tabs at their local alehouses. We know this because Sumerians would often record financial dealings in cuneiform (decipherment of the script kicked off in the 1800s) on clay tablets called bullae, which were dug up by archaeologists.

And whilst Sumeria did have a currency (the silver shekel), it was almost never used in transactions. Instead, it was a simple unit of account for bureaucrats. 1 shekel was divided into 60 minas, each of which was equal to 1 bushel of barley, on the principle that temple labourers worked 30 days a month and received 2 rations of barley each day (30 days × 2 rations = 60 rations a month, one mina each). Though debts were often recorded in shekels, they could be paid off in any other form, such as barley, livestock, and furniture. Since Sumeria is the earliest society about which we know anything, this discovery alone should have resulted in a revision of the history of money. It obviously didn’t…

…As stated, Graeber wrote that history is marked by flip-flop cycles of credit and coinage. But the question is, why? Likely because of cycles of war and peace.

“While credit systems tend to dominate in periods of relative social peace, or across networks of trust (…), in periods characterised by widespread war and plunder, they tend to be replaced by precious metal”.

The reason for this is twofold. Unlike credit, gold and silver can be stolen through plunder, and in transactions they demand no trust, except in the characteristics of the precious metal itself. And soldiers, who are often constantly travelling with a fair probability of death, are the definition of an extremely bad credit risk. Who would lend to them? Armies therefore typically created entire marketplaces around themselves.

“For much of human history, then, an ingot of gold or silver, stamped or not, has served the same role as the contemporary drug dealer’s suitcase of unmarked bills: an object without a history, valuable because one knows it will be accepted in exchange for other goods just about anywhere, no questions asked.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Intuitive Surgical, Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 July 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 09 July 2023:

1. Intellectual Laziness – Philo

The collapse of General Electric stands apart. GE was the bluest of the blue-chips: descended from Thomas Edison and J.P. Morgan, it was one of the original twelve components of the Dow in 1896, and grew to become one of the leading technology giants of the early 20th century. After WWII, GE evolved into an industrial behemoth with dominant positions in a dizzying array of electricity-adjacent markets, from jet engines and turbines to light bulbs and home appliances.

In the 1980s, GE ascended to new heights. Jack Welch took the reins as CEO in 1981, and he established GE as a major player in media and financial services while expanding GE’s leadership positions in its industrial markets. For most of the 1990s and 2000s, GE was the most valuable company in America, with a valuation topping out at over $1 trillion (as measured in current dollars). While GE had its skeptics and critics at the time, it was widely seen as a corporate paragon, regularly named by Fortune as the most admired company in the world. Welch was regarded as a management guru, and his underlings were routinely poached to become CEOs at other Fortune 500 companies.

And then, a few years ago, it all unraveled in spectacular fashion. Much of the supposed success from the Welch era of the 1980s and 1990s proved to be illusory, the product of temporary tailwinds and aggressive accounting. GE’s fortunes worsened under the reign of Welch’s handpicked successor, Jeff Immelt, who took over in 2001. Immelt struggled to cope with the problems he inherited, which were compounded by the 2008 financial crisis and major management missteps of his own. In 2017, when the extent of GE’s problems became clear, GE’s stock nose-dived, and Immelt was pushed out…

…Jack Welch had most of the traits we typically associate with a great executive. He was incredibly smart (earning his PhD in chemical engineering in only three years), he was demanding of his subordinates, and he worked tirelessly. He had deep operating experience, he was willing to buck convention, and he produced quantifiable results. He was charismatic, ambitious, and a world-class marketer and publicist. And yet, he will forever be remembered as the father of the biggest corporate disaster in American history…

…The story of the fall of GE is worthy of an authoritative book, and we looked at a pair of early entries a couple of years ago – Lights Out, written by the WSJ journalists who covered its fall, and Hot Seat, Jeff Immelt’s memoir.

Power Failure, weighing in at nearly 800 pages, is the most ambitious yet. The author, William Cohan, did an early-career stint as a junior analyst at GE Capital in the 1980s, before becoming an investment banker and then a business writer, putting him in a unique position to tell the GE story.

What sets Cohan’s effort apart is that he got almost everybody to talk to him for his book. He managed to interview both Jack Welch (before he passed away in 2020) and Jeff Immelt, and many former and current senior GE executives as well. Dozens of GE critics, counterparties, and journalists also weigh in throughout…

…Power Failure also doesn’t really offer an overarching theory of why GE failed. Power Failure lists many different things that went wrong at GE — bad management, bad acquisitions, bad incentives, bad accounting, bad luck — but almost all companies suffer from some of these issues without running into a GE-scale disaster. Maybe the failure of GE was the result of an unlucky confluence of individual problems, but it feels like for a group of smart, hard-working people to produce such an exceptionally catastrophic result, there must be a larger lesson to be drawn.

One possible clue comes from the story of David Cote, a star GE finance executive who rose to become the head of the Appliances division in the 1990s, and was one of five early candidates to succeed Jack Welch as the CEO of GE. However, he was eliminated before the three finalists were chosen, and he was asked to leave GE. It is suggested that Cote was doomed by the divisional assignment he drew; the finalists were the ones who had been assigned to oversee GE’s crown jewels, while he was stuck trying to fix a basket case.

Cote eventually landed a position in 2002 as the CEO of Honeywell, a much smaller industrial conglomerate – Cohan at one point refers to it as a “mini-GE”. Honeywell had been run since 1991 by Larry Bossidy, who before then had spent his career as a top executive at GE, a close associate of Jack Welch…

…Cote had an incredibly successful run at Honeywell, leading it until his retirement in 2017. While GE foundered, Honeywell soared. A $1,000 investment in Honeywell in 2003 would be worth over $9,000 today, while the same investment in GE would now be worth only $450. Remarkably, Honeywell managed to surpass GE in overall value as well: Honeywell’s current market capitalization is $140 billion, while GE is now worth less than $90 billion. GE is slated to be broken up, but as it stands today, is nothing more than a mini-Honeywell.

This would seem to be the perfect natural experiment. A GE cast-off takes over a small company run by Jack Welch’s former right-hand man, and turns it around and surpasses GE. What did Cote do so differently from Welch, Immelt, and Bossidy, to get such a spectacular result?…

…What is Cote’s diagnosis of the root problems at Honeywell? Cote opens the book by telling the story of an internal meeting at the beginning of his tenure, a business review of Honeywell’s Aerospace division. The head of Aerospace was steeped in the old culture, and had even been a candidate for the CEO job that Cote won. The meeting does not start well:

We sat down in a conference room so that team members could present their strategic plan to me. A copy of the plan had been placed on the table facing each seat. Flipping through mine, I saw that it was thick–maybe 150 pages long, full of charts and tables. Uh oh, I thought, not good. I had found so far at Honeywell that executives and managers often made presentations far longer than necessary, overwhelming audience members with facts, figures, and commentary to preempt sharp, critical questioning.

Nevertheless, Cote interrupts them with sharp, critical questions. The Aerospace team responds with annoyance — they had planned to put on a show and receive a pat on the back — but Cote interrogates them about the root cause of the $800 million in cost overruns on their biggest project. The team eventually relents and agrees to probe the root causes of their biggest issues, and they turn the ship around. Cote concludes (emphasis mine):

What I learned, to my chagrin, was that Aerospace had become adept at lying to itself, shoehorning costs here and there into a budget without acknowledging them openly. This put enormous strain on the organization, which then had to patch together aggressive bookkeeping and special deals with customers and others, to make its goals. A dysfunctional approach if I’d ever seen one.

Cote says that this approach was pervasive at Honeywell:

Lacking any drive to think deeply about their businesses, and unchallenged by leadership to do so, teams held meetings that were essentially useless, their presentations clogged up with feel-good jargon, meaningless numbers, and analytic frameworks whose chief purpose was to hide faulty logic and make the business look good. When you did a bit of digging, you found that most executives didn’t understand their businesses very well, or even at all.

Cote defines this as intellectual laziness. It is the tendency of organizations to “juke the stats” and lie to themselves instead of diagnosing and solving root problems. This kind of anecdote is everywhere in Power Failure; recall Steve Burke’s appraisal that GE “never had the intellectual curiosity or the drive” to understand and manage NBCU…

…GE Capital was central to GE’s ability to manipulate reported earnings. Accounting rules allow a company to book a profit whenever they sell an asset for more than they paid for it. In the course of their normal business, GE Capital owned hundreds of billions of dollars of assets, like bonds and office buildings and parking lots (which they funded with short-term and long-term borrowings). Over time, real assets tend to appreciate, at least in nominal terms. Whenever GE was having a bad quarter, they would sell some of these appreciated assets–say, an office building that was bought decades ago for $10 million that was now worth $20 million–and report the $10 million accounting profit as a part of regular earnings, to compensate for the earnings shortfall from the core business. As former GE Capital CEO Gary Wendt put it in Power Failure:

I always had a lot of [asset sales] available for the quarter. I had to because I knew [Jack Welch] was going to call up and say, “I need another $1 million or another $2 million or whatever,” and so I’d go over to [GE Capital CFO James] Parke and I’d say, “Okay, let’s do this one and this one.” Making your earnings was just life to us.

This kind of one-time accounting gain from asset sales is fundamentally different in nature from operating profits from selling jet engines and power turbines. The $20 million office building was already worth $20 million before GE sold it, despite being on the books for $10 million; selling it converts it to cash but does not make shareholders any wealthier (in fact, by triggering a tax bill, it can make them worse off), despite the accounting profit that gets booked. Bundling these kinds of accounting gains with normal operating results only serves to obscure the true performance of the business from investors.
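To make that point concrete, here is a quick sketch of our own (it is not from Power Failure, and the 30% tax rate is purely an assumption for illustration):

```python
# Illustration: selling an appreciated asset books an accounting "profit"
# without creating shareholder value (tax can even destroy some).
# Dollar figures are from the excerpt above; the 30% tax rate is assumed.

book_value = 10_000_000     # building carried on the books at cost
market_value = 20_000_000   # what the building is actually worth
tax_rate = 0.30             # assumed tax rate on the realized gain

accounting_profit = market_value - book_value   # $10m hits reported earnings
tax_bill = accounting_profit * tax_rate         # $3m owed on the realized gain

wealth_before_sale = market_value               # shareholders own a $20m building
wealth_after_sale = market_value - tax_bill     # now they own $17m of cash

print(f"Reported earnings boost: ${accounting_profit:,.0f}")
print(f"Change in shareholder wealth: ${wealth_after_sale - wealth_before_sale:,.0f}")
# Earnings rise by $10m, but shareholder wealth actually falls by the $3m tax.
```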

Regardless, most of the senior GE executives who talked to Cohan continued to stand behind the practice of earnings smoothing:

Over lunch at a Connecticut pub, Denis Nayden, who was one of Wendt’s successors at GE Capital, also defended the practice of harvesting GE Capital assets as needed. “What’s the job of a public company?” he asked rhetorically. “Produce earnings for shareholders.”

“The job of a public company is to produce earnings for shareholders” is a hell of a thing for the former chairman of GE Capital to be saying after the collapse of GE. If you ask GE’s investors, they would say the job of a public company is to make money for shareholders. GE was among the best at consistently “producing earnings” for shareholders; they did so for decades. They were just abysmal at making money. 

There is a plethora of ways to produce short-term earnings without making money, and GE somehow seemed to engage in all of them. You can sell appreciated assets to record an accounting profit. You can overpay for assets with high current earnings and poor long-term prospects. You can sell power equipment to Angola on credit, with little hope of ever getting paid in cash. You can book immediate paper profits from the long-tail insurance policies you sell today, and then find out two decades later that your assumptions were too optimistic and you have to come up with $15 billion of cash to plug the gap. There are no magic metrics, and GAAP earnings are as subject to Goodhart’s Law as any other measure.

According to Power Failure, almost every time GE made a major decision that destroyed shareholder value, the obsession with manipulating earnings was front and center in the thought process. GE lost a lot of money in insurance, but why was a manufacturing company in the insurance business in the first place? Well, insurance companies offer a lot of accounting leeway, in terms of the way reserves are taken and assets are sold for profit, and could act as “shock absorbers” that let Jack Welch report smooth earnings when other divisions stumbled.

Why did GE Capital recklessly allow itself to become dependent on funding from short-term commercial paper, a practice that would almost bankrupt it in 2008? Well, short-term borrowing lowers interest expense, which boosts short-term earnings.

Why did GE buy a subprime mortgage broker in 2004? They had just spun off their insurance business, and Immelt felt they needed to replace the earnings that the insurance business had previously generated. 

Why did GE keep expanding GE Capital? Well, it was a good way to increase earnings. Why didn’t GE sell out of noncore businesses like real estate and media when valuations were sky-high in the mid-00s? GE didn’t want to lose the earnings those divisions produced. The catastrophic 2015 acquisition of Alstom? Immelt thought the synergies would definitely increase earnings. The mistimed $40 billion stock buyback in 2015? Jeff Immelt decided on a $2 per share earnings target, and wanted to do anything he could to hit that goal.  Never in Power Failure does it seem like GE management gave any thought to shareholder value when making major decisions: it was always earnings, earnings, earnings.

Even putting aside the obsession with reported earnings, GE’s culture seems to have been generally lacking in intellectual rigor. GE’s strategies were supported by narratives that sounded compelling at a superficial level, but fell apart under any kind of scrutiny.

A classic example: Jack Welch liked to tell everyone that his brilliant insight about expanding into finance was that it had higher revenue per employee than industrial manufacturing, thus it must be a better business to be in. Of course, that is nonsense: there is no reason to expect there to be any relationship between revenue per employee and return on invested capital.

Welch told this story even after GE learned this lesson the hard way in the 1980s, overpaying to acquire Kidder Peabody, a venerable investment banking firm (investment banking being perhaps the highest revenue per employee business that exists), a deal that was an endless source of trouble, and ultimately led to a $2 billion loss when GE finally got rid of it in 1995. (Cohan discovers when talking to a former director that Welch managed to prevent this massive loss from affecting reported earnings by raiding the reserves of the insurance business.)

Return on invested capital is mostly determined by factors like barriers to entry and sustainable competitive advantage, which GE’s industrial businesses had in spades but which GE Capital completely lacked — after all, money is a commodity. After the financial crisis, GE Capital’s return on invested capital collapsed not because revenue per employee declined, but because GE Capital’s lenders and regulators came to understand the true risk inherent in the business, and demanded higher rates, lower leverage, and closer oversight.

As GE placed no value on intellectual rigor, it is no surprise that they ended up promoting executives on the basis of polish and storytelling ability. So it was that when it came time to pick a new CEO, Welch elevated Jeff Immelt, a slick-talking salesman with little understanding of GE’s businesses and little patience for details, and dismissed David Cote, who would go on to have so much success at Honeywell. 

It is not clear that GE’s decision-making process was any worse under Immelt than it was under Welch. Immelt would be skewered by accusations that he encouraged “success theater”, a culture where executives never confronted root problems and pretended everything was going well, but the culture of extreme intellectual laziness certainly dated back to his predecessor. In fact, Welch’s best-selling autobiography was subtitled “Straight from the Gut”.

It would be technically accurate to state that the dramatic collapse of GE resulted from a perfect storm of mistakes — wrong CEO, bad investments, strategic missteps, operational snafus. But underlying all of those seemingly unrelated mistakes was one thing: this culture of intellectual laziness, the willingness to juke the stats and tell comforting stories rather than diagnose and solve root problems. GE failed to create shareholder value because they didn’t really try to create shareholder value; they were content to be able to report some shiny meaningless numbers and a pleasant narrative…

…At this point, we have to ask: how does one identify management teams that demand intellectual rigor, and avoid management teams that are intellectually lazy?

The answer is simple, but not easy. In each example we presented here, the intellectually lazy managers are initially exposed when they present their story to a knowledgeable audience. To be sure, they are able to assemble a narrative that sounds convincing to a layman, peppered with vanity metrics and impenetrable business-speak.

However, the narrative is usually all form and no substance, pure business theater. It leans heavily on rhetorical tricks: accounting chicanery employed to meet previously announced financial targets might be rationalized as “exceptional dedication to meeting our public commitments”. (The implication being that if you don’t fudge the numbers, maybe you’re just the type of person that doesn’t take their commitments seriously.)

Nonsense axioms are invented out of thin air – recall the continued insistence of former GE executives that companies must consistently announce growing earnings, in the face of the evidence that most successful companies did no such thing.

Then there is the midwit appeal to complexity: anyone who argues that the narrative is a convoluted, illogical mess is accused of being an ignorant simpleton who is incapable of grasping such sophistication and brilliance.

The intellectually lazy narrative always contains these sorts of logical gaps. When confronted about these inconsistencies, managers respond with plausible-sounding non sequiturs, answers that might be accepted by a novice but would never pass muster with anyone with real expertise.

In the case of GE, experienced analysts knew that an inherently cyclical business could not produce perfectly smooth metrics, and they also realized that GE Capital’s reliance on cheap short-term funding was not sustainable — points they raised at the time. At Honeywell, David Cote immediately identified the flaws in the stories that his underlings were telling, and called them out. 

2. Value of BRK Float, Buffett Market View etc. – The Brooklyn Investor

For example, it is true that BRK only owns $328 billion in stocks against $500 billion in equity. This looks bearish compared to, say, back in 1994/1995, as you can see. That looks like equity exposure of only 66% or so.

But as we all know, BRK has been buying a lot of operating businesses. For example, Burlington Northern now is a wholly owned subsidiary. Owning 100% of something is no less ‘equity exposure’ than owning just some of the stock. Right? So our equity exposure is much higher than 66% if you include all the other operating businesses. What is that number? Let’s say we include equity method investments (which is clearly equity) of $26 billion, and the book value of the Rails, Utilities and Energy business of $140 billion. That’s $166 billion. Add that to the $328 billion stock portfolio and you get $494 billion. And this doesn’t include some stuff in the “Insurance and other” (where I assume manufacturing, services and retail is), and we are already pretty much at 100% equity exposure. That, to me, is as good as “fully invested”.
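The arithmetic in that passage is easy to verify. A quick sketch of our own, using only the figures quoted above:

```python
# Back-of-the-envelope check of the equity-exposure arithmetic above.
# All figures (in $ billions) are the ones quoted in the post.

stocks = 328           # BRK's stock portfolio
equity_method = 26     # equity-method investments
rails_utilities = 140  # book value of Rails, Utilities and Energy
shareholders_equity = 500

equity_exposure = stocks + equity_method + rails_utilities  # 494
print(f"Equity-like assets: ${equity_exposure}B")
print(f"Exposure vs. book equity: {equity_exposure / shareholders_equity:.0%}")
# ~99% before even counting manufacturing, services and retail --
# hence the post's conclusion that BRK is effectively fully invested.
```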

How is that bearish? It’s not, actually. Bearish is if you take all those businesses / stocks and actually sell them down so your actual net equity exposure to all businesses is way below your shareholders’ equity. If you tell me that the above $494 billion is actually $250 billion, and the rest is cash, then I would agree BRK is waiting for the end of the world.

As it stands now? Not at all…

…This is the sort of thing that Buffett would hate because I am going to tell you what he is thinking, and I will do so without having any idea. So, having said that…

Rates are now back up to over 5% on the short end, and almost 4% on the long end (10 year). What does Buffett think of interest rates? Well, he won’t tell you. He will probably tell you he thinks long rates are too low and that they can’t stay low forever, but that’s all.

But let’s see what he is doing to see what he thinks of interest rates. With the long end approaching 4%, does Buffett think bonds are interesting?

Below, I went back through the recent 10-K’s (when you get old, even going back 25 years is recent, lol…) and jotted down the cash and fixed income investments at BRK. This way, we can actually see when he started to get allergic to long term bonds, and then we can see if he is getting interested again.

First of all, I can tell you that fixed income on BRK’s balance sheet has been steadily in the $20-odd billion range, despite net worth, cash, etc. increasing over the years. Spoiler alert: in the 2023 10-Q, this is still $23 billion, so he is not expressing any interest in bonds yet…

…So when did Buffett start to get away from long bonds? It is clear from the above table that he really started to dislike them in 2003. There is a clear pivot in that year, when cash rose a lot and fixed income investments went down. He seemed fine with bonds in 2001 and 2002, when they were around 5% or so…

…So it is clear that Buffett started to really dislike bonds when rates started to go below 5%. I was going to argue that 4% is the level, but you see rates above 4% for a few years after 2003 and Buffett didn’t bite; fixed income levels remained low, which seems to suggest that 5% is the level below which he won’t go. The slight rise in bond holdings during the financial crisis could be from the emergency financing he did for GE, BAC and others, but I didn’t check. I think those were factors other than the general level of interest rates, so we can ignore that rise in bond holdings during that period.

So, reasonably or unreasonably, I am going to assume that 5% is the point Buffett won’t go below for long term rates. 

3. The Full Story of Large Language Models and RLHF – Marco Ramponi

Language Models (LMs) are a class of probabilistic models explicitly tailored to identify and learn statistical patterns in natural language. The primary function of a language model is to calculate the probability that a word succeeds a given input sentence.

How are these models trained to do this? The core process is a general technique known as self-supervised learning, a learning paradigm that leverages the inherent structure of the data itself to generate labels for training.

In the context of natural language processing, self-supervised learning enables models to learn from unannotated text, rather than relying on manually labeled data, which is relatively scarce and often expensive.

During the training process, an LM is fed with a large corpus (dataset) of text and tasked with predicting the next word in a sentence. In practice, this is often achieved by randomly truncating the last part of an input sentence and training the model to fill in the missing word(s). As the model iterates through numerous examples, it learns to recognize and internalize various linguistic patterns, rules, and relationships between words and concepts. One can say that via this process the model creates an internal representation of language.
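As a toy illustration of this self-supervised setup, here is a sketch of our own in which a simple bigram counter stands in for the neural network; the point is only that the training labels come from the text itself:

```python
# Toy illustration of self-supervised next-word prediction: the "labels"
# are just the words already present in the corpus, no manual annotation.
# (A bigram counter stands in for the neural network here.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1   # the data supplies its own target

def next_word_probs(word):
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_probs("the"))   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))   # {'on': 1.0}
```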

The outcome of this training process is a pre-trained language model. By exposure to diverse linguistic patterns, the model is equipped with a foundation for understanding natural language and for generating contextually appropriate and coherent text. Some people refer to such pre-trained models as foundation models…

…How good can a language model become?

As it turns out, the effectiveness of LMs in performing various tasks is largely influenced by the size of their architectures. These architectures are based on artificial neural networks, which are computational models loosely inspired by the structure and functioning of biological neural networks, such as those in the human brain. Artificial neural networks consist of interconnected layers of nodes, or “neurons” which work together to process and learn from data.

Neurons in the network are associated with a set of numbers, commonly referred to as the neural network’s parameters. The numerical value of these parameters is supposed to represent the strength of connections between different neurons. The parameters within a neural network are adjustable, and they get iteratively updated during the training process to minimize the difference between the model’s predictions and the actual target values.

In the context of LMs in particular, larger networks with more parameters have been shown to achieve better performance. Intuitively, the more parameters, the greater their “storage capacity”, even though it should be noted that language models do not store information in a way comparable to the standard way storage memory works in computers (hard drives).

Essentially, a higher number of parameters allows the model to “internalize” a greater variety of statistical patterns (via the numerical relationships of its parameters) within the language data they are exposed to. Larger models, however, also require more computational resources and training data to reach their full potential.

A language model is more than just a neural net.

Modern language models comprise various components or blocks, often formed by different neural networks, each designed to perform specific tasks and featuring specialized architectures. Virtually all current LMs are based on a particularly successful choice of architecture: the so-called Transformer model, invented in 2017.

Starting from the field of Natural Language Processing (NLP), Transformers have been revolutionizing nearly all areas of applied AI, due to their efficiency at processing large chunks of data at once (parallelization) rather than sequentially, a feature that allowed for training on bigger datasets than previously existing architectures could handle. On text data, Transformers have proved exceptionally good at carrying out a form of natural language contextual understanding, which has made them the de facto standard choice for most NLP tasks nowadays. Two components are key to this success: the attention mechanism and word embeddings.

  • Word Embeddings are high-dimensional vector representations of words that capture their semantic and syntactic properties. These representations enable the model to numerically manipulate words in a mathematical space, a sort of semantic space, where physically nearby words share some form of relationship of meaning or other kinds of similarities. Instead of treating words as isolated entities, word embeddings allow the model to learn and understand the complex interplay of words within a given context.
  • Attention Mechanisms allow the model to weigh the importance of different words or phrases in the text. This helps the model to selectively focus on specific parts of the input, assigning different attention scores to the words based on their relevance to the task at hand. Attention can be thought of as a numerical operation that is supposed to mimic the “focusing ability” of a model to the local, specific context as it reads through or generates text…
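As a rough illustration of how these two components fit together, here is a minimal sketch of our own of scaled dot-product attention over toy word embeddings (the dimensions and weights are made up, and real Transformers add multiple heads, masking, and learned embeddings):

```python
# Minimal sketch of scaled dot-product attention (the core of the
# Transformer), operating on toy word embeddings. Shapes and values
# are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 "words", 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))    # word embeddings for the sequence

# Learned projections (random here) map embeddings to queries/keys/values.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d_model)        # relevance of each word to each other word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights

output = weights @ V   # each position becomes a relevance-weighted mix of values
print(weights.round(2))   # rows sum to 1: where each word is "looking"
```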

…Prevailing heuristics long held that increasing the size of a model was the most effective way to improve its performance, while scaling the training datasets was less important. However, more recent research has radically reshaped this perspective, revealing that many of the current LLMs are, in fact, significantly undertrained with respect to the amount of data seen during pre-training.

This fundamental shift has led to the formation of a new set of guiding heuristics, emphasizing the importance of training large models with more extensive datasets. In practice, in order to fully train the next massive LLM following these new principles one would need an immense amount of data, corresponding to a significant fraction, if not all of the text data available on the entire internet today.

The implications of this new perspective are profound. On the one hand, the total amount of training data actually available might turn out to be the true fundamental bottleneck for these AI systems…

…Scaling language models yields more than expected.

With scaling, the performance of LLMs has (predictably) shown consistent improvements across a number of quantitative metrics that are supposed to measure to what extent an LM is able to do what it was primarily designed for: calculate probability distributions over words. An example of such metrics is perplexity, a measure of the fluency of generated text.
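For the curious, perplexity has a simple definition: the exponential of the average negative log-probability the model assigns to the tokens that actually occur. A small sketch of our own, with made-up probabilities:

```python
# Perplexity: exp of the average negative log-probability the model
# assigns to the tokens that actually occurred. Lower = more "fluent"
# by the model's own lights. The probabilities below are made up.
import math

# Model's probability for each successive true token in some text:
token_probs = [0.2, 0.5, 0.05, 0.3, 0.1]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.1f}")   # ~5.8: like guessing among ~6 words
```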

We have seen, however, how the process of scaling language models requires training them on enormous quantities of data, often sourced from the extensive troves of text available online. LLMs thus get to be “fed” with substantial portions of the web, spanning a vast array of information. Being exposed to such a diverse range of linguistic patterns and structures during training, LLMs progressively learn to emulate and reproduce these patterns with high fidelity.

As a byproduct, this process has appeared to give rise to fascinating qualitative behaviors. Empirical studies have found that, as LLMs are scaled, they are able to suddenly “unlock” new capabilities that seem to emerge in a discontinuous manner, in contrast to the more predictable linear improvement of quantitative metrics.

These emergent abilities encompass a wide range of tasks, such as translation between different languages, the ability to write programming code, and many others. Remarkably, LLMs acquire these skills through the mere observation of recurring patterns in natural language during the training process, that is, without explicit task-specific supervision…

…The phenomenon of emergent abilities in LLMs, although quite recent and still not fully understood by researchers, is also not a completely obscure one.

Even though there is no prediction on exactly which new cognitive capabilities further-scaled LLMs may acquire in the future, the general pattern that allows this to happen is fairly clear. Let’s consider the example of Question-Answering.

Within this massive language dataset, the internet of text, there exist numerous instances of questions followed by answers. These question-answer pairs occur in diverse contexts, such as forums, articles, or educational resources, and cover a multitude of topics, from everyday trivia to specialized technical knowledge.

Ultimately, a statistically significant number of these answers is in fact correct, and this is reflected in the ability of an LLM to carry out a form of information retrieval from web knowledge, by giving reasonably correct answers to common sense questions on disparate topics when requested to do so.

Unfortunately, the internet is also filled with (a statistically significant amount of) false facts and wrong answers to common sense questions. Due to the sheer volume of this data, it is virtually impossible for the researchers to regulate the content LLMs are exposed to during training.

As a matter of fact, LLMs may occasionally exhibit various types of undesirable behavior, such as reproducing harmful or biased content, or generating so-called hallucinations by fabricating nonexistent or false facts.

When such models are proposed as general purpose conversational chatbots (like ChatGPT), it becomes a lot more difficult to identify all the possible threats that arise from a mass use of these systems, since it is almost impossible to predict a priori all the possible scenarios…

…Can a machine learn human values?

Fundamentally, RLHF is based on a straightforward premise. Imagine having two language models: a baseline (unaligned) model and a secondary preference model. The preference model’s role is to determine which action a human would prefer within a given list of possibilities (e.g., two different responses from the baseline model to a user’s request). This model could assign a numerical score to each action, effectively ranking them according to human preferences. In technical terms, this is known as a reward model.

Utilizing the reward model, the baseline model can be refined iteratively, altering its internal text distribution to prioritize sequences favored by humans (as indicated by the reward model). In some sense, the reward model serves as a means to introduce a “human preference bias” into the baseline model…
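Here is a rough sketch of our own of the reward-model idea; the scoring function below is a stand-in for what would really be a neural network, and the loss is the standard pairwise-preference form:

```python
# Sketch of the reward-model idea: given two candidate responses, the
# model assigns scores, and training pushes the human-preferred response
# to score higher (a Bradley-Terry-style pairwise loss). The scorer here
# is a toy stand-in, not a real language model.
import math

def reward(response: str) -> float:
    # Stand-in scorer; in practice this is a neural network over tokens.
    return 0.1 * len(response)

def pairwise_loss(preferred: str, rejected: str) -> float:
    # -log sigmoid(r_preferred - r_rejected): small when the model
    # already ranks the human-preferred response higher.
    diff = reward(preferred) - reward(rejected)
    return -math.log(1 / (1 + math.exp(-diff)))

print(pairwise_loss("a long, helpful, detailed answer", "meh"))  # small loss
print(pairwise_loss("meh", "a long, helpful, detailed answer"))  # large loss
```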

…OpenAI has applied the general methodology of RLHF to fine-tune ChatGPT through a three-step process.

The initial step involves collecting human demonstrations using a group of about 40 human annotators for a pre-selected set of prompts. The prompts are sourced from two different origins: some are created by annotators or developers, while others are sampled from OpenAI’s API requests.

These demonstrations can be thought of as the “ideal answers”, or responses to these prompts, and together constitute a training dataset. This dataset is then used to fine-tune a pre-trained model in a supervised manner, yielding the Supervised Fine-Tuned (SFT) model.

As mentioned earlier, this approach has scalability limitations, resulting in a relatively small dataset (approximately 15k examples).

The second step revolves around preference orderings. Labelers (or annotators) are tasked with voting on a number of SFT model outputs, thereby creating a new dataset composed of comparison data. The reward model is trained on this dataset.

In practice, a list of prompts is chosen, and the SFT model generates multiple outputs (between 4 and 9) for each prompt. Annotators rank these outputs from best to worst, forming a new labeled dataset with rankings serving as labels.
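One reason rankings are used is data efficiency: a single ranking of K outputs implies K*(K-1)/2 pairwise comparisons. A small sketch of our own (the strings are placeholders):

```python
# Turning one annotator ranking into pairwise training examples:
# K ranked outputs yield K*(K-1)/2 (preferred, rejected) pairs.
from itertools import combinations

ranked_outputs = ["best answer", "good answer", "weak answer", "worst answer"]

# combinations preserves list order, so the first element of each pair
# is always the higher-ranked output.
pairs = list(combinations(ranked_outputs, 2))
print(len(pairs))   # 6 pairs from 4 outputs; 9 outputs would give 36
for better, worse in pairs:
    print(f"prefer {better!r} over {worse!r}")
```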

Although the exact details remain undisclosed by OpenAI, the dataset’s size may be roughly ten times larger than the curated dataset used for the SFT model.

Finally, the third step involves applying Reinforcement Learning to teach the SFT model the human preference policy through the reward model, essentially as described in the previous section. The SFT model is fine-tuned via the reward model. The outcome is the so-called policy model…

…As we have previously discussed, by treating the language model as a reinforcement learning policy during the fine-tuning phase, RLHF introduces biases into the distribution.

Operationally, we can interpret this effect as the introduction of a mode-seeking behavior which guides the model through the distribution and leads to outputs with higher rewards (as modeled by learned human preferences), effectively narrowing the potential range of generated content…

…While RLHF improves the consistency of the model’s answers, it inevitably does so at the cost of diversity in its generation abilities. This trade-off could be viewed as both a benefit and a limitation, depending on the intended use case.

For instance, in LLM applications such as search engines, where accurate and reliable responses are paramount, RLHF is an ideal solution. On the other hand, when using language models for creative purposes, such as generating novel ideas or assisting in writing, the reduction in output diversity may hinder the exploration of new and intriguing concepts.

4. Why transformative AI is really, really hard to achieve – Arjun Ramani and Zhengdong Wang

Humans have a good track record of innovation. The mechanization of agriculture, steam engines, electricity, modern medicine, computers, and the internet—these technologies radically changed the world. Still, the trend growth rate of GDP per capita in the world’s frontier economy has never exceeded three percent per year.

It is of course possible for growth to accelerate. There was a time before growth began, or at least when it was far closer to zero. But the fact that past game-changing technologies have yet to break the three percent threshold gives us a baseline. Only strong evidence should cause us to expect something hugely different.

Yet many people are optimistic that artificial intelligence is up to the job. AI is different from prior technologies, they say, because it is generally capable—able to perform a much wider range of tasks than previous technologies, including the process of innovation itself. Some think it could lead to a “Moore’s Law for everything,” or even risks on par with those of pandemics and nuclear war. Sam Altman shocked investors when he said that OpenAI would become profitable by first inventing general AI, and then asking it how to make money. Demis Hassabis described DeepMind’s mission at Britain’s Royal Academy four years ago in two steps: “1. Solve Intelligence. 2. Use it to solve everything else.”…

…Neither this essay nor the economic growth literature rules out this possibility. Instead, our aim is to simply temper your expectations. We think AI can be “transformative” in the same way the internet was, raising productivity and changing habits. But many daunting hurdles lie on the way to the accelerating growth rates predicted by some…

…Productivity growth almost definitionally captures when a new technology efficiently performs useful work. A powerful AI could one day perform all productive cognitive and physical labor. If it could automate the process of innovation itself, some economic growth models predict that GDP growth would not just break three percent per capita per year—it would accelerate.

Such a world is hard to achieve. As the economist William Baumol first noted in the 1960s, productivity growth that is unbalanced may be constrained by the weakest sector. To illustrate this, consider a simple economy with two sectors, writing think-pieces and constructing buildings. Imagine that AI speeds up writing but not construction. Productivity increases and the economy grows. However, a think-piece is not a good substitute for a new building. So if the economy still demands what AI does not improve, like construction, those sectors become relatively more valuable and eat into the gains from writing. A 100x boost to writing speed may only lead to a 2x boost to the size of the economy.
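The authors’ toy example can be made precise with a tiny model. In the sketch below, which is our own, we assume (strongly, for illustration) that think-pieces and buildings are perfect complements, so GDP is the minimum of the two sectors’ outputs, with one unit of labor to split between them:

```python
# The two-sector toy example made concrete. Our assumption, to make the
# arithmetic sharp: GDP = min(writing output, construction output), with
# one unit of labor split optimally between the sectors.

def best_gdp(writing_productivity: float, steps: int = 100_000) -> float:
    best = 0.0
    for i in range(steps + 1):
        labor_writing = i / steps
        writing = writing_productivity * labor_writing
        construction = 1.0 * (1 - labor_writing)   # construction unimproved
        best = max(best, min(writing, construction))
    return best

before = best_gdp(1.0)     # 0.50: labor split evenly
after = best_gdp(100.0)    # ~0.99: almost all labor moves to construction
print(f"GDP before: {before:.3f}, after 100x writing boost: {after:.3f}")
print(f"Economy grew {after / before:.1f}x, not 100x")   # ~2.0x
```

Under that assumption, a 100x productivity gain in writing roughly doubles the economy, which is exactly the flavor of the 100x-to-2x gap described above.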

This toy example is not all that different from the broad pattern of productivity growth over the past several decades. Eric Helland and Alex Tabarrok wield Baumol in their book Why Are the Prices So Damn High? to explain how technology has boosted the productivity of sectors like manufacturing and agriculture, driving down the relative price of their outputs, like TVs and food, and raising average wages. Yet TVs and food are not good substitutes for labor-intensive services like healthcare and education. Such services have remained important, just like constructing buildings, but have proven hard to make more efficient. So their relative prices have grown, taking up a larger share of our income and weighing on growth…

…Progress in fine motor control has hugely lagged progress in neural language models. Robotics workshops ponder what to do when “just a few cubicles away, progress in generative modeling feels qualitatively even more impressive.” Moravec’s paradox and Steven Pinker’s 1994 observation remain relevant: “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.” The hardest “easy” problems, like tying one’s shoelaces, remain. Do breakthroughs in robotics easily follow those in generative modeling? That OpenAI disbanded its robotics team is not a strong signal.

It seems highly unlikely to us that growth could greatly accelerate without progress in manipulating the physical world. Many current economic bottlenecks, from housing and healthcare to manufacturing and transportation, all have a sizable physical-world component…

…Current methods may also not be enough. Their limits may soon be upon us. Scaling compute another order of magnitude would require hundreds of billions of dollars more spending on hardware. According to SemiAnalysis: “This is not practical, and it is also likely that models cannot scale to this scale, given current error rates and quantization estimates.” The continued falling cost of computation could help. But we may have exhausted the low-hanging fruit in hardware optimization and are now entering an era of deceleration. Moore’s Law has persisted under various guises, but the critical factor for transformative AI may be whether we will reach it before Moore’s Law stops.

Next look at data. Villalobos et al. warns that high quality language data may run out by 2026. The team suggests data efficiency and synthetic data as ways out, but so far these are far from complete solutions as Shumailov et al. shows.

In algorithms, our understanding of what current architectures can and cannot do is improving. Delétang et al. and Dziri et al. identify particularly hard problems for the Transformer architecture. Some say that so-called emergent abilities of large language models could still surprise us. Not necessarily. Schaeffer et al. argues that emergence appears “due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale.” …

…Humans remain a limiting factor in development. Human feedback makes AI outputs more helpful. Insofar as AI development requires human input, humans will constrain productivity. Millions of humans currently annotate data to train models. Their humanity, especially their expert knowledge and creative spark, becomes more valuable by the day. The Verge reports: “One engineer told me about buying examples of Socratic dialogues for up to $300 a pop.”…

…A big share of human knowledge is tacit, unrecorded, and diffuse… We are constantly surprised in our day jobs as a journalist and AI researcher by how many questions do not have good answers on the internet or in books, but where some expert has a solid answer that they had not bothered to record. And in some cases, as with a master chef or LeBron James, they may not even be capable of making legible how they do what they do.

The idea that diffuse tacit knowledge is pervasive supports the hypothesis that there are diminishing returns to pure, centralized, cerebral intelligence. Some problems, like escaping game-theoretic quagmires or predicting the future, might be just too hard for brains alone, whether biological or artificial…

…The history of economic transformation is one of contingency. Many factors must come together all at once, rather than one factor outweighing all else. Individual technologies only matter to the extent that institutions permit their adoption, incentivize their widespread deployment, and allow for broad-scale social reorganization around the new technology…

…All agree that history is not inevitable. We think this applies to AI as well. Just as we should be skeptical of a Great Man theory of history, we should not be so quick to jump to a Great Technology theory of growth with AI.

And important factors may not be on AI’s side. Major drivers of growth, including demographics and globalization, are going backwards. AI progress may even be accelerating the decoupling of the US and China, reducing the flow of people and ideas.

AI may not be able to automate precisely the sectors most in need of automation. We already “know” how to overcome many major constraints to growth, and have the technology to do so. Yet social and political barriers slow down technology adoption, and sometimes halt it entirely. The same could happen with AI.

Comin and Mestieri observe that cross-country variation in the intensity of use for new technologies explains a large portion of the variation in incomes in the twentieth century. Despite the dream in 1954 that nuclear power would cause electricity to be “too cheap to meter,” nuclear’s share of global primary energy consumption has been stagnant since the 90s. Commercial supersonic flight is outright banned in US airspace…

…Automation alone is not enough for transformative economic growth. History is littered with so-so technologies that have had little transformative impact, as Daron Acemoglu and Simon Johnson note in their new book Power and Progress. Fast-food kiosks are hardly a game-changer compared to human employees. Nobel laureate Robert Fogel documented that in the same way, railroads had little impact on growth because they were only a bit better than their substitutes, canals and roads. Many immediate applications of large language models, from customer service to writing marketing copy, appear similar.

OpenAI’s own economists estimate that about “19% of jobs have at least 50% of their tasks exposed” to GPT-4 and the various applications that may be built upon it. Some view this as game-changing. We would reframe it. That means over 80% of workers would have less than 50% of their tasks affected, hardly close to full automation. And their methodology suggests that areas where reliability is essential will remain unaffected for some time…

…There is a deeper point here. GDP is a made-up measure of how much some humans value what others produce, a big chunk of which involves doing social things amongst each other. As one of us recently wrote, we may value human-produced outputs precisely because they are scarce. As long as AI-produced outputs cannot substitute for that which is social, and therefore scarce, such outputs will command a growing “human premium,” and produce Baumol-style effects that weigh on growth.

5. Compounding Optimism – Morgan Housel

The question is: Did George Wheelwright know that he would influence Edwin Land, who would then influence Steve Jobs, who would then design a phone that 2.5 billion people would use?

Did Michael Faraday, who died in 1867, know that his ideas would directly influence the light bulb, which effectively led to the creation of everything from the modern power grid to nightlife?

Did Ben Graham know that his 1950s finance class would lead to 45,000 people trekking to Omaha every year to hear his student speak?

Of course not. It’s so hard to know what an idea, or an invention, or a philosophy, will influence, and what a person who’s influenced by it will go on to create.

Visa Founder Dee Hock says, “A book is far more than what the author wrote; it is everything you can imagine and read into it as well.” An author might write something that’s dull or obvious, but it could inspire a reader to go do something incredible…

…Most new ideas and inventions are pretty bland on their own. But when you mix several of them together, you can get magic. Plastic is great. Electronics are neat. Metal is special. But mix them together in the right way and you get an iPhone, which is pure magic…

…I think part of the reason pessimism is so much easier and more common than optimism is that compound growth is not intuitive.

It’s hard to imagine, say, our incomes doubling over the next few generations. That seems like such a massive leap, like we’d have to boil the ocean to get it done. But doubling the average income over 30 years works out to about 2.3% growth per year. It’s not crazy at all. It’s actually quite achievable. What made it seem so ambitious to begin with is that compound growth is easy to underestimate.
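The arithmetic checks out, as a quick sketch of our own shows:

```python
# Checking the arithmetic: what constant annual growth rate doubles
# incomes in 30 years?
rate = 2 ** (1 / 30) - 1
print(f"{rate:.1%} per year")                        # ~2.3%
print(f"after 30 years: {(1 + rate) ** 30:.2f}x")    # 2.00x
```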

If you look at the end result of a long period of compounding, it’s astounding. But all it took to get it done was little bits of incremental growth strung together for a long time.

All progress is like that.

Technological progress is easy to underestimate because it’s so counterintuitive to see how, for example, the philosophies of a guy who invented Polaroid film would go on to inspire the iPhone. Or how a 19th-century physicist would write a notebook that would set the foundations for a modern electrical system.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of DeepMind), Apple (parent of the iPhone), and Visa. Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 July 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 02 July 2023:

1. Creating a Monster – Marc Rubenstein

Dennis Weatherstone needed a number. He’d just been appointed chairman and chief executive officer of JPMorgan and was in the process of reorienting the bank away from traditional lending towards trading…

…A currency trader by background, Weatherstone understood the risks inherent in such businesses. According to colleagues, he maintained “a steely insistence on evaluating the downside risk” of any trading decision. It was an insistence he imposed on the overall firm. Every afternoon, at 4.15pm New York time, JPMorgan held a treasury meeting to go through its various risk exposures. As risks proliferated, Weatherstone thought it would be useful for the risk management team to present a single number at the meeting, representing the amount of money the bank might lose over the next twenty-four hours. “At the end of the day, I want one number,” he instructed staff. 

In 1990, JPMorgan introduced a new model, Value-at-Risk (VaR), to satisfy Weatherstone’s request. Volatility had long been used to measure fluctuations in a security’s price; Value-at-Risk took this further, using volatility as an input to estimate the minimum loss that might be expected on a day when the firm suffers large losses.

To illustrate, let’s say you own a portfolio of stocks worth $10,000. If the portfolio’s 99% daily Value-at-Risk is $200, it means that one day out of a hundred, you would expect to lose $200 or more; the other ninety-nine days, you would expect either to make money or suffer losses lower than $200.
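To see where a number like that could come from, here is a minimal historical-simulation VaR sketch of our own; the simulated return history is made up for illustration:

```python
# Minimal historical-simulation VaR sketch for the example above:
# the 99% one-day VaR is the loss exceeded on only ~1% of days.
# The simulated returns are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
portfolio_value = 10_000
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=5_000)  # toy history

losses = -daily_returns * portfolio_value   # positive = money lost
var_99 = np.percentile(losses, 99)          # 99th percentile of daily losses
print(f"99% one-day VaR: ${var_99:,.0f}")
# On ~1 day in 100 we'd expect to lose this much or more -- but VaR says
# nothing about how much worse that tail day could get.
```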

The measure was a useful way for JPMorgan to keep track of firmwide risk and became the basis for risk budgets. Years later, JPMorgan would use it to measure risk on 2.1 million positions and 240,000 pricing series. But rather than keep it private, JPMorgan opened this valuable intellectual property to the world. In October 1994, it published full details of the model under the name Riskmetrics. Other banks and trading firms swiftly adopted it…

… But VaR is no panacea. While good at quantifying the potential loss within its level of confidence, it gives no indication of the size of losses in the tail of the probability distribution outside the confidence interval. The one-in-a-hundred day event may be a lot more debilitating than the $200 loss in the example above. In addition, correlations between asset classes can be difficult to ascertain, particularly when banks begin to act in unison. The diversification benefits that VaR supposedly captures in a portfolio of different asset classes fall away when crisis hits and correlations surge.

In 2008, the year Weatherstone died, the complex balance sheets his number facilitated unravelled spectacularly. Citigroup took $32 billion of mark-to-market losses on assets that year, an order of magnitude greater than the $163 million of VaR it reported at the end of 2007. Value-at-Risk didn’t cause the crisis, but it certainly cultivated a false sense of security leading up to it.

“Dennis, you created a monster by asking for that one number,” says Jacques Longerstaey.

2. Shanghai 2023 – Graham Rhodes

I visited Shanghai this month, my first overnight trip to mainland China since January 2020. So much has happened in that time, and I can’t tell you how much I’ve yearned to be back. Separated by just a river, Hong Kong is a world away. It’s been hard to be apart from friends, and harder still as an investor to understand the nuance of events in China without being there in person…

…The purpose of my visit was to present to a group of fellow investors who meet monthly to discuss a business and share what they see at work…

…My most important observation first: Mainland China’s dynamic-zero COVID policy is history, and everyday life has returned to normal. I had to make a health self-declaration upon entry, but that was it. Only a tiny minority of people wore masks, even on public transport. Restaurants and bars were open and bustling. The Bund, Shanghai’s riverfront promenade, was heaving with visitors from out of town. I raised the topic of Shanghai’s almost three-month lockdown with my friends, more as a way to enquire about their emotional well-being than to probe for details. And, for the most part, it is a thing of the past. They survived and have moved on. Perhaps their most lasting trace of zero COVID will be an unseen one: the children they didn’t have because they chose to wait until better times.

Twenty years ago, when I first visited Shanghai, there were a lot of rough edges. Now, you have to look hard to find them. I enjoyed the tasteful elegance of Swire’s HKRI Taikoo Hui Mall on West Nanjing Road and was awed by the opulence of Hang Lung’s Grand Gateway 66 Mall in Xujiahui. Even the malls in the outer suburbs, whose names I forget, were pleasant enough. Service in restaurants and elsewhere has improved dramatically, too, I suspect because of the transparency and intense competition created by rating apps like Meituan’s Dazhong Dianping. And I did everything through WeChat; if apps killed the open web in China, have mini-programs killed apps?…

…You can tell an EV in China by its green licence plate, and there were many of them on the streets of Shanghai. I have never seen cars showcased in shopping malls before. But Tesla and its Chinese EV competitors are doing just that. Does it reflect intense competition? Or cutting out the dealers to sell direct? Or both, perhaps? The Chinese EVs look good: they have stylish interiors and many clever features.

My friends wanted to know if I have less invested in China today than four years ago. The answer is yes. It’s been hard to keep confidence without regularly spending time on the ground. And given how far outside my expectations certain events turned out to be, I have had to ask myself if what I once took as understanding and insight were simply overconfidence and luck. It was reassuring to hear, then, that some things puzzled them too. For example, what impact will the sudden dismissal and arrest of the CEO of China’s best bank have on its development?…

…We’re not out of the woods, though; one friend opined that business sentiment today is worse than it was in October last year. The real estate market has not healed, and local governments have no money. Businessmen lack the confidence to invest. The consensus is that China has already entered a period of low growth. We discussed the implications of this for long-term stock-picking: can organisations built and tuned for the days of high growth adapt and re-invent themselves? Will first-generation founders be able to slow down? Will second-generation managers have the vision and chutzpah? And will either be willing to return capital to minority shareholders rather than tilt at windmills?

It was amusing to hear that the group’s ‘deep value’ investors now own erstwhile growth stocks, while the more ‘quality-minded’ investors have become “flexible” enough to buy coal and utilities. China is, after all, a complex economy with the breadth and depth of listed companies to match. All companies have their cycles, too.

3. Scott Goodwin – Know the Names – Patrick O’Shaughnessy and Scott Goodwin

Patrick: [00:03:28] I think there’s different personality types that thrive in equity versus credit. I know early on in your career, you figured out that equities weren’t for you. Maybe describe, in your mind, the prototypical skill set differences between those two types and who would thrive the most.

Scott: [00:03:42] Well, Morgan Stanley didn’t want me back after my junior year of summer. So everybody’s work going to be for me because I said, “No. Give me a job.” I think for me, I’m naturally skeptical, and in credit, you’re always thinking about how much can I lose, how am I going to get my principal back, am I going to get my interest payments.

And when I think about the smart equity investors I know who have, the last 10 years, made a lot more money than I have because they’ve been thinking about the upside. How can earnings or revenue for this business double, triple, quadruple? So that difference of thinking about downside versus thinking about upside is very fundamental.

And then when you think about credit investing, you have the asset side of the balance sheet, which the equity guys are focused on, okay? So how many widgets does this company make? How many PCs does this company make? But then there’s the liability side of the balance sheet, that the equity universe, frankly, misses a lot. I think they’re learning about it again now a little bit.

Thinking about what Carvana or companies like that have gone through, you’re seeing the liability side start to matter more. But what’s the debt structure? When are the maturities? What are the covenants? What assets can the company sell? What can they not sell? Can they move assets around? So that liability structure and the sort of the unholy acts that can be done, by creditors or to creditors, is something that we like to meld into our process from a credit perspective…

Patrick: [00:06:02] Can you talk about the concept of a credit cycle, which listeners will be roughly familiar with, but I think it drives a lot of where the opportunity is? And I want to talk about, in the credit cycles that you’ve seen and/or studied, but really seen and participated in, how they felt different, maybe going back to, like, say, 2000? So that we can talk about this one specifically and how it’s different. But first, what is a credit cycle from your perspective?

Scott: [00:06:23] So when we think about credit cycles, we think of booms and busts in business, booms and busts in the economy, associated with companies that are cyclical, that have a problem due to an economic change. So in COVID, that meant rental car companies and cruises and airlines that literally couldn’t perform their business. Their balance sheets were fine one day and not fine the next day. And then you have another type of credit cycle which is more driven by secular change, like Amazon killing all the retailers over the past 10 years.

So for us, credit cycles aren’t just ’02, ’07 and ’08, and COVID. There are series of micro-cycles going on all the time in different sectors. Maybe the energy thing in 2016 is the best example of that. If I unpack that and go back: I started at Salomon/Citi in 2002 working for Jim Zelter, and the learnings from him early on were that there were a lot of companies that needed money right now for project finance in telecom and power. That’s what had been built up, and there was a series of asbestos bankruptcies as well.

That was a bubble built up largely in the high-yield market, in tradable bonds, and in the investment-grade market. Power, telecom, and fraud were the main parts of that credit cycle. There was a huge amount of money to be made in distressed because you had mutual funds whose bonds would default or get downgraded, and they’d sell them to the distressed guys. And there wasn’t as much competition for those people in distressed.

And on the liability side of the balance sheet we talked about, those people had real edge: they’d go to the courthouse, they would have lawyers, they would know exactly what was going on. That liability-side edge, because of the advent of real research and everybody having a docs person on staff, has largely been competed away within the credit universe. That’s the first cycle I was a part of.

Then we get to the LBO boom and bust. So if you think about LBOs, probably 40% of high-yield issuance was driven by LBOs in 2007. I don’t think we’ve seen that number since then. And there was a ton of leverage built up in the system very quickly, chasing a private equity boom. Then you had a housing bust that took the economy down, and that took those deals down as well. So those companies weren’t actually the problem; it was the housing bust taking the economy down that caused them to have a problem. That was another very fast V for a lot of those companies.

I was at Citi and then left in 2010 to go to Anchorage. But while I’m at Citi, my mentors, Jim, John Eckerson, Ronnie Mateo, had all left. They’re gone. I’m kind of there by myself with a few people who are left, moving the deck chairs around, watching the stock sit at $1, and frankly, learning from my clients.

One of the reasons I went to Anchorage was that I had a lot of the same shorts as the Anchorage guys in ’08, and we worked to turn them into longs in ’09. A lot of my career has been about finding shorts, then getting long on the other side and following these credits through the cycle. And I liked how Kevin and Tony and the Anchorage team did that. So they asked me to join in 2010, and I joined them coming out of the GFC cycle.

But then soon after that, we had a cycle in Europe. I got there, I think, in May of 2010, and all of a sudden, Greece is exploding. Frankly, the learnings from the European sovereign cycle were very relevant to what happened during COVID, because it was the first time in my career that you’d seen real intervention by sovereigns in corporate debt markets, buying a lot of debt and supporting the market.

So you had Draghi and “whatever it takes”: they’re going to buy Italian bonds, buy Spanish bonds. Eventually, they ran out of those bonds to buy. They bought corporate bonds with the CSPP program, and then distorted the corporate bond market in Europe for a long time, which allowed REITs to issue at 1%; that’s now going to create a good distressed opportunity. But what we saw then was that whatever-it-takes intervention works in investment-grade and corporate bonds.

2015, ’16, ’17 is the energy and commodity bust. That’s a real credit cycle, sector-driven like I was talking about. So there’s a ton of new issuance in energy. The shale boom is being built up over many years.

I remember meeting with Aubrey McClendon from Chesapeake in 2003 or 2004 at Citi on a road show. And he showed us a chart. I think they issued the Chesapeake 9s that year, the 9s of like ’08 or ’09. And he showed us a chart of where natural gas was going to go. And I don’t think gas saw that target for a long time, but that was the beginnings of it, like in the early 2000s.

At Anchorage, in 2010, ’11, and ’12, we were financing companies in the Bakken, the Marcellus, the Mississippi Lime, the Permian. We knew all these basins. So when energy started to trade poorly in the middle of 2014 and started to trade down a lot, you’d start to have these high-correlation sell-offs. That’s one characteristic of credit cycles: whether it’s a sector-based cycle or a macro cycle, the beginning sell-off in credit is zero dispersion, high correlation.

People are selling what they can sell. That creates tremendous opportunities because in that first wave, things will go down that probably shouldn’t have gone down at all. And you can buy those and short the bad stuff.

So we looked at that first sell-off in 2014. The Permian credits, many of which have now been rolled up (the Parsleys, the CrownRocks, the Diamondbacks), had gone from par to $0.70 on the dollar.

The Mississippi Lime, which is a worse basin (the SandRidges), and the offshore credits had gone from, say, par to $50. The distressed funds are all looking at the stuff south of $50. They’re heuristically saying, “I have to buy the lowest dollar price. That’s what I’ve been trained to do.”

And they’re generalists, generally. They don’t have sector specialists, although that’s changing because of some of the mistakes made in the teens. But they’re drawn to that low dollar price. We’re sitting there saying, “Wow, this stuff in the Permian is covered at par even if oil is at $30 or $40.”

Whereas the Mississippi Lime stuff, we didn’t like anyway. “Let’s buy the Permian stuff at $70 and short this at $50.” And that trade ended up making maybe 30 points on the long and 50 on the short. I wish we’d held it that whole time; in extremis, that’s what it would have made. But you had multiple bites at the apple and fits and starts in that credit cycle, and you usually do.

Rarely do you go, like COVID, from A to Z in one month. The one we’re about to talk about, the post-COVID cycle we’re entering now, is a slow-moving cycle, much more like the ’02, ’03, ’04, ’05 cycle I went through at the beginning of my career: a buildup of excess in certain sectors, driven by some economic shifts and changes in the interest rate environment, that led to a cycle.

The things in 2011 and ’12 were systemic. The GFC was systemic. Energy was not systemic, but commodity price. If you have a bond at par that works at $70 oil, and oil is at $20, it doesn’t work; it’s not that the bond is worth $50. It might be worth zero.

Now, what happened in the energy thing was you had all these bonds that went to trading at $0 to $0.20 on the dollar. But some of them had a couple of years of cash around and could fund their interest payments. So I could buy those for $0.05 to $0.10. When I think about the best trades I saw in ’08 and ’09 (I was at the broker-dealer at Citi), it was people coming in and buying the LBO unsecured debt, because there was a lot of option value at those prices.

Now, we’re sitting here at Anchorage saying, “Wow, there are some really interesting opportunities.” These bonds are at $0.05, $0.10, $0.15. So we tried to figure out which ones had enough runway, and the same thing happened again in COVID. And you ended up buying what were effectively IOs that recovered par, because oil didn’t stay at $20 or $30 forever. It went back up eventually, because supply and demand rebalance.

And I think in commodities, you had the same thing in Freeport and some of the copper companies as well. When you have a commodity, a first-quartile or second-quartile commodity company, that trades down a lot in credit, it’s a really unique opportunity because the commodities have such a high volatility factor associated with them. If they have enough runway that they can last 12 to 24 months, you’re supposed to take a shot on that debt…

Patrick: [00:17:44] Could you give a story, hypothetical or real, that helps us understand like one of these decision moments where it is seconds or minutes that you’re making, I’ll call it, a substantial decision, whether that’s with dollars or percent of the portfolio or however you want to interpret it? I just want to like get in the room a bit on why this all comes together as an advantage for you and your investors.

Scott: [00:18:03] Sure. Sure. I’ll give you an example from COVID. That’s maybe the most interesting example. The levered loan market is a market that is very opaque. 70% of the market is private issuers, which means there’s no public stock, no public filings. You have to go on the Intralinks site to get the financials.

And levered loans don’t settle like stocks or bonds. It’s mind-boggling, but levered loan settlement process could take anywhere from a week to months. Hopefully, someday blockchain will fix that, but it hasn’t yet.

So we’re sitting there in the second week of March, and we share with the banks the names we’re focused on. So I’m sharing with the banks each morning, “Here are the names we’re focused on.” So they know if they get a sell-off of anything on that list, they should call me. We want to be transparent and open with them. Again, we’re trying to make them smarter about warehousing the risk they’re looking to move.

Patrick: [00:18:52] Are you e-mailing them, calling them?

Scott: [00:18:54] I’m sending them an IB and then I’m also talking to them. For each bank, we sort of nuance the list a little bit, as do the traders on our team. And the head of loan trading at BofA calls me. He says, “Hey, at 7:00 a.m., I’ve got a mutual fund that’s got a $1 billion outflow in loans.” They’re calling us because we are the fastest settlement process for loans.

“Well, okay. They own these names on your list. Can you buy $500 million by 8 a.m. because I want to make some progress?” I call Jon. We’re like, “Let’s not buy cyclical stuff. We don’t know what’s going to happen here.” We’re starting to buy a little bit of IG because we think the government is going to start buying IG, but this is junk-rated loans.

And we had had our analysts in software learn all the software loans in 2019 because we said, “Well, if there’s a recession and there is a cyclical environment, the whole loan market is going to trade down because that’s where a lot of the excesses are building up, but software will be the most defensive place. It’s stickier.”

So we bid that firm for $500 million of loans, of which $350 million was software loans. Let’s say the average price on them the prior day was in the high 80s. We bid around 80, so down, say, 10%, or 8 points, and they sold it to us.

And I think there are probably 2 firms in the world that could have responded to that call within 15 minutes. And we responded, I think, within 5 minutes. But they called us because, a, we had shared the list with them, and, b, they knew we had a track record of providing liquidity into these dislocations and responding fast.

So that speed of capital in that situation provided a lot of alpha. Two of the loans were Sprint, which was getting bought by T-Mobile, and Infor Lawson, which was getting bought by the Koch family. So they were getting bought by investment-grade companies, and we were buying them in the lower mid-80s. Sprint was a little higher.

But that opportunity, I think, exemplifies being ready, learning things proactively, not necessarily because there’s investment to do today, but because you know when there’s an inflection point, there are certain kind of things you want to buy. And that does create some level of busy work, but it’s all that process and being prepared so that you can make fast decisions.

Patrick: [00:20:50] Can you talk about how you think of the evolution of where alpha comes from in credit, maybe just across your whole career? You mentioned liquidity provision there, which to me is a really important thing to think about and talk about as a source of alpha. But what have been the sources of alpha across your career? And which are still here, which are gone?

Scott: [00:21:10] Early on, it was the liability side of the distressed market, and that was the firms that were in there early, that were excellent 20, 25 years ago. They knew the docs and other people didn’t. That was a real source of alpha.

I think that alpha in terms of just understanding the docs better than other people or having the information is gone. If you fast-forward to the GFC, I think a lot of the alpha there was liability structure. Who could hold the trade?

At Citi, we sold most of our levered loan book to a bunch of private equity guys and gave them back leverage. They had to re-up that leverage, but they were able to hold that trade as those loans went from 80 to 40 and back to par.

And if I look at funds that were successful during that time period, it was those that could hold the trade or had liquid enough investments that they could change their mind. When we think about liquidity and investing, we’re not just focused on what is the best risk-adjusted return; we’re also, for our hedge fund and dislocation fund, thinking about what’s the best liquidity-adjusted return, because most of the time, you’re not getting paid enough to go into illiquids if you have capital that’s supposed to be doing liquid things.

And I learned a lot about holding the trade, because if you were levered, you couldn’t. Sowood was one of our biggest counterparties. In 2007, the market goes down for like three days, and they were — I mean, these numbers are pretty incredible — something like 90:1 levered on levered loans via LCDS, a product that doesn’t exist anymore, as a secured/unsecured basis trade. They were gone in three days, basically.

And that was a real lesson for me about gross and leverage, and watching how quickly that unwound. And at Citi I watched some of the mistakes that were made; there were real lessons in my career from, frankly, watching other people make mistakes and learning from them versus having to make them myself.

But that source of alpha, liability structure, is still around in credit today. I think it’s much more appropriately distributed. Now you have private credit funds whose funding matches the assets: not only the LP capital, but the leverage matches the duration of the assets.

There’s a lot of CLO equity, which is a more volatile product from a mark-to-market perspective: a great product through a number of cycles, but maybe not for somebody who has quarterly liquidity. That now sits in different hands: more insurance, more pension, more long-term-liability type of money.

There’s still a lot of money in daily-liquidity ETFs and mutual funds. That creates a lot of the intraday and intra-month volatility in credit, because the underlying assets don’t necessarily match the daily liability structure.

I would say speed is something that changed. When credit markets were much more liquid and the banks were taking a lot of risk, when I was on the sell side, call it 2002 to 2010, and maybe a little bit after that, ’11, ’12, ’13, there was more liquidity in the market. The banks were committing a lot of capital. Speed wasn’t as relevant because the bank traders were always the fastest.

They were seeing everything going on. They knew what everyone was doing. As they became less focused on risk and knowing the names, frankly, and more focused on just moving widgets around from one account to another, the combination of understanding the underlying credits and being fast – because I think a lot of people understand the credits, but most of them are slow and reactionary – having a process that allows for speed of decision-making is alpha. There’s no doubt about it. The example I gave you about the levered loans is an extreme one from COVID, but it happens every day…

Patrick: [00:41:54] We talked earlier about equity versus credit. And the idea of imagination is really important in equity investing, like imagining what the TAM might be, what might become, what a team could accomplish. What role, if any, does imagination play in credit investing?

Scott: [00:42:08] A lot on the structuring side. So if you think about what’s happening right now with a lot of the companies that are in need of money in both, let’s call it, generally the private credit and levered loan space where the bubbles have been built up, they are moving assets away from creditors to raise capital, and they’re doing it in very clever ways. One has just done it by creating a double dip, which is essentially an extra claim through an intercompany without actually moving any assets. There’s a lot of imagination and structure around that.

And then I think about when you’re in a distressed situation, when we’re sitting there looking at Hertz in the middle of 2020, and we’re saying, “Well, what’s this business going to be?”, you have to think about the narrative of a company as it goes through the process.

In equities, you have one stock. In credit, in the case of Hertz, let’s say, you have the common equity, then you had the senior unsecured debt. Then you have the second-lien debt, the first-lien debt. Then you’ve got the ABS on the cars. So you have all these layers of the capital structure you can invest in.
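To make the layering concrete, here’s a minimal sketch of a recovery waterfall in Python. The numbers are purely illustrative (not Hertz’s actual balance sheet), and it simplifies by treating the whole stack as one waterfall even though Hertz’s ABS was collateralised on the cars separately:

def waterfall(value, claims):
    # Pay claims in priority order; whatever is left belongs to equity.
    # claims: list of (name, face_amount), most senior first.
    # Returns each layer's recovery as a fraction of its face amount.
    recoveries = {}
    for name, face in claims:
        paid = min(value, face)
        recoveries[name] = paid / face
        value -= paid
    recoveries["equity residual"] = value
    return recoveries

# Hypothetical capital stack, senior to junior (made-up face amounts):
stack = [("ABS on the cars", 10_000), ("first lien", 3_000),
         ("second lien", 1_000), ("senior unsecured", 3_000)]

print(waterfall(13_500, stack))  # used-car prices stay weak: juniors impaired
print(waterfall(18_000, stack))  # used-car prices skyrocket: every layer covered

The two runs show why the layers matter: a one-third rise in recoverable value barely changes the senior claims but swings the junior layers from 50 cents, or zero, to par. That is the convex option Scott goes on to describe.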

And you have to think about it: Hertz is doing no business right now. But I’m looking at the data from China. And China has reopened already in May, June of 2020. And it seems like nobody is taking public transportation. Well, what does that mean? They’re going to drive. Well, there are no cars being produced. What does that mean then? They’re going to buy used cars.

Okay. Why did Hertz go bankrupt? Well, a, no one is traveling; but, b, most of the Hertz debt is just a margin loan on the ABS, the used-car securitizations. So when used car prices crashed, that was the problem. And if used car prices are going to go up a lot, that’s going to benefit Hertz.

So we were an investor in the first lien. And we’re sitting here looking at this, and we bought the first lien at like $0.75 or $0.80. We ended up, us and Apollo, being the two largest investors in the first lien.

If this is what’s happening in China and that happens in the U.S., used car prices are going to skyrocket. And maybe the narrative is going to change in this bankruptcy that the first lien isn’t the fulcrum or the controller of the equity through the bankruptcy. It can be the junior debt, which is trading at $0.15. So we bought the junior debt on that option. You have this convex option that used car prices are going to skyrocket.

That’s a simplistic example, but thinking about how companies evolve through a bankruptcy process and through their life cycle, and how the capital structure interacts with changes in the macro and changes in the micro, is a lot of how you have to think creatively about credit. It’s less about, “There’s this huge TAM. How can I address it?”

What we’re trying to solve for is knowing names and then touching them at different points in their life cycles, be it long, short, different parts of the capital structure, that we think management and the micro and macro economy are going to favor or disfavor…

Patrick: [00:51:16] As you look at the landscape of investing firms at large, what do you think most will, or needs to, change over the next decade?

Scott: [00:51:25] We talked about the shift from equity to yield. I do think that liability structures have tricked people into believing that being illiquid was necessarily better than being liquid. So the vol-washing Kieran talked about with you a couple of weeks ago needs to be exposed.

And the asset classes that have vol-washed, that have either artificial Sharpe ratios or low dispersion within the asset class, will be exposed from an asset management perspective. And then fees can be calibrated not based off what the product is, but based on how good the manager is.

Because right now, if you talk about private credit, which is a business we’re about to get into (you’ve spent a lot of time with Kieran), you’ve had almost no vol in the returns, no dispersion. And the biggest winners have been the guys who had the most second lien or the most equity co-invest, who used the most leverage. Those are probably going to be the biggest losers in the next few years.

And the de novo private credit opportunity right now is pretty incredible. You’re talking about first-lien debt, 50% loan-to-value, at 11%, 12%. The structure we’re going to use to raise the capital around it, which Apollo is seeding, will have a higher return based on the seed economics…

Patrick: [01:05:30] What else is going on in the world, if anything, that you think matters and that changes the dynamics of capital markets right now?

Scott: [01:05:37] I mean the banks. They are being disrupted from a capital perspective by the private credit lenders, the direct lenders. And we’re seeing now that the regulators are more focused on them. Obviously, the yield curve doesn’t help. We consider them great partners, but now they’re needing, through credit risk transfer transactions, to do essentially derivative hedging trades to create more capital.

And I think whether it’s Basel III or Basel IV, future regulatory things that are coming, that’s only going to become more acute. And for us to be a counterparty on the other side of those credit risk transfer transactions (I don’t want to get too in the weeds on them because they’re complicated) is a great thing for us.

The banks are transferring us very high-quality risk. We’re taking a junior slice alongside them, and we’re getting paid teens-to-20% returns for what we think is a high-quality portfolio of underlying assets. The regulation of banks, and what it creates, has been an ongoing source of opportunity.

4. China will not be able to De-Dollarize under Xi Jinping – Mark Dittli and George Magnus

In early 2023, investors had high hopes of a recovery boom in China. It has turned out to be a disappointment. What happened?

The government has been quite vocal that they wanted to see a consumption-led recovery. Many economists thought it was almost inevitable that there would be a consumption rebound, as people had become very restrained in their spending in 2022 because of the lockdowns under the zero-Covid policy. What’s happened is that although we’ve seen a bit of a rebound in low-ticket items such as eating out and travel, we haven’t seen a robust recovery in home sales, automobile sales and more expensive things. There was much greater caution by households than we thought was likely, based on what we’d seen in other countries that had left Covid behind.

Is there a crisis of confidence among consumers?

We may still see a delayed rebound in consumer confidence and sales in bigger ticket items. We shouldn’t rule it out just yet. But the clock is ticking, and there is a possibility that it won’t happen.

Why would that be?

Part of it is a psychological thing, and part of it is a structural problem. The psychological issue is caused by what’s been going on in real estate during the past two plus years, about homes that have been promised that haven’t been delivered. China has a pre-sale model of home sales, which means you start paying your mortgage even before the property is built or finished. A lot of households have been affected by this. Given the fact that so much household wealth is tied up in housing, people have become very cautious. They have built up their savings deposits in banks, and so far they haven’t wanted to liquidate them.

And the structural problem?

This predates Covid. It’s the familiar story that in China, because of the unbalanced nature of its economy, household incomes are a small share of the economy, and consumer spending is only about 40% of GDP. They don’t account for nearly as high a proportion as in other emerging market peers, let alone in the US, Europe and East Asia. That’s the structural issue which the government has not wanted to deal with for years. So we’re looking at a double whammy: a structural constraint and a psychological problem, which both affect consumers’ willingness to spend…

The property market, which is around 20 to even 25% of GDP, seems to be unable to gain traction. What’s the problem there?

In a nutshell, it’s the result of a long term housing boom. The property market in China has seen minor cyclical downturns before, but it has never really had a shakeout. It was continuously propped up and expanded to the point where it’s become laden with debt and excess capacity. It’s possible that the property market is just going to mark time for the next five to seven years, because there is such a vast amount of overconstruction. This is not necessarily a problem in Tier 1 cities like Shanghai or Shenzhen, but it is a huge problem in smaller Tier 3 or 4 cities. This is where about 60 to 75% of the housing stock and most of the excess inventory is located. No markets go up forever. Eventually, overly high prices and high inventories combine to bring about a problem. There is also a huge demographic challenge, given that the cohort of first-time buyers, who are typically aged between 25 and 40, is going to fall by about a quarter over the next 15 to 25 years…

The Party leadership has talked about a rebalancing of the economy and strengthening the consumer sector for years. Why is that so hard?

A large part of the answer arises from the economic philosophy of the CCP. It does not believe in the welfare state as we know it in Western Europe. It’s very much focused on what it calls supply side structural reform, which is really about the community benefiting from the uplift in economic growth which arises from allowing companies to produce more. The Party has a strong focus on production, but not a big focus on consumption. Xi Jinping’s China has this view that if they can fine-tune the production side, that this will lift employment and incomes throughout the economy…

So the property market will not be a driver of growth, investment neither, consumption is not coming along, and exports are in a slump. This looks rather bleak, doesn’t it?

Yes. We’ve all developed our careers in the last twenty or so years being accustomed to either double-digit economic growth in China or something close to that. But in fact, growth in China has recently been halving each decade. We had growth of roughly 10% to 12% per annum during the 2000s, then about 5 to 6% in the 2010s, and I think in the 2020s China’s sustainable rate of growth is probably no more than 2 or 3%. That stepwise halving in each decade is a reality; you can’t argue that it’s some freak factor. There is obviously something going on in terms of sources of sustainable growth. So I think China’s policy makers will have to choose between either good 2 to 3% growth or bad growth. The good growth would come from a rebalancing of the economy, if they were finally to do something about household income and consumption. Bad growth would be if they tried to fuel it just by building more infrastructure and real estate…

Can they achieve a de-dollarization?

My answer is No. This is not like changing a pair of shoes. I don’t think many of the people that advocate de-dollarization – which includes some emerging countries or the crypto crowd, which has a vested interest in undermining the dollar-based system – really have thought this through. It’s very easy to talk about de-dollarization, but to really achieve it, you’d have to turn the entire global financial and economic system on its head. I don’t think that’s going to happen. This does not mean that the dollar will forever be the dominant currency, but for the foreseeable future I don’t think it’s under a great threat.

Saudi Arabia selling crude to China for yuan, or Brazil selling soy to China and getting paid in yuan: That’s not de-dollarization to you?

If you sell products for yuan instead of dollars, you are technically de-dollarizing. But what really matters for the global monetary system is not the currency in which you settle your trade, but the currency in which you accumulate your balances. If you are Saudi Arabia and you peg your currency to the dollar, you have no use for accumulating balances in yuan. You need dollar reserves. If you are Brazil and you are exporting commodities that are globally priced in dollars, you have to accumulate liquid dollar reserves. The dollar system allows large imbalances in the global economy to accumulate because the United States is unique in allowing unfettered foreign access to all of its assets, be it bonds, equities, or real estate. If the people who are advocating de-dollarization really wanted to achieve it, it would mean that China, Germany, Japan, Brazil, Saudi Arabia, etc. would no longer be able to run current account surpluses if the US no longer accommodated their surplus savings. It would mean imposing symmetry between surplus and deficit nations. Do you really think the surplus countries would want that?…

What about talks about a BRICS currency?

If I twisted my own arm, I could possibly see them setting up something they might call a BRIC, which is an accounting unit for settlement of transactions, in much the same way the Special Drawing Right is an accounting unit for the IMF. But I don’t see a BRICS currency. How would it be valued? What would it be linked to? China has a convertible currency only for current account transactions, not for capital transactions. A BRICS currency is really just a fancy way of talking about a pumped-up internationalization of the yuan in a way that makes the other members of the BRICS club feel better about it.

So, it’s rather simple: As long as China’s capital account remains mainly closed, there won’t be any de-dollarization?

There are certainly officials in the PBoC and government as well as a number of economists in China who think that not only is it unlikely that full internationalization can happen as long as capital controls are in situ, but also that it would be a bad idea. If they did abandon capital controls, it’s highly likely that there would be a huge outflow of capital from China. The yuan would depreciate. That would compromise the stability of the financial system in China. There is an argument that the CCP doesn’t trust its own citizens to keep their capital at home. That’s why I don’t think this is something that the CCP would ultimately endorse. Renminbi literally translates as the people’s currency. The CCP must have control over the people’s currency. Control is what drives Xi Jinping’s interest. I’m not saying it would never happen, but I am confident that it won’t happen under the leadership of Xi.

5. Eastern philosophy says there is no “self.” Science agrees – Chris Niebauer

The great success story of neuroscience has been in mapping the brain. We can point to the language center, the face processing center, and the center for understanding the emotions of others. Practically every function of the mind has been mapped to the brain with one important exception: the self. Perhaps this is because these other functions are stable and consistent, whereas the story of the self is hopelessly inventive with far less stability than is assumed.

While various neuroscientists have made the claim that the self resides in this or that neural location, there is no real agreement among the scientific community about where to find it — not even whether it might be in the left or the right side of the brain. Perhaps the reason we can’t find the self in the brain is because it isn’t there.

This may be a difficult point to grasp, chiefly because we have mistaken the process of thinking as a genuine thing for so long. It will take some time to see the idea of a “me” as simply an idea rather than a fact. Your illusionary self — the voice in your head — is very convincing. It narrates the world, determines your beliefs, replays your memories, identifies with your physical body, manufactures your projections of what might happen in the future, and creates your judgments about the past. It is this sense of self that we feel from the moment we open our eyes in the morning to the moment we close them at night. It seems all-important, so it often comes as a shock when I tell people that based on my work as a neuropsychologist, this “I” is simply not there—at least not in the way we think it is…

…As a matter of background, it is important to remember that the brain has two mirror halves connected by a large set of fibers called the corpus callosum. In research undertaken to try to mitigate severe epilepsy, Roger Sperry and Michael Gazzaniga believed that by cutting this bridge between the two sides of the brain, seizures would be easier to control. They were correct, and Sperry would win the Nobel Prize in 1981 for this work.

While each side of the brain is specialized to do certain types of tasks, both sides are usually in continuous communication. When this connection was disrupted, however, it became possible to study the job of each side of the brain in isolation. With the sides disconnected in these epileptic patients, scientists could test each on its own and gain insight into the functional differences between the left and right sides of the brain. These patients were referred to as “split-brain” patients.

To understand this research, it is also important to know that the body is cross-wired — that is, all the input and output from the right half of the body crosses over and is processed by the left brain, and vice versa. This crossover is also true for vision, so that the left half of what we see goes to the right side of the brain, and vice versa. Again, this only became obvious in the split-brain patients. And research with these subjects led to one of the most important discoveries about the left side of the brain — one that has yet to be fully appreciated by modern psychology or the general public.

In one of Gazzaniga’s experiments, researchers presented the word “walk” to a patient’s right brain only. The patient immediately responded to the request and stood up and started to leave the van in which the testing was taking place. When the patient’s left brain, which is responsible for language, was asked why he got up to walk, the interpreter came up with a plausible but completely incorrect explanation: “I’m going into the house to get a Coke.”

In another exercise, the word “laugh” was presented to the right brain and the patient complied. When asked why she was laughing, her left brain responded by cracking a joke: “You guys come up and test us each month. What a way to make a living!” Remember, the correct answer here would have been, “I got up because you asked me to,” and “I laughed because you asked me to,” but since the left brain didn’t have access to these requests, it made up an answer and believed it rather than saying, “I don’t know why I just did that.”

Gazzaniga determined that the left side of the brain creates explanations and reasons to help make sense of what is going on around us. The left brain acts as an “interpreter” for reality. Furthermore, Gazzaniga found that this interpreter, as in the examples mentioned, is often completely and totally wrong. This finding should have rocked the world, but most people haven’t even heard of it.

Think about the significance of this for a moment. The left brain was simply making up interpretations, or stories, for events that were happening in a way that made sense to that side of the brain, or as if it had directed the action. Neither of these explanations was true, but that was unimportant to the interpretive mind, which was convinced that its explanations were the correct ones…

…I am distinguishing mental suffering from physical pain. Pain occurs in the body and is a physical reaction—like when you stub your toe or break an arm. The suffering I speak of occurs in the mind only and describes things such as worry, anger, anxiety, regret, jealousy, shame, and a host of other negative mental states. I know it’s a big claim to say that all these kinds of suffering are the result of a fictitious sense of self. For now, the essence of this idea is captured brilliantly by Taoist philosopher and author Wei Wu Wei when he writes, “Why are you unhappy? Because 99.9 percent of everything you think, and of everything you do, is for yourself — and there isn’t one.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Meituan, Tencent (parent of WeChat) and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 25 June 2023)


Here are the articles for the week ending 25 June 2023:

1. Vision Pro – Benedict Evans

There’s a strong echo here of mobile 20 years ago. From the late 1990s to 2007, we had mobile internet devices that were OK but not great, and slowly improving, we knew they would eventually be much better, and we thought ‘mobile internet’ would be big – but we didn’t know that smartphones would replace PCs as the centre of tech, and connect five billion people. Then the iPhone came, and the timeline broke.

Apple’s Vision Pro isn’t an iPhone moment, or at least, not exactly. At $3,500, it’s very expensive in the context of today’s consumer electronics market, where the iPhone launched for $600 (without subsidy, and then rapidly switched to $200 at retail with an operator subsidy). And where the iPhone was a more-or-less drop-in replacement for the phone you already had, nine years after Meta bought Oculus, VR is still a new device and a new category for almost everyone. Indeed, the Vision Pro actually looks a bit more like the original Macintosh, which was over $7,000 (adjusted for inflation) when it launched in 1984, and most people didn’t know why they needed one.

I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product – that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that. 

This is the opposite decision to Meta: indeed Apple seems to have taken the opposite decision to Meta in most of the important trade-offs in making this. Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware – Apple is trying to catalyse an ecosystem while we wait for the right price. So the Vision is a device pulled forward from years into the future, at a price that reflects that. It’s as though Apple had decided to sell the 2007 iPhone in 2002 – what would the price have been?…

…Apple didn’t say AR or VR, and it certainly didn’t say ‘metaverse.’ Metaverse (as I wrote here last year) has become an entirely meaningless word – you cannot know what someone else means when they say it. But when Mark Zuckerberg talks about it, it sounds like a place – a new environment somehow different from ‘the internet.’ Meta talks about what it will be ‘like’ in the ‘metaverse.’ But Apple makes computers, and Apple thinks this is a computer, that runs software, that could be all sorts of things. For Meta, the device places you in ‘the metaverse’ and there could be many experiences within that. For Apple, this device itself doesn’t take you anywhere – it’s a screen and there could be five different ‘metaverse’ apps. The iPhone was a piece of glass that could be anything – this is trying to be a piece of glass that can show anything.

This reminds me a little of when Meta tried to make a phone, and then a Home Screen for a phone, and Mark Zuckerberg said “your phone should be about people.” I thought “no, this is a computer, and there are many apps, some of which are about people and some of which are not.” Indeed there’s also an echo of telco thinking: on a feature phone, ‘internet stuff’ was one or two icons on your portable telephone, but on the iPhone the entire telephone was just one icon on your computer. On a Vision Pro, the ‘Meta Metaverse’ is one app amongst many. You have many apps and panels, which could be 2D or 3D, or could be spaces. Developers can make whatever they want…

…That makes it unlikely that media companies and games companies will invest much in creating custom experiences any time soon. Apple has been spending a lot of money shooting 3D content itself and Disney’s Bob Iger took the stage briefly to show an obviously hasty ‘sizzle reel’ of ideas, while lots of developers are interested in experimenting, but this isn’t going to have millions of apps in 2024. On the other hand, that may not matter for the people who do buy it – part of the benefit of the AR thesis, and Apple’s broader ecosystem leverage, is that almost all your iPad and iPhone apps will already work. There just won’t be much VR.

Where does that leave Meta?

Mark Zuckerberg, speaking to a Meta all-hands after Apple’s event, made the perfectly reasonable point that Apple hasn’t shown much that no-one had thought of before – there’s no ‘magic’ invention. Everyone already knows we need better screens, eye-tracking and hand-tracking, in a thin and light device. Meta is still selling millions of Quests, and it’s not clear how many people will switch or postpone a purchase given the price and timing of the Vision Pro. There will be voices saying that Meta should push even harder to build up its commanding position ahead of Apple’s proposition becoming more mass-market in, say, 2025 or 2026. It could also pursue the Android strategy of licensing a platform to the rest of the industry, leading the ‘open’ side of the market against Apple’s closed side (except that the Android team had a whole industry of phone OEMs hungry for a way to make the jump to smartphones, and who are the hungry VR OEMs today?). It’s worth remembering that Meta isn’t in this to make a games device, nor really to sell devices at all per se – rather, the thesis is that if VR is the next platform, Meta has to make sure it isn’t controlled by a platform owner who can screw them, as Apple did with IDFA in 2021. (This is also one reason Android was created, yet Google seems to have dropped out of VR entirely, though the Quest runs Android.)

On the other hand, the Vision Pro is an argument that current devices just aren’t good enough to break out of the enthusiast and gaming market, incremental improvement isn’t good enough either, and you need a step change in capability. That was also the idea behind the much less ambitious (and flopped) Quest Pro. Who won that argument? Meta just announced the Quest 3 for later in the year (just such an incremental improvement), but should it pause after that and work on a jump forward of its own? Can it? Should it be trying to compete with Apple at frontier hardware tech?

2. The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content – Carl Franzen

The age of generative AI is here: only six months after OpenAI‘s ChatGPT burst onto the scene, as many as half the employees of some leading global companies are already using this type of technology in their workflows, and many other companies are rushing to offer new products with generative AI built in.

But, as those following the burgeoning industry and its underlying research know, the data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion and Midjourney comes initially from human sources — books, articles, photographs and so on — that were created without the help of artificial intelligence.

Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?

A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work on arXiv, the open-access preprint server. What they found is worrisome for current generative AI technology and its future: “We find that use of model-generated content in training causes irreversible defects in the resulting models.”…

…As another of the paper’s authors, Ross Anderson, professor of security engineering at Cambridge University and the University of Edinburgh, wrote in a blog post discussing the paper: “Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.”…

…In essence, model collapse occurs when the data AI models generate ends up contaminating the training set for subsequent models.

“Original data generated by humans represents the world more fairly, i.e. it contains improbable data too,” Shumailov explained. “Generative models, on the other hand, tend to overfit for popular data and often misunderstand/misrepresent less popular data.”

Shumailov illustrated this problem for VentureBeat with a hypothetical scenario, wherein a machine learning model is trained on a dataset with pictures of 100 cats — 10 of them with blue fur, and 90 with yellow. The model learns that yellow cats are more prevalent, but also represents blue cats as more yellowish than they really are, returning some green-cat results when asked to produce new data. Over time, the original trait of blue fur erodes through successive training cycles, turning from blue to greenish, and ultimately yellow. This progressive distortion and eventual loss of minority data characteristics is model collapse. To prevent this, it’s important to ensure fair representation of minority groups in datasets, in terms of both quantity and accurate portrayal of distinctive features. The task is challenging due to models’ difficulty learning from rare events.
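Shumailov’s cat example is easy to simulate. The toy sketch below is our own illustration, not the paper’s experiment: each generation “trains” by estimating the colour distribution from the previous generation’s output, then “generates” the next training set by sampling from that estimate. Sampling error compounds, and once the rare blue trait hits zero it can never come back:

import random

COLOURS = ["blue", "yellow"]

def next_generation(dataset, n_samples):
    # "Train": estimate colour frequencies from the current data, then
    # "generate": sample the next training set from that estimate.
    weights = [dataset.count(c) for c in COLOURS]
    return random.choices(COLOURS, weights=weights, k=n_samples)

random.seed(42)
data = ["blue"] * 10 + ["yellow"] * 90   # generation 0: human-made data
for gen in range(1, 31):
    data = next_generation(data, 100)    # each model trains on the last one's output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: blue cats = {data.count('blue')}/100")

Real collapse is richer than this toy (the researchers also saw models start making up erroneous responses, as noted below), but the one-way loss of tail data is the core mechanism.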

This “pollution” with AI-generated data results in models gaining a distorted perception of reality. Even when researchers trained the models not to produce too many repeating responses, they found model collapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently.

“There are many other aspects that will lead to more serious implications, such as discrimination based on gender, ethnicity or other sensitive attributes,” Shumailov said, especially if generative AI learns over time to produce, say, one race in its responses, while “forgetting” others exist…

…Fortunately, there are ways to avoid model collapse, even with existing transformers and LLMs.

The researchers highlight two specific ways. The first is by retaining a prestige copy of the original, exclusively or nominally human-produced dataset, and avoiding contaminating it with AI-generated data. Then, the model could be periodically retrained on this data, or refreshed entirely with it, starting from scratch.

The second way to avoid degradation in response quality and reduce unwanted errors or repetitions from AI models is to introduce new, clean, human-generated datasets back into their training.

However, as the researchers point out, this would require some sort of mass labeling mechanism or effort by content producers or AI companies to differentiate between AI-generated and human-generated content. At present, no such reliable or large-scale effort exists online…
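Neither mitigation is conceptually complicated; the hard part is the provenance labeling itself. Here is a rough sketch of the data-pipeline idea, where the 80/20 split and the function names are our illustrative choices rather than numbers from the paper:

import random

def build_retraining_corpus(prestige_human, fresh_human, model_outputs,
                            n_total=10_000, human_fraction=0.8):
    # prestige_human: the preserved pre-AI dataset, kept uncontaminated
    # fresh_human:    newly collected, verified human-generated data
    # model_outputs:  AI-generated text, admitted only up to a fixed cap
    n_human = int(n_total * human_fraction)
    human_pool = prestige_human + fresh_human
    corpus = random.sample(human_pool, min(n_human, len(human_pool)))
    corpus += random.sample(model_outputs, min(n_total - n_human, len(model_outputs)))
    random.shuffle(corpus)
    return corpus

Everything hinges on knowing which documents belong in human_pool, which is exactly the mass labeling mechanism the researchers say does not yet exist at web scale.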

…While all this news is worrisome for current generative AI technology and the companies seeking to monetize with it, especially in the medium-to-long term, there is a silver lining for human content creators: The researchers conclude that in a future filled with gen AI tools and their content, human-created content will be even more valuable than it is today — if only as a source of pristine training data for AI.

3. Bill Nygren, Alex Fitch – First Citizens Bank: The Bank Buyers – Matt Reustle, Bill Nygren, and Alex Fitch

Bill: [00:02:30] Obviously, everybody knows what a bank is, but I don’t think there’s a lot of thought as to how you actually operate a bank. And certainly, in the wake of all the problems recently with SVB and First Republic, we’ve learned that a lot of people in both the government and the media don’t really understand how banking works.

So I’m going to just start with an example. Let’s say I wanted to open a bank, and I put in $100,000 of cash. So I’ve got $100,000 of equity, no debt. And then you come along and say you’ve got $900,000 that you’d like to invest in a savings account. So now I’ve got $1 million in cash, $900,000 in deposits, and $100,000 in equity.

My deal with you is I’ll give you something like 150 basis points less than I can earn on T-bills, and that’s enough to cover my expenses for recordkeeping, processing your transactions, and running a branch banking network. So if I collect 5% on the T-bills I invest in, that’s $50,000. I pay you $32,000 of interest, that’s 3.5% on your money, leaving net interest income of about $18,000 before my expenses.

And then I have about 100 basis points of expenses, leaving me with $8,000 before tax, $6,000 after. So I’m earning a 6% ROE on my investment. Now clearly, that’s a very low-risk bank, but it doesn’t return enough to be worth my $100,000 investment. So nobody would run a bank on those terms.
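Bill’s arithmetic is easy to check. The minimal sketch below reproduces both this low-risk 6% case and the 17% mortgage case he sets up next; the 25% tax rate and the 100-basis-point expense base on assets are our inferences from his round numbers:

def toy_bank_roe(asset_yield, deposit_rate, assets=1_000_000,
                 deposits=900_000, equity=100_000,
                 expense_ratio=0.01, tax_rate=0.25):
    # Back-of-envelope ROE for Bill's hypothetical one-depositor bank.
    interest_income = assets * asset_yield       # what the bank earns on its assets
    interest_expense = deposits * deposit_rate   # what it pays the depositor
    expenses = assets * expense_ratio            # ~100bps of operating cost
    pretax = interest_income - interest_expense - expenses
    return pretax * (1 - tax_rate) / equity

print(f"T-bill bank:   {toy_bank_roe(0.05, 0.035):.1%}")    # roughly 6% ROE
print(f"Mortgage bank: {toy_bank_roe(0.065, 0.035):.1%}")   # roughly 17% ROE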

So I get smarter, and I say Alex wants to buy a house. So instead of $1 million in T-bills, I decide to write a mortgage to Alex. I collect 150 basis points over Treasuries. So now that same math works out to me earning a 17% return on equity. And you can start to see the attraction of banking. But there are three huge risks that I’ve created: credit risk, liquidity risk, and duration risk. So start with credit: what happens if Alex stops paying on his mortgage?

Well, then I don’t have the money to pay you back on your deposits. So credit risk is always the most important risk that banks have to focus on. The other risk, liquidity, is that I’m giving you daily withdrawal rights on your money, but Alex doesn’t have to give me his mortgage back until 30 years go by. So I’ve got a huge asset-liability mismatch, and managing that is a very important aspect of running a good bank. Lastly, what happens if rates go up?

I can’t change the rate Alex is paying on his mortgage because that’s contractual, but you expect higher rates on your savings account because rates are now higher. So I’ve also got a big duration risk in the bank, and that too has to be managed to have a long-term successful bank. So to us, it’s kind of disingenuous when you hear people saying today, as they look at what happened to Silicon Valley, that banks shouldn’t be run in a risky way. Banking is all about risk.

You’re taking short-term deposits. You’re making long-term loans. You’re expecting people to pay back that money. You’re making an estimate of how long the deposits will stay with you. And all of banking is about getting enough diversification in your depositors and your borrowers so that, instead of me dealing with a 1% or 2% chance that Alex defaults on his mortgage, I’ve got enough mortgages out there that I can make a pretty good guess that 1% or 2% of the people will default.
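That diversification point is the law of large numbers at work, and a quick simulation makes it tangible. The 2% independent default probability is our illustrative assumption; real defaults correlate, so this understates tail risk:

import random

def simulated_default_rates(n_loans, p_default=0.02, trials=1_000):
    # Simulate the fraction of loans defaulting across many scenarios,
    # returning the median scenario and the 99th-percentile bad year.
    rates = sorted(
        sum(random.random() < p_default for _ in range(n_loans)) / n_loans
        for _ in range(trials)
    )
    return rates[len(rates) // 2], rates[int(len(rates) * 0.99)]

random.seed(0)
for n in (1, 100, 10_000):
    median, p99 = simulated_default_rates(n)
    print(f"{n:>6} loans: median loss {median:.1%}, bad year {p99:.1%}")

With one mortgage, the bad year means losing everything; with thousands of mortgages, even the bad year lands close to the expected 2%.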

And as analysts looking at the banking industry, we look at it and say, it’s generally a commodity business, it’s hard to run a bank so well that it’s a better-than-average business, but the people become even more important in banking than they are in most industries because the leverage is so high.

In most industries, if a management team risks 10% of their assets, they’re also risking about 10% of their equity. In banking, if you risk 10% of your assets, you’re putting the entire equity at risk. So to us, the people become exceptionally important in banking as well as the quantitative analysis of how good a job they’re doing, managing the risks that they’re underwriting…

Matt: [00:14:36] This business sounds very interesting. It was a very detailed answer there with a lot that I want to tap into. But just from the early description, as you mentioned, it’s perhaps the most important bank that no one has heard of: not hosting conference calls, this deep history of M&A. Share a bit more about that management team and who the leadership is today, how long they’ve been around, and how much they’ve changed the business model. Is the M&A, all of those deals and acquisitions, something that’s specifically happened within their tenure?

Alex: [00:15:06] The bank has been run by the same family for three generations. R.P. Holding took over as CEO in 1935 and began what’s been a more-than-80-year run of consistent management by the Holding family. R.P.’s son, Lewis, took over in the 1950s. He was the CEO of the bank until 2008, when Frank Holding took over. Frank Holding is still the CEO today. Really, the Holding family is deeply intertwined with this bank.

Frank and his four sisters own something like 24% of the shares outstanding. They control around 40% of the vote. Frank started in the business at 22, working his way up through junior roles in the bank. His sister, Hope, started at the bank in 1986. Today, she owns almost 5% of the company and is the Vice Chairman. His brother-in-law, Peter Bristow, is the bank’s president. The business is very intertwined with the family; for more than 80 years, they’ve been running it.

The strategy has evolved over time. For a long time, I think they were organically focused on opening branches in adjacent geographies. The acquisitions started before Frank: they started branching out into various markets through takeovers of banks in other states, but it really accelerated under Frank. He took over in 2008, and you had the financial crisis. And so from 2009 to 2011, there was a lot of opportunity in failed banks through FDIC auctions.

So from 2009 to ’11, they completed something like a half-dozen FDIC-assisted takeovers, with meaningful gains associated with taking over those businesses. And in the subsequent 12 years, they continued down that path. It doesn’t feel like there are FDIC auctions and bank failures every year, given how newsworthy the recent ones have been, but there are. And they’ve relatively consistently found opportunities to buy failed banks through these FDIC auctions at what have been very attractive prices.

That’s really become a core competency, and it’s not the type of advantage you typically think about a bank having. But at this point, they seem to have real muscle memory around integrating FDIC transactions. They know the processes. They know how they’re going to bid. They know, on the next day, how they’re going to begin the integrations, how they’re going to structure employee retention packages, how they’re going to communicate with depositors.

Every step of that process, they’ve mapped out and executed on north of 15 times now. It gives them a real skill set that the vast majority of large banks have never even considered building. And when you have something like the Silicon Valley Bank failure come up, that can turn into a real asset.

Matt: [00:17:54] Do they have much competition in these auctions? You mentioned it just seems like a specialty, or something that other banks don’t even consider doing. But when they are participating in these, are there others that they’re often bidding against, or others who have operated a somewhat similar strategy?

Bill: [00:18:11] There are certainly others that compete in FDIC auctions. But I think the FDIC’s own summary of what happened after SVB got sold to First Citizens is pretty interesting, because they criticize themselves for not offering the opportunity to a large enough set of bidders to perhaps extract the highest price they could have for SVB.

They haven’t publicly said exactly whom they restricted, but it’s been written in various places that they told hedge funds they couldn’t bid on the portfolio. They told the top 10 banks not to bother bidding. They told banks that were smaller than SVB that they were too small, that a bank larger than SVB was needed to assure the public that the rescue would have staying power. They ruled out banks that would need a capital raise to be able to buy SVB.

And then I’ve also read that they ruled out banks that hadn’t previously purchased from the FDIC. So if you consider that First Citizens was barely larger than SVB, at least before the deposit run started at SVB, you are probably talking about only 20 or so banks that were large enough to compete. And of those, the overwhelming majority were either not experienced at FDIC takeovers or needed capital. I think it’s fair to guess that only a very, very small number of banks would have put a bid in on SVB.

Alex: [00:19:48] It reminds me of the old quote that, for a lot of management teams, it’s better to fail conventionally than to succeed unconventionally. This is an area that requires specific knowledge and specific experience. And for the vast majority of management teams that were allowed to bid, you can imagine the dynamic internally: they’re taking on a lot of risk for something they don’t fully understand.

They don’t even know what questions to ask, being asked to build out this competency in a couple of days and potentially risk their careers on a major decision the likes of which they’ve never made before. Contrast that with Frank’s family and their business: they don’t have to worry about losing their jobs over some perceived short-term issue. There’s a certain decisiveness that comes with being the ultimate owner and acting like an ultimate owner.

Now, they care quite a bit about ensuring that First Citizens succeeds, that they maintain this legacy, their family’s worth, and their position in the community. But there’s an ability to act more decisively than when you’re a hired CEO who has to be more concerned about others questioning his decisions in the short term…

Matt: [00:32:39] This is a more thematic question. Bill, you might be the right person to answer this. Think about SVB and the rate at which that run on the bank happened, especially compared to the quarterly results, which showed there was a duration mismatch and an interest rate exposure. It seemed to happen very quickly, and you would not refer to those deposits as sticky. Once it started, it happened very quickly.

Do you think that was a signal, or indicative of anything having changed in the overall markets with the ability to move funds faster, technology, and the way that information can spread? Do you think anything happened with that event which should be a broader concern for the overall system?

Bill: [00:33:22] It has certainly made all bankers attuned to how easy it is to shift funds. It no longer means getting in your car, driving to a branch, and waiting in line; instead, you pull out your iPhone and move funds in seconds. But while there’s been a lot of focus on how technology has made it easier for depositors to transfer funds out of a bank, I think the real issue at SVB was how much of the money was tied to the same industry, and to some not financially sophisticated people who look to the same leaders to help them with their financial decisions.

So you have one of those leaders tweet that he thinks people should move money out of SVB, and most of the depositors at SVB were probably followers of that person. Where Alex has talked about First Citizens having generational relationships, SVB couldn’t possibly have been in a more different position. They couldn’t make a reasonable estimate of how sticky their deposits were because they hadn’t had them for long enough. When I was talking in the introduction about the three risks, credit risk wasn’t a problem at SVB.

Liquidity risk was a big problem, because they had very liquid deposits and not-so-liquid assets, and then duration mismatch was a problem, because the deposit side of the balance sheet floated completely with interest rates and the asset side did not. One of the issues is that not only the investment community but also the regulators were backward-looking in thinking about banking risk, because credit risk is what got all the banks in trouble in the GFC.

So the focus in the regulatory environment has been on minimizing credit risk. And ironically, some of the large bank managements that we’ve talked to post-SVB say that regulators were actually pushing them to extend duration by buying mortgage-backed securities, just like SVB had done. So it’s funny: you want to protect your capital base and you also want to protect your income stream, but sometimes those are at odds with each other.

And the regulators were more worried about what would happen to the earnings of the banking system if low rates or negative rates persisted or came into being than about what happens if rates go from nothing to 5% or 6% in a very short period of time. So I think there are some pretty unique factors at work here, in addition to the much-discussed technology change that makes it so easy now for people to change where they bank.

Matt: [00:36:16] Absolutely. Very interesting, and a lot to learn from that experience. With all that in mind, when you are generally approaching banks as investors (I think you’ve referenced some of the metrics here, in terms of ROE and book value), how do you think about the industry as value investors yourselves? And with all of those qualitative factors in mind when approaching any investment, how do you think about the actual valuation of banks?

Bill: [00:36:45] If you look at the past generation or two at where banks trade versus the S&P 500, they’ve typically sold at about two-thirds of the S&P 500 multiple. We think that kind of makes sense. It’s hard to argue that this is a better-than-average industry, and difficult to say why banks should sell at 18x earnings when that’s where the S&P 500 sells.

But at Oakmark, we’re always looking for opportunities where prices get out of line both with their history and with what we think fundamental value is. And today, the average bank sells at probably less than half of the S&P 500 multiple, a larger discount than it has carried historically. We would also argue the industry itself is in much better shape than it was at the time of the GFC, especially regarding credit risk. So we think there’s an unusual opportunity in banking.

I mentioned earlier that, to us, getting an opinion about the people in charge of various banks is one of the most important things, because of the leverage and the opaqueness of the financial accounting; capital allocation is hugely important. One of the reasons we think the industry is more attractive today than it was pre-GFC is that almost all of the leadership teams of the large banks agree that when they cannot grow at the rate they want by making loans to creditworthy customers, they’re willing to grow per-share value by shrinking the denominator, the share count.

When there aren’t organic growth opportunities, returning capital to the shareholders, both through dividends and share repurchases, is central to our philosophy at Oakmark: we want managements that are comfortable giving capital back to the owners when they don’t have good growth opportunities. I think book value is a good starting point. A well-run bank ought to be worth book value.

It’s probably hard to get to much more than twice that in terms of what the underlying value could be. And it’s funny: I started in this business a little over 40 years ago, and one of the rules of thumb back then was that a bank’s P/E should roughly equal its return on equity. So if it earns 8% on equity, it should sell at 8x earnings. If it earns 15% on equity, it could sell at 15x earnings.

And through all of the changes in the past 40 years, whether interest rates have been near zero or up over 10%, the math behind that very simple rule, that the P/E should about equal return on equity, still approximately holds today. So one of the other metrics we look at is how big a discount the P/E available in the market is relative to the return on equity the company is achieving…
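As a rough illustration of how that rule of thumb connects the earnings multiple to the book multiple (our own arithmetic, not Oakmark’s model): since P/B = P/E × ROE, the rule also implies a price-to-book value.

```python
# Rule of thumb from the interview: a bank's P/E roughly equals its ROE in percent.
# Our own corollary: since P/B = P/E * ROE, the rule also pins down price-to-book.

def implied_multiples(roe: float) -> tuple[float, float]:
    """roe as a decimal, e.g. 0.10 for 10%."""
    pe = roe * 100   # 10% ROE -> ~10x earnings
    pb = pe * roe    # P/B = (P/E) * (E/B)
    return pe, pb

for roe in (0.08, 0.10, 0.15):
    pe, pb = implied_multiples(roe)
    print(f"ROE {roe:.0%}: ~{pe:.0f}x earnings, ~{pb:.2f}x book")
# A 10% ROE bank lands at ~1x book ("a well-run bank ought to be worth book value");
# a 15% ROE bank lands around the "twice book" ceiling Bill mentions.
```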

…Bill: [00:40:47] One last thing I’d throw in there, Matt. First Citizens has two classes of stock. There’s the regular Class A stock that has normal voting rights, and then there are the Class B shares with super voting rights. When Alex mentioned earlier that the family has about 40% voting control despite not owning nearly that much of the underlying share base, that’s because of those Class B shares.

And a strange anomaly in the market today is that investors are so concerned about illiquidity that these super-voting shares, which don’t trade nearly as frequently as the regular voting shares, actually trade at about a 10% discount to them.

So especially for individual investors, who don’t need to accumulate a large position for it to be meaningful to their assets and who can be in complete control of when they decide to liquidate a position in First Citizens, getting paid 10% to get extra voting rights seems to us to make a really good deal an even better deal.

Matt: [00:41:57] That’s very interesting. Same dividend rights and everything else? It’s just a matter of liquidity that explains that discount?

Bill: [00:42:05] Yes.
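The arithmetic of that discount, sketched with hypothetical prices (not actual quotes): with identical dividend rights, a 10% lower price buys roughly 11% more shares, and 11% more dividend income, per dollar invested.

```python
# Hypothetical prices to illustrate the dual-class discount; not actual quotes.
class_a = 100.00          # regular voting shares
class_b = class_a * 0.90  # super-voting shares at a 10% discount

budget = 10_000.0
extra_shares = budget / class_b - budget / class_a

print(f"Extra shares per $10,000: {extra_shares:.1f}")           # ~11.1
print(f"Dividend income pickup:   {class_a / class_b - 1:.1%}")  # ~11.1%
```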

Matt: [00:42:06] When you look at the business model moving forward, there used to be these general rules of thumb about where interest rates were and whether that would be positive or negative for the banks. Thinking about First Citizens specifically, they have the acquisition, whose integration will, I assume, take some time to complete and smooth out.

But is there anything else that you think about as a key driver of the business model? Not that I’m asking you to make a rate call, but how important are interest rates in terms of impacting their earnings outlook, and is there anything else that’s a key variable in driving the outlook for the business?

Alex: [00:42:47] It’s an interesting and kind of ironic dynamic the industry has found itself in. For a really long period of time through the 2010s, we were sitting here thinking we needed to get off this 0% interest rate floor, because the high-quality deposit franchise and the low-quality deposit franchise can both pay roughly the same amount when rates are zero, and the high-quality deposit franchises, as a result, under-earn.

So there was this idea that higher rates would be extremely helpful, because you’d be able to flex that high-quality deposit franchise value and actually realize some of it by paying less than lower-quality peers on your liabilities. That happened, and you’ve seen meaningful net interest margin expansion for those banks. But now the industry has found itself in a different predicament, which is that unrealized losses have increased so much from higher interest rates that, at this point, it’s not clear whether banks are still beneficiaries of rates being this high.

And in a lot of circles, for some banks, there’s fear around what happens if rates go higher: those unrealized losses could expand…
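A rough sketch of where those unrealized losses come from, using illustrative numbers rather than any bank’s actual book: a fixed-rate bond’s market value falls when prevailing yields rise, even if the issuer never misses a payment.

```python
# Present value of a fixed-rate bond at a given market yield -- illustrative only.
def bond_price(face: float, coupon_rate: float, years: int, yld: float) -> float:
    coupons = sum(face * coupon_rate / (1 + yld) ** t for t in range(1, years + 1))
    principal = face / (1 + yld) ** years
    return coupons + principal

face = 1_000_000  # a $1m 10-year bond bought at par with a 2% coupon
at_par = bond_price(face, 0.02, 10, 0.02)       # = $1,000,000 at a 2% yield
after_hikes = bond_price(face, 0.02, 10, 0.05)  # same bond if yields rise to 5%

print(f"Unrealized loss: ${at_par - after_hikes:,.0f}")  # ~$232,000, a ~23% markdown
```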

…Bill: [00:54:23] When I started in this industry a little more than 40 years ago, I think there were some 14,000 banks, and we have maybe 25% of that number today, just over 3,000. I think both in politics and in communities at large, people have a misperception that the small number of banks relative to what we used to have means banking has become more inconvenient. We actually have more than twice as many branches today as we had 40 years ago.

So the distance somebody has to drive to their local bank has actually gone down. My hope, from a regulatory perspective and even just a political perspective, is that this drumbeat that we need to keep all the small banks independent might die down. There are such strong economies of scale in banking that, to earn the same rate of return, a small bank has to take incrementally much more risk, and that’s not good for the system.

And when small banks get acquired, they inherit better technology, more economies of scale, and better regulatory compliance. I think it’s actually good for the system to see more mergers and acquisitions in banking. And people say, oh, wouldn’t it be awful if we got down to a world where we only had 20 banks in the United States? I’m not so sure why that would be a bad thing.

4. When The Stock Market Plunges… Will You Be Brave Or Will You Cave? – Jason Zweig

In fact, if I could give you only one piece of financial advice, it would be this: Spend less time studying your investments and more time studying yourself. That’s because how much money you make in an investment often depends far more on how you behave than on how it does. “It’s people that lose money,” says Patrick Chitwood, an investment adviser in Birmingham with a Ph.D. in psychology. “It’s not investments.”

To see what I mean, look at PBHG Growth Fund. In the second half of 1990, when the U.S. stock market slipped 6%, this small-stock fund skidded 21%. Over the next two years, investors yanked out nearly all their money, shriveling PBHG’s assets from $12.5 million to $3.5 million. Bad move: From the end of 1990 through 1995, PBHG Growth’s 35.1% annual return transformed a $5,000 investment into $22,503. Someone who fled PBHG and earned the overall market average of 16.6% annually would have turned $5,000 into just $10,776 — less than half what PBHG produced.

That huge $11,727 difference is the price of poor self-knowledge. Chances are, most of the people who bailed out of PBHG had honestly believed they were long-term investors who could stomach the fund’s high risks. They were wrong…
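The arithmetic behind those figures is plain compounding; a quick check of the article’s numbers:

```python
# Reproducing the article's compounding math over 1990-1995 (5 years).
initial = 5_000

stayed = initial * (1 + 0.351) ** 5  # 35.1% a year in PBHG Growth
fled = initial * (1 + 0.166) ** 5    # 16.6% a year at the market average

print(f"Stayed in PBHG:  ${stayed:,.0f}")         # ~$22,503
print(f"Fled to market:  ${fled:,.0f}")           # ~$10,776
print(f"Cost of bailing: ${stayed - fled:,.0f}")  # ~$11,727
```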

…In 1975, Steven Spielberg’s movie about a killer shark hit the theaters, and suddenly Americans were terrified of going into the ocean — even though there had been a grand total of only 66 shark attacks in U.S. waters over the preceding 10 years.

“We tend to judge the probability of an event by the ease with which we can call it to mind,” explains Kahneman. But that’s a bad way to assess risk; an event does not become more likely to recur just because it is recent or memorable. In 1975, for instance, the odds of being attacked by a shark in U.S. waters were about one in 300,000,000 — and, since sharks don’t go to the movies, the odds certainly didn’t worsen after the film was released. But because Jaws was so vivid and fresh in people’s minds, it drowned out all the statistical proof that beaches were safe.

Similarly, after the October 1987 stock market crash, panicked investors virtually stopped buying stock mutual funds for the next year and a half. Instead, investors snapped up bonds and cash — despite the overwhelming historical evidence that stocks had outperformed them both over the long run…

…Then there’s the “near miss.” Say the winning number in a lottery was 865304. John picked 361204; Mary picked 965304; Peter picked 865305. Which of them is the most upset? Most people agree that Peter feels the worst, because he came “closest” (even though all losing numbers are equally incorrect). As Kahneman explains, “People become more frustrated in a situation where a more desirable alternative is easy to imagine.”…

…A group of people was asked which is longer, the Panama Canal or the Suez Canal, and then asked how certain they were that their answer was correct. Among those who were 60% certain, 50% of them got the answer right — meaning that this group was 10% too sure. But among those who were 90% certain, only 65% got the answer right, meaning that this group was 25% too sure.
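The overconfidence measure here is simply stated confidence minus the actual hit rate; a small sketch of that calculation with the article’s figures:

```python
# Overconfidence gap = stated confidence - actual accuracy (article's figures).
groups = [(0.60, 0.50), (0.90, 0.65)]  # (stated confidence, hit rate)

for confidence, hit_rate in groups:
    gap = confidence - hit_rate
    print(f"{confidence:.0%} certain, {hit_rate:.0%} correct -> {gap:.0%} too sure")
# 60% certain, 50% correct -> 10% too sure
# 90% certain, 65% correct -> 25% too sure
```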

The more convinced we are of our knowledge, the bigger the gap is likely to be between what we actually know and what we think we do. Such overconfidence leads us to inflate the value of our own skill, leading to what psychologists call the illusion of control. Years ago, when a Spanish national lottery winner was asked how he selected the ticket number, he answered that he was positive his lucky number ended with 48 — because, he said, “I dreamed of the number seven for seven straight nights. And seven times seven is 48.”

No wonder Kahneman says that “When people take risks, it’s often because they don’t understand the odds. One of the hardest challenges is to know just how little you really know.” If you overestimate your skills and knowledge, you may be unrealistically optimistic about your investment prospects. That will worsen your shock when the market tumbles, increasing the odds that you will panic and bail out at the bottom.

One group of people is asked to assess the probability that the population of Turkey is more than 5 million; another is asked the likelihood that Turkey’s population is less than 65 million. Then both groups are asked for their best guess of Turkey’s population. The first group guesses 17 million; the second, 35 million. (The correct answer: roughly 63 million.)…

…According to a recent study by the American Stock Exchange, 38% of young middle-class investors check their investment returns at least once a week, 17% check them monthly, 10% check yearly — and the rest “never” check. While never is not often enough, once a week is way too often. The more frequently you check on your investments, the more volatile they will look to you. My advice: Force yourself to check the value of your investments no more than once a month…

…If you make a habit of dollar-cost averaging into a particular mutual fund — investing a fixed amount at regular intervals — you’ll stand a better chance of sticking with it than if you’d thrown in a big chunk of money all at once. Think of Ulysses in Homer’s Odyssey, who resisted the deadly lure of the Sirens’ songs by having his crew “tie me hard…to hold me fast in position upright against the mast.”…
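A minimal sketch of the mechanics, with hypothetical prices: because a fixed dollar amount buys more shares when prices are low, the average cost per share ends up below the average of the prices paid.

```python
# Dollar-cost averaging: invest a fixed amount each period, whatever the price.
prices = [10.0, 8.0, 5.0, 8.0, 10.0]  # hypothetical monthly fund prices
monthly = 100.0

shares = sum(monthly / p for p in prices)  # fixed dollars buy more when price is low
invested = monthly * len(prices)

print(f"Average price over the period: ${sum(prices) / len(prices):.2f}")  # $8.20
print(f"Average cost per share paid:   ${invested / shares:.2f}")          # ~$7.69
```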

…Let me leave you with these thoughts. Successful investors control the controllable. You can’t prevent the market from crashing someday, but you can control what you do about it. The more honestly you understand your own attitudes toward risk, the more likely you are to thrive no matter what the market throws at you.

5. Charlie Silk’s 150-Bagger – Peter Lynch

My candidate for the world’s greatest amateur investor is Charles Silk. I met this fellow Bostonian halfway around the world, at a reception at the Bible Lands Museum in Jerusalem in 1992. We were part of a trade mission to Israel sponsored by the state of Massachusetts. It turned out we had a few friends and many stocks in common. On a bus ride to historic sites, we had our first extended chat. Not about historic sites, but about Blockbuster Entertainment, Charlie’s most successful pick.

Charlie bought Blockbuster many splits ago, in 1984, for $3 a share. It wasn’t called Blockbuster yet. It was called Cook Data Services, which fit into Charlie’s area of expertise. He had had his own data-processing company, which had fallen on hard times, and he was forced to shut it down. He was sitting home, doing telemarketing for a software outfit and wishing he could find another way to make a living.

Cook Data Services solved his problem. The shares he bought for $3 apiece are worth $450 today, so his $10,000 investment became a living in itself. Thanks to this one exciting stock, he was able to abandon telemarketing and devote himself to his favorite hobby – looking for more exciting stocks…

…Call Charlie a lucky man for stumbling onto Cook Data Services, but luck didn’t make him a millionaire. The hard part was holding on to the stock long enough to get the full benefit. After the price had doubled and then tripled, he didn’t say to himself, I’ll take my profits and run, like many investors who invent arbitrary rules for when to sell. He wasn’t scared out when the price dropped, as it did several times, and he ignored the highly publicized negative comments made by forecasters and “experts” who knew less about Blockbuster than he did. He had the discipline to hold on as long as the fundamentals of the company were favorable. It was not a guess on his part. He was doing his homework all along.

In my investing career, the best gains usually have come in the third or fourth year, not in the third or fourth week or the third or fourth month. It took eight years for Charlie to get his 150-bagger, but in a way, he’d been preparing for the opportunity since college…

…He searches for good stocks among small companies that are relatively debt free and have been beaten down in the market, to the point that they’re selling for less than cash in their bank accounts. “I’m paying nothing for the company itself,” Charlie says in his rich Boston accent. “The only thing I’m risking is my patience.”…

…Now we move forward to 1984. Another hot IPO market was followed by a collapse at the end of that year. Small high-tech stocks suffered the most. For Charlie, it was 1974 all over again, except this time he didn’t have to bother with pink sheets. NASDAQ had launched its computerized trading system.

He surveyed this latest wreckage. Cook Data Services caught his eye. It sold software programs to oil and gas companies – right up Charlie’s alley. It came public in 1983 at $16 a share and quickly rose to $21.50, but the price had fallen to $8 when Charlie began tracking it. He was still tracking when year-end selling dropped the price to $3.

This was the kind of risk Charlie liked to take: a company with no debt and $4 a share in cash, selling for $3. But cash in itself is no guarantee of success. If a company is sick to begin with, it has to spend its cash to stay alive. Cook Data was quite healthy. Its revenues had increased four years in a row. “To produce a record like that,” Charlie says, “they had to have something on the ball.” His $10,000 investment was as much as he could scrape up. It made him one of the largest shareholders. 

A few months after Charlie bought his shares, Cook Data announced it was moving away from data services and into the “consumer area.” The company’s president, David Cook, had an ex-wife who was apparently a movie buff; she still had some influence and convinced him to open a video superstore in Dallas…

…One of the most interesting things the company sent Charlie was an independent study on the future of the video-rental industry. “When I read that thing,” Charlie says, “I found out that 30 percent of American households owned VCRs, and that eventually 60-70 percent would own these machines. [This estimate turned out to be conservative.] All these millions of people with VCRs were going to need an endless supply of tapes.”

It got more interesting when he went to the library and looked up company filings in the SEC’s Official Summary of Security Transactions and Holdings. He saw that two different groups, the Sanchezes from Texas and Scott and Lawrence Beck from Illinois, had become major shareholders. Scott Beck was coauthor of the video study and obviously impressed by his own research. Charlie also learned that revenues from the Dallas superstore had more than doubled in the first three months of operation. His sources at the company confirmed these numbers and told him how crowded the store was. It was amazing, they said. People were driving from as far as 30 miles away…

…In six months from 1984 to early 1985, he’d already made five times his money. Some of his friends were urging him to be sensible and take his wonderful profit. This is where many investors would have tripped up, but having missed some spectacular gains in the 1970s, Charlie kept his focus where it belonged – not on the stock price but on the company itself…

…A week or so before the offering, Charlie was reading Alan Abelson’s column in Barron’s, when he came to a pan of Blockbuster. Abelson’s argument: Who needs another video store?

Abelson’s comment produced a spate of selling that caused the stock price to drop 15 percent. Charlie was a fan of Abelson’s, but he was confident that he knew more about Blockbuster. The sales figures from Blockbuster showed that people were flocking to the new superstores…

…Toward the middle of 1987, Charlie started worrying about the stock market in general and the fact that he had too much money riding on one issue. So he sold a portion of his shares in the high 30s, just before the big correction in October of that year. Short term, this proved to be a smart move, because Blockbuster stock promptly fell by half, to $16. But longer term, he would have been better off to hold on to every share to get all of Blockbuster’s tenfold gain over the next four years.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 18 June 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 18 June 2023:

1. Sharing Memories of Ben Graham with Warren Buffett in Omaha, 2023 – Beyond Ben Graham blog

“When I worked with him,” Mr. Buffett continued, “Ben told me: ‘Don’t worry too much about making money. It will change how your wife lives but not how you live.’” Mr. Buffett laughed gleefully. With a jovial smile, he remembered Grandpa Ben’s advice to him: “‘You and I will still wear the same clothes and eat at the same cafeteria, so relax.‘”…

…I suspect that ours was an unusual encounter for Warren, with no talk of investments, stocks, earnings, companies, banks, and the economy. Instead, I asked him, “How did Ben treat you, when you went to work for him at Graham-Newman?” In 1954, Warren had been twenty-four years old.

“Kindly. Ben treated me kindly. Same as he treated everyone else in the office,” Warren asserted.

I pictured my grandfather, his ready smile, his benevolent presence, the way he had welcomed me to his Aix-en-Provence cottage when I arrived for an unannounced visit, scruffy from weeks of camping, my long hair in dire need of a wash, at the age of twenty-one.

On our second visit with Warren, I asked him: “Would you say that you and Ben became friends?” When he didn’t answer right away, I continued: “In some of Ben’s letters and postcards in your files, he expressed a wish for you and Susie to visit him in California. That sounds like friendship to me.”

“Well, I think I wanted the friendship more than he did.” Warren paused, and when he spoke again, his voice cracked. “Ben was my hero and my friend.” His light blue eyes widened and his face took on a youthful, eager, and fierce expression. “It helps to have heroes who are better than you.”

I felt honored to be in the room. A deep, essential part of me perceived, from the tenor of Warren’s voice, his fervent gaze, that Warren loves my grandfather. Not just back in the ’50s when he venerated Ben as his most admired Columbia Business School professor and his dream boss. Not just in the late ’50s when he and his wife Susie stayed at the Beverly Hills Hotel and joined Ben and Estey for dinner, or in 1968 when Warren organized a tribute to his mentor by convening twelve of Ben’s former Columbia students (including himself, Charlie Munger and Walter Schloss) on Coronado Island in San Diego to listen to Graham, the Great Man. Ben died in 1976, and Warren still finds meaning in his relationship with Ben. We humans have the capacity to feel love for a person who has passed—love that nourishes the soul and informs how we live. Warren’s heartfelt connection with Ben continues to sustain him…

…In his gracious treatment of me and my husband, Warren embodied the kindness and generosity he saw in Ben Graham. He treats the twenty-four staffers who work with him at the Omaha office considerately too. Investment manager Ted Weschler appeared relaxed and glad to be there. Each person we chatted with in the lunch room seemed at ease and content, in marked contrast to the stressed employees I have encountered in Bay Area tech firms.

Warren Buffett follows in Ben Graham’s footsteps by manifesting kindness in his treatment of shareholders, and compassion in his way of conducting business. For example, back in the ’70s, Warren Buffett stood up for Berkshire Hathaway textile workers the way Ben Graham advocated for ordinary investors when Ben compelled Standard Oil to distribute surplus cash to shareholders in the 1928 Northern Pipeline contest. From a business standpoint, Buffett knew he should close the failing Berkshire Hathaway textile mill and invest its assets in a profitable enterprise, but he chose to keep it open in order to give the workers a livelihood.

Inspired by his hero Ben Graham’s generosity, Warren Buffett has far surpassed Ben in giving to charity. In 2022, according to Forbes, Warren’s 17th annual summer gift brought his total lifetime giving to charitable foundations to a record $48 billion, “[solidifying] his place as the likely biggest philanthropist of all time.”…

…A smiling executive assistant boxed up the papers. “It’s been so nice to meet you in person,” she enthused. “You know, Warren talks about your grandfather all the time.”

“You mean, because he was expecting my visit?” I asked.

“No,” she answered. “I’ve been here twenty-five years. He talks about Ben Graham all the time.”

2. Can We Have a New Bull Market With 3% Unemployment? – Ben Carlson

Many historical market relationships have been turned on their head since the pandemic but there has been a clear correlation between stock market returns and the unemployment rate over the past 75 years or so…

…There is a clear pattern in these results.

Average annual returns have been higher from higher unemployment rates and lower from lower unemployment rates…

…It can also be instructive to look at the range of returns around these historical averages. Here those are for 10-year performance:

You can have exceptional long-term returns from low unemployment rates. It’s just that you get a much higher floor investing when the economy is falling apart than when everything is humming along from a labor market perspective.

Markets are often counterintuitive. Historical relationships are helpful for setting expectations but they’re not written in stone.

So we could get a rip-roaring bull market from an unemployment rate of 3% or so but it’s probably not the base case.

3. Microsoft’s Satya Nadella Is Betting Everything on AI – Steven Levy and Satya Nadella 

STEVEN LEVY: When did you realize that this stage of AI was going to be so transformative?

SATYA NADELLA: When we went from GPT 2.5 to 3, we all started seeing these emergent capabilities. It began showing scaling effects. We didn’t train it on just coding, but it got really good at coding. That’s when I became a believer. I thought, “Wow, this is really on.”

Was there a single eureka moment that led you to go all in?

It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it’s achieved a lot of great benchmarks, but it doesn’t have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I’d dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that’s pretty cool.

Microsoft has been investing in AI for decades—didn’t you have your own large language model? Why did you need OpenAI?

We had our own set of efforts, including a model called Turing that was inside of Bing and offered in Azure and what have you. But I felt OpenAI was going after the same thing as us. So instead of trying to train five different foundational models, I wanted one foundation, making it a basis for a platform effect. So we partnered. They bet on us, we bet on them. They do the foundation models, and we do a lot of work around them, including the tooling around responsible AI and AI safety. At the end of the day we are two independent companies deeply partnered to go after one goal, with discipline, instead of multiple teams just doing random things. We said, “Let’s go after this and build one thing that really captures the imagination of the world.”…

OpenAI CEO Sam Altman believes that this will indeed happen. Do you agree with him that we’re going to hit that AGI superintelligence benchmark?

I’m much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn’t touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I’m not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That’s a fantastic world to live in.

What’s your road map to make that vision real? Right now you’re building AI into your search engine, your databases, your developer tools. That’s not what those underserved people are using.

Great point. Let’s start by looking at what the frontiers for developers are. One of the things that I am really excited about is bringing back the joy of development. Microsoft started as a tools company, notably developer tools. But over the years, because of the complexity of software development, the attention and flow that developers once enjoyed have been disrupted. What we have done for the craft with this AI programmer Copilot [which writes the mundane code and frees programmers to tackle more challenging problems] is beautiful to see. Now, 100 million developers who are on GitHub can enjoy themselves. As AI transforms the process of programming, though, it can grow 10 times—100 million can be a billion. When you are prompting an LLM, you’re programming it.

Anyone with a smartphone who knows how to talk can be a developer?

Absolutely. You don’t have to write a formula or learn the syntax or algebra. If you say prompting is just development, the learning curves are going to get better. You can now even ask, “What is development?” It’s going to be democratized.

As for getting this to all 8 billion people, I was in India in January and saw an amazing demo. The government has a program called Digital Public Goods, and one is a text-to-speech system. In the demo, a rural farmer was using the system to ask about a subsidy program he saw on the news. It told him about the program and the forms he could fill out to apply. Normally, it would tell him where to get the forms. But one developer in India had trained GPT on all the Indian government documents, so the system filled it out for him automatically, in a different language. Something created a few months earlier on the West Coast, United States, had made its way to a developer in India, who then wrote a mod that allows a rural Indian farmer to get the benefits of that technology on a WhatsApp bot on a mobile phone. My dream is that every one of Earth’s 8 billion people can have an AI tutor, an AI doctor, a programmer, maybe a consultant!…

… It’s all about saying, “Hey, can there be a more natural interface that empowers us as humans to augment our cognitive capability to do more things?” So yes, this is one of those examples. Copilot is a metaphor because that is a design choice that puts the human at the center of it. So don’t make this development about autopilot—it’s about copilot. A lot of people are saying, “Oh my God, AI is here!” Guess what? AI is already all around us. In fact, all behavioral targeting uses a lot of generative AI. It’s a black box where you and I are just targets.

It seems to me that the future will be a tug-of-war between copilot and autopilot.

The question is, how do humans control these powerful capabilities? One approach is to get the model itself aligned with core human values that we care about. These are not technical problems, they’re more social-cultural considerations. The other side is design choices and product-making with context. That means really making sure that the context in which these models are being deployed is aligned with safety…

You still haven’t said whether you think there’s any chance at all that AI is going to destroy humanity.

If there is going to be something that is just completely out of control, that’s a problem, and we shouldn’t allow it. It’s an abdication of our own responsibility to say this is going to just go out of control. We can deal with powerful technology. By the way, electricity had unintended consequences. We made sure the electric grid was safe, we set up standards, we have safety. Obviously with nuclear energy, we dealt with proliferation. Somewhere in these two are good examples on how to deal with powerful technologies…

AI is more than just a topic of discussion. Now, you’ve centered Microsoft around this transformational technology. How do you manage that?

One of the analogies I love to use internally is, when we went from steam engines to electric power, you had to rewire the factory. You couldn’t just put the electric motor where the steam engine was and leave everything else the same. That was the difference between Stanley Motor Carriage Company and Ford Motor Company, where Ford was able to rewire the entire workflow. So inside Microsoft, the means of production of software is changing. It’s a radical shift in the core workflow inside Microsoft and how we evangelize our output—and how it changes every school, every organization, every household.

How has that tool changed your job?

A lot of knowledge work is drudgery, like email triage. Now, I don’t know how I would ever live without an AI copilot in my Outlook. Responding to an email is not just an English language composition, it can also be a customer support ticket. It interrogates my customer support system and brings back the relevant information. This moment is like when PCs first showed up at work. This feels like that to me, across the length and breadth of our products.

4. Picking a Stock for the Year 2048 – Jason Zweig

Tiffany Gray, 22 years old, is a senior majoring in finance and wealth management at Delaware State, a historically Black university in Dover, Del. Jonathan Rivers, 20, is a junior double-majoring in environmental sciences and religious studies at the University of Virginia. 

Ms. Gray and Mr. Rivers, along with their peers, will assemble a portfolio of perhaps 15-20 stocks and lock it in place for the next 25 years. 

That sounds crazy, and maybe it is, but investors of all ages can learn from these young people.

They are part of an extremely long-term experiment created by Thomas Gayner, chief executive of Markel Corp., a Glen Allen, Va.-based insurance company. Mr. Gayner has run Markel’s investment portfolio since 1990, building it up to $22 billion with a patient, conservative approach.

He has established a student investment fund at each of the two universities. By the year 2047, Mr. Gayner’s family will contribute, in 25 annual installments, a total of $750,000 to the two clubs. 

The students—29 of them this year at Virginia, nine at Delaware State—will use that money to pick investments that will be frozen for the next 25 years. Each year, the members will buy another round of picks for the next quarter-century. No one, no matter what, will ever be able to sell anything.

Starting in year 26, the members who picked the stocks 25 years earlier will disburse half the accumulated money for scholarships; the other half will be reinvested for the future by that year’s members…

…One lesson from these new clubs is old: the astonishing power of letting your winners run for as long as possible. You can’t lose more than 100% on even your biggest losers (unless you bought them with borrowed money), but the potential gains on your biggest winners are boundless…

…The key is not selling. In a 1984 article called “The Coffee Can Portfolio,” veteran investor Robert Kirby described a client’s husband, who had exactly copied all the buy recommendations Mr. Kirby’s firm had made to his wife, putting about $5,000 in each.

Unlike her, however, the husband had ignored all the sell recommendations. He’d never sold a share. Several of his holdings grew to more than $100,000 apiece. One, which became Xerox Corp., surpassed $800,000, greater than the value of his wife’s entire portfolio.

The long-term tailwind from letting your winners run is easy to underestimate; the human mind isn’t built to extrapolate giant growth rates over multidecade periods…
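To see how hard multidecade compounding is to intuit, here is a purely illustrative table of growth rates (not the actual path of the Xerox position):

```python
# How a never-sold $5,000 position compounds -- illustrative growth rates only.
stake = 5_000  # roughly the per-pick amount in the Kirby anecdote

for rate in (0.07, 0.15, 0.20):
    row = ", ".join(
        f"{yrs}y: ${stake * (1 + rate) ** yrs:,.0f}" for yrs in (10, 25, 40)
    )
    print(f"{rate:.0%} a year -> {row}")
# At 7% the stake merely doubles each decade; at 20% for 40 years it passes
# $7 million, the kind of outcome intuition badly underestimates.
```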

…Another leader of the Virginia club, Jacob Slagle, 21, says, “It really forces you to think of businesses in a different way: Can it survive 25 years?”

Omar Parker, Jr., a 20-year-old member of the Delaware State club, is already thinking beyond the year 2048: “When we’re long gone,” he says, “our fund will be a legacy to the future generations.”

5. The Exercise Problem – Paul Skallas

The exercise problem is this: we did not evolve to want to exercise; moving our bodies was simply a necessary part of survival.

We have created a society where we do not have to physically move our bodies very much in order to survive. We’ve built an incredibly convenient world. Physical stressors have disappeared. We do not need to hunt for our food; we drive to work, and most office work and entertainment is sedentary. We can go through life working and living sitting down, day after day. We can even get really rich doing that. The incentives for moving around aren’t really there anymore.

But that isn’t the world we evolved in, nor have our bodies evolved to live in a sedentary world. We need to move around, or there will be consequences. But we haven’t yet figured out how to fit moving around into our modern world.

For nearly every day of their lives, hunter-gatherers, farmers and villagers engage in hours of physical work because they lack cars, machines and other labor-saving devices. Their daily existence requires walking many miles and carrying things.

We have become so efficient and automated that farmers today have worse cardiovascular health than non-farmers. City people are healthier today. Which is probably the first time that’s ever occurred in history…

…Researchers put trackers on the Hadza, a modern hunter-gatherer tribe. They found that the Hadza physically move at a pace of what we would call exercise for at least 90 minutes a day. Every day. That includes moving around all the time doing things. This population also has a low level of cardiovascular disease, including hypertension, and optimal levels of biomarkers of cardiovascular health. But these people were not exercising. They were just responding to the needs of their environment. Exercise is something else.

Exercise can be defined as voluntary, planned, structured physical activity undertaken for the sake of health and fitness. It’s a modern phenomenon. We shouldn’t confuse exercise with physically moving around. We moved around for a variety of reasons. For example, play is an end in itself; it is not exercise. Every animal plays…

…Only about 20-30 percent of Americans exercise at even the decent, government-approved intervals, which is the lowest common denominator. Recent studies show you can exercise 13 hours a week at moderate intensity and still get healthier…

…Of the 20-30 percent who actually work out, how many enjoy it? Only half. The other half do not want to be there. They hate it. I’m sure you know what I’m talking about. So now we’re down to 10% of Americans who enjoy exercise for the sake of exercise, which basically means enjoying exercise can be considered a fetish.

This is a serious problem. It is not trivial. The survey also found that 54 percent of Americans mentally check out of their workouts because they’re so bored. Another 18 percent claim their body is simply on autopilot during their routines.

Whose fault is it that 90% of people dislike exercising? Is it their fault? I don’t think so. There’s something inherently wrong with the concept of exercise that we need to address in order to solve the problem of moving around. The fitness influencer yelling at you to work out is a symptom of a deeper issue. We haven’t figured out the exercise problem at scale yet…

…It makes sense that most people hate exercise, since it is misaligned with our evolutionary environment. But some people really do enjoy it. Who are some of these 10% of exercise enjoyers? What is their motivation?

1) The Corporate Endurance Athlete

I’ve worked at a number of medium and big companies and there has been a consistent trend throughout: You won’t find many powerlifters in upper management. What you will find is people who love doing cardio and endurance sports like running or bicycling for many, many miles. There are some statistics that back it up…

…But mainly, it takes a lot of consistency to reach the top of a hierarchy. And consistency means doing the same thing day after day and not getting tired of it. It’s no surprise that enjoying endurance exercise (running a lot of miles every day) selects for a certain type of person.

Not only does this person have a high tolerance for boredom, but it could be coupled with a high tolerance for pain. They do not mind using a treadmill or doing a triathlon. The history of the treadmill showcases its evolution from a form of punishment to a widely used piece of exercise equipment. Treadmills were created in the 19th century and were primarily used for punishment in prisons and workhouses. In these institutions, prisoners and inmates were made to walk or run on the treadmill for hours as a form of hard labor…

…2) The Bodybuilder

Many young males really enjoy going to the gym because it allows them to build their body to look a certain way. Unfortunately, that certain way only started a little over 100 years ago.

As a young man, I started going to the gym to build muscle to look better. It wasn’t for “health”. It was for show. Later on, I transitioned to Jiu-Jitsu and Muay Thai, and then to other forms of exercise. But if I had stayed in the bodybuilding mindset, I may have just gotten on various forms of steroids.

Is bodybuilding healthy? It’s certainly better than not moving at all and being sedentary. Absolutely. But it skews your image of a lindy, healthy body, of how functionally strong athletes look in the absence of steroids. It also focuses on hypertrophy and muscles for show, not function…

…3) The Anti-Aging Warrior

…The other exercise lover is the man who fears death. He will force himself to love exercise for the sake of staying on this planet. Death is a tremendous motivator, especially to a man who has a life he enjoys and is successful. This type of man is in his 50s or 60s. Some examples include Peter Attia and Bryan Johnson. There is no emphasis on joy or fun. The exercise is deeply serious and must be done…

…I sometimes think about the island with the oldest people in the world and how they look a little happier just living their lives in their environment instead of being on a mission to exercise in order to stay alive.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Markel and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 11 June 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 11 June 2023:

1. Real estate is China’s economic Achilles heel – Noah Smith

Painting with a broad brush, you could say that China shifted from an export-led economy to a domestic-investment-led economy after 2008. And the biggest chunk of that domestic investment, by far, was real estate.

Real estate development and its related industries (such as real estate finance) don’t just create places for Chinese people to live; they also create vast amounts of employment in the Chinese economy. That’s a big problem right now, because in the wake of the real estate crash that began in 2021, China’s unemployment has risen a lot — officially, unemployment for the 16-24 age group is now at 20.4%, compared to 6.5% in the U.S. Having a vast number of unemployed young people is a threat to both social stability and the future quality of the workforce, and it’s definitely something that’s worrying the Chinese government right now.

That real estate bust, by the way, is still going on, and, as you might expect for a sector so large, it’s weighing heavily on the rest of China’s economy. The overall narrative about China’s recovery in early 2023 has been recovery from the Zero Covid policies of late 2022 — growth was forecast to bounce back to a rapid 5.2% this year. But the most recent monthly economic data shows that the troubles are far from over. Here’s Bloomberg:

China’s economic recovery weakened in May, raising fresh fears about the growth outlook…Manufacturing activity contracted at a worse pace than in April, while services expansion eased, official data showed Wednesday, suggesting the post-Covid rebound had lost momentum…

A stronger recovery in China will also depend on a turnaround in the property market, which makes up about a fifth of the economy when including related sectors. Home sales have slowed after an initial rebound, while real estate developers continue to face financial troubles. 

It’s highly likely that underneath the headline-grabbing drama of Zero Covid, the real force dragging down China’s short-term growth is the general crisis in the real estate sector that began a year and a half ago. That crisis is still ongoing, with more defaults coming periodically. As Adam Wolfe reports in a detailed thread, residential real estate investment is falling:

And that’s in spite of the Chinese government’s frantic efforts to revive the sector. In the past, China was able to use real estate as a form of fiscal stimulus that cost the central government very little — the government just called up the state-controlled big banks and told them to lend more, and the banks lent to property developers. That stimulus came at the expense of long-term productivity growth (since real estate tends to have lower productivity growth than other sectors), but it did prevent China from experiencing recessions for a long time. With the current crash, though, that policy looks to have reached the end of its rope.

The fact is, China just doesn’t need that many more places to live. Even as of 2017 — six years ago! — China had already basically reached developed-country levels of living space per person.

As China built more and more, vacancy rates rose steadily in all big cities except for the four “Tier 1” cities (Beijing, Shanghai, Shenzhen, and Guangzhou). Overall, vacancy rates were significantly higher in China than in most rich countries:…

…In any country, property will be an important component of wealth, alongside stocks and bonds. But in China, with its underdeveloped stock and bond markets, almost all financial wealth is real estate:

From looking at house prices compared to incomes, it’s clear that much of Chinese real estate is bought as an investment property rather than for its value as a place to actually live (and yes, this is speculative-bubble behavior). In San Francisco — America’s famously least affordable big city — a typical house costs 10 times the typical resident’s annual income. In Chinese cities this ratio is often much higher:…

…The biggest losers from the real estate bust, however, will probably be China’s local governments.

China’s local governments famously rely on land sales rather than on property taxes for most of their revenue. The real estate market is thus what allows local governments to both provide essential public services and to conduct local industrial policy — which, until the mid-2010s, was China’s main type of industrial policy…

…Xiong explains that this system has a bunch of advantages and disadvantages. On the plus side, buying land in a city is basically like buying equity in that city — if the city government can produce local growth, your land price goes up. So businesspeople and homeowners all become shareholders in the city, which aligns everybody’s incentives toward growth. On the downside, the system creates a ton of different structural incentives for local governments to borrow too much, and for private investors to over-invest in un-economical and risky real estate projects, and for banks to finance these projects too cheaply. In other words, the combination of the local government sector and the property sector is a big reason why real estate looms so much larger in China’s economy than in other countries, and a big reason why the sector got so bloated.

Ultimately, relying on land sales to finance local governments is a strategy that just has a natural time limit. Eventually you run out of valuable land to sell. China’s local governments look like they’re hitting that point, which is why they’re increasingly asking the central government for money. And the central government is stepping in to replace the revenue from the lost land sales:

This means that many of the advantages that China got from federalism and local experimentation and initiative during its amazing growth boom in the 90s, 00s, and early 2010s will now be forfeit. Industrial policy will increasingly be conducted from the center; Xi Jinping and his clique will be making a lot more of the decisions regarding who builds what where, instead of partnerships of local governments and businesspeople. The virtuous cycle where the property sector aligns the interests of local governments and businesses toward growth will now be weakened if not broken altogether in many places.

2. Post-war Germany’s lessons on inflation – Michael Fritzell

Costantino Bresciani-Turroni was an Italian economist who lived from 1882 to 1963. He’s famous for being an anti-fascist intellectual and a proponent of free-market economics.

But more importantly, he wrote a book called The Economics of Inflation, which is widely regarded as the definitive book on Germany’s experience with hyperinflation between 1919 and 1923…

…The First World War broke out on 28 July 1914 when Austria-Hungary declared war on Serbia following the assassination of Archduke Franz Ferdinand. Germany joined the Austria-Hungary coalition. On the other side, Russia, France, the UK and, later, the US formed the Allied forces of World War 1.

Just three days after the start of the war, the German central bank (“the Reichsbank”) suspended the conversion of its notes to gold. The German currency (“the mark”) became paper money without any value anchor. It, therefore, became known as the “paper mark”, as opposed to the previous, gold-backed “gold mark”.

The reason behind the suspension was that the government knew that it would be unable to finance the war through tax revenue. Instead, the Reichsbank took it upon itself to print money to cover any deficit. And in the following four years, the Reichsbank routinely bought government bonds used to finance the budget deficit…

…From July 1914 to the end of the war in November 1918, Germany’s total government debt rose from 300 million marks to 55 billion marks. The war cost roughly 147 billion marks in total, and so more than 1/3 of it was financed through government borrowing, much of it financed through central bank support…

…The war ended on 11 November 1918 with a German surrender, driven by a new German civilian government. From then onwards, the exchange rate started to depreciate rapidly – faster than domestic prices and the volume of circulation.

There had been hopes that a German victory would lead to spoils of war that could alleviate the country’s debt burden. But once the government admitted defeat, those hopes were crushed.

In the eight months after the war ended, the budget deficit reached 10 billion marks – an incredibly high number. When the Socialist Party took power in November 1918, it had neither the strength nor the ability to impose the taxes necessary to balance the budget.

The theory prevailing at the time in Germany was that the depreciation was caused by a deterioration in the balance of payments. But foreign voices, especially the British, believed that the depreciation of the currency was instead caused by an excessive budget deficit.

It’s possible that the holders of the mark feared heavy reparations payments and therefore sold the currency in anticipation of a coming crisis. The Treaty of Versailles was signed on 28 June 1919. The Treaty might have had a psychological influence on the German public, who feared that the government would resort to money printing to fund the deficit.

In reality, the budget deficits would have been high with or without the reparation payments. And the payments actually made under the Treaty of Versailles were not particularly onerous, representing only 1/3 of the total deficit between 1920 and 1922…

…Here is the exact process in which inflation pressures built up in the economy:

  1. The issuance of paper money caused the currency to depreciate as speculators used the newly issued money to buy foreign currency or cheaper foreign goods for import.
  2. After the currency depreciated, inflation picked up as imports – especially raw materials – became more expensive.
  3. Later on in the process, the newly printed money worked its way through the economy and eventually led to higher wages. But wages didn’t adjust immediately – instead, they adjusted with a long lag that caught the population off-guard.

There was a narrative early in the post-war era that a weaker currency would stimulate the economy. That was true, but only to a small extent. When the currency depreciated, companies saw their profit margins increase as selling prices adjusted quickly while wages took much longer to adjust. Companies then reinvested their profits and “fake prosperity” ensued.

Exports did particularly well since they were sold at higher foreign prices. Inbound tourism to Germany took off. Railway charges did not increase in proportion to the depreciation of the mark, so foreigners were able to enjoy cheap travel when they came to Germany. Pure labour-arbitrage industries such as shipyards also did well as wages in Germany fell compared to foreign competitors.

Meanwhile, interest rates remained low. There was a kind of yield curve control in place, with the official discount rate fixed at 5% between 1915 and July 1922, even though inflation accelerated from 1919 onwards to incredible levels.

Instead of raising the interest rate when inflation picked up, the Reichsbank restricted credit, favouring certain borrowers over others. It continuously extended credits to private speculators, who proceeded to use these loans to buy foreign currency and profit from the depreciation of the mark. It’s unclear how these borrowers were selected. But they appear to have had a cosy relationship with the Reichsbank – to say the least…

…The hoarding of foreign exchange became more serious throughout 1922. German industrialists formed the habit of leaving the profits they made from exports overseas. Germans began to sell houses, land, securities – anything really – to get hold of foreign currency.

Eventually, Germans started using foreign exchange for their day-to-day transactions. Merchants began to set prices in the gold mark or foreign currency. While salaries were still paid in paper marks, wage earners would rush to buy goods as soon as they received the money. Or convert the money into foreign currency as soon as possible.

In February 1923, the Reichsbank tried to support the mark exchange rate artificially through foreign exchange operations. But continuous issuance of paper money caused inflation to continue, and by April, the dam finally broke with the mark being dumped at a record rate.

Workers came up with solutions to the inflation problem: surcharges for the depreciation of the currency were added onto wage contracts, and wages became tied to cost-of-living indices.

It was only in 1923 that hyperinflation got out of control. Taxes were inflated away to almost zero since they were paid with a long lag, and tax receipts ended up representing only 0.8% of government expenses. The rest of the government’s revenues came from printing money. By the end of 1923, 75% of all government bonds were held by the Reichsbank…

…On 15 October 1923, a new bank called the “Rentenbank” was created. This bank issued liabilities that were meant to be used as a substitute for the paper mark. Later that year, the value of the paper mark was stabilised at 4,200 billion paper marks per US dollar – equivalent to 1 trillion paper marks per gold mark. And one Rentenmark became equivalent to one gold mark.

The new Rentenmark wasn’t convertible into gold. But just the simple fact that the new money had a different name from the old instilled confidence. As Bresciani-Turroni explained:

“[Because] of the simple fact that the new paper money had a different name from the old, the public thought it was something different from the paper mark… the new money was accepted, despite the fact that it was an unconvertible paper currency.”

And so, when people stopped hoarding foreign currency, the velocity of circulation of paper marks declined. And the increased willingness to hold domestic currency reduced the inflation problem in and of itself. The Rentenmark ended up circulating together with the paper mark for almost a year.

The passing from hyperinflation to complete stability was sudden. The budget was re-established and expenses cut so that equilibrium was reached. The introduction of new taxes and reduced pressure in terms of reparation payments also helped. In 1924-25, the government finally achieved significant budget surpluses.

Counter-intuitively, a shortage of money emerged despite notes denominated in the trillions of marks. The reason was that domestic prices had increased so much, and the depreciation was so severe, that there was not enough money to satisfy the volume of transactions at current prices.

This shortage was best measured through the concept of “real money supply” (=money supply deflated by inflation), which started shrinking from late 1923 onwards. The circulation of money in mid-1922 was 15-20 times the pre-war days, while prices had risen 40-50 times.
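To put numbers on that squeeze, here is a quick sketch using the midpoints of the figures above (the indices are from the passage; picking midpoints is my simplification):

```python
# The "real money supply" squeeze, using midpoints of the figures above
# (the indices are the article's; taking midpoints is my simplification).
nominal_money = 17.5     # circulation, 15-20x pre-war
price_level = 45         # prices, 40-50x pre-war
print(f"Real money supply: ~{nominal_money / price_level:.0%} of pre-war")
# -> ~39%: despite the printing presses, the money stock bought far less
# than before the war, hence the "shortage of money".
```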

The shortage of money in real terms led to the following outcomes:

  • Trade was arrested as companies could not gain access to working capital. Factories closed, and unemployment rose.
  • Interest rates increased, and heavily indebted individuals went bankrupt. At the end of 1923, the “call money” interest rate reached 30% per day.

The real money supply shrank so much that eventually, the entire money supply amounted to only 444 million gold marks, compared to a Reichsbank gold reserve of 1 billion gold marks. That enabled the Reichsbank, on 30 August 1924, to fix the conversion rate of the new Reichsmark at 1 trillion paper marks per Reichsmark. In other words, since the value of the money supply had dropped below the Reichsbank’s holdings of gold, it was easy to peg the currency to gold yet again.

After the new Rentenmark and Reichsmark were introduced, prices stopped rising, and the paper mark strengthened against gold. Factories re-opened, unemployment declined, and confidence revived.

3. Why AI Will Save the World – Marc Andreessen 

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.
  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.
  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.
  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.
  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve people’s ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those…

…My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. AI is a machine – it is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well…

…This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages…

…Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet…

…But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe…

…China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

4. Apple Vision – Ben Thompson

This reality — pun intended — hits you the moment you finish setting up the device, which includes not only fitting the headset to your head and adding a prescription set of lenses, if necessary, but also setting up eye tracking (which I will get to in a moment). Once you have jumped through those hoops you are suddenly back where you started: looking at the room you are in with shockingly full fidelity.

What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world and displaying that feed on the postage-stamp-sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something that isn’t total clarity, exactly, but is of sufficiently high resolution and speed that there is no reason to think it’s not real.

The speed is essential: Apple claims that the threshold for your brain to notice any sort of delay between what you see and what your body expects to see (which is what causes known VR issues like motion sickness) is 12 milliseconds, and that the Vision visual pipeline displays what it sees to your eyes in 12 milliseconds or less. This is particularly remarkable given that the time for the image sensor to capture and process what it is seeing is along the lines of 7~8 milliseconds, which is to say that the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds…
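A quick back-of-envelope on those figures (the 12 ms and 7~8 ms numbers are from the passage above; the 90 Hz refresh rate is my assumption, for illustration only):

```python
# Back-of-envelope on the latency budget (12 ms and 7~8 ms are from the
# passage; the 90 Hz refresh rate is an assumption for illustration).
TOTAL_BUDGET_MS = 12.0
sensor_capture_ms = 7.5       # midpoint of the ~7-8 ms capture figure
remaining_ms = TOTAL_BUDGET_MS - sensor_capture_ms
print(f"Left for processing + display: ~{remaining_ms:.1f} ms")

frame_time_ms = 1000 / 90     # ~11.1 ms per frame at 90 Hz
print(f"Frame time at 90 Hz: {frame_time_ms:.1f} ms")
# The whole capture -> process -> display path has to finish in roughly
# one frame time, which is what pushes Apple toward a real-time pipeline.
```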

…The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bound rather than just kept as a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts…
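To make the contrast with a time-sharing OS concrete, here is a toy sketch of the event-driven, priority-preemptive dispatch that the definition describes. This is purely illustrative, not Apple’s actual real-time execution engine (whose internals are not public), and the task names are invented:

```python
import heapq

# Toy sketch of event-driven, priority-preemptive dispatch.
# Lower number = higher priority, as in most RTOS conventions.

class Task:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority
    def __lt__(self, other):              # heapq orders by priority
        return self.priority < other.priority

ready = []                                # min-heap of runnable tasks

def on_event(running, new_task):
    """On each event, run the highest-priority task,
    preempting the current one if the newcomer outranks it."""
    heapq.heappush(ready, new_task)
    if running is None or ready[0].priority < running.priority:
        if running is not None:
            heapq.heappush(ready, running)    # preempted, back to the queue
        running = heapq.heappop(ready)
        print(f"now running: {running.name}")
    return running

running = None
running = on_event(running, Task("ui_animation", priority=5))
running = on_event(running, Task("camera_passthrough", priority=0))  # preempts
```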

… Notably, your fingers don’t need to be extended into space: the entire time I used the Vision Pro my hands were simply resting in my lap, their movement tracked by the Vision Pro’s cameras.

It’s astounding how well this works, and how natural it feels. What is particularly surprising is how high-resolution this UI is; look at this crop of a still from Apple’s presentation:

The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well…

…At the risk of over-indexing on my own experience, I am a huge fan of multiple monitors: I have four at my desk, and it is frustrating to be on the road right now typing this on a laptop screen. I would absolutely pay for a device to have a huge workspace with me anywhere I go, and while I will reserve judgment until I actually use a Vision Pro, I could see it being better at my desk as well…

…The keynote highlighted the movie watching experience of the Vision Pro, and it is excellent and immersive. Of course it isn’t, in the end, that much different than having an excellent TV in a dark room.

What was much more compelling were a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest.

It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing…

…What was far more striking, though, was how the consumption of this video was presented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:

I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.

That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal, but now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is for ever more personal experiences, and I’m not sure it’s an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards an increase in loneliness.

This, I would note, is where the most interesting comparisons to Meta’s Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but that deep integration between Apple’s own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.

What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta’s goals that much more difficult to achieve. Apple, meanwhile, isn’t even bothering with presence: even its Facetime integration was with an avatar in a window, leaning into the fact you are apart, whereas Meta wants you to feel like you are together.

In other words, there is actually a reason to hope that Meta might win: it seems like we could all do with more connectedness, and less isolation with incredible immersive experiences to dull the pain of loneliness. One wonders, though, if Meta is in fact fighting Apple not just on hardware, but on the overall trend of society; to put it another way, bullishness about the Vision Pro may in fact be a function of being bearish about our capability to meaningfully connect.

5. SITALWeek #398 – Brad Slingerlend

Uber Eats will be rolling out up to 2,000 four-wheeled sidewalk robots for meal delivery. Serve, the manufacturer of Level 4 autonomous delivery bots, notes that there are already 200 such robots delivering food in LA. Venture capital is pouring into the robotics market, especially for humanoid bipedal and quadrupedal forms. Serve has previously raised capital from Nvidia, Figure just raised $70M for their general-purpose bipedal robot, and, thanks to VC infusions, Sanctuary AI recently unveiled its Phoenix humanoid. General-purpose robots with embedded AI could far exceed the impact that AI has in the purely digital realm, but with a much larger array of potential outcomes…

…This Lex Fridman podcast interview with the director of the MIT Center for Bits and Atoms, Neil Gershenfeld, is packed with insight on computing, AI, and biology. I knew of Gershenfeld because he stumbled into inventing the airbag seat sensor while working on an apparatus for a magic trick in the 1990s. Given the density of knowledge Gershenfeld has, you sometimes have to pause in order to process what he’s saying, but if you can make it to the last quarter of the podcast, I think you’ll see the payoff. One of his more revelatory conclusions is that the advancements from the current wave of AI innovation are now essentially behind us, and its future impact is somewhat predictable. What he means by that conclusion is that we have reached the point where AI can simulate the human brain; therefore, these new systems will be able to do anything a human can do. Meanwhile, humans will also keep doing things humans can do despite AI subsuming a lot of human tasks. Gershenfeld also explains that the far bigger disruption will come when AI is embodied in all sorts of objects down to the molecular level. The three minutes starting at this point are particularly insightful. Gershenfeld estimates that embodied human intelligence is eight orders of magnitude more powerful than a human brain on its own. I believe this means we will see far more emergent, unpredictable behaviors from embodied AI than AI running on servers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 June 2023)

Here are the articles for the week ending 04 June 2023:

1. Some Things We’ve Learned This Year – Ben Carlson

Tech stocks don’t need lower rates to go up. Tech stocks got crushed last year with the Nasdaq 100 falling more than 30%. The Fed raised interest rates from 0% to more than 4% so that didn’t help long-duration assets like growth stocks.

But there was this theory many people latched onto that tech stocks were only a rates play. In the 2010s and early-2020s rates were on the floor while tech stocks went bananas so it seemed apparent that there was an inverse relationship. When rates were lower tech stocks would do well and when rates were higher tech stocks would do poorly.

However, this year the Fed has now taken rates over 5% and could continue raising rates one, maybe two more times before all is said and done. Meanwhile, the Nasdaq 100 is up more than 30% in 2023.

Does this mean easy money had nothing to do with tech stock gains? I wouldn’t go that far. Low rates certainly helped long-duration assets. But low rates alone didn’t cause Apple to increase sales from $170 billion to nearly $400 billion in 10 years. Low rates have nothing to do with the AI speculation currently taking place with NVIDIA shares.

Interest rates are an important variable when it comes to the markets and economy. But rates alone don’t tell you the whole story when it comes to where people put their money. Tech stocks were also a fundamental play on innovations that have now become an integral part of all our lives…

Higher rates and inflation don’t guarantee poor stock market returns. There are a lot of market/econ people who think we could be in a new regime of higher rates and higher inflation. It’s a possibility worth considering. Many of those same people assume this will be a bad thing for markets. After all, the past 40+ years of financial market returns are all a product of disinflation and falling rates, right? Right?

Not so fast. These are the average annual returns for the U.S. stock market over a 40-year period of rising inflation and interest rates:

  • 1940-1979: 10.3% per year

And these are the average annual returns for the U.S. stock market over a 40-year period of falling inflation and interest rates:

  • 1980-2019: 11.7% per year

The results are surprising. Things were better during the 1980-2019 period but not as much as one would think. I don’t know if we are entering a new regime of higher rates and inflation. But if we are it doesn’t necessarily mean the stock market is doomed.
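For a sense of what that 1.4-percentage-point gap means over a full 40 years, here is a quick compounding check (the return figures are from the passage; the arithmetic is mine):

```python
# Compounding the two 40-year averages quoted above (returns are the
# article's; the arithmetic is mine).
rising_era = 1.103 ** 40     # 1940-1979: 10.3% per year
falling_era = 1.117 ** 40    # 1980-2019: 11.7% per year
print(f"$1 grows to ~${rising_era:.0f} (rising rates and inflation)")
print(f"$1 grows to ~${falling_era:.0f} (falling rates and inflation)")
print(f"Terminal wealth ratio: {falling_era / rising_era:.2f}x")
# -> roughly $50 vs $84 per dollar invested, a ~1.7x gap after 40 years:
# real, but hardly the disaster the regime-change worry implies.
```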

2. Private Equity Fundamentals – Daniel Rasmussen and Chris Satterthwaite

But we can look at the subset of PE-owned companies that are either publicly listed or have issued public debt as a partial reflection of what’s currently going on in the opaque but important asset class. And we can use this data to understand what’s happening to revenue, EBITDA, and debt generally across private portfolios.

We took a look at all PE/VC-owned public companies, or companies with public debt, that were 30%+ sponsor-owned, had IPOed since 2018, had a recognizable sponsor as the largest holder, and were headquartered in North America. There were 350 companies that met these criteria; the public equities are worth a combined $385B, and we estimate the companies with public debt are worth another $360B of equity, comprising $750B or 6.5% of the total private equity AUM of $11.7T. Notably, the sample of public equities is roughly 40% tech, which is a significant industry bet, and consistent with our previous estimates of private equity industry exposure…

… We looked at both pro-forma EBITDA, which 50% of the companies in our sample reported, and at GAAP EBITDA. We see below that PE-backed companies in our sample had significantly lower EBITDA margins than the S&P 500, especially on a GAAP basis, and have seen significant margin compression over the past few years. GAAP EBITDA is, perhaps unsurprisingly, much lower than adjusted EBITDA.

Rising SG&A costs have left the median company barely EBITDA profitable on a GAAP basis. 55% of the PE-backed firms in our sample were free cash flow negative in 2022, and 67% added debt over the last 12 months…

…As a group, these companies have a median leverage of 4.9x, which is roughly the ratio of the average B-rated company. However, this includes many overcapitalized VC-backed companies, which are difficult to parse out from the private equity LBOs. When we look at only those with net debt, the median leverage increases to 8.8x, which would put the median LBO well into CCC credit rating (for context, the median leverage for the S&P 500 is 1.7x).
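One mechanical note that helps in reading those figures: leverage here is presumably the conventional net debt divided by EBITDA (the excerpt doesn’t define it, so that is an assumption), which means the choice between GAAP and pro-forma EBITDA moves the ratio a lot. A hypothetical illustration:

```python
# Hypothetical company, invented purely to illustrate why the EBITDA
# definition moves the leverage ratio (leverage = net debt / EBITDA,
# the usual credit-market convention; my assumption, as the authors
# don't define it explicitly).
net_debt = 880                 # $M
ebitda_pro_forma = 180         # $M, after add-backs
ebitda_gaap = 100              # $M, as reported

print(f"Leverage on pro-forma EBITDA: {net_debt / ebitda_pro_forma:.1f}x")
print(f"Leverage on GAAP EBITDA:      {net_debt / ebitda_gaap:.1f}x")
# The same debt load reads as ~4.9x (a B-rated profile) or 8.8x (deep
# into CCC territory) depending on which earnings figure you divide by.
```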

With interest rates rising over 500bps in 2022, much of the increase in interest rates is still not reflected in the 2022 reported figures. The cost of loans has soared recently: a $1B loan for a junk-rated company now averages 12%, up from around a 7.5% average in 2021, according to Reuters…

…The sample of companies we looked at is nearly unprofitable on an EBITDA basis, mostly cash flow negative, and extraordinarily leveraged (mostly with floating-rate debt that is now costing nearly 12%). These companies trade at a dramatic premium to public markets on a GAAP basis, only reaching comparability after massive amounts of pro-forma adjustments. And these are the companies that most likely reflect the better outcomes in private equity. The market and SPAC boom of 2021 presented a window for private equity and venture capital firms to take companies public, and private investors took public what they thought they could. Presumably, what remains in the portfolios was what could not be taken public.

3. Olivine weathering – Campbell Nilsen

When the term ‘carbon sequestration’ comes up, most people think of trees: purchase a carbon credit when booking a flight and, more likely than not, you’ve paid someone to plant a sapling somewhere.

Unfortunately, tree planting has serious disadvantages. Most significantly, its space requirements are immense. To reduce atmospheric CO₂ (currently about 418 ppm) by 100 ppm, within striking distance of the 280 ppm found in preindustrial times, you’d need to convert 900 million hectares to mature forest (an area about 94 percent the size of mainland China and 85 percent the size of Europe).
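A hedged back-of-envelope reproduces those numbers. The 7.8 Gt of CO₂ per ppm is a standard conversion; the per-hectare carbon stock of mature forest is my assumption, picked to sit in the plausible range:

```python
# Back-of-envelope for the reforestation numbers above. The 7.8 Gt-per-ppm
# conversion is standard; the ~870 t CO2/ha stock of mature forest (trees
# plus soil) is an assumed round figure, not from the article.
GT_CO2_PER_PPM = 7.8
co2_to_remove_gt = 100 * GT_CO2_PER_PPM            # 100 ppm -> 780 Gt CO2
T_CO2_PER_HECTARE = 870
hectares = co2_to_remove_gt * 1e9 / T_CO2_PER_HECTARE
print(f"Forest needed: ~{hectares / 1e6:.0f} million hectares")

CHINA_MHA = 960                                    # mainland China, ~9.6M km2
print(f"Share of mainland China's area: {hectares / 1e6 / CHINA_MHA:.0%}")
# -> ~897 million hectares, ~93% of mainland China, close to the article's
# 900 million hectares and 94% figures.
```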

Even if that was possible, mature forests (which sequester more carbon in their soil than in their trees) take a long time to grow, and much if not most of the land available for reforestation is held by private actors, which creates significant political difficulties.

More promising solutions for direct-air capture are likely to come from chemistry rather than biology. Several companies have broken ground in this field, such as Climeworks, Carbon Engineering and 1PointFive. All use a reusable sorbent, a chemical that reacts with CO₂ in the air and then releases it when energy is supplied (usually when it’s heated up). The captured, concentrated CO₂ is then pumped underground, where it is permanently trapped in geological formations in its gaseous, pressurized form, or mineralized into stable carbonates via reactions with the surrounding rock.

Sorbent-based direct-air capture is not a new idea, and is already used on space stations to moderate CO₂ levels. Like those space systems, Climeworks uses an amine sorbent, which releases its captured CO₂ at a relatively low temperature (about 100°C). Unfortunately, amine-based sorbents are extraordinarily expensive – a study on the economics of amine-based sorbents published last year concluded that each tonne of CO₂ captured would incur hundreds of dollars merely in capital expenditure costs for the sorbent. Energy costs are not trivial, either: each tonne sequestered requires no less than 150 kilowatt-hours (kWh).

It is no coincidence that Climeworks operates in Iceland: the island’s active geology gives Climeworks access to ample carbon-free geothermal and hydro electricity at a very low cost. Even then, Climeworks currently charges €1,000 per tonne of CO₂ sequestered; its eventual goal is €600 a tonne. For comparison, the social cost of each additional tonne of CO₂ is currently thought to be somewhere around $185 (about €170 as of the time of writing), though getting an exact figure is devilishly tricky and the error bars are wide.

1PointFive and Carbon Engineering use potassium hydroxide as the sorbent, which is much cheaper than Climeworks’s amines, but the energy costs are almost as large. To regenerate potassium hydroxide, both companies use a process which includes heating a calciner (a steel cylinder) up to 900°C. For Carbon Engineering, the cost of producing a concentrated stream of CO₂ was about $100-$200 a tonne as of 2018, not counting the cost of long-term sequestration.

Ultimately, solutions based on reusable sorbents suffer from a key drawback: once carbon dioxide has been absorbed in a chemical reaction, the resulting compound usually won’t give it back up in purified form unless lots of energy is added to the system. Moreover, sorbent-based processes merely produce a concentrated stream of CO₂, which must be stored (usually underground) or used.

This is easy for the first few thousand or even a million tonnes; for billions or trillions of tonnes, the logistics become nightmarish (though possible). Capturing a trillion tonnes of CO₂ (only 40 percent of humanity’s cumulative carbon emissions) via this process would require about eight times the world’s total yearly energy consumption merely to run the calciners. It could be a small useful addition to our carbon mitigation strategy, but it’s unlikely to help us roll back to a preindustrial environment.
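As a rough sanity check on that “eight times” claim, assuming the calcium-carbonate loop these processes rely on (CaCO₃ → CaO + CO₂ costs about 178 kJ per mole of CO₂, a standard figure) and world primary energy use of roughly 580 EJ a year:

```python
# Rough sanity check on the calciner energy claim, assuming the
# CaCO3 -> CaO + CO2 step used in the KOH-based processes above.
DH_KJ_PER_MOL = 178.0            # standard calcination enthalpy per mol CO2
KG_PER_MOL_CO2 = 0.04401
WORLD_ENERGY_EJ = 580            # rough world primary energy use per year

kj_per_kg = DH_KJ_PER_MOL / KG_PER_MOL_CO2   # ~4,045 kJ per kg of CO2
gj_per_tonne = kj_per_kg / 1000              # 1 kJ/kg = 0.001 GJ/t
total_ej = gj_per_tonne * 1e12 / 1e9         # 1 Tt of CO2; 1 EJ = 1e9 GJ

print(f"Thermodynamic floor: {gj_per_tonne:.1f} GJ per tonne of CO2")
print(f"For 1 Tt: {total_ej:,.0f} EJ, ~{total_ej / WORLD_ENERGY_EJ:.0f}x "
      "world annual energy use")
# -> ~4 GJ/t and ~7x world energy at the theoretical minimum; with real
# thermal losses, the article's "about eight times" is in the ballpark.
```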

If carbon capture with reusable sorbents is astronomically costly, at least for the time being, could we use a non-regenerating sorbent – something that absorbs CO₂ and locks it away for good?

There is a trade-off here. While we’d save the energy costs of cycling the sorbent and storing gaseous CO₂, we’d also need to produce and store truly massive amounts of sorbent. The alternatives would have to be easily available or cheaply manufactured in vast quantities; and because of the storage requirements (reaching into the trillions of tonnes) the compound would need to be non-toxic and environmentally inert. Processing the substance should require relatively little energy, and its reaction with ambient CO₂ needs to operate quickly.

The idea that silicate minerals might be able to fill this role is not, in and of itself, a new one; the earliest proposal of which I am aware is a three-paragraph letter to the editor in a 1990 issue of Nature, proposing that pressurized CO₂ be pumped into a container of water and silicates; five years later, the journal Energy published a somewhat longer outline for carbon sequestration using several intermediate steps. Neither idea went terribly far; popular activism focused on reducing emissions rather than sequestering them, and ideas published in academic journals remained mostly of academic interest.

In 2007, however, the Dutch press began entertaining a rather more sensational idea: the Netherlands’s, and perhaps the world’s, carbon emissions could be effectively and cheaply offset by spreading huge amounts of ground olivine rock – a commonly found, mostly worthless silicate rock composed mainly of forsterite, Mg₂SiO₄ – onto the shores of the North Sea, producing mile after aesthetically intriguing mile of green sand beaches as a side effect. The author of the proposal, Olaf Schuiling, envisioned repurposing thousands of tankers and trucks to ship ground rock from mines in Norway, covering the coast of the North Sea with shimmering golden-green sand and saving the human race from the consequences of the Industrial Revolution.

It seemed too good to be true – so in 2009 the geoscientists Suzanne Hangx and Chris Spiers published a rebuttal. While it was true that ground forsterite has significant sequestration potential on paper (each tonne of forsterite ultimately sequestering 1.25 tonnes of CO₂), Hangx and Spiers concluded that the logistics of Schuiling’s proposal would make the project an unworkable boondoggle.
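That 1.25 figure falls straight out of the standard weathering stoichiometry, four moles of CO₂ consumed per mole of forsterite:

```python
# Verifying "1.25 tonnes of CO2 per tonne of forsterite" from the reaction
# Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4.
MG, SI, O, C = 24.305, 28.086, 15.999, 12.011   # standard atomic masses

forsterite = 2 * MG + SI + 4 * O                # ~140.7 g/mol
co2 = C + 2 * O                                 # ~44.0 g/mol
print(f"t CO2 per t forsterite: {4 * co2 / forsterite:.2f}")   # -> 1.25
```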

Start with transport requirements. For the past two decades, the Netherlands has emitted about 170 megatonnes of CO₂ a year on average; each year, around 136 megatonnes of olivine would be needed to sequester Dutch emissions in full. The nearest major olivine mine, Gusdal, is located in Norway, around a thousand kilometers away. Transporting the required olivine by sea with the most commonly used cargo ship (the $150 million Handysize vessel, with a capacity of about 25 kilotonnes), for example, would require over 100 trips a week – five percent of the world’s Handysize fleet – further clogging some of the world’s busiest waters for shipping. And that’s just for the Netherlands, which is only responsible for about 0.5 percent of the world’s carbon emissions.
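The shipping arithmetic checks out, reproducing the passage’s own figures:

```python
# Reproducing the Dutch shipping arithmetic from the passage.
NL_EMISSIONS_MT = 170                  # Mt CO2 per year
CO2_PER_T_OLIVINE = 1.25
SHIP_CAPACITY_KT = 25                  # Handysize capacity from the text

olivine_mt = NL_EMISSIONS_MT / CO2_PER_T_OLIVINE
trips_per_year = olivine_mt * 1e3 / SHIP_CAPACITY_KT
print(f"Olivine needed: {olivine_mt:.0f} Mt/year")
print(f"Shipments: {trips_per_year:.0f} per year, "
      f"~{trips_per_year / 52:.0f} per week")
# -> 136 Mt and ~105 trips a week, i.e. "over 100", as the passage says.
```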

Then there’s the environmental angle. While forsterite on its own is harmless, olivine usually contains trace amounts of other minerals and heavy metals, most prominently nickel, whose effect on marine life, while understudied, is known to be less than benign.

But the real Achilles’ heels of the Schuiling proposal were matters of physics. The rate of rock weathering is, to a first approximation, a function of three variables: the concentration of CO₂ in the water, the ambient temperature, and (most importantly by far) particle size. While CO₂ concentration in surface ocean water is about the same everywhere, temperature is not: sequestration by forsterite is about three times faster at 25°C (the approximate water temperature off the coast of Miami) than at 15°C (the average in the North Sea). But there’s another problem: olivine particles need to be extremely small to weather effectively. Hangx and Spiers estimated that olivine particles 300 microns in diameter (the average size of a grain of beach sand) would take about 144 years to finish half their potential sequestration, and seven centuries to react completely…

…But what if the problems with Schuiling’s idea were in the execution, not the concept? The Intergovernmental Panel on Climate Change (or IPCC), the world’s most authoritative body on the problem, takes the climate and atmosphere of 1750 – when the atmosphere was about 280 ppm CO₂ – as its starting point. What would it take to return to this point?

Since that time, humanity has pumped a little over two trillion tonnes of CO₂ into the atmosphere, which would require about 1.6 trillion tonnes of raw olivine to sequester. You can imagine this as a cube measuring about eight kilometers or five miles on each side. Luckily for us, sources of high-quality olivine are fairly common, bordering on ubiquitous; and because it’s not (yet) very economically valuable, most deposits haven’t been thoroughly mapped. Assuming we’re simply trying to speed up natural processes, the end destination for the olivine will likely be the ocean.
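A quick size check on that cube, assuming olivine’s typical density of about 3.3 tonnes per cubic metre (the density is my input, not the article’s):

```python
# Size check on the cube, assuming olivine's typical density of ~3.3 t/m3
# (the density figure is my assumption, not the article's).
OLIVINE_T = 1.6e12
DENSITY_T_PER_M3 = 3.3

side_km = (OLIVINE_T / DENSITY_T_PER_M3) ** (1 / 3) / 1000
print(f"Cube side: ~{side_km:.1f} km")   # -> ~7.9 km, "about eight km"
```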

Rock weathering takes place only where the rock is exposed to the elements; a gigantic pile of olivine is only as good as its surface area, and the only way to increase surface area is to break the rock into smaller particles. If you halve the size of your particles, the surface area available is doubled at worst, and you sequester carbon at least twice as quickly (the exact proportion will depend on how many cracks and crevices there are in the breakage – the more jagged the particles, the more surface area and the faster sequestration proceeds). To get back to preindustrial concentrations on a time scale of decades, we’d want to process a lot of olivine and break it down into very small particles – not sand, which (with diameters in the hundreds of microns) is too large, but silt (with diameters in the 10-50 micron range).
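The scaling is easy to verify for idealised spherical particles, where total surface area per tonne works out to 6/(ρd):

```python
import math

# For a fixed mass of idealised spherical particles of diameter d and
# density rho, total surface area per tonne is 6 / (rho * d): halve the
# diameter and you exactly double the available surface.
def area_per_tonne(diameter_um, density_t_per_m3=3.3):
    d = diameter_um * 1e-6                      # metres
    n_spheres = (1 / density_t_per_m3) / (math.pi * d ** 3 / 6)
    return n_spheres * math.pi * d ** 2         # m^2 per tonne

for d in (300, 37, 10):                         # the grain sizes above
    print(f"{d:>3} um: {area_per_tonne(d):>9,.0f} m^2 per tonne")
# 10-micron silt exposes 30x the surface of 300-micron sand, which is
# (to first order) why it weathers so much faster.
```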

What would it take to start making a serious dent in atmospheric CO₂? Say we shot for 80 gigatonnes of olivine a year, locking away 100 gigatonnes of CO₂ when fully weathered. Unlike many proposals for carbon sequestration, olivine intervention is not contingent on undiscovered or nascent technology. Let’s take a look at the process through the lens of an increasingly small grain of rock.

Our particle of olivine would begin its journey on a morning much like every day of the past hundreds of millions of years; it is part of a large deposit in the hills of Sulawesi, a fifteen-minute drive from the coast. (Indonesia is particularly well-suited for processing due to its vast expanse of shallow, tropical seas, but the ubiquity of olivine formations means that sequestration could happen in any number of places.)

This particular morning, however, is different. A mining worker has drilled a hole into the exposed surface of the formation, inserted a blasting cap, and – with a loud bang – smashed another fraction of the rock into pieces small enough to be carried by an excavator. The largest excavators in common use, which cost a bit under two million dollars each, can load about 70 tonnes at a time – a small, but important, fraction of the 220 megatonnes or so the world would need to process that day. Each of several hundred excavators takes no more than a minute or so to load up, complete a full trip to the haul truck, and come back to the front lines. It’s probably cheapest to run it, and the rest of the mining equipment, on diesel; even though it guzzles nearly 200 liters (50 gallons) an hour, the rock it carries will repay its five-tonne-a-day CO₂ footprint tens of thousands of times over.

Our grain of olivine (now part of a chunk the size of a briefcase) is off on a quick trip to the main processing facility in one of a few thousand haul trucks (each costing nearly five million dollars and carrying up to 400 tonnes at a time), where it’s subjected to a thorough pummeling until it’s reached pebble size. Then it’s off to a succession of rock mills to grind it down to the minuscule size needed for it to weather quickly.

It’s a good idea, at this point, to talk a bit about the main costs involved in such an immense proposal. As a rule of thumb, the smaller you want your end particles to be, the more expensive it is to get them there. Once a suitable olivine formation has been located, quarrying rock out of the formation is cheap. Even in high-income countries like Australia or Canada where mine workers make top-notch salaries, the cost of quarrying rock and crushing it down to gravel size is generally on the order of two to three dollars a tonne, and it requires very little energy. Since reversing global warming would entail the biggest quarrying operation in history, we might well expect costs to drop further. 

Depending on the deposit, haul trucks might prove unnecessary; it may be most cost-effective to have the crusher and mills follow the front lines. The wonderful thing about paying people to mill rocks is that we don’t have to know for sure from our armchair; the engineers tasked with keeping expenses to a minimum will figure it out as they go.

What is quite certain is that the vast majority of that expense, both financially and in terms of energy, comes not from mining or crushing but from milling the crushed rock down to particle size. Hangx and Spiers (the olivine skeptics above) estimated milling costs for end particles of various sizes; while sand-sized grains (300 microns across) required around eight kWh of energy per tonne of olivine processed, grains with a diameter of 37 microns were projected to need nearly three times as much energy input, and ten-micron grains a whopping 174 kWh per tonne. Since wholesale electricity prices worldwide are about 15 cents per kWh, that implies an energy cost of around $26 per tonne of olivine, or about $20 per tonne sequestered – at least $1.2 trillion a year, in other words, and a ten percent increase in the world’s electricity consumption. Can we do any better?

We probably can; it matters a lot, it turns out, what kind of rock mill you use. For example, while Hangx and Spiers assumed the use of a stirred media detritor (SMD) mill for the ten-micron silt, other researchers showed that a wet-attrition miller (WAM), working on equal amounts of rock and water, could achieve an average particle size of under four microns for an all-inclusive energy cost of 61 kWh ($9.15) per tonne of rock – about $7.32 per sequestered tonne of CO₂, or around $732 billion a year in energy costs.

And the largest rock mills are large indeed; the biggest on the market can process tens of thousands of tonnes a day. It should be clear by now that capital expenditures, while not irrelevant, are small compared to the cost of energy. Though there’s no way to know for sure until and unless the sequestration industry reaches maturity, a reasonable upper estimate for capital investment is about $1.60 per tonne of CO₂ sequestered, giving a total cost per sequestered tonne of no more than nine dollars. The resulting bill of $900 billion per year might sound gargantuan – but it’s worth remembering that the world economy is a hundred-trillion-dollar-a-year behemoth, and each tonne of carbon dioxide not sequestered is more than 20 times as costly.
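Pulling the cost figures together (all inputs are from the passage; the arithmetic is mine):

```python
# Pulling together the milling-cost figures quoted above.
PRICE_PER_KWH = 0.15                    # the article's electricity price, $
CO2_PER_T_ROCK = 1.25

def cost_per_t_co2(kwh_per_t_rock, capex_per_t_co2=0.0):
    return kwh_per_t_rock * PRICE_PER_KWH / CO2_PER_T_ROCK + capex_per_t_co2

smd = cost_per_t_co2(174)                        # SMD mill, 10-micron grains
wam = cost_per_t_co2(61, capex_per_t_co2=1.60)   # WAM mill plus capex
print(f"SMD route: ~${smd:.0f} per tonne of CO2 (energy only)")
print(f"WAM route: ~${wam:.2f} per tonne of CO2, all in")
print(f"Annual bill at 100 Gt CO2/yr: ~${wam * 100e9 / 1e9:,.0f}B")
# -> ~$21/t (SMD, energy only), $8.92/t all-in (WAM), ~$892B a year,
# consistent with the "no more than nine dollars" and ~$900B figures.
```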

Upon its exit from the mill, our particle, now just five to ten microns in diameter, finds itself in a fine slurry, half water by mass. Silicates usually find their way down to the ocean via rivers, so we’ll have to build our own. Thankfully, the water requirements are not high in the grand scheme of things. 80 gigatonnes of rock a year will need about 2300 cubic meters of water a second; split across dozens of mines worldwide, water requirements can easily be met by drawing from rivers or, in a pinch, desalinating ocean water.

The slurry is pumped into a large concrete pipe (since it’s flowing downhill, energy costs are minimal), and our particle of magnesium silicate comes to rest on the ocean floor of the Java Sea, where it reacts with dissolved carbon dioxide and locks it away as magnesium bicarbonate within a few years. (Because the Java Sea is shallow, it is constantly replenished with atmospheric CO₂ from rainwater and ocean currents. Carbon in the deep ocean is cycled at a far slower pace.) 

While there are a handful of trace minerals in most olivine formations, especially nickel and iron, the ecological costs are local and pale in comparison to the global ecological costs of global warming and ocean acidification.

4. Agfa-Gevaert and Activist Investing in Europe – A Case Study – Swen Lorenz

Germany, the largest economy in Continental Europe, makes for an interesting case study. As the annual review of Activist Insight mentions in its 2017 edition: “Germany has long been a laggard in the space of shareholder activism due to both legal and cultural challenges.”

That’s a very diplomatic way of putting it. Legal scholars with a knack for history will point to a much juicier origin of the problem.

The reason why it had long been tremendously tricky to hold German boards to account for underperformance dates back to the legal system established by the Nazis. Germany’s first extensive corporate law was written in 1937, and the new legal code’s approach to managing corporations was based on the “Fuehrer principle” (Führerprinzip).

Anyone who wants to study the relevant history should get a copy of “Aktienrecht im Wandel” (roughly: “Corporate Law during changing times”), the definitive two-volume book covering the last 200 years of German commercial law.

The Nazis specifically wanted to create a corporate law designed to:

  • Fend off “the operational and economic damage caused by anonymous, powerful capitalists”.
  • Enable directors to manage companies “for the benefit of the enterprise, the people, and the Reich”.
  • “Push back the power of the shareholders meeting”.

The Nazis lost the war, but the legal system underpinning German corporations and much of the underlying culture remained in place. It was only in 1965 that Germany’s corporate law was significantly reformed, primarily because of one man’s outrageously broad influence over leading German corporations: Hermann Josef Abs, who had been a director of Deutsche Bank since 1938.

During the years of Germany’s so-called economic miracle, Abs had created an impenetrable network of cross-holdings among companies and directorship positions doled out among a small clique of leading figures. This powerful elite of directors shielded each other from accountability; even investors with large-scale financial firepower found many German companies an impenetrable fortress. Germany’s government had no other choice but to (finally) act. The Lex Abs, as the legal reform was called in a rare legislative reference to one specific individual, did away with at least some of the corporate law’s problematic aspects.

Changing the legal code was one thing, changing the underlying culture another. So powerful and deeply rooted was Abs & Co.’s system that I came across its influence on the German stock market as recently as the late 1990s. Germany’s large, publicly listed corporations used to be a closed shop, summarised by the expression “Deutschland AG” in foreign media.

It was only during the early 2000s that shareholder activism slowly started to become a more regular occurrence in Germany and across Continental Europe. Factors such as a generational change on boards, further legislative reforms, and a large number of newly listed companies managed by internationally trained directors and entrepreneurs led to an increased prevalence of the activist approach.

Once you join the dots from a 30,000-foot perspective and with the benefit of hindsight, it’s incredible how long it takes to soften up a well-entrenched system. Quite literally, it required the generation who had created the system to die.

5. Walking naturally after spinal cord injury using a brain–spine interface – [Numerous authors]

A spinal cord injury interrupts the communication between the brain and the region of the spinal cord that produces walking, leading to paralysis[1,2]. Here, we restored this communication with a digital bridge between the brain and spinal cord that enabled an individual with chronic tetraplegia to stand and walk naturally in community settings. This brain–spine interface (BSI) consists of fully implanted recording and stimulation systems that establish a direct link between cortical signals[3] and the analogue modulation of epidural electrical stimulation targeting the spinal cord regions involved in the production of walking[4,5,6]. A highly reliable BSI is calibrated within a few minutes. This reliability has remained stable over one year, including during independent use at home. The participant reports that the BSI enables natural control over the movements of his legs to stand, walk, climb stairs and even traverse complex terrains. Moreover, neurorehabilitation supported by the BSI improved neurological recovery. The participant regained the ability to walk with crutches overground even when the BSI was switched off. This digital bridge establishes a framework to restore natural control of movement after paralysis…

…To establish this digital bridge, we integrated two fully implanted systems that enable recording of cortical activity and stimulation of the lumbosacral spinal cord wirelessly and in real time (Fig. 1a).

To monitor electrocorticographic (ECoG) signals from the sensorimotor cortex, we leveraged the WIMAGINE technology[3,20]. WIMAGINE implants consist of an 8-by-8 grid of 64 electrodes (4 mm × 4.5 mm pitch in anteroposterior and mediolateral axes, respectively) and recording electronics that are embedded within a 50 mm diameter, circular-shaped titanium case that has the same thickness as the skull. The geometry of the system favours close and stable contact between the electrodes and the dura mater, and renders the devices invisible once implanted within the skull.

Two external antennas are embedded within a personalized headset that ensures reliable coupling with the implants. The first antenna powers the implanted electronics through inductive coupling (high frequency, 13.56 MHz), whereas the second, ultrahigh frequency antenna (UHF, 402–405 MHz) transfers ECoG signals in real time to a portable base station and processing unit, which generates online predictions of motor intentions on the basis of these signals (Extended Data Fig. 1).

The decoded motor intentions are then converted into stimulation commands that are transferred to tailored software running on the same processing unit.

These commands are delivered to the ACTIVA RC implantable pulse generator (Fig. 1a), which is commonly used to deliver deep brain stimulation in patients with Parkinson’s disease. We upgraded this implant with wireless communication modules that enabled real-time adjustment of the location and timing of epidural electrical stimulation, with a latency of about 100 ms (Extended Data Fig. 1).

Electrical currents are then delivered to the targeted dorsal root entry zones using the Specify 5-6-5 implantable paddle lead, which consists of an array incorporating 16 electrodes.

This integrated chain of hardware and software established a wireless digital bridge between the brain and the spinal cord: a brain–spine interface (BSI) that converts cortical activity into the analogue modulation of epidural electrical stimulation programs to tune lower limb muscle activation, and thus regain standing and walking after paralysis due to a spinal cord injury (Supplementary Video 1).
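To make that chain of components easier to follow, here is a deliberately simplified sketch of the control loop in Python. Every function name is an illustrative placeholder we invented; the paper does not describe its decoder or device drivers at code level.

    import time

    def read_ecog_frame():
        # Placeholder: one sample per electrode in the 8-by-8 WIMAGINE grid,
        # streamed over the UHF link to the base station.
        return [0.0] * 64

    def decode_intention(frame):
        # Placeholder: the trained gait decoder predicting motor intention.
        return {"target": "hip_flexion", "amplitude": 0.6}

    def send_stimulation(command):
        # Placeholder: a stimulation command for the implanted pulse generator.
        pass

    for _ in range(10):                    # a few cycles of the control loop
        frame = read_ecog_frame()          # 1. record cortical activity
        command = decode_intention(frame)  # 2. predict the motor intention
        send_stimulation(command)          # 3. modulate epidural stimulation
        time.sleep(0.1)                    # ~100 ms end-to-end latency, per the paper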


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 28 May 2023:

1. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation – Yuval Noah Harari

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults…

…Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence…

…We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

2. What Happens if the US Defaults on its Debt? – Nick Maggiulli

As U.S. Treasury Secretary Janet Yellen recently noted, unless Congress raises (or suspends) the debt limit, the U.S. government may run out of money as early as June 1.

With such a dire warning, many investors have begun to wonder: what happens if the US defaults on its debt? Though this scenario remains unlikely, it is important to understand the potential consequences of a default and how they could impact you…

…When it comes to the term ‘default’ there are two ways that this has been broadly defined:

  • An actual default: This is the traditional meaning of the term and it occurs when a borrower fails to make a required principal or interest payment to a lender. In the case of the United States (or any other sovereign nation), a default occurs if the government is unable (or unwilling) to make payments on its debt (e.g. if the U.S. failed to make payments on its Treasury bonds). Default in these cases can either be partial (failing to pay back some of the debt) or full (failing to pay back all of the debt). However, this isn’t the only kind of default that can occur.
  • A technical default: Unlike an actual (or traditional) default when a government fails to make payments on its bonds, a technical default occurs if the government fails to pay for its other obligations even if its bond payments were made on time. For example, the U.S. Treasury could decide to prioritize Treasury bondholders and pay them in full before paying out whatever was left to Social Security recipients and government employees. While this would avoid a default in the traditional sense of the term, it could still negatively impact millions of Americans who rely on income from the U.S. government to pay their bills…

…As we navigate the political and economic complexities of raising the debt ceiling in the coming weeks, it’s important to understand what could happen if the U.S. defaults on its debt. The consequences of such an event would have a major impact not only in the U.S., but across the globe. And while we can’t predict the exact outcomes, below are some possible scenarios that could unfold based on economic studies, expert opinions, and historical precedent:

  • Global financial turmoil: Given the reliance of the global financial system on U.S. Treasury bonds and U.S. dollars, a default could lead to a loss of confidence in the U.S. government and a global market panic. The most visible impact of this would be declining asset prices and a disruption in international trade. The duration of such a panic would be determined by the severity of the U.S. default and how quickly the U.S. could restore confidence in financial markets.
  • Possible recession: Two economists modeled the potential impact of a U.S. default on employment and the results weren’t great. They argued that a technical default (where the federal government fails to make payments for some of its responsibilities) would raise unemployment from 3.4% to 7%, and an actual default (where the federal government fails to make payments to U.S. bondholders) would raise unemployment from 3.4% to above 12%. Such a quick rise in unemployment could lead to reduced consumer spending and a recession.
  • Rising interest rates: When the U.S. Treasury failed to make payments on $122 million in Treasury bonds in 1979, short-term interest rates jumped 0.6 percentage points. This was true despite the fact that the failure to make payments was a clerical error on the part of the Treasury and not an actual default (since all the bondholders were eventually paid back with interest). If the U.S. were to actually default, the cost of borrowing would rise sharply for individuals and businesses, ultimately slowing economic growth.
  • Depreciating value of the dollar: A U.S. default could reduce confidence in the U.S. dollar and push many nations to seek out more reliable alternatives. This would reduce the demand for the dollar, decrease its value, and increase the cost of imports in the U.S., leading to higher inflation.
  • Lower credit rating: If the U.S. were to default, credit rating agencies would downgrade the U.S.’s credit rating, which would make future borrowing more expensive for the U.S. government. Standard & Poor’s downgraded the U.S.’s credit rating for the first time ever in 2011 even though a default never occurred. Imagine what would happen if one did.
  • Impaired government functions: An actual default (and even a technical default) could force the government to delay payments to Social Security recipients, employees, and others who rely on their services. This could disrupt the lives of millions of Americans and severely impact economic growth. The White House released a report in October 2021 that outlined the potential consequences of such a default and how it could impact various sectors of the economy.
  • Political fallout: If your job was to get Donald Trump re-elected in 2024, there are few things that would help more than a U.S. default in 2023. Regardless of political beliefs, many Americans will hold the current party in power (Democrats) ultimately responsible in the event of a default. This would influence future elections and public policy for many years to come.

While these scenarios paint a sobering picture of what could happen if the U.S. were to default on its debt, it’s important to remember that no one knows the future. Don’t just take my word for it though. Consider what Warren Buffett said on the topic at the most recent Berkshire Hathaway shareholders meeting:

It’s very hard to see how you recover once…people lose faith in the currency…All kinds of things can happen then. And I can’t predict them and nobody else can predict them, but I do know they aren’t good.

3. Microsoft Bets That Fusion Power Is Closer Than Many Think – Jennifer Hiller

In a deal that is believed to be the first commercial agreement for fusion power, the tech giant has agreed to purchase electricity from startup Helion Energy within about five years.

Helion, which is backed by OpenAI founder Sam Altman, committed to start producing electricity through fusion by 2028 and to target power generation for Microsoft of at least 50 megawatts after a year, or pay financial penalties.

The commitment is a bold one given that neither Helion nor anyone else in the world has yet produced electricity from fusion.

“We wouldn’t enter into this agreement if we were not optimistic that engineering advances are gaining momentum,” said Microsoft President Brad Smith…

…“I had this belief that the two things that would matter most to making the future and raising the quality of life a lot were making intelligence and energy cheap and abundant, and that if we could do that, it would transform the world in a really positive way,” Mr. Altman said.

A number of prominent investors from Mr. Altman to Bill Gates have put money into fusion firms, which have raised more than $5 billion, according to the Washington, D.C.-based Fusion Industry Association.

The process of splitting atoms in nuclear-fission power plants provides nearly 20% of U.S. electricity. But nuclear fusion systems would generate electricity from the energy released when hydrogen atoms are combined to form helium.

The industry got a boost in December when the U.S. Energy Department announced a research breakthrough by scientists after a fusion reaction at the Lawrence Livermore National Laboratory produced more energy than was used to create it by firing lasers at a target.

To be a practical source of power, the entire facility would need to net produce rather than consume energy, and at a price that competes in the broader electricity market…

…David Kirtley, CEO at Helion, said that like a wind- or solar-power developer—the more typical energy firms involved in power purchase agreements—Helion would pay Microsoft financial penalties if it doesn’t deliver power on time. The companies declined to specify the amount.

“There’s some flexibility, but it is really important that there are significant financial penalties for Helion if we don’t deliver,” Mr. Kirtley said. “We think the physics of this is ready for us to signal the commercialization of fusion is ready.”

4. Some Things I Think – Morgan Housel

The fastest way to get rich is to go slow.

Many beliefs are held because there is a social and tribal benefit to holding them, not necessarily because they’re true.

Nothing is more blinding than success caused by luck, because when you succeed without effort it’s easy to think, “I must be naturally talented.”…

…The most valuable personal finance asset is not needing to impress anyone.

Most financial debates are people with different time horizons talking over each other…

…The hardest thing when studying history is that you know how the story ends, which makes it impossible to put yourself in people’s shoes and imagine what they were thinking or feeling in the past…

…Most beliefs are self-validating. Angry people look for problems and find them everywhere, happy people seek out smiles and find them everywhere, pessimists look for trouble and find it everywhere. Brains are good at filtering inputs to focus on what you want to believe…

…The market is rational but investors play different games and those games look irrational to people playing a different game.

A big problem with bubbles is the reflexive association between wealth and wisdom, so a bunch of crazy ideas are taken seriously because a temporarily rich person said them.

Logic doesn’t persuade people. Clarity, storytelling, and appealing to self-interest do…

…Happiness is the gap between expectations and reality, so the irony is that nothing is more pessimistic than someone full of optimism. They are bound to be disappointed…

…Nothing leads to success like unshakable faith in one big idea, and nothing sets the seeds of your downfall like an unshakable faith in one big idea…

…Economies run in cycles but people forecast in straight lines.

You are twice as gullible as you think you are – four times if you disagree with that statement.

Price is what you pay, value is whatever you want Excel to say…

…We underestimate the importance of control. Camping is fun, even when you’re cold. Being homeless is miserable, even when you’re warm…

…“If you only wished to be happy, this could be easily accomplished; but we wish to be happier than other people, and this is always difficult, for we believe others to be happier than they are.” – Montesquieu

With the right incentives, people can be led to believe and defend almost anything.

Good marketing wins in the short run and good products win in the long run…

…The most productive hour of your day often looks the laziest. Good ideas rarely come during meetings – they come while going for a walk, or sitting on the couch, or taking a shower…

…A good test when reading the news is to constantly ask, “Will I still care about this story in a year? Two years? Five years?”

A good bet in economics: the past wasn’t as good as you remember, the present isn’t as bad as you think, and the future will be better than you anticipate.

5. Layers of AI – Muji

AI is such a loose term, a magical word that simply means some type of mathematically driven black box. It is generally thought of as a compute engine that can do a task as well as or better than a human can, driven by a “brain” (AI engine) making decisions. Essentially, AI is a bunch of inner mathematical algorithms that interconnect & combine into one big algorithm (the overall AI model). These take an input, do logic (the black box), and send back an output.

At the highest level, AI has thus far been Artificial Narrow Intelligence (ANI), a weaker form of AI that is honed to complete a specific task. As seen over the past few months, we are quickly approaching Artificial General Intelligence (AGI), a stronger form of AI that can perform a wider range of tasks, and can think abstractly and adapt. AGI is the holy grail of many an AI researcher.

Today, AI takes a lot of forms, such as Machine Learning (learning from the past to predict the future), Computer Vision (identifying structure in video or imagery), Speech-to-Text/Text-to-Speech (converting audio to text and vice versa), Expert Systems (highly honed decision engines), and Robotics (controlling the real world)…

…It is worth having some caution with AI, but know that the hype is real, and the potential of these cutting-edge AI models is palpable. At a minimum, we are on the cusp of a new era of productivity gains from virtual assistance and automation. But as these engines mature, combine, and integrate with others more, it suddenly feels that AGI is on our doorstep.

ML is the subset of AI that is trained on past historical data in order to make decisions or predict outcomes. In general, ML processes a lot of data upfront in a training process, analyzing it to determine patterns within it in order to derive future predictions. With the rise of better models, honed hardware (GPUs and specialized chips), and continually improving scale & performance from the cloud hyperscalers, the potential of ML is now scaling up rapidly. ML models can make decisions, interact with the world (through text, voice, chat, audio, computer vision, image, or video), and take action.

ML is extremely helpful for:

  • processing unstructured content (text, images, video) to extract meaning and understand intent & context
  • recognizing images or video to isolate & identify objects
  • making decisions by weighing complex factors
  • categorizing & grouping input (classification)
  • recognizing patterns
  • recognizing & translating language
  • processing historical data to isolate trends, then forecasting those trends forward
  • generating new output (text, image, video, or audio)

ML models are built from a wide variety of statistical model types geared for specific problems, each offering a wide range of statistical algorithms to choose from. Some common types include:

  • Classification models are used to classify data into categories (labels), in order to predict a discrete value (what category applies to new data).
  • Regression models are used to find correlations between variables, in order to predict continuous values (numerics).
  • Clustering models group data around the natural groupings that exist within it, and are useful for tasks such as segmenting customers, making recommendations, and image processing.
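To make the three types concrete, here is a minimal sketch on synthetic toy data, assuming scikit-learn is available; the dataset and the specific estimators chosen are illustrative, not prescriptive.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                 # 100 observations, 2 features

    # Classification: predict a discrete label (which side of a line a point is on).
    y_label = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = LogisticRegression().fit(X, y_label)
    print(clf.predict([[0.5, 0.5]]))              # -> [1]

    # Regression: predict a continuous value from correlated variables.
    y_value = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)
    reg = LinearRegression().fit(X, y_value)
    print(reg.coef_)                              # -> roughly [3.0, -2.0]

    # Clustering: find natural groups with no labels at all.
    km = KMeans(n_clusters=3, n_init=10).fit(X)
    print(km.labels_[:10])                        # a cluster id per observation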

There are a number of ways that ML can be taught, including:

  • Supervised Learning is training via a dataset with known answers. These answers become labels that the ML uses to identify patterns and correlations in the data.
  • Unsupervised Learning is training via raw data and letting the AI determine the features and trends within the data. This is used by ML systems for making recommendations, data associations, trend isolation, or customer segmenting.
  • Semi-supervised Learning sits in between: the model is first trained on a smaller labelled dataset, then enriched further with a larger unlabelled one.
  • Reinforcement Learning trains a model by rewarding correct and timely answers (via internal scores or human feedback). This is used when there is a known start and end state, and the ML has to determine the best way to navigate the multiple paths in between. It is being leveraged in new language models like ChatGPT to improve the way the engine “talks”…
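As a toy illustration of the reinforcement idea, here is a tabular Q-learning sketch in Python: an agent in a five-cell corridor learns to walk from a known start (cell 0) to a rewarded end state (cell 4). The corridor, the reward scheme, and the constants are all invented for illustration; systems like ChatGPT use far more elaborate reward models.

    import numpy as np

    n_states, n_actions = 5, 2            # actions: 0 = step left, 1 = step right
    Q = np.zeros((n_states, n_actions))   # the score table the agent learns
    alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

    rng = np.random.default_rng(0)
    for _ in range(500):                  # 500 training episodes
        s = 0                             # known start state
        while s != n_states - 1:          # run until the known end state
            # Explore occasionally; otherwise take the best-known action.
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2

    print(Q.argmax(axis=1)[:4])           # -> [1 1 1 1]: "go right" from every cell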

Some of the components of building ML that are helpful to understand:

  • Features are characteristics or attributes within the raw data that help define the input (akin to columns within a database). These are then fed in as inputs to the ML model, and weighed against each other to identify patterns and how they correlate to each other. Feature Engineering is the process where a data scientist will pre-identify these variables within the data, such as categories or numerical ranges to track. Feature Selection may be needed to select a subset of features in model construction, which may be repeatedly tested to find the best fit, and it helps simplify models and shorten training times. Features can be collaboratively tracked in Feature Stores, which are similar to Metric Stores in BI stacks [both discussed in the Modern Data Stack]. Unsupervised Learning forces the ML engine to determine the important features on its own.
  • Dimensionality is based on the number of features provided as input into the model – or rather, represents the internal dimensions of the model of how each feature relates to and impacts every other feature (how one variable in a row of input impacts another). High-dimensional data refers to datasets having a wide set of features (a high number of input variables per row).
  • Observations are the number of feature sets provided as input while building the model (akin to rows within a database).
  • Vectors are features turned into numerical form and stored as an array of inputs (one per observation or row, or a sentence of text in NLP). An array of vectors is a two-dimensional matrix. [This is why GPUs are so helpful in ML training, as they specialize in vectorized math.]
  • Tensors represent the multi-dimensional relationships between all vectors. [Hence the frequent appearance of the name in Google’s and NVIDIA’s chip products, which specialize in highly-dimensional vectorized math.]
  • Labels are pre-defined answers given to a dataset. This can be the identification of categories that apply to that data (such as color, make, model of a car), successful or failed outcomes (such as whether this is fraud or risky behavior or not), or the tagging and definition of objects within an image or video (this image contains a white cat on a black table). These are then fed into Supervised Learning methods of training ML models.
  • Parameters are what the ML model creates as internal variables at a decision point. This is a trained variable that helps set the importance of an individual feature within the ML engine. (This can be weights & biases within a neural network or a coefficient in a regression.) The parameter count is a rough shorthand for how much complexity a model contains. (OpenAI’s GPT-3 had 350M-175B parameters in various flavors, and GPT-4 is believed to have up to 1T.)
  • Hyperparameters are external variables the data scientist can adjust in the individual statistical algorithms used within the ML model. Think of them as knobs that can be tuned to tweak the statistical model within; the underlying statistical algorithms themselves can also be swapped out.
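Here is a minimal sketch, assuming NumPy and scikit-learn, that maps this vocabulary onto concrete objects; the dataset and column meanings are made up for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Observations = rows; features = columns (say: age, income, tenure).
    X = np.array([[25, 40_000, 1],
                  [31, 52_000, 3],
                  [47, 88_000, 9],
                  [52, 95_000, 12]], dtype=float)   # each row is one feature vector
    y = np.array([0.2, 0.4, 0.9, 1.0])              # labels: the known answers

    # Hyperparameter: alpha is a knob *we* set before training begins.
    model = Ridge(alpha=1.0).fit(X, y)

    # Parameters: the internal variables the model learned for itself --
    # here, one weight per feature plus an intercept.
    print(model.coef_, model.intercept_)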

As with anything data-related, it is “garbage in – garbage out”. You must start with good data to have a good ML model. Data science is ultimately the art of creating an ML model, which requires data wrangling (the cleaning, filtering, combining, and enriching of the datasets used in training), selection of the appropriate models & statistical algorithms to use for the problem at hand, feature engineering, and tuning of the hyperparameters. Essentially, data science is about asking the right questions in the right way.

ML models are trained with data, then validated to assure “fit” (statistical relevance) to the task at hand; they can also be tuned and tweaked during the creation process by the data scientist (via the training data being input, the features selected, or the hyperparameters of the statistical model). Once in production, it is typical to occasionally test a model to ensure it remains relevant to real-world data, as both models and data can drift (such as shifting behaviors of customers). Models can be trained on more and more data to become more and more accurate in classifications, predictions, and generation. More data generally means more insights and accuracy – however, at some point the model may go off the rails and start finding patterns in random outliers that aren’t helpful. This is known as being “overfit”: the model factors in noise and randomness more than it should, so its trained findings no longer apply well to real-world data. It must then be retrained on a more up-to-date set of historical data.
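Overfitting is easy to demonstrate on toy data. In the sketch below (assuming scikit-learn), a degree-15 polynomial chases the noise in 30 training points: its training error beats a straight line’s, but its error on unseen data is worse.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = np.sort(rng.uniform(-1, 1, 30)).reshape(-1, 1)
    y = 2 * X.ravel() + rng.normal(scale=0.3, size=30)    # truth: a line plus noise

    X_new = np.linspace(-0.95, 0.95, 100).reshape(-1, 1)  # unseen "real-world" data
    y_new = 2 * X_new.ravel()

    for degree in (1, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
        train_err = np.mean((model.predict(X) - y) ** 2)
        new_err = np.mean((model.predict(X_new) - y_new) ** 2)
        print(f"degree {degree}: train error {train_err:.3f}, unseen error {new_err:.3f}")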


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 21 May 2023:

1. The Borlaug Report #3: Bioprocessing – Borlaug

A small molecule drug is basically any of the pills in your average household medicine cabinet – they are chemicals, synthesized and pressed into a solid tablet or coated in dissolvable pill plastic. Most of the drugs that fit this description are very popular – meaning lots of them need to be made. This has traditionally been done in a stainless-steel fermenter.

The process for making drugs this way, in large batches, is fairly simple logistically – you are combining active pharmaceutical ingredients and blending them to create your drug, adding things like excipients along the way as you remove moisture, mill, and blend some more. Finally, you remove everything, press the substance into pills, and coat the pill as needed. Run complete! 

Before the next run, one must thoroughly sterilize these large steel tanks with cleaning chemicals. Logistically, this is a perfectly acceptable way of manufacturing chemicals at scale, and versions of this have been done for decades. 

However, things are changing. The future of the pharmaceutical industry is looking increasingly biological in nature, and producing biologics requires a little more TLC (and money). 

How Larger Molecules are Made

While most small molecule manufacturing can be done with just a couple of discrete pieces of equipment, making a biologic drug involves a plethora of steps that can be divided into upstream and downstream bioprocessing. I would add a third category to make clear that parts of “upstream” are generally for R&D purposes only.

R&D (Scale-Up): Before biologic drugs are commercially manufactured, there is a manufacturing component – scientists have to figure out how to ensure the drug can be scalably manufactured without compromising the effectiveness or safety profile of the drug itself. This is usually called “scale up”. The process is basically a guess-and-check exercise: find cells that excel in a smaller (150 mL) bioreactor, then keep selecting the best cells as they multiply, testing them in larger and larger bioreactors until you’re up to 1,000-2,000 liters or more. Other conditions, like what goes into the cell culture media, how much oxygen is let in, temperature, stirring speeds, etc., are all tinkered with here. Once the “process” is defined, R&D is over and production can proceed. This has become a key value proposition of a lot of contract manufacturers, because it can be extremely hard to do, especially in gene therapy.

Upstream: Basically, what you are doing in upstream bioprocessing is taking a bunch of cells (the “active ingredient”, per se) and putting them in a soup of nutrients (media) that stimulates them to multiply at high rates, based on the R&D process you tested out. In doing so, you end up with a giant vat of soup that has an adequate volume of those cells (the drug) floating inside. Most of this is “production” in the manufacturing process itself. It’s actually a lot like the stainless steel process up until here, given everything is going into a bioreactor – the reactor is just smaller in this case.

Downstream: Once you have your cell soup, you engage in the “downstream” half of the process, which separates those cells from all of the things that you don’t want in your final product. Once you’ve purified and filtered everything, it goes into a freezer (“cryo-preservation”) and is then shipped elsewhere to be put into the right delivery mechanism (IV bags, syringe vials, etc.) and boxed/packaged – the “fill-finish” process. This is the part that is fundamentally different – in small molecule production, you’re much closer to the finished product when things come out of the bioreactor. In biologics, you are separating the active ingredient a lot more carefully from the other stuff you put in the soup.

Most of these drugs are made in smaller batches – they often serve more targeted populations of people than some of the small molecule blockbuster drugs of old. The exception here is antibody drugs, which are still finding themselves going after large populations. Cell and gene therapies, however, are a much different story. After all, healthcare was never going to be one-size-fits-all. If you tried to apply the old method of making drugs to this new reality, you’d realize quickly how much time you are spending cleaning the tanks after every run.

The Single-Use “Innovation”

As mentioned above, the economics of manufacturing small batches of a drug in a stainless steel tank stops making sense very quickly when you have to shut down the process afterward to follow strict sterilization protocols, using lots of water, chemicals, and energy just to be able to start the process up again using the same equipment. Fortunately, the industry has already adapted by commercializing single-use technology.

Single-Use Saves Money

Instead of cleaning out the fermenter every time you use it, you can just line it with a disposable bag made of a fancy polymer that guarantees the same level of sterility. Kind of like using a trash bag instead of washing out the trash can under your sink every time you empty it. The same goes for all of the tubing connecting each subsequent piece of equipment in the workflow, as well as the cartridges, capsules, and columns within the machines themselves. After a run is over, downtime can be short – just replace everything and start over.

Turns out, at lower batch sizes, net of energy/water/sterilization costs, this can actually be significantly cheaper, both on COGS and capital investment…
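To see the shape of that trade-off, here is an illustrative breakeven sketch in Python; every dollar figure is an invented placeholder (the article gives no numbers), chosen only to show why single-use wins at small batch sizes while reusable steel wins at very large ones.

    def cost_per_batch(batch_size_l, single_use):
        # Invented placeholder economics: single-use has a low fixed turnaround
        # cost but consumables scale with size; stainless steel has a heavy
        # fixed cleaning/validation cost but cheap reusable capacity.
        if single_use:
            return 2_000 + 0.50 * batch_size_l
        return 15_000 + 0.05 * batch_size_l

    for size in (500, 2_000, 50_000):
        su = cost_per_batch(size, single_use=True)
        ss = cost_per_batch(size, single_use=False)
        cheaper = "single-use" if su < ss else "stainless steel"
        print(f"{size:>6} L batch: {cheaper} is cheaper")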

…You should care because this is an easily investable trend for a few key reasons:

Durable Usage Trends: Manufacturing in biopharma is different from the R&D tools themselves – there is no “fad” factor like you might see in genomics, for example, where researchers will crowd into a hot new space and use the relevant technology until the next thing comes along. These changes can be quick and violent. You know what doesn’t change? The bag you line the bioreactor with and the tubes that connect it to the clarification system. That’s the same regardless of whether someone invents a new gene therapy, a cell therapy, an antibody, or an mRNA drug.

Companies selling this technology don’t benefit from one type of therapy – they benefit from the complexity of all therapies moving through the clinic.

Highly recurring revenue with deep moats: Once you file a drug with the FDA, a lot of things get set in stone – one of these things is the manufacturing process and the equipment that goes into it, specified down to the vendor. Recently, companies have been specifying second sources from a second vendor into these filings to deal with supply chain risks, but the fact remains that once something is “spec’d” into the process, it’s painfully difficult to remove or change it.

This discourages new entrants to the market because the only share you can win is for clinical-scale dosage for new drugs – meaning your initial “TAM” is extremely small. In bioprocessing, scale is a massive barrier to entry and the FDA is a massive barrier to scale.

2. Google I/O and the Coming AI Battles – Ben Thompson

If there is one thing everyone is sure about, it is that AI is going to be very disruptive; in January’s AI and the Big Five, though, I noted that it seemed more likely that AI would be a sustaining innovation:

The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.

To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

It seems easy to look backwards and determine if an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive then presumably startups captured most of the value.

My conclusion in that Article was that AI would be a sustaining innovation for Apple, Amazon, Meta, and Microsoft; the big question was Google and search:

That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.

That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.

Or maybe not. I tend to believe that disruptive innovations are actually quite rare, but when they come, they are basically impossible for the incumbent company to respond to: their business models, shareholders, and most important customers make it impossible for management to respond. If that is true, though, then an incumbent responding is in fact evidence that an innovation is actually not disruptive, but sustaining.

To that end, I take this Google I/O as evidence that AI is in fact a sustaining technology for all of Big Tech, including Google. Moreover, if that is the case, then that is a reason to be less bearish on the search company, because all of the reasons to expect them to have a leadership position — from capabilities to data to infrastructure to a plethora of consumer touch points — remain. Still, the challenges facing search as presently constructed — particularly its ad model — remain.

3. An Interview with Peter Lynch in 1996, Six Years After Retirement – Conor Mac

When you first went to Fidelity, what was the market like?

Well, after the great rush of the ’50s, the market did brilliantly and everybody says, “Wow, looking backwards, this would be a great time to get in.” So a lot of people got in in the early ’60s and in the mid-’60s. The market peaked in ’65-66 at around a thousand, and that’s when I came. I was a summer student at Fidelity in 1966. There were 75 applicants for three jobs at Fidelity, but I caddied for the president for eight years. So that was the only job interview I ever took. It was sort of a rigged deal, I think. I worked there the summer of ’66 and I remember the market was close to a thousand in 1966, and in 1982, 16 years later, it was 777. So we had a long drought after that. So the people were concerned about the stock market early in the ’50s. They kept watching and watching, not investing. It started to go up dramatically and they finally caved in and bought big time in the mid-’60s and got in at the peak.

But the market really didn’t do much between ’77 and ’82, before the beginning of that bull market, and yet your fund performed quite spectacularly. What did you do?

Well, I think flexibility is one of the key things. I mean I would buy companies that had unions. I would buy companies that were in the steel industry. I’d buy textile companies. I always thought there were good opportunities everywhere, and I researched my stocks myself. I mean Taco Bell was one of the first stocks I bought. I mean the people wouldn’t look at a small restaurant company. So I think it was just looking at different companies and I always thought if you looked at ten companies, you’d find one that’s interesting; if you looked at 20, you’d find two; and if you looked at a hundred, you’d find ten. The person that turns over the most rocks wins the game. And that’s always been my philosophy…

Talk about the change in ’86-87.

Well, I remember in my career, you’d tell somebody you worked in the investment business. They’d say, “That’s interesting. Do you sail? What do you think of the Celtics?” I mean it would just go right to the next subject. If you told them you were a prison guard, they would have been interested. They would have had some interest in that subject, but if you said you were in the investment business, they said, “Oh, terrific. Do your children go to school?” It just went right to the next subject. You could have been a leper, you know, and been much more interesting. So that was sort of the attitude in the ’60s and ’70s.

As the market started to heat up, you’d say you were an investor, and they’d say, “Oh, that’s interesting. Are there any stocks you’re buying?” And then people would listen, not avidly. They’d think about it. But then as the ’80s piled on, they started writing things down. So I remember people would really take an interest if you were in the investment business, saying “What do you like?” And then it turned, and I remember the final page of the chapter would be you’d be at a party and everybody would be talking about stocks. And then people would recommend stocks to me. And then I remember not only that, but the stocks would go up. I’d look in the paper and I’d notice they’d go up in the next three months. And then you’ve done the full cycle of the speculative cycle: people hate stocks, they despise them, they don’t want to hear anything about ’em, and then they’re buying everything and cab drivers are recommending stocks. So that was sort of the cycle I remember going through from the ’60s and early ’70s all the way to ’87.

Where were you when the Crash of ’87 came?

Well, I was very well prepared for the Crash of 1987. My wife and I took our first vacation in eight years and we left on a Thursday in October, and I think that day the market went down 55 points, and we went to Ireland, the first trip we’d ever been there. And then on Friday, because of the time difference, we’d almost completed the day and I called and the market was down 115. I said to Carolyn, “If the market goes down on Monday, we’d better go home. We’re already here for the weekend, so we’ll spend the weekend.” So it went down 508 on Monday, so I went home. So in two business days, I had lost a third of my fund. So I figured at that rate, the week would have been a rough week. So I went home. Like I could do something about it. I mean it’s like, you know, if there was something I could do.

I mean there I was – but I think if people called up and they said, “What’s Lynch doing,” and they said, “Well, he’s on the eighth hole and he’s every par so far, but he’s in a trap, this could be a triple bogey,” I mean I think that’s not what they wanted to hear. I think they wanted to hear I’d be there lookin’ over – I mean there’s not a lot you can do when the market’s in a cascade but I got home quick as I could.

Why did the Crash of ’87 happen?

Well, I think people had not analyzed ’87 very well. I think you really have to put it in perspective. In 1982, the market’s at 777. All the way to ’86, you have the move to 1,700 – the market moves from 777 to 1,700 in four years. Then in nine months, it puts on a thousand points. So it puts on a thousand points in four years, then puts on another thousand points in the next nine months. So in August of 1987, it’s 2,700. It’s gone up a thousand points in nine months. Then it falls a thousand points in two months, 500 points the last day. So if the market had gone sideways at 1,700, no one would have worried, but because it went up a thousand in nine-ten months and then fell a thousand in two months, half of it in one day, you would have said “The world’s over.” It was the same price.

So it was really a question of the market just kept going up and up and it just went to such an incredibly high price by historic price-earnings multiples, dividend yields, all the other statistics, but people forget that basically, it was unchanged in 12 months. If you looked at September 1986 to October ’87, the market was unchanged. It had a thousand points up and a thousand points down and they only remember the down. They thought, “Oh, my goodness, this is the crash. It’s all over. It’s going to go to 200 and I’m going to be selling apples and pencils,” you know. But it wasn’t. It was a very unique phenomenon because companies were doing fine. Just, you know, you’d call up a company and say, “We can’t figure it out. We’re doin’ well. Our orders are good. Our balance sheet’s good – we just announced we’re gonna buy some of our stock. We can’t figure out why it’s gone down so much.”

Was that the most scared you ever were in your career?

’87 wasn’t that scary because I concentrate on fundamentals. I call up companies. I look at their balance sheet. I look at their business. I look at the environment. The decline was kinda scary and you’d tell yourself, “Will this infect the basic consumer? Will this drop make people stop buying cars, stop buying houses, stop buying appliances, stop going to restaurants?” And you worried about that. In reality, the ’87 decline was nothing like 1990. 1990, in my 30 years of watching stocks very carefully, was by far the scariest period.

What was so scary about 1990?

Well, 1990 was a situation where I think it’s almost exactly six years ago now. In the summer of 1990, the market was around 3,000, the economy’s doing okay, and Saddam Hussein decides to walk in and invade Kuwait. So we have an invasion of Kuwait and President Bush sends 500,000 troops to protect Saudi Arabia. There’s a very big concern about, you know, “Are we going to have another Vietnam War?” A lot of serious military people said, “This is going to be a terrible war.” Iraq has the fourth-largest army in the world. They really fought very well against Iran. These people are tough. This is going to be a long, awful thing. So people were very concerned about that, but, in addition, we had a very major banking crisis.

All the major New York City banks, and Bank of America – the real cornerstones of this country – were really in trouble. And this is a lot different than if W.T. Grant went under or Penn Central went under. Banking is really tight. And you had to hope that the banking system would hold together and that the Federal Reserve understood that Citicorp, Chase, Chemical, Manufacturers Hanover, Bank of America were very important to this country and that they would survive. And then we had a recession.

Unlike ’87, in 1990 you’d call companies and they’d say, “Gee, our business is startin’ to slip. Inventories are startin’ to pile up. We’re not doing that well.” So you really at that point in time had to believe the whole thing would hold together, that we wouldn’t have a major war. You really had to have faith in the future of this country in 1990. In ’87, the fundamentals were terrific and it was – it was like one of those three-for-two sales at the K-Mart. Things were marked down. It was the same story…

Tell the story about your wife stumbling on a big stock for you in the supermarket.

I had great luck with a company called Hanes. They test-marketed a product called L’Eggs in Boston and I think in Columbus, Ohio – maybe three or four markets. And Carolyn brought this product home and she’d been buying it and she said, “It’s great.” And she almost got a black belt in shopping. She’s a very good shopper. If we hadn’t had these three kids, she now – when Beth finally goes off to college, I think we’ll be able to resume her training.

But she’s a very good shopper and she would buy these things. She said, “They’re really great.” And I did a little bit of research. I found out the average woman goes to the supermarket or a drugstore once a week. And they go to a woman’s speciality store or department store once every six weeks. And all the good hosiery, all the good pantyhose is being sold in department stores. They were selling junk in the supermarkets. They were selling junk in the drugstores.

So this company came up with a product. They rack-jobbed it; they had all the sizes, all the fits, and they never advertised price. They just advertised “This fits. You’ll enjoy it.” And it was a huge success and it became my biggest position. And I always worried somebody’d come out with a competitive product, and about a year and a half after they were on the market, another large company called Kaiser-Roth came out with a product called No Nonsense. They put it right next to L’eggs in the supermarket, right next to L’eggs in the drugstore. I said, “Wow, I gotta figure this one out.”

So I remember buying 48 different pairs at the supermarket: colors, shapes, and sizes. They must have wondered what kind of house I had at home when I got to the register. They just let me buy it. So I brought it into the office. I gave it to everybody. I said, “Try this out and come back and see what’s the story with No Nonsense.” And people came back to me in a couple of weeks and said, “It’s not as good.” That’s what fundamental research is. So I held onto Hanes, and it was a huge stock. It was bought out by Consolidated Foods, which is now called Sara Lee, and it’s been a great division of that company. It might have been a thirty-bagger instead of a ten-bagger if it hadn’t been bought out.

The beginning of the bull market in 1982 and the environment. Were you surprised?

1982 was a very scary period for this country. We’ve had nine recessions since World War II. This was the worst. 14 percent inflation. We had a 20 percent prime rate and 15 percent yields on long-term government bonds. It was ugly. And the economy was really in a free-fall, and people were really worried: “Is this it? Has the American economy had it? Are we going to be able to control inflation?” It was a very uncertain time.

You had to say to yourself, “I believe in it. I believe in stocks. I believe in companies. I believe they can control this. And this is an anomaly. Double-digit inflation is a rare thing. It doesn’t happen very often.” And, in fact, one of my shareholders wrote me and said, “Do you realize that over half the companies in your portfolio are losing money right now?” I looked it up; he was right, or she was right. But I was ready. I said, “These companies are going to do well once the economy comes back. We’ve come out of every other recession. I don’t see why we won’t come out of this one.” And we did come out of it, and once we came back, the market went north.

Nobody told you it was coming.

It’s lovely to know when there’s a recession. I don’t remember anybody predicting in 1982 that we were going to have 14 percent inflation, 12 percent unemployment, a 20 percent prime rate, you know, the worst recession since the Depression. I don’t remember any of that being predicted. It just happened. It was there. It was ugly. And I don’t remember anybody telling me about it. So I don’t worry about any of that stuff. I’ve always said if you spend 13 minutes a year on economics, you’ve wasted 10 minutes.

So what should people think about?

Well, they should think about what’s happening. I’m talking about economics as forecasting the future. If you own auto stocks, you ought to be very interested in used car prices. If you own aluminum companies, you ought to be interested in what’s happened to inventories of aluminum. If your stocks are hotels, you ought to be interested in how many people are building hotels. These are facts. People talk about what’s going to happen in the future, that the average recession lasts two years, or who knows what. There’s no reason why we can’t have an economic expansion that lasts longer than average. I mean I deal in facts, not forecasting the future. That’s crystal ball stuff. That doesn’t work. Futile…

Can the little guy play with the big guy in the stock market?

There’s always been this position that the small investor has no chance against the big institutions. And I always wonder whether that means the person under four-foot-eight. I mean, they always said the small investor doesn’t have a chance, and there are two issues there. First of all, I think that he or she can do it. But, number two, people invest anyway. And if they really believe this theory that the small investor has no chance, they invest in a different format.

They say, “This is a casino. I’ll buy a stock this month and sell it a month later,” quite unlike the care they take everywhere else. When they look at a house, they’re very careful. They look at the school system. They look at the street. They look at the plumbing. When they buy a refrigerator, they do homework. If they’re so convinced that the small investor has no chance and that the stock market’s a big game, they act accordingly: they hear about a stock and buy it before sunset, and they’re going to get the kind of results that prove the small investor can do poorly.

Now, if you make a mistake on a car or a mistake on a house, you don’t blame the professional investors. But if you do stupid research and buy some company that has no sales, no earnings, and a terrible financial position, and it goes down, you say, “Well, it’s because of the program trading of those professionals.” No, it’s because you didn’t do your homework. So I’ve tried to convince people they can do this job, they can do very well, but they have to do certain things…

Talk about market timing.

The market itself is very volatile. We’ve completed 95 years of this century. We’re in the middle of 1996 and we’re close to a 10 percent decline. In those 95 years, we’ve had 53 declines in the market of 10 percent or more. Not 53 down years. The market might have been up 26 percent, finished the year up four, and had a 10 percent correction along the way. So we’ve had 53 declines in 95 years. That’s once every two years. Of the 53, 15 have been 25 percent or more. That’s a bear market. So 15 in 95 years: about once every six years you’re going to have a big decline. Now no one seems to know when they’re gonna happen. At least if they know about ’em, they’re not telling anybody about ’em.

I don’t remember anybody predicting the market right more than once, and they predict a lot. So they’re gonna happen. If you’re in the market, you have to know there are going to be declines. They’re going to happen, and every couple of years you’re going to get a 10 percent correction. That’s a euphemism for losing a lot of money rapidly. That’s what a “correction” is called. And a bear market is a 20, 25, 30 percent decline. They’re gonna happen. When they’re gonna start, no one knows. If you’re not ready for that, you shouldn’t be in the stock market.
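For readers who want to check Lynch’s frequencies, here is a quick back-of-the-envelope calculation in Python; the counts are Lynch’s, and the code only does the division:

```python
# Lynch's counts for 1900 through mid-1996.
years = 95
corrections = 53    # declines of 10% or more
bear_markets = 15   # declines of 25% or more

print(f"One 10% correction every {years / corrections:.1f} years")
print(f"One bear market every {years / bear_markets:.1f} years")
# Roughly one correction every ~1.8 years and one bear market every
# ~6.3 years, matching "once every two years" and "about once every
# six years".
```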

I mean, stomach is the key organ here. It’s not the brain. Do you have the stomach for these kinds of declines? And what’s your timing like? Is your horizon one year? Is your horizon ten years or 20 years? If you’ve been lucky enough to save up lots of money, and you’re about to send a kid to college a year from now, and you decide to invest in stocks directly or through a mutual fund with a one-year or two-year horizon, that’s silly. That’s just like betting on red or black at the casino.

What the market’s going to do in one or two years, you don’t know. Time is on your side in the stock market. And when stocks go down, if you’ve got the money and you’re putting more in, you shouldn’t worry about it. You should worry about what stocks are going to be worth 10 years from now, 20 years from now, 30 years from now. I’m very confident.

If you had invested in ’66, it would have taken 15 years to make the money back.

Well, from ’66 to 1982, the market was basically flat. But you still had dividends in stocks, so you still had a positive return; you made a few percent a year. That was the worst period, other than the 1920s, in this century. Companies still pay dividends; even if their stock goes sideways for ten years, they continue to pay you dividends, and they continue to raise their dividends. So you have to say to yourself, “What are corporate profits going to do?” Historically, corporate profits have grown about eight percent a year. Eight percent a year. They double every nine years. They quadruple every 18. They go up six-fold every 25 years. So guess what? In the last 25 years, corporate profits have gone up a little over six-fold, the stock market’s gone up a little over six-fold, and with a two or three percent dividend yield, you’ve made about 11 percent a year. There’s an incredible correlation over time.
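Lynch’s doubling figures are just compound growth. A minimal Python sketch to verify the multiples he cites, with the 8 percent growth rate as the only input:

```python
# Compound growth at ~8% a year, the rate Lynch cites for corporate profits.
growth = 0.08

for years in (9, 18, 25):
    multiple = (1 + growth) ** years
    print(f"After {years} years: {multiple:.2f}x")

# After 9 years:  2.00x (profits double)
# After 18 years: 4.00x (they quadruple)
# After 25 years: 6.85x (Lynch's "six-fold" is in the right ballpark)
```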

So you have to say to yourself, “What’s gonna happen in the next 10, 20, 30 years? Do I think the General Electrics, the Sears, the Wal-Marts, the Microsofts, the Mercks, the Johnson & Johnsons, the Gillettes, the Anheuser-Buschs, are they going to be making more money 10 years from now, 20 years from now? I think they will.” Will new companies come along like Federal Express that came along in the last 20 years? Will new companies come along like Amgen that make money? Will new companies come along like Compaq Computer? I think they will. There’ll be new companies coming along that make money. That’s what you’re investing in.

4. Roughly Right or Precisely Wrong – Ben Carlson

I have a love-hate relationship with historical market data.

On the one hand, since we can’t predict the future, calculating probabilities from the past in the context of the present situation is our only hope when it comes to setting expectations for financial markets. On the other hand, an overemphasis on historical data can lead to overconfidence if it makes you believe that backtests can be treated as gospel.

In some ways markets are predictable in that human nature is the one constant across all environments. This is why the pendulum is constantly swinging from manias to panics. In other ways markets are unpredictable because stuff that has never happened before seems to happen all the time…

…Let’s say you put $5,000 into the initial S&P 500 ETF (SPY) right around when it started at the beginning of 1994. On top of that, you also contribute $500/month into the fund. Simple, right?

Here’s what this scenario looks like:

Not bad.

This is the summary:

  • Initial investment (start of 1994): $5,000
  • Monthly investment: $500
  • Total investments: $181,000
  • Ending balance (April 2023): $915,886

Plenty of volatility along the way but this simple dollar cost averaging strategy would have left you with a lot more money than you initially put into it.
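The mechanics of the scenario are easy to sketch in code. Here is a minimal Python version; the monthly return series is a hypothetical stand-in, since reproducing Carlson’s exact figures would require SPY’s actual total returns from January 1994 through April 2023:

```python
# A sketch of the dollar-cost-averaging arithmetic Carlson describes.
def dca(initial, monthly_contribution, monthly_returns):
    balance, contributed = initial, initial
    for r in monthly_returns:
        balance = balance * (1 + r) + monthly_contribution
        contributed += monthly_contribution
    return balance, contributed

# Illustration only: a flat 0.7%/month over the 352 months from
# January 1994 through April 2023 (the real path was far bumpier).
balance, contributed = dca(5_000, 500, [0.007] * 352)
print(f"Contributed ${contributed:,.0f}; ending balance ${balance:,.0f}")
# Total contributions come to $181,000, matching Carlson's figure; the
# ending balance depends entirely on the return path you feed in.
```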

Even though things worked out swimmingly by the end of this scenario, there were some dark days along the way. You can see on the chart where the purple line dips below the blue line in 2009, at the end of the stock market crash from the Great Financial Crisis. By March of 2009 you would have made $96,000 in contributions with a market value of a little more than $94,000. So that’s more than a decade-and-a-half of investing where you ended up underwater.

It wasn’t prudent but I understand why so many investors threw in the towel in 2008 and 2009. Things were bleak. Everything worked out phenomenally if you stuck with it but investing in stocks can be painful at times…

…Just for fun, let’s reverse this scenario to see what would happen if you started out in 1994 with the same ending balance but now you’re taking portfolio distributions.

Like this:

  • Initial balance (start of 1994): $915,886
  • Annual portfolio withdrawal: 4% of portfolio value

An ending balance of more than $4 million while spending $1.7 million along the way from a starting point of a little less than $1 million is pretty, pretty good.
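Note that this is a percentage-of-portfolio rule rather than the fixed-dollar “4% rule”: you withdraw 4 percent of whatever the portfolio is worth each year, so spending shrinks in bad years and grows in good ones. A minimal sketch, again with a hypothetical flat return standing in for the real series:

```python
# Percentage-of-portfolio withdrawals: 4% of the *current* balance each year.
def withdraw(initial, annual_returns, rate=0.04):
    balance, spent = initial, 0.0
    for r in annual_returns:
        draw = balance * rate
        balance = (balance - draw) * (1 + r)
        spent += draw
    return balance, spent

# Illustration only: a flat 9%/yr over 29 years (roughly 1994-2022).
balance, spent = withdraw(915_886, [0.09] * 29)
print(f"Spent ${spent:,.0f} along the way; ending balance ${balance:,.0f}")
# Because withdrawals scale with the balance, the portfolio can keep
# compounding even while funding nearly three decades of spending.
```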

The usual caveats apply here — past performance says nothing about future performance, no one actually invests in a straight line like this, no one invests in a single fund like this, no one uses this type of withdrawal strategy in retirement nor do they invest 100% in stocks while doing so, etcetera, etcetera, etcetera.

5. ‘I can’t make products just for 41-year-old tech founders’: Airbnb CEO Brian Chesky is taking it back to basics – Nilay Patel and Brian Chesky

Lots of companies are bringing their people back to the office. The idea that, you know, people are going to be in a different house every time you see them on a Zoom call has somewhat faded. Is that still part of the bet for Airbnb? Or are you shifting to this other model?

Yeah. Let me tell you how I think it’s going to play out. And of course, we’re just all in the business of predicting the future, and the problem is it doesn’t always age well. I think that, like, pure work from home or pure remote is ending.

I generally think the future is flexibility. Here’s the calculation every CEO has to make: are you more productive having people physically in an office together and then constraining who you hire to a 30-mile or a 60-mile commuting radius to the office?

Or are you more productive allowing your team to hire people from anywhere? And the truth is, it probably depends on the role. A lot of our software engineers or accountants, certain types of lawyers, we probably don’t need them physically in the office with everyone else. There are certain creative functions, or people on certain teams, that we probably do want together physically quite a lot.

And then the question is, “Do we need them together 50 weeks a year?” And the answer for us is no. We actually go in spurts. We do these product releases, so we kind of need people together months at a time, and they can choose to live here, but if they want to go away for a couple months, if people want to go away for the summer, that’s possible.

I think we’re going to start to live in a much more nuanced world where the companies aren’t going to have all the people in the office. They’re going to decide that some roles are most effective being on a small team in the office, but a giant sea of desks probably isn’t the most effective thing, and many roles will be much more effective when allowing flexibility so you can have a global talent pool.

I think there’s going to be a post-pandemic equilibrium that we haven’t seen yet that’s going to play out over the coming years…

You have a lot of decisions to make. You’re obviously very thoughtful about how you make decisions and how you see the company going. How do you make decisions? What’s your framework?

Can I answer that question with a story? So, in 2011, I had my first crisis. We had our first crisis. A woman named EJ was a host in San Francisco. And one day, someone came, and they trashed her apartment. And I went on, and I wrote a letter. I published it on TechCrunch and I said, “We’ve resolved the issue.”

And then, of course, EJ said, “No, you didn’t resolve the issue.” And I was misinformed, and this crisis brewed. And then basically what happened was within days, every time I tried to communicate something, I kind of seemed to keep making it worse. And then I hired these crisis communications professionals, and I had these outside counsels, and they were giving me what seemed like good counsel.

They basically said, “Be careful about admitting fault. Be careful about this. Don’t say that. Do this, do that.” And every time I got advice and every time I tried to manage to an outcome, I seemed to make the situation worse because I think what people really wanted was authenticity. They really wanted me to, you know, just speak from the heart.

And at some point there was — this is in 2011—we were one of the first hashtags. There was #ransackgate and #ripAirbnb. I mean, people literally thought we weren’t going to recover from this because they thought we had no solution.

And at this point, I came to the conclusion that the most important decisions I made would be based on principles, not on outcomes. In other words, I was going to make principled decisions, not business decisions. And the principled decision is: if I can’t figure out the outcome, how do I want to be remembered?

And I said, “Well, I don’t know how this is going to play out. Whatever I’m going to do is probably going to make the situation worse. But I’m just going to say wholeheartedly, ‘I’m sorry.’ I’m going to tell the story, and I’m going to do something crazy. I’m going to do more than what is expected of me.”

What was expected was we make it right for customers. So we ended up with this $50,000 guarantee. It started as a $5,000 guarantee. Marc Andreessen came by my office at midnight. He had just funded the company, and he said, “Add a zero.” And then suddenly we said we would provide $50,000 protection retroactively to everyone on the platform.

And it actually was one of the biggest moments in the company. And ever since then, I came to the conclusion that I’m going to try to make principled decisions, not business decisions. And then this led to another development, which is first-principles thinking, which I’m sure you’re aware of. I think a lot of us think by analogy, but if you can understand the first principles of something, then you can really make a decision.

So I’ve been applying this ever since. And it all came back to us during the pandemic because, in January and February 2020, I noticed our business fell off a cliff. And within eight weeks, we lost 80 percent of our business. And on March 15th, the Ides of March, we called an emergency board meeting.

It was a Sunday, I’ll never forget it. And in this board meeting, I wrote out a series of principles about how to manage the crisis. And the first principle I set is we’re going to act decisively. The second is we’re going to preserve cash. The third is we’re going to act with shareholders in mind. And the fourth is we’re going to win the next travel season.

And I had even more detailed principles, and I said to the board, “I’m going to have to make like a thousand decisions a week, and so I can’t run every decision by you. So instead, let’s agree on the principles, and I’ll use those principles to make these decisions.” And I think a lot of people really struggle in a crisis or in times when they’re moving quickly because they don’t have data or the data’s changing.

But if you have a deep understanding of something, that’s better. My issue with A/B experimentation, for example, is that a lot of times, when people choose A or B, they don’t know why B worked. So let’s say, “Oh, B works.” Well, why did B work? Because if you don’t know why B works, then you can never change it, because you don’t actually have any intellectual property developed around B.

So experimentation’s fine if you know why the experiment worked and if it reinforced your understanding. So I try to make decisions based on first principles. And those first principles are based on whatever we believe in, and what we believe in might be right, might be wrong in the eyes of others, but that’s how we do it.

And you know, it really comes down to listening to people. I try to have qualitative and quantitative information, art and science. I try to balance being in the lab with being in the field. And I try to be as close to decisions as possible. I try to get emotionally invested. A lot of people say if you do a layoff or fire people, don’t get emotionally invested.

I say that’s exactly what you want. You want to understand deeply all the costs. And then if you can still make the decision, then you know you’ve made the right decision. So I generally say be principled, be as close to the decision-making as possible, and get as emotionally invested in something as you can. And then explain your thinking. The exercise of having to explain your thinking clarifies your thinking. A lot of people, they feel something, but they can’t explain their thinking. It’s a good indication that their thinking is still cloudy.

So that’s kind of how we do it. It’s first-principle oriented. It’s clear, it’s hopefully compassionate. We get as close to the decision, and as connected to emotions, as possible. It’s the head and the heart.

The last time you were on, we talked a lot about the structure of the company. 

You said that when the pandemic hit, the business had cratered 80 percent. A good quote you said, that I think about all the time, is, “I stared into the abyss.” And then you restructured the company. You had a functional startup structure. Then you’d gone into a divisional structure, and you said, “You know what?

I’m pulling this back into a functional structure. We have one division. I’m going to run it all. I’m going to make sure I see everything.” You’re talking about going through customer service complaints now. Are you still in that structure? Has it worked?

Yeah. I mean, we are still in that structure. We decided, let’s go back to being a functional organization. And I actually drew inspiration from Apple. Around the same time the pandemic hit is when I started talking to Jony Ive.

We brought him on board a little later. I also hired somebody who changed the trajectory of the company named Hiroki Asai. He was the creative director at Apple, and they really kind of brought me along on this methodology Steve Jobs had. Steve Jobs came back to Apple in 1997.

They were like 90 days from bankruptcy or maybe even fewer. And it was divisionalized. I think it had something like 80 products. And he did two things. He cut most of the products, and he went back to a functional organization, and that’s what we did.

And the other thing we did, which seemed crazy at the time, and it’s now totally intuitive, is we put the entire company on one road map. So for most tech companies, every executive has their own swim lanes. We said, “You have no swim lanes. Everyone works on everything together. Your only swim lane is your function.

We’re going to all collaborate.” I said, “I’m not going to push decision-making down. I’m going to pull decision-making in.” I’m the chief editor. I’m like an orchestra conductor, and I have to understand enough about each instrument to make sure it creates one sound. The other thing I said is, “We’re going to connect product and marketing together.”

Product people at a company are like chefs, and marketing people are like waiters, and they never allow the waiters in the kitchen, or they get yelled at. And I thought, well, what if you actually have them collaborate on product? What if marketing, you know, challenges engineering and engineering inspires marketing? They could actually be connected.

And I think you can tell the health of the organization by how connected engineering and marketing are. And so we did this. We then started doing release cycles, which meant that instead of doing this agile, bottoms-up A/B testing, shipping continuously every minute of every day… Now, we still do some of that. We said 70–80 percent of our product release is going to be done like hardware.

We’re going to ship stuff twice a year. And the reason we’re going to do that is we’re going to embrace constraints. When you ship stuff at the same time, everyone’s on a deadline. Then I meet with every single team every week, every two weeks, or every four weeks. I’m working and editing the work. I’m making sure it all fits together.

It ladders up to a cohesive product story. And then we have this function called product marketing. It’s actually outbound marketing plus product management in one role. 

This is very much like Apple, by the way. Apple has product marketing at scale.

Yes, and we took that from them because they’re really good at talking about the product.

We don’t have senior product managers at Airbnb. If you’re a senior product manager, you also have to do outbound marketing. You’re not allowed to decouple the roles. We have no pure product marketers who don’t do product management.

We don’t allow that. And their job is to keep the entire company stitched together and make sure we understand the story we’re telling, who the product’s for, and make sure everything we deliver ships to that product. So we now do two releases a year. The reason we’re talking is because we just did our summer release for May, and what we found is this: when I told people, Nilay, about this development process, the first thing everyone said is, “This is going to be horrible. No one’s going to wanna work together. It’s going to stifle innovation. It’s going to be too top-down. You’re not going to have as many ideas. It’s going to be a bottleneck,” et cetera. “I can tell you all the reasons this is a bad idea.” What we found is we ship way faster. We have now shipped 340 upgrades. We shipped over 53 upgrades today.

It creates a drumbeat for the organization, a rhythm. There is very little bureaucracy. Now we do say no to more things. There are some downsides, like you can’t do as many divergent things because everything is cohesive and integrated. But anything on the road map ships. Almost never do we greenlight something and it doesn’t happen.

So the answer to your question is we’ve been able to ship significantly faster and the paradox is that people are actually happier. As I created more constraints, as the culture got a little more top-down, as it was more integrated… Everyone, if they could have, 99 percent of people would’ve voted against this idea [at the beginning] because it doesn’t intuitively sound like something fun to work in. Almost everyone, at least people still here, seem to be happier. Now, maybe there’s a bias of the people who like it decided to stay, and the people who don’t like it decided to leave. There might be that, too. I want to acknowledge that.

But ultimately I do think the company’s much more productive, and it actually bears out financially. When we were doing this bottoms-up, free-for-all approach, which is kind of my pejorative for it, we were basically losing $250 million in EBITDA a year. We were not profitable. Growth was slowing, costs were rising.

Last year, we did $3.5 billion in free cash flow and actually I believe, Nilay — this might be true now — for every dollar we earn, I think we earn more free cash flow than Apple, Google, or Microsoft. More than 40 percent of revenue becomes free cash flow. Now we’re not nearly the size of them.

That’s not the point. But the point is it’s extremely efficient. It helps to be a marketplace that’s capital-light, but it also helps to have one marketing department. It helps to not have a lot of waste. It helps to have one rhythm of the organization…

…There’s like an AI stack. The bottom of the AI stack is what you might call base models. And there’s like three to five base models. So Google has, like, maybe a couple of ’em. OpenAI has one.

Anthropic has one. Microsoft Research kind of has one, though they seem to be mostly tied to OpenAI at this point. So those are the base models. Think of it like a highway. Those are infrastructure companies. They’re building the highway. We’re not going to be building base models ’cause we’re not going to be building infrastructure.

The layer on top of that is now tuning the models. Tuning the models is going to be really important. If you and I go to ChatGPT and we ask it a question, we’re probably going to get something like the same answer. And that would be because ChatGPT doesn’t know your preferences and doesn’t know my preferences.

And for many queries getting the same answer is great. But what if you ask, like, “Hey, where should I go on vacation?” Or like, “Who’s a good person to, like, date?” Well, depends. Who are you? What do you want in your life? And so I think that there needs to be a personalization layer on top of AI, and that’s going to come from the data you have and the permission you get from customers.

Now, I think our vision is eventually one day, we’re going to be one of the most personalized AI layers on the internet.

We’re going to design, hopefully, some of the leading AI interfaces. We’re going to basically try to deeply understand you, learn about you, care about you, and be able to understand your preferences…

…Here’s one of the great things that AI does: think about it — 130 years ago, probably only a few people could use a camera, right? It was a highly technical thing. It was expensive. Most people take photos now.

Anyone in the world can basically use a camera. They’re ubiquitous, they’re on our phones. I kind of think software development’s going to be like that, that pretty soon, everyone will be able to develop software because software is just a language you have to learn. Now there’s always going to be development below the stack at the deeply technical level, but a lot of that front-end development is going to be replaced by natural language. As this happens, so many more people can develop software, and as so many more people can develop software, I think you’re going to see software in everything.

We’re going to have to create interface standards because we don’t want to ping-pong back and forth and just be totally confused. I don’t even think search is the right use case for every task. Sometimes it’s voice, sometimes it’s a conversation.

Ultimately, it would be great if interfaces understood you better, right? This is a problem with Airbnb. Every time you come back to Airbnb, we show you a whole bunch of categories. And if you’re a budget traveler, we show you Luxe.

And if you’re wealthy, we might show you Airbnb Rooms. We should know more about you. The way companies have tried to solve personalization is through data regression of clicks, right? If you clicked on something in the past, then they show you that in the future. But that’s actually not a great way to understand somebody.

Like maybe I went on Amazon, I bought a bunch of alcohol, but I’m actually now a recovering alcoholic and I’m trying not to drink. And you don’t know that, and the mini bar has alcohol there because I order all the time, but I actually feel bad about it and I actually don’t want to drink.

And so I think companies developing a better understanding of you, having a sense of your personalized preferences, having that interface is going to be really important. And I actually think it allows many more people to participate in the economy because, in the past, the only people that could build software were engineers…

You’ve given me so much time. Last question. We’ve talked a lot about Apple and how inspired you are by Apple structures, by their organization, by their processes, by Steve Jobs.

You do have this long-standing deal now with Jony Ive and his agency LoveFrom. Have they shipped anything with you? What does that relationship look like? What has it accomplished for you?

In 2014, we were designing our new logo, what people know now as our logo, and I knew Jony Ive, and I sent it to him, and he basically talked to me about how you shouldn’t have flat lines, you should have this continuous curvature.

And so he and the team spent some time, and he redesigned the splines of the curve. And so the actual logo that you see on Airbnb, the final mark, was designed by Jony Ive. I kept in touch with him, and then when I read that he left Apple, I said, “We gotta work together.” And we started talking a lot in the beginning of 2020.

Again, it coincided perfectly with a period when I felt like we had a crisis almost the size of Apple’s crisis in the late ’90s. And I turned to him, and obviously, he gave me a lot of great advice. He told me a couple things.

The first thing is we used to talk about our mission as belonging. And the problem with using the word belonging is I noticed that employees were confusing belonging with inclusion. And then they were conflating inclusion with the lack of discrimination. And then they said, “Well, our mission is to not discriminate.”

And I said, “Well, that’s a really low bar.” Of course, you shouldn’t discriminate, but when we say belonging, it has to be more than just inclusion. It has to actually be the proactive manifestation of meeting people, creating connections in friendships. And Jony Ive said, “Well, you need to reframe it. It’s not just about belonging, it’s about human connection and belonging.”

And that was, I think, a really big unlock. The next thing Jony Ive did is he created this book for me, a book of his ideas. The book was called “Beyond Where and When,” and he basically said that Airbnb should shift beyond where and when, to who and what.

Who are you and what do you want in your life? And that was a part of the inspiration behind Airbnb categories, that we wanted people to come to Airbnb without a destination in mind and that we could categorize properties not just by location but by what makes them unique, and that really influenced Airbnb categories and some of the stuff we’re doing now. 

The third thing is he really helped me think through the sense that Airbnb is a community. You know, this is really interesting. Most people think of Jony Ive as like somebody who deals with atoms, like aluminum and glass.

But actually, he said that he spent 30 years building tools. And what he realizes now is that we don’t just need more tools — we need more connections. And I thought that was a really profound thing. He really helped us think of ourselves — this is a subtle word shift, Nilay — as going from a marketplace to a community, because in a marketplace, everything’s a transaction, and in a community, everything should not be a transaction.

Otherwise, those aren’t real relationships or real connections. And so he has helped me think about how to shift from a marketplace to a community. I think some of that inspiration is what led to Airbnb Rooms, what led to the creation of the host passport. But he and the team are heads-down with me working on stuff that’s going to ship next May and next November.

One of the things Jony and I talked about is that we need permission to do new things. Let me just rewind. It’s the year 2005, maybe 2006, and everyone was hoping that Apple would come out with an iPhone. And in January 2007, Steve Jobs announced it.

Now the reason we all wanted Steve Jobs to come out with an iPhone in 2006 and 2007 was because most of us loved our iPods. None of us were asking Gateway to come out with a phone, because we didn’t love Gateway’s laptops. And so basically I think we need to have permission to do new, innovative things.

And we have permission when people love the core thing. And I came to the conclusion that we needed to focus much more on our core service. People were still complaining about pricing, cleaning fees, all sorts of things about Airbnb. And again, it comes from this disease that happens to a lot of founders or this thing that happens where we fall out of love with our core business.

And, as I told you a couple years ago, when we almost lost our business, we stared into the abyss. There’s something about almost losing something that makes you fall back in love with it. And I think maybe that happened to our core business, and we said, “Before we go on to new things, before we do whatever we’re going to do, we’re going to get back to the core, back to the basics, and really just focus on making this product something that people love.”

And so for the last few years, that’s what we’ve tried to do. We’ve tried to basically fix as many things as possible. That’s why we created a blueprint, something that Jony and others helped inspire, which is to say, “Let’s be systematic about the complaints. Let’s be systematic by how we address the feedback, and let’s tell a story to the community about all the things we’re fixing.”

And my hope is that by the end of this year, we’ll have addressed, to some extent, every single thing people are complaining about, so that people really do love the service and it feels truly delightful.

So our vision for this company is the following: that Airbnb is a marriage of art and science, that we’re a truly creatively-led company. Our two core values are basically design creativity married with technology, and then this idea of community and connection. A company with this real humanistic feel: you come to Airbnb, we ask you a series of questions.

We learn about you. We understand who you are, what you want. We design these incredibly simple interfaces, and then our job as the host is to develop these really robust matching algorithms so we can match you to whatever you want.

And so if we can build this incredibly robust identity system, if we can have the most robust profiles, almost like a physical social network where we can connect people together in this community, if we can use AI to augment customer service, to deeply understand and resolve your issues within seconds, not just minutes or hours, and we can then build these incredibly simple interfaces where we match you to whatever you want in your life, that’s basically the idea of where we’re trying to go. And Jony Ive and his team, they’re working on things just in that area.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 May 2023)

Here are the articles for the week ending 14 May 2023:

1. Why Conscious AI Is a Bad, Bad Idea – Anil Seth

To get a handle on these challenges—and to clarify the confusing and hype-ridden debate around AI and consciousness—let’s start with some definitions. First, consciousness. Although precise definitions are hard to come by, intuitively we all know what consciousness is. It is what goes away under general anesthesia, or when we fall into a dreamless sleep, and what returns when we come round in the recovery room or wake up. And when we open our eyes, our brains don’t just process visual information; there’s another dimension entirely: Our minds are filled with light, color, shade, and shapes. Emotions, thoughts, beliefs, intentions—all feel a particular way to us.

As for intelligence, there are many available definitions, but all emphasize the ability to achieve goals in flexible ways in varied environments. Broadly speaking, intelligence is the capacity to do the right thing at the right time.

These definitions are enough to remind us that consciousness and intelligence are very different. Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.

This distinction is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware—at which the inner lights come on for them. Last March, OpenAI’s chief scientist Ilya Sutskever tweeted, “It may be that today’s large language models are slightly conscious.” Not long after, Google Research vice president Blaise Agüera y Arcas suggested that AI was making strides toward consciousness.

These assumptions and suggestions are poorly founded. It is by no means clear that a system will become conscious simply by virtue of becoming more intelligent. Indeed, the assumption that consciousness will just come along for the ride as AI gets smarter echoes a kind of human exceptionalism that we’d do well to see the back of. We think we’re intelligent, and we know we’re conscious, so we assume the two go together.

Recognizing the weakness of this assumption might seem comforting because there would be less reason to think that conscious machines are just around the corner. Unfortunately, things are not so simple. Even if AI by itself won’t do the trick, engineers might make deliberate attempts to build conscious machines—indeed, some already are.

Here, there is a lot more uncertainty. Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness, much remains unknown. My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living. If I’m right, the prospect of conscious AI remains reassuringly remote.

But I may be wrong, and other theories are a lot less restrictive, with some proposing that consciousness could arise in computers that process information in particular ways or are wired up according to specific architectures. If these theories are on track, conscious AI may be uncomfortably close—or perhaps even among us already…

…There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel…

…Systems like this will pass the so-called Garland Test, an idea which has passed into philosophy from Alex Garland’s perspicuous and beautiful film Ex Machina. This test reframes the classic Turing Test—usually considered a test of machine intelligence—as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

This will land society into dangerous new territory. By wrongly attributing humanlike consciousness to artificial systems, we’ll make unjustified assumptions about how they might behave. Our minds have not evolved to deal with situations like this. If we feel that a machine consciously cares about us, we might put more trust in it than we should. If we feel a machine truly believes what it says, we might be more inclined to take its views more seriously. If we expect an AI system to behave as a conscious human would—according to its apparent goals, desires, and beliefs—we may catastrophically fail to predict what it might do.

2. Breach of Trust: Decoding the Banking Crisis – Aswath Damodaran

Banks with sticky deposits, on which they pay low interest rates (because a high percentage are non-interest-bearing), and with big buffers of equity and Tier 1 capital, which also earn “fair interest rates,” given default risk, on the loans and investments they make, add more value and are usually safer than banks whose depositor bases are sensitive to risk perceptions and to the interest rates paid, and which earn less than they should, given their default risk, on loans and investments…

…It is worth noting that all of the pain that was coming from writing down investment security holdings at banks, from the surge in interest rates, was clearly visible at the start of 2023, but there was no talk of a banking crisis. The implicit belief was that banks would be able to gradually realize, or at least recognize, these losses on their books, and use the time to fix the resulting drop in their equity and regulatory capital. That presumption that time was an ally was challenged by the implosion of Silicon Valley Bank in March 2023, where over the course of a week, a large bank was effectively wiped out of existence. To see why Silicon Valley Bank (SVB) was particularly exposed, let us go back and look at it through the lens of good/bad banks from the last section:

  1. An Extraordinarily Sensitive Deposit Base: SVB was a bank designed for Silicon Valley (founders, VCs, employees) and it succeeded in that mission, with deposits almost doubling in 2021. That success created a deposit base that was anything but sticky: sensitive to rumors of trouble, with virally connected depositors drawn from a common pool and big depositors who were well positioned to move money quickly to other institutions.
  2. Equity and Tier 1 capital that was overstated: While SVB’s equity and Tier 1 capital looked robust at the start of 2023, that look was deceptive, since it did not reflect the write-down in investment securities that was looming. While it shared this problem with other banks, SVB’s exposure was greater than most (see below for why) and explains its attempt to raise fresh equity to cover the impending shortfall.
  3. Loans: A large chunk of SVB’s loan portfolio was composed of venture debt, i.e., lending to pre-revenue and money-losing firms, backed up by expectations of cash inflows from future rounds of VC capital. Since the expected VC rounds are conditional on these young companies being repriced at higher and higher prices over time, venture debt is extraordinarily sensitive to the pricing of young companies. In 2022, risk capital pulled back from markets; as venture capital investments dried up and down rounds proliferated, venture debt suffered.
  4. Investment Securities: All banks put some of their money in investment securities, but SVB was an outlier in terms of how much of its assets (55-60%) were invested in treasury bonds and mortgage-backed securities. Part of the reason was the surge in deposits in 2021, as venture capitalists pulled back from investing and parked their money in SVB, and with little demand for venture debt, SVB had no choice but to invest in securities. That said, the choice to invest in long term securities was one that was made consciously by SVB, and driven by the interest rate environment in 2021 and early 2022, where short term rates were close to zero and long term rates were low (1.5-2%), but still higher than what SVB was paying its depositors. If there is an original sin in this story, it is in this duration mismatch, and it is this mismatch that caused SVB’s fall.

In the aftermath of SVB’s failure, Signature Bank was shut down within weeks and First Republic has since followed, and the question of what these banks shared in common has to be answered, not just out of intellectual curiosity, because the answer will tell us whether other banks will follow. It should be noted that neither of these banks was as exposed as SVB to the macro shocks of 2022, but the nature of banking crises is that as banks fall, each subsequent failure will be at a stronger bank than the one that failed before.

  • With Signature Bank, the trigger for failure was a run on deposits, since more than 90% of deposits at the bank were uninsured, making those depositors far more sensitive to rumors about risk. The FDIC, in shuttering the bank, also pointed to “poor management” and failure to heed regulatory concerns, which clearly indicate that the bank had been on the FDIC’s watchlist for troubled banks.
  • With First Republic Bank, which has a large and lucrative wealth management arm, it was the dependence on those wealthy clients that increased its exposure. Wealthy depositors not only are more likely to have deposits that exceed $250,000, technically the cap on deposit insurance, but also have access to information on alternatives and the tools to move money quickly. Thus, in the first quarter of 2023, the bank reported a 41% drop in deposits, triggering forced sales of investment securities and the realization of losses on those sales.

In short, it is the stickiness of deposits that seems to be the biggest indicator of banks getting into trouble, rather than the composition of their loan portfolios or even the nature of their investment securities, though having a higher percentage invested in long-term securities leaves you more exposed, given the interest rate environment. That makes this a much more challenging problem for banking regulators, since deposit stickiness is not part of the regulatory overlay, at least at the moment. One of the outcomes of this crisis may be that regulators monitor information on deposits that lets them make this judgment, including:

  1. Depositor Characteristics: As we noted earlier, depositor age and wealth can be factors that determine stickiness, with younger and wealthier depositors being less sticky than older and poorer depositors. At the risk of opening a Pandora’s box, depositors with more social media presence (Twitter, Facebook, LinkedIn) will be more prone to move their deposits in response to news and rumors than depositors without that presence.
  2. Deposit age: As in other businesses, a bank customer who has been with the bank for longer is less likely to move his or her deposits, in response to fear, than one who became a customer recently. Perhaps banks should follow subscriber/user-based companies in creating deposit cohort tables, breaking deposits down based upon how long each customer has been with the bank, and the stickiness rate in each group (a minimal sketch of such a table follows this list).
  3. Deposit growth: In the SVB discussion, I noted that one reason that the bank was entrapped was because deposits almost doubled in 2021. Not only do very few banks have the capacity to double their loans, with due diligence on default risk, in a year, but these deposits, being recent and large, are also the least sticky deposits at the bank. In short, banks with faster growth in their deposit bases also are likely to have less sticky depositors.
  4. Deposit concentration: To the extent that the deposits of a bank are concentrated in a geographic region, it is more exposed to deposit runs than one that has a more geographically diverse deposit base. That would make regional bank deposits more sensitive than national bank deposits, and sector-focused banks (no matter what the sector) more exposed to deposit runs than banks that lend across businesses.

Some of this information is already collected at the bank level, but it may be time for bank regulators to work on measures of deposit stickiness that will then become part of the panel that they use to judge exposure to risk at banks…

… The conventional wisdom seems to be that big banks have gained at the expense of smaller banks, but the data is more ambiguous. I looked at the 641 publicly traded US banks, broke them down by market capitalization at the start of 2023 into ten deciles, and looked at the change in aggregate market cap within each decile.

As you can see, the biggest percentage declines in market cap are bunched more towards the bigger banks, with the biggest drops occurring in the eighth and ninth deciles of banks, not the smallest banks. After all, the highest-profile failures so far in 2023 have been SVB, Signature Bank and First Republic Bank, all banks of significant size.

If my hypothesis about deposit stickiness is right, it is banks with the least sticky deposits that should have seen the biggest declines in market capitalization. My proxies for deposit stickiness are limited, given the data that I have access to, but I used deposit growth over the last five years (2017-2022) as my measure of stickiness (with higher deposit growth translating into less stickiness):

The results are surprisingly decisive, with the biggest market capitalization losses, in percentage terms, in banks that have seen the most growth in deposits in the last five years. To the extent that this is correlated with bank size (smaller banks should be more likely to see deposit growth), it is by no means conclusive evidence, but it is consistent with the argument that the stickiness of deposits is the key to unlocking this crisis.
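Damodaran’s decile exercise is straightforward to replicate in outline. A minimal sketch, assuming a table of banks with market caps at the start of 2023 and today; the column names are hypothetical:

```python
import pandas as pd

def decile_market_cap_changes(banks: pd.DataFrame) -> pd.Series:
    """Bucket banks into ten deciles by starting market cap, then compute
    the percentage change in aggregate market cap within each decile."""
    banks = banks.copy()
    banks["decile"] = pd.qcut(banks["mktcap_start_2023"], 10,
                              labels=range(1, 11))
    totals = banks.groupby("decile", observed=True)[
        ["mktcap_start_2023", "mktcap_now"]].sum()
    return (totals["mktcap_now"] / totals["mktcap_start_2023"] - 1) * 100

# The same grouping works for the deposit-growth cut: bucket by 2017-2022
# deposit growth instead to sort banks by the stickiness proxy.
```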

3. Inside the Delirious Rise of ‘Superfake’ Handbags – Amy X. Wang

My plunge into the world of fantastically realistic counterfeit purses — known as “superfakes” to vexed fashion houses and I.P. lawyers, or “unclockable reps” to their enthusiastic buyers — began a couple of years earlier, in what I might characterize as a spontaneous fit of lunacy. It was early 2021 when, thrown into sensory overload by grisly pandemic headlines, I found my gaze drifting guiltily to an advertisement in the right margin of a news site, where the model Kaia Gerber arched her arms lovingly around a Celine Triomphe — a plain, itty-bitty rectangular prism that in no universe could possibly be worth, as further research informed me, $2,200.

I shut the tab, horrified. Having grown up a first-generation immigrant whose family’s idea of splurging was a monthly dinner at Pizza Hut, I refused to be the type of person who lusted over luxury handbags. I had always understood that these artifacts were not for me, in the way debutante balls or chartered Gulfstreams were not for me. But, days later and still mired in the quicksand of quarantine, I found myself cracking my laptop and Googling “buy Celine Triomphe cheap.” This led me to a Reddit community of replica enthusiasts, who traded details about “trusted sellers” capable of delivering a Chanel 2.55 or Loewe Puzzle or Hermès Birkin that promised to be indistinguishable from the original, and priced at a mere 5 percent or so of the M.S.R.P…

…Untangling the problem of duplication in the fashion industry is like trying to rewrap skeins of yarn. Designer houses spend billions fighting dupes, but even real Prada Cleos and Dior Book Totes are made with machines and templates — raising the question of what, exactly, is unique to an authentic bag. Is it simply a question of who gets to pocket the money? (Hermès recently mounted, and won, a trademark war against “MetaBirkin” NFTs.)…

…I spoke with Kelly, one such person, seeking to peek under the hood of the shadowy business. (“Kelly” is not her real name; I’m referring to her here by the English moniker that she uses on WhatsApp. I contacted more than 30 different superfake-bag-sellers before one agreed to an interview.) Five years ago, Kelly worked in real estate in Shanghai, but she got fed up with trekking to an office every day. Now she works from home in Guangzhou, often hammering out a deal for a Gucci Dionysus or Fendi Baguette on her phone with one hand, wrangling lunch for her 8-year-old daughter with the other. Kelly finds the whole business of luxury bags — the sumptuous leather, razor-straight heat stamps, hand stitches, precocious metal mazes of prancing sangles and clochettes and boucles and fermoirs — “way too fussy,” she tells me in Chinese. But the work-life balance is great. As a sales rep for replicas, Kelly makes up to 30,000 yuan, or about $4,300, a month, though she has heard of A-listers who net up to 200,000 yuan a month — which would work out to roughly $350,000 a year.

On a good day, Kelly can sell more than 30 gleaming Chloés and Yves Saint Laurents, to a client base of mostly American women. “If a bag can be recognized as fake,” she told me, “it’s not a worthwhile purchase for the customer, so I only sell bags that are high-quality but also enticingly affordable — $200 or $300 is the sweet spot.” Kelly keeps about 45 percent of each sale, out of which she pays for shipping, losses and other costs. The rest is wired to a network of manufacturers who divvy up proceeds to pay for overhead, materials and salaries. When a client agrees to order a bag from Kelly, she contacts a manufacturer, which arranges for a Birkin bag to roll out of the warehouse into an unmarked shipping box in a week or so.

In Guangzhou, where a vast majority of the world’s superfakes are thought to originate, experts have identified two main reasons behind the illicit goods’ lightning-fast new speeds: sophistication in bag-making technology and in the bag-makers themselves.

One such innovation in the latter is a disjointed, flat, hard-to-track supply chain. When the intellectual-property lawyer Harley Lewin was the subject of a New Yorker profile in 2007, he could often be found busting through hidden cellars on raids around the world. But increasingly, Lewin told me, “I’m sort of the guy in the spy novel who’s called ‘Control’ and sits in a room,” trying to sniff out “the bad guys” from screenshots of texts and D.M.s. Counterfeiting operations are no longer pyramid-shaped hierarchies with ever-higher bosses to roll: “Nowadays it’s a series of blocks, the financier and the designers and the manufacturers, and none of the blocks relate to each other,” Lewin explains. “So if you bust one block, odds are they can replace it in 10 minutes. The person you bust has very little information about who organizes what and where it goes.” Indeed, Kelly, even though she has sold every color variation of the Louis Vuitton Neverfull under the sun, only handles bags in person on rare occasions, to inspect quality. Sellers don’t stock inventory. They function as the consumer-facing marketing block, holding scant knowledge of how the other blocks operate. Kelly just gets daily texts from a liaison at each outlet, letting her know of their output: “The factories won’t even tell us where they are.”

As for how the superfakes are achieving their unprecedented verisimilitude, Lewin, who has observed their factories from the inside, says it’s simply a combination of skillful artisanship and high-quality raw materials. Some superfake manufacturers travel to Italy to source from the same leather markets that the brands do; others buy the real bags to examine every stitch. Chinese authorities have little to no incentive to shut down these operations, given their contributions to local economies, the potential embarrassment to local ministers and the steady fraying of China’s political ties with the Western nations where savvy online buyers clamor for the goods. “They avoid taxes,” Lewin says. “The working conditions are terrible. But all of that goes to turning out a very high-quality fake at very low cost.”…

…Those whose business it is to verify luxury bags insist, at least publicly, that there’s always a “tell” to a superfake. At the RealReal, where designer handbags go through rounds of scrutiny, including X-rays and measuring fonts down to the millimeter, Thompson told me that “sometimes, an item can be too perfect, too exacting, so you’ll look at it and know something is up.” And, he added, touch and smell can be giveaways. Rachel Vaisman, the company’s vice president of merchandising operations, said the company will contact law-enforcement officials if it suspects a consignor is sending in items with the intent to defraud.

But one authenticator I spoke with confesses that it’s not always so clear-cut. The fakes “are getting so good, to the point that it comes down to inside etchings, or nine stitches instead of eight,” he told me. “Sometimes you really have no idea, and it becomes a time-consuming egg hunt, comparing photos on other websites and saying, ‘Does this hardware look like this one?’” (He asked to remain anonymous because he is not permitted to speak on behalf of his company.) He and his colleagues have their theories as to how the superfakes that come across their desks are so jaw-droppingly good: “We suspect it’s someone who maybe works at Chanel or Hermès who takes home real leathers. I think the really, really good ones have to be from people who work for the companies.” And every time a brand switches up its designs, as today’s fast-paced luxury houses are wont to do, authenticators find themselves in the dark again…

…A strange, complicated cloud of emotions engulfed me wherever I carried the bag. I contacted more sellers and bought more replicas, hoping to shake it loose. I toted a (rather fetching) $100 Gucci 1955 Horsebit rep through a vacation across Europe; I’ve worn the Triomphe to celebrity-flooded parties in Manhattan, finding myself preening under the approving, welcome-into-our-fold smiles of wealthy strangers. There is a smug superiority that comes with luxury bags — that’s sort of the point — but to my surprise, I found that this was even more the case with superfakes. Paradoxically, while there’s nothing more quotidian than a fake bag that comes out of a makeshift factory of nameless laborers studying how to replicate someone else’s idea, in another sense, there’s nothing more original.

While a wardrobe might reveal something of the wearer’s personality and emotion, a luxury handbag is a hollow basin, expressing nothing individualistic at all. Instead, a handbag communicates certain ineffable ideas: money, status, the ability to move around in the world. And so, if you believe that fashion is inherently all about artifice — consider wink-wink items like Maison Margiela’s Replica sneaker, or the mind-boggling profits of LVMH’s mass-produced luxury items — then there is an argument to be made that the superfake handbag, blunt and upfront to the buyer about its trickery, is the most honest, unvarnished item of all.

I asked the writer Judith Thurman, whose sartorial insights I’ve always admired, about the name-brand handbag’s decades-long hold on women. Why do we yearn for very expensive sacks in the first place? Why do some buyers submit to thousand-dollar price hikes and risk bankruptcy for them? “It’s a kind of inclusive exclusiveness,” Thurman told me. “A handbag is a little treat, and it’s the only fashion item that is not sacrificial.” Clothes, with their unforgiving size tags and rigid shapes, can instill a cruel horror or disappointment in their wearers. Bags, meanwhile, dangle no shame, only delight. “There is an intangible sense when you are wearing something precious that makes you feel more precious yourself,” she theorizes. “And we all need — in this unbelievable age of cosmic insecurity — a little boost you can stick over your shoulder that makes you feel a bit more special than if you were wearing something that cost $24.99. It’s mass delusion, but the fashion business is about mass delusion. At what point does a mass delusion become a reality?”

4. Berkshire Hathaway – The World’s Greatest Serial Acquirer of Businesses – Eugene Ng

Warren Buffett and Charlie Munger were previously known to me as two of the greatest investors of all time with Berkshire Hathaway (“Berkshire”). But what became clearly evident to me after reading all 5,300+ pages of the Buffett Partnership letters, Berkshire shareholder letters, and AGM transcripts is that they were not only great investors, but also great business builders and fantastic, disciplined risk managers. Through countless acquisitions over decades, Berkshire has become the world’s greatest serial acquirer of businesses.

Serial acquirers are companies that grow by acquiring smaller companies outright, as wholly owned subsidiaries. After reinvesting, they use the surplus cash flows produced by each acquisition to buy even more companies, repeating the process and compounding shareholder value over a very long time. Including the acquisition of Berkshire itself back in 1964, we reckon Berkshire has acquired at least 80 wholly owned insurance and non-insurance businesses over the last 57 years, and has spent in excess of US$120bn on acquisitions over the last 20 years. Berkshire had 67 subsidiaries as of April 2023. In 2022, the operating businesses generated US$220bn of revenues and US$27bn of operating earnings before taxes, and the insurance business held US$164bn of float.

In addition to the surplus cash flows from the operating businesses, Berkshire also uses the float of its insurance companies to invest in partial stakes of publicly listed companies worth US$350bn. This insurance float arises because customers pay premiums upfront, while claims are typically paid out much later. It allows Berkshire to invest far more in higher-yielding common stocks, rather than low-yielding bonds, than most typical insurers do. Coupled with strong, disciplined underwriting, prudent risk management, and sensible acquisitions, this provided Berkshire with an ever-growing insurance float to invest long-term at much higher rates of return than its competitors.
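
As a side note from us, the arithmetic of float is easy to sketch. Below is a minimal Python sketch comparing two hypothetical insurers with identical equity and float, where one invests its float in bonds and the other in stocks. Every number in it (equity, float size, return assumptions) is an illustrative assumption of ours, not an actual Berkshire figure:

```python
# Minimal sketch: two hypothetical insurers with the same equity and float.
# One invests its float in bonds (typical); one invests it in stocks.
# All figures are illustrative assumptions, not actual Berkshire numbers.
years = 20
equity = 100.0                            # shareholders' capital, $bn
float_ = 50.0                             # cost-free float, assumed constant, $bn
stock_return, bond_return = 0.10, 0.04    # assumed long-run annual returns

typical, float_in_stocks = equity, equity
for _ in range(years):
    typical = typical * (1 + stock_return) + float_ * bond_return
    float_in_stocks = float_in_stocks * (1 + stock_return) + float_ * stock_return

print(f"Float in bonds:  ~${typical:,.0f}bn of equity after {years} years")
print(f"Float in stocks: ~${float_in_stocks:,.0f}bn of equity after {years} years")
```

In this toy setup, the same cost-free float, simply invested at a higher assumed rate, adds roughly US$170bn of extra equity over two decades.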

Over the 57 years from 1965 to 2022, Berkshire has grown to become the 7th-largest company in the US by revenues, at US$302bn, and the 2nd-largest company in the world by total shareholder equity (including banks), at US$472bn.

Berkshire has also grown its market capitalization to US$722bn (as of 28 April 2023), compounding shareholder returns at ~20% per year over those 57 years from 1965 to 2022 and beating the S&P 500’s ~10% per year hands down, placing it firmly in the “hall of fame”…
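
It is worth pausing on what a doubled growth rate does over 57 years, since compounding is wildly non-linear. Here is a quick sketch using the article’s rounded figures (so the output is order-of-magnitude only, not Berkshire’s exact record):

```python
# Compounding ~20% p.a. vs ~10% p.a. over 57 years (rounded figures, so
# the outputs are order-of-magnitude illustrations only).
years = 57
berkshire_cagr, sp500_cagr = 0.20, 0.10

berkshire_multiple = (1 + berkshire_cagr) ** years   # ~32,600x
sp500_multiple = (1 + sp500_cagr) ** years           # ~230x

print(f"$1 at ~20% p.a. for {years} years -> ${berkshire_multiple:,.0f}")
print(f"$1 at ~10% p.a. for {years} years -> ${sp500_multiple:,.0f}")
print(f"Gap: ~{berkshire_multiple / sp500_multiple:,.0f}x")
```

Doubling the rate does not double the outcome; over 57 years it multiplies the ending value by a factor of roughly 140.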

…Below is what we think is the best interpretation of Berkshire Hathaway’s flywheel: it combines the disciplined, profitable, and well-run (1) insurance business (run by Ajit Jain) and (2) non-insurance operating businesses (run by Greg Abel), underpinned by a strong culture and by letting solid managers run their businesses with strong autonomy, in a decentralised format.

Warren Buffett and Charlie Munger are responsible for overall oversight and capital allocation, while Todd Combs and Ted Weschler are responsible for investing ~11%, or ~US$34bn, of the overall US$309bn equity investment portfolio under the insurance business.

It is this dual flywheel of Berkshire’s insurance and non-insurance businesses, fuelled by the insurance float and the surplus capital from operating profits, that allows Berkshire to keep (1) investing in partial ownership stakes of good companies at fair prices, and (2) acquiring durable, predictably profitable, wholly owned companies with able and honest management at the right price.

5. An Interview with Chip War Author Chris Miller – Ben Thompson and Chris Miller

To me that was one of the most interesting parts of the book — I mean, there were a lot of interesting parts — your discussion of the Soviet Union and its attempts to compete in the semiconductor industry. It’s always tough because this is the part where you’ve been immersed in it sort of your entire life, so it’s always hard to summarize. But what’s the big-picture history and lesson from Russia — I should say the USSR — and its attempts to compete with the US in particular?

CM: The puzzle to me was the following: we knew the Soviets could produce a lot of impressive technology because they did it during the early Cold War. From atomic weapons — granted, they stole some of the designs, but nevertheless they were the second country in the world to test an atomic bomb — to satellites, where they were the first in the world to go into space, largely thanks to indigenous innovation, and had the first person in space, Yuri Gagarin. So in the 1950s the Soviets weren’t seen as technologically backwards; they were seen as, if anything, overtaking the United States. And that made sense, because if you had to ask what are some of the key ingredients of a country’s technological success, you’d say, well, you probably want a pretty well-educated workforce: the Soviets had that. Capital investment: the Soviets had that. You want a focus on the industry: the Soviets had that. And so the puzzle to me was why, given all these clear ingredients that were present in the Soviet Union, plus the pressure of Cold War competition to produce the next best defense technology, the Soviets couldn’t produce computing technology basically at all, and for the entire Cold War they were copying IBM computers. That was the puzzle I initially started out wanting to answer, and there are a number of different ways you can answer the question.

I think this is super interesting, it’s super relevant. So walk me through them — what was it that was fundamentally different about, to your point, putting a man in space versus building a semiconductor?

CM: I think the common answer in the Western literature is “Well, they were an essentially planned economy, or they were a dictatorship, or both, and those societies can’t innovate”. I think that just doesn’t fit the historical facts. In fact, they did a whole lot of innovation in certain spheres at certain times, and there’s nothing about dictatorships that makes them non-innovative; they innovate for their own reasons. But I think the problems the Soviets faced were the following: first, they didn’t have a consumer economy, hardly at all.

Why did that matter?

CM: That mattered because from almost the earliest days, the chip industry in the US, the computer industry in the US, grew thanks to sales to civilian markets and sales to consumers. The first chips that were produced were deployed in government systems, NASA and the Defense Department. But by the end of the 1960s, a decade or so after the first chips had been produced, it was civilian sales that were driving the industry. Today it’s 97% of chips produced that go to civilian uses, and so if you don’t have a civilian market, you can’t scale, simple as that.

I think this fits in because if you’re trying to get a man into space, you’re trying to get one man into space one time. Whereas the entire economics of chips and of the tech industry generally is 100% about scale. You have to put such massive investment upfront, and then the cost of goods sold for a chip is basically zero, and so to justify and to get a return on that investment and to provide the space for iteration, you need that massive demand to make it all worth it. If you just try to do a single shot, it’s probably not going to work out.

CM: Yeah, that’s absolutely right. The second thing I didn’t realize: when I started, I was under the impression that nuclear bombs were hard to make but computers were easy to make, because there were a few nuclear bombs in the world and a lot of computers, and actually it’s the exact opposite. Nuclear bombs are so easy to make, even the North Koreans can do it.

(laughing) I don’t think I have any new North Korean subscribers, so no problem with that statement.

CM: I’m safe, okay. Whereas actually it’s the things that are the most widely produced, like chips, that are the hardest ones to make because you’ve got to drive down the cost, you’ve got to scale down the components on them, and that is the most complex manufacturing we undertake. I hadn’t really thought that through and I think most of us haven’t really thought through that dynamic and as a result, it has us focusing on the wrong types of complexity and the wrong types of technology and we, I think too often, overestimate the complexity in things that are done once and underestimate the complexity involved in scaling…
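
The scale point above is easy to make concrete with a toy unit-cost model: a chip carries an enormous fixed cost (design, fab, tooling) amortized over volume, plus a near-zero marginal cost. The dollar figures below are invented for illustration, not real fab numbers:

```python
# Toy unit economics for a chip: huge fixed cost, near-zero marginal cost.
# Both numbers are invented for illustration, not real fab figures.
fixed_cost = 5_000_000_000    # design + fab + tooling, $
marginal_cost = 5             # incremental cost per extra chip, $

for volume in (10_000, 1_000_000, 100_000_000):
    unit_cost = fixed_cost / volume + marginal_cost
    print(f"{volume:>11,} chips -> ${unit_cost:,.2f} per chip")
```

A defense-only buyer ordering ten thousand chips faces a six-figure unit cost; a civilian market absorbing a hundred million of them pushes the unit cost down toward the marginal cost. That, in three lines, is why no civilian market meant no scaling.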

Tell me about the contrast between the Japanese approach to chips versus the Soviet approach. Why was Japan so much more successful in entering this US-dominated industry relative to the Soviet Union?

CM: Well, the Japanese entered the chip industry not by trying to copy illegally, which the Soviets did, but by licensing technology. They were among the earliest licensees of the transistor after it was first produced, early licensees of the first integrated circuits, and they produced them better. The first chips began to be commercialized in the early ’60s, and just 15 years later, the late ’70s, Japanese firms by all accounts were producing at much higher levels of quality than US firms.

The complaint about dumping was never really quite right. People bought the Japanese chips because they failed much less and performed much better.

CM: That’s absolutely right. You had US CEOs at the time saying, “Well, we’ve got the real technology, we’re the most advanced in terms of this and that criteria”, but actually the technology that mattered, again, was the scaling. Japanese firms could scale with quality to a much greater degree. But that’s also what did them in, because they didn’t do a good job of managing their capability of scaling with market dynamics, and they weren’t guided by profitability; market share was their goal. So Japanese firms took over the market for DRAM chips, the type of memory chip that was the most prominent chip at the time, and never made any money. Kind of shockingly, they dominated the market for a decade and hardly any of them ever posted a profit.

Well, I guess to just speak about Japan for a moment, because I think it’s interesting, first, why did South Korea and then also Micron in the US surpass Japan in memory, and second why did Japan never build any strength in logic? They peaked with memory and that was sort of it.

CM: So I think on the second question, Japanese firms did try to move into microprocessors at a time when they were still a niche good in the late ’70s and early ’80s, but they were doing so well in memory, or it seemed like they were doing so well in memory, that it was an Innovator’s Dilemma type situation. They had huge market share in memory, they had just defeated TI and Intel in the DRAM business, so why would you switch your business model to produce this low-volume type of chip that seemed pretty niche? Whereas if you were Intel in the early ’80s you had no choice, you’d just been knocked out of your primary market.

It’s very underrated. Everyone wants to talk about that apocryphal, or maybe I guess it was real, meeting with Andy Grove and Gordon Moore where they’re like, “We need to get out of memory”. But it’s under-appreciated that this was not a brilliant flash of insight, this was accepting reality and probably accepting it a couple years too late.

CM: Yep, I think that’s right. I think the other benefit that the US ecosystem had writ large was that it was more responsive to new trends in the PC industry, and just the emergence of the PC itself is something that — could it have happened in Japan? I think you wouldn’t say it couldn’t have happened, but it seems like all the ingredients were much more prevalent in the US. A bigger software design ecosystem, Bill Gates being the critical representative, plus companies that were willing to innovate more rapidly to produce PCs. At the time there were a couple of Japanese firms that were good at productizing new ideas, Sony being the best example, but Sony was the exception, not the rule. What really struck me about the PC industry is that IBM created the first PC, but then they were quickly out-competed by all the clones that emerged, which drove down the cost and drove up the prevalence of PCs.

For someone that started out saying, “I assumed that the story was free markets just being better at innovation and that wasn’t the case”, I don’t know, that sounds like the case that you’re kind of making right here.

CM: (laughing) Yeah, in this case, I think it was. The Japanese did a very good job at scaling, but here is the counterfactual: suppose that Japanese firms had been disciplined by a need to make a profit. They would’ve focused less exclusively on simple scaling to win market share. They would’ve tried at an earlier date to ask themselves, can we make money in DRAM? Some of them, I think, would’ve exited DRAM because they didn’t make any money there and tried to do something else. So actually I still go back to the structure of the Japanese corporate and financial system as to why their chip firms for far too long focused on producing unprofitable chips…

One thing you’ve said about TSMC and ASML is that, “The way to understand them is less about them being manufacturers and more about them being integrators.” So, what do you mean by that?

CM: If you want to turn to ASML, I think they’re the best example of this. They’re a company that, on the one hand, manufactures the most complex tools humans have ever made, hands down, and we can dig into them. On the other hand, they’ll openly tell you that their expertise is not in the tools themselves, but in bringing together such a complex supply chain. At first, when people from ASML told me this, I was shocked. I thought they would be bragging about their manufacturing capabilities, but they were more focused on their systems integration and the ability to manage suppliers all over the world. I admit, I started the project not taking the people who manage supply chains all that seriously, but I came to develop a lot more respect for them, because doing their jobs well is an extraordinarily difficult thing to do, and when you’ve got a supply chain that involves thousands of suppliers, you’ve got to do it really, really well…

My view on what China should do, geopolitically speaking, if I were giving advice to Xi Jinping — which I’m not, to be clear, I think that’s obvious — is this: the U.S. wants to continue to allow China to import tools and technologies, as you noted, to build trailing edge chips. I think a big impetus for this is that it doesn’t want to destroy the business of a lot of U.S. tech companies, where 30% of their sales were to China. And so it seems like the rational response for China, and I think we’re seeing indicators this is happening, would be to basically try to dominate that market.

In this case, use a willingness to be unprofitable as a weapon, and actually do what we accused the Japanese of doing back in the day: flooding the market and driving all other trailing edge capacity out, which is basically TSMC and a bit of GlobalFoundries, but there are bits and pieces still scattered around. Once you build a foundry, you might as well keep it, and then suddenly the actual chips that are used, to your point, in guided missiles, in cars, and in appliances are totally dependent on China. That seems like where this is going, does it not?

CM: I think I agree completely, China’s going to build out a ton of capacity. I think there’s some uncertainty as to whether we’re going to have enough demand to meet that capacity build-out, and there’s still uncertainty about what our demand for lagging edge chips will be in ten years’ time. People who are more bullish on demand say, “Look, every year, there’s on average twenty new chips added to a car.” No one knows how long this is going to go on for, but it’s gone on for a long time, etc.

And the chip that controls a window going up and down never actually has to get faster.

CM: Right, exactly. So, set aside the uncertainty about the demand picture. If China built out all this capacity, will non-Chinese firms go to Chinese foundries? I think five years ago, the answer was certainly yes. Today, it’s a lot less clear. And when you have Michael Dell on the front page of the Financial Times reporting that his customers are asking him to remove Chinese-made components from PC supply chains, that’s not the political environment that I think will send non-Chinese customers racing to take advantage of cheaper foundry capacity in China…

...So what’s your — as someone, again, you’re coming in from sort of a historical perspective, but having dived deeply into this — what do you think about the long-term Chinese prospects as far as basically rebuilding the leading edge capacity? This is a subject of much debate amongst people that are deep in the weeds about it, but as you’ve been able to talk to people all over the place, what’s your takeaway? How far behind are they? Can they even catch up?

CM: First off, what does catching up mean? I think this is really a key question, because catching up doesn’t mean reaching 2023 levels of technology in ten years’ time; then you’d be five Moore’s Laws behind. So I think we’ve got to define catching up as reaching 2033 levels of technology in exactly ten years’ time, just as the rest of the world does. That seems to me like a really tall order, because the trend in the chip industry has not been catching up, it’s been falling behind. Everyone’s been falling behind the leading edge in every single node of the supply chain.

At basically every major lithography transition, another foundry falls off.

CM: Yep, exactly. So the Chinese government’s going to put a lot of money behind it, and that’s going to help. There’s the necessity that Chinese firms now face, and that’s going to help. I think the Chinese government’s going to do more to wall off the domestic market, which will give Chinese chip makers some end market and help them, at least in the short run. But at the end of the day, if Chinese firms are selling to 20% of global GDP and TSMC is selling to 80% of global GDP, I think I know who I’d bet on.
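
Miller’s “five Moore’s Laws behind” line is itself just compounding. At the textbook cadence of one transistor-density doubling roughly every two years (an assumption; the actual cadence has varied), a ten-year lag works out as follows:

```python
# "Five Moore's Laws behind": with one density doubling roughly every two
# years (the textbook cadence, assumed here), a ten-year lag compounds
# into a ~32x gap in transistor density.
years_behind = 10
doubling_period_years = 2
doublings = years_behind / doubling_period_years
print(f"{doublings:.0f} doublings -> {2 ** doublings:.0f}x density gap")
```

So hitting 2023-level technology in 2033 would still leave a roughly 32-fold density deficit against whoever is at the 2033 frontier.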

So what are the implications of this? I mean, again, as you noted, it doesn’t necessarily make a difference for conventional weapons, if we think about today. Is this where the question of AI systems and stuff comes to bear?

CM: Yeah, I think that’s right, and right now we’re seeing a shortage of GPUs, given the generative AI boom underway. But I guess there’s a more complex long-term question, which is: is compute a real point at which the US can try to constrain China’s AI capabilities? I think we’re seeing the US test out that strategy right now.

What’s your prognostication? I’m going to put you on the spot here.

CM: I think there are people who say, “Well, if China can’t get access to the most advanced GPUs, aren’t they just going to build data centers that are four times as large or eight times as large or sixteen times as large, with sixteen times as many chips, and therefore scale up that way?” You can’t scale down your transistors, so you scale up your data centers; that’s basically the strategy. And then we have to figure out: what are the inefficiencies involved in scaling up your data center? I’m sure they’re pretty substantial.

Well, this is why it’s interesting, I was actually surprised — what the chip ban really focused on was memory interconnects, or interconnects, which is actually the limiting factor in pursuing that exact strategy.

CM: Yeah. I mean, I think you can’t accuse the US strategy of being incoherent; I think they did their homework. Whether it’s going to work, we’ll see. I’ve got a lot of faith in the Chinese government’s willingness to brute force things when it comes to national security, so I think we should expect them to try really hard. But at some point, I go back to one of the more interesting anecdotes from the Soviet experience, an interview with a weapons designer in the Soviet Union, who was asked to explain why he didn’t use the most advanced integrated circuits in the guidance computer of his missile. And his answer was, “Well, our computing industry, sometimes it works, sometimes it doesn’t. The state’s pretty bureaucratic. It’s just hard to work with, it’s not as easy.” The implication was it’s not as easy as buying from TSMC. So I do think if you get a situation where we’re throwing a lot of sand into the gears of the Chinese computing industry, the Chinese government’s going to respond with lots of cash, and that’s kind of the race we’re playing out right now: our sand in the gears versus Chinese government cash.
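
On the “just build bigger data centers” strategy discussed above, a deliberately crude toy model shows why the inefficiencies Miller flags matter: matching a cluster of advanced GPUs with chips that are k times weaker needs k times as many of them, while parallel efficiency decays with chip count, which is precisely the interconnect problem mentioned earlier. The 5%-per-doubling efficiency penalty below is a pure assumption of ours, chosen only to make the shape of the trade-off visible:

```python
# Toy model (all parameters assumed): substituting k-times-weaker chips
# for advanced GPUs. Each doubling of chip count is assumed to cost ~5%
# parallel efficiency (a stand-in for interconnect overhead).
import math

def effective_compute(n_chips, per_chip_speed, penalty_per_doubling=0.05):
    efficiency = (1 - penalty_per_doubling) ** math.log2(n_chips)
    return n_chips * per_chip_speed * efficiency

baseline = effective_compute(1_000, 1.0)       # 1,000 advanced chips
for k in (4, 8, 16):
    achieved = effective_compute(1_000 * k, 1.0 / k)
    print(f"{k:>2}x weaker chips, {1_000 * k:,} of them -> "
          f"{achieved / baseline:.0%} of baseline compute")
```

Under these assumptions, brute-force scale-out recovers most, but not all, of the lost compute, and the shortfall widens as the cluster grows, which is exactly where interconnect restrictions bite.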


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and Taiwan Semiconductor Manufacturing Company (TSMC). Holdings are subject to change at any time.