What We’re Reading (Week Ending 31 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 31 March 2024:

1. Gold-Medalist Coders Build an AI That Can Do Their Job for Them – Ashlee Vance

Take the case of Cognition AI Inc.

You almost certainly have not heard of this startup, in part because it’s been trying to keep itself secret and in part because it didn’t even officially exist as a corporation until two months ago. And yet this very, very young company, whose 10-person staff has been splitting time between Airbnbs in Silicon Valley and home offices in New York, has raised $21 million from Peter Thiel’s venture capital firm Founders Fund and other brand-name investors, including former Twitter executive Elad Gil. They’re betting on Cognition AI’s team and its main invention, which is called Devin.

Devin is a software development assistant in the vein of Copilot, which was built by GitHub, Microsoft and OpenAI, but, like, a next-level software development assistant. Instead of just offering coding suggestions and autocompleting some tasks, Devin can take on and finish an entire software project on its own. To put it to work, you give it a job—“Create a website that maps all the Italian restaurants in Sydney,” say—and the software performs a search to find the restaurants, gets their addresses and contact information, then builds and publishes a site displaying the information. As it works, Devin shows all the tasks it’s performing and finds and fixes bugs on its own as it tests the code being written.

The founders of Cognition AI are Scott Wu, its chief executive officer; Steven Hao, the chief technology officer; and Walden Yan, the chief product officer…

…Wu, 27, is the brother of Neal Wu, who also works at Cognition AI. These two men are world-renowned for their coding prowess: The Wu brothers have been competing in, and often winning, international coding competitions since they were teenagers…

…Sport-coding—yes, it’s a real thing—requires people to solve puzzles and program with speed and accuracy. Along the way, it trains contestants to approach problems in novel ways. Cognition AI is full of sport-coders. Its staff has won a total of 10 gold medals at the top international competition, and Scott Wu says this background gives his startup an edge in the AI wars…

…One of the big claims Cognition AI is making with Devin is that the company has hit on a breakthrough in a computer’s ability to reason. Reasoning in AI-speak means that a system can go beyond predicting the next word in a sentence or the next snippet in a line of code, toward something more akin to thinking and rationalizing its way around problems. The argument in AI Land is that reasoning is the next big thing that will advance the industry, and lots of startups are making various boasts about their ability to do this type of work.

Devin does appear to be well ahead of the other coding assistants in many respects. You can give it jobs to do with natural language commands, and it will set off and accomplish them. As Devin works, it tells you about its plan and then displays the commands and code it’s using. If something doesn’t look quite right, you can give the AI a prompt to go fix the issue, and Devin will incorporate the feedback midstream. Most current AI systems have trouble staying coherent and on task during these types of long jobs, but Devin keeps going through hundreds and even thousands of tasks without going off track.

In my tests with the software, Devin could build a website from scratch in 5 to 10 minutes, and it managed to re-create a web-based version of Pong in about the same amount of time. I had to prompt it a couple of times to improve the physics of the ball movement in the game and to make some cosmetic changes on its websites, all of which Devin accomplished just fine and with a polite attitude…

…Exactly how Cognition AI made this breakthrough, and in so short a time, is something of a mystery, at least to outsiders. Wu declines to say much about the technology’s underpinnings other than that his team found unique ways to combine large language models (LLMs) such as OpenAI’s GPT-4 with reinforcement learning techniques. “It’s obviously something that people in this space have thought about for a long time,” he says. “It’s very dependent on the models and the approach and getting things to align just right.”

2. Geopolitics in the C-Suite – Jami Miscik, Peter Orszag, and Theodore Bunzel

But even though national security and foreign policy occasionally intruded on corporate America during that time, until very recently, few executives concerned themselves with geopolitics. In the post–Cold War world, with globalization on the march, the idea that national interests might be at odds with open markets and expanding trade came to seem alien to American executives.

But the changes that have roiled the geopolitical landscape in recent years have left an impression in C-suites around the United States. In a recent poll of 500 institutional investors, geopolitics ranked as the top risk to the global economy and markets in 2024…

…As governments lean on economic restrictions and industrial policies to achieve geopolitical ends, corporations have increasingly become both the objects and instruments of foreign policy…

…The centrality of economic competition to today’s foreign policy problems represents a qualitative break from the past. During the Cold War, for example, the United States and the Soviet Union hardly interacted economically: trade between them peaked at a paltry $4.5 billion in 1979; in recent years, the United States and China have generally traded that much every week or two, adjusting for inflation. In the post–Cold War era, U.S. foreign policy was focused on opening markets and reducing international economic barriers rather than erecting them. Era-defining crises such as the 9/11 attacks did little to change the relationship between U.S. policymakers and American corporations; if anything, the “war on terror” further solidified the idea that foreign policy was primarily concerned with security and military issues, not economics.

But in the background, global economic integration was transforming the playing field. In 1980, trade accounted for just 37 percent of global GDP. Today, that figure is 74 percent, and economies have become intertwined to a degree never seen in the twentieth century. Globalization is not new, of course; it has been a centuries-long process. What is new, however, is the emergence of great-power rivalry in a highly interconnected world. Military power still matters, but economic and technological competition have become the main battlefield of global politics. Under the so-called Washington consensus that dominated policymaking for decades, the question of where a semiconductor manufacturer would build its next factory or whether German auto companies would decide to throttle their investments in China would have seemed relatively unimportant to policymakers. Now, such questions are at the center of almost every major foreign policy debate.

Greater economic integration has also created a complex web of links between geopolitical rivals that policymakers now seek to leverage for strategic ends. This is especially true when it comes to financial and technological networks, where Washington holds a privileged position…

…But as great-power tensions have increased, so has the number of sectors caught in the fray of what Farrell and Newman call “weaponized interdependence.” Consider, for example, the way that G-7 countries have taken advantage of Russian dependence on shipping insurers based in the West, an industry that most foreign policymakers had probably never thought about before Russia’s 2022 invasion of Ukraine. To try to cap the price of Russian oil exports, the G-7 prevented these companies from insuring Russian crude oil cargoes unless they had been sold at a maximum of $60 per barrel.

Western powers are not the only ones playing this game. In 2010, after a Chinese fishing trawler and Japanese Coast Guard patrol boats collided in disputed waters, setting off a diplomatic row between Beijing and Tokyo, China banned exports to Japan of the rare-earth minerals that are critical components of batteries and electronics, thus raising costs and creating shortages for Japanese manufacturers of everything from hybrid cars to wind turbines…

…More recently, a number of American consulting firms have been caught in the middle of the complex U.S.-Saudi relationship, with Congress demanding details about their contracts with Saudi Arabia that Riyadh has forbidden them to provide.

All these dynamics are being turbocharged by an intensifying competition between the United States and China, the two countries with the largest and most globally intertwined economies. Both aim to dominate the twenty-first-century economy, which means gaining the upper hand in computing technologies, biotechnology, and clean energy. And the foreign policies of both countries are now driven by a shared desire to shape their economies in ways that reduce their vulnerability and increase their leverage. China calls this “self-reliance.” Washington calls it “de-risking.” For the United States, what it looks like in practice is expanded export controls on advanced semiconductors and manufacturing equipment, enhanced government screening of investments by U.S. companies in foreign markets, and major subsidies for industries such as electric vehicles and microchips, primarily through the Inflation Reduction Act and the CHIPS Act. In this brave new world, the secretary of commerce is as important to foreign policy as the secretaries of state and defense.

Washington is hardly alone in taking such steps. State-sponsored drives for greater self-reliance have taken hold in nearly every major economy, particularly after the supply-chain disruptions of the COVID-19 pandemic. The number of countries introducing or expanding investment screening, for example, jumped from three between 1995 and 2005 to 54 between 2020 and 2022. Meanwhile, a wave of industrial policies has increased trade barriers in an attempt to induce companies to reshore their supply chains. At the same time, the understanding of what matters to national security has also expanded, as countries seek to advance or protect everything from software and microchips to pharmaceuticals and foodstuffs.

Many of the complications of this new era are rooted in the difference between the way the public and private sectors view time horizons. Policymakers set bright lines with immediate operational implications—for example, suddenly forbidding companies from exporting or importing certain goods from certain countries. But companies need to make long-term investment decisions. Should a company set up another plant in China if there is market demand and doing so is currently allowed by law? Should a pharmaceutical company set up advanced R & D centers in mainland China or purchase a Chinese biotech firm, given the long-run trajectory of relations between Beijing and the West? Should a consumer electronics firm purchase Chinese-made chips if they are the most cost-efficient option? Answering these questions requires executives to forecast the outcomes of highly volatile political debates and policymaking choices over which they have little control. And yet whatever decisions they make have a significant effect on whether, for example, the United States can effectively “de-risk” its economic relationship with China.

The example of semiconductors is instructive. Washington is seeking to reshore semiconductor manufacturing, but the success of its flagship industrial policy, the CHIPS Act, depends only in part on how the Commerce Department distributes the legislation’s $39 billion in subsidies over the next five years. A much more important factor is whether the Taiwanese chip manufacturer TSMC will risk setting up facilities in the United States despite high costs and a relative scarcity of human capital, and whether Apple decides to buy slightly more expensive chips made by U.S. fabricators instead of less expensive ones produced in Asia. And the CHIPS Act is only one input in those decisions.

3. Get Smart: Chasing Nvidia? Don’t succumb to FOMO – Chin Hui Leong

Are you feeling left out because you missed Nvidia’s (NASDAQ: NVDA) massive stock rise?

Well, we have good news and bad news.

Let’s start with the bad news: that tightening in your chest you are feeling right now is the fear of missing out — better known by its initials, “FOMO”.

And ladies and gentlemen, FOMO is real.

It’s that sneaky emotion which spurs you to buy a stock based on a feeling rather than proper research…

…But hang on, amid the hype — there’s good news too.

If you recognise that you are feeling FOMO, then congratulations — you have just taken the first step in recognising what you have to deal with: your runaway emotions.

The next step is to keep your emotions in check.

On the other side of FOMO is its cousin FOJI — or the fear of joining in.

Like FOMO, FOJI is also a strong emotion.

That’s especially true for some investors who are bearing scars from 2022 when US stocks took a beating amid a punishing bear market.

These scars can emit another fear — FOJI — which is paralysing for investors.

The fear of looking stupid if you buy today only to watch the stock fall the very next day…

…Whether it is FOMO or FOJI, you won’t invest well if feelings dictate your actions.

Recognising the presence of both emotions is key…

…Beyond FOMO and FOJI, there’s JOMO or the joy of missing out.

Don’t feel down if you decide to give Nvidia a pass.

As Peter Lynch once said — you can’t kiss all the frogs to find out which will turn into a prince.

Unconvinced?

In Lynch’s book “One Up on Wall Street”, he wrote down the names of 65 stocks that returned at least 10 times their original price (he calls them 10-baggers).

Except that the fund that he ran owned NONE of them.

Before you start rolling your eyes, consider this point: Peter Lynch achieved a stunning 29% per year in annualized returns over 13 years, outpacing the benchmark S&P 500 index (INDEXSP: .INX) by more than two times…
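To get a feel for what that annualized figure compounds to, here is a minimal Python sketch. The 29% rate and 13-year span are from the excerpt; the 14.5% comparison rate is purely a hypothetical stand-in for a benchmark compounding at half that pace.

```python
# Compounding sketch: the 29% p.a. and 13 years are from the excerpt above;
# the 14.5% comparison rate is a hypothetical benchmark at half that pace.
annualized, years = 0.29, 13

lynch_growth = (1 + annualized) ** years
benchmark_growth = (1 + annualized / 2) ** years

print(f"$1 at 29% p.a. for {years} years   -> ${lynch_growth:,.2f}")      # ~$27.4
print(f"$1 at 14.5% p.a. for {years} years -> ${benchmark_growth:,.2f}")  # ~$5.8
```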

…By sharing the list of missed winners, Lynch had a salient point to make: you do not need to be in every 10-bagger to deliver enviable returns.

4. How the richest woman in the world—mocked as a ‘miser’ in the press—helped bail out New York City during the panic of 1907 – Will Daniel

Hetty Green is remembered as the “world’s greatest miser” and the “Witch of Wall Street,” but these days, Green would likely be seen as an eccentric investing icon. After all, while she became famous for her frugal nature and gruff exterior, Green pioneered value investing strategies that have made billionaires out of many of today’s leading investors. And when the chips were down, when people really needed help, the whaling heiress turned independent investor, business tycoon, and world’s wealthiest woman often used her fortune to save the day…

…Over a three-week period after the panic began on Oct. 22, 1907, the New York Stock Exchange plummeted nearly 50% from its 1906 peak. And a year later, in 1908, Gross National Product (GNP), a measure akin to today’s Gross Domestic Product (GDP), cratered 12%. The problems for the banking system were so severe during the Knickerbocker crisis that they spurred the establishment of the Federal Reserve System…

…As the situation deteriorated, John Pierpont Morgan, the American financier who founded what is now JPMorgan Chase, was eventually forced to call together a group of Wall Street’s best and brightest at the Morgan Library to help decide how to prop up the ailing economy and stock market. Hetty Green was the only woman who was invited to attend that meeting during the height of the panic…

…“I saw this situation coming,” she said, noting that there were undeniable signs of stress. “Some of the solidest men of the Street came to me and wanted to unload all sorts of things, from palatial residences to automobiles.”

Green said that she then gave The New York Central Railroad company a “big loan” after they came knocking, and that made her “sit up and do some thinking.” She decided to begin gathering as much cash as possible, understanding that a panic could be on the way…

…Green described how men came to New York from all over the country to ask for loans during the panic of 1907. But despite being labeled a “miser” throughout her life, she didn’t take advantage of the situation.

“Those to whom I loaned money got it at 6%. I might just as easily have secured 40%,” she explained…

…Usury, or charging excessive interest for a loan, was against Green’s moral code, which was born of her Quaker roots…

…Green would go on to lend the government of New York City $1.1 million at the peak of the 1907 panic, which is equivalent to roughly $33 million in today’s dollars…
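As a rough check on that inflation adjustment, here is a minimal sketch using only the excerpt’s figures; the 1907-to-2024 span is our own assumption for illustration.

```python
# Implied average inflation if $1.1m in 1907 is roughly $33m today.
loan_1907, loan_today = 1.1e6, 33e6
years = 2024 - 1907  # assumed span for illustration

multiplier = loan_today / loan_1907
implied_rate = multiplier ** (1 / years) - 1
print(f"{multiplier:.0f}x over {years} years -> ~{implied_rate:.1%} a year")  # ~30x -> ~2.9%
```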

…“On more than one occasion, when New York was running low on money, she would lend money to the city,” explained Charles Slack, the author of Green’s biography, Hetty: The Genius and Madness of America’s First Female Tycoon. “And she always did so at reasonable rates. She didn’t gouge or hold the city over a barrel.”

5. Transcript for Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416 – Lex Fridman and Yann Lecun

Lex Fridman (00:50:40) I would love to sort of linger on your skepticism around autoregressive LLMs. So one way I would like to test that skepticism is everything you say makes a lot of sense, but if I apply everything you said today and in general to I don’t know, 10 years ago, maybe a little bit less, no, let’s say three years ago, I wouldn’t be able to predict the success of LLMs. So does it make sense to you that autoregressive LLMs are able to be so damn good?

Yann LeCun (00:51:20) Yes.

Lex Fridman (00:51:21) Can you explain your intuition? Because if I were to take your wisdom and intuition at face value, I would say there’s no way autoregressive LLMs, one token at a time, would be able to do the kind of things they’re doing.

Yann LeCun (00:51:36) No, there’s one thing that autoregressive LLMs, or LLMs in general, not just the autoregressive ones but including the BERT-style bidirectional ones, are exploiting, and it’s self-supervised learning, and I’ve been a very, very strong advocate of self-supervised learning for many years. So those things are an incredibly impressive demonstration that self-supervised learning actually works. The idea didn’t start with BERT, but BERT was a really good demonstration of it.

(00:52:09) So the idea is that you take a piece of text, you corrupt it, and then you train some gigantic neural net to reconstruct the parts that are missing. That has produced an enormous amount of benefits. It allowed us to create systems that understand language, systems that can translate hundreds of languages in any direction, systems that are multilingual, so it’s a single system that can be trained to understand hundreds of languages and translate in any direction, and produce summaries and then answer questions and produce text.
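As a rough illustration of the corrupt-and-reconstruct objective LeCun describes (a toy sketch, not any lab’s actual training code), the data-preparation step looks something like this:

```python
import random

# Toy denoising setup: corrupt a piece of text by masking some tokens; a model
# would then be trained to reconstruct the missing parts. Only the corruption
# step is shown here -- the neural net itself is omitted.
def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            corrupted.append(mask_token)
            targets[i] = tok  # the reconstruction targets
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "the cat sat on the mat".split()
corrupted, targets = mask_tokens(tokens)
print(corrupted)  # e.g. ['the', '[MASK]', 'sat', 'on', 'the', 'mat']
print(targets)    # e.g. {1: 'cat'}
```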

(00:52:51) And then there’s a special case of it, which is the autoregressive trick, where you constrain the system to not elaborate a representation of the text by looking at the entire text, but only to predict a word from the words that come before. And you do this by constraining the architecture of the network, and that’s how you can build an autoregressive LLM.
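The autoregressive constraint he contrasts this with can be pictured as a causal mask: each position may only look at the positions before it and is trained to predict the next word. A minimal sketch (illustrative only, not any particular model’s code):

```python
import numpy as np

# Causal (autoregressive) mask: position i may only "see" positions <= i.
seq_len = 5
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=int))
print(causal_mask)

# Next-word prediction pairs: predict token t+1 from tokens 0..t.
tokens = "the cat sat on the mat".split()
pairs = [(tokens[: t + 1], tokens[t + 1]) for t in range(len(tokens) - 1)]
print(pairs[0])  # (['the'], 'cat')
```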

(00:53:15) So there was a surprise many years ago with what’s called decoder-only LLMs. Systems of this type are just trying to produce words from the previous ones, and the surprise is that when you scale them up, when you train them on lots of data and make them really big, they tend to really understand more about language. That was a surprise, and that surprise occurred quite a while back, with work from Google, Meta, OpenAI, et cetera, going back to the GPT kind of work, generative pre-trained transformers.

Lex Fridman (00:53:56) You mean like GPT-2? There’s a certain place where you start to realize scaling might actually keep giving us an emergent benefit.

Yann LeCun (00:54:06) Yeah, I mean there was work from various places, but if you want to place it in the GPT timeline, that would be around GPT-2, yeah.

Lex Fridman (00:54:19) Well, because you said it so charismatically and you said so many words, but self-supervised learning, yes. But again, the same intuition you’re applying to saying that autoregressive LLMs cannot have a deep understanding of the world. If we just apply that same intuition, does it make sense to you that they’re able to form enough of a representation of the world to be damn convincing, essentially passing the original Turing test with flying colors?

Yann LeCun (00:54:50) Well, we’re fooled by their fluency, right? We just assume that if a system is fluent in manipulating language, then it has all the characteristics of human intelligence, but that impression is false. We’re really fooled by it.

Lex Fridman (00:55:06) What do you think Alan Turing would say, without understanding anything, just hanging out with it?

Yann LeCun (00:55:11) Alan Turing would decide that the Turing test is a really bad test, okay? The AI community decided many years ago that the Turing test was a really bad test of intelligence.

Lex Fridman (00:55:22) What would Hans Moravec say about the large language models?

Yann LeCun (00:55:26) Hans Moravec would say that the Moravec paradox still applies. Okay, we can pass-

Lex Fridman (00:55:32) You don’t think he would be really impressed?

Yann LeCun (00:55:34) No, of course everybody would be impressed. But it’s not a question of being impressed or not, it’s the question of knowing what the limit of those systems can do. Again, they are impressive. They can do a lot of useful things. There’s a whole industry that is being built around them. They’re going to make progress, but there is a lot of things they cannot do, and we have to realize what they cannot do and then figure out how we get there. And I’m seeing this from basically 10 years of research on the idea of self-supervised learning, actually that’s going back more than 10 years, but the idea of self-supervised learning. So basically capturing the internal structure of a set of inputs without training the system for any particular task, learning representations.

(00:56:26) The conference I co-founded 14 years ago is called International Conference on Learning Representations. That’s the entire issue that deep learning is dealing with, and it’s been my obsession for almost 40 years now. So learning representations is really the thing. For the longest time, we could only do this with supervised learning, and then we started working on what we used to call unsupervised learning, and revived the idea of unsupervised learning in the early 2000s with your [inaudible 00:56:58] and Geoff Hinton. Then discovered that supervised learning actually works pretty well if you can collect enough data. And so the whole idea of unsupervised, self-supervised learning kind of took a backseat for a bit, and then I tried to revive it in a big way starting in 2014, basically when we started FAIR and really pushing for finding new methods to do self-supervised learning both for text and for images and for video and audio.

(00:57:29) And some of that work has been incredibly successful. I mean, the reason why we have multilingual translation systems, things to do content moderation on Meta, for example on Facebook, that are multilingual, that understand whether a piece of text is hate speech or not or something, is due to that progress using self-supervised learning for NLP, combining this with transformer architectures and blah, blah, blah.

(00:57:53) But that’s the big success of self-supervised learning. We had similar success in speech recognition, a system called wav2vec, which is also a joint embedding architecture, by the way, trained with contrastive learning. And that system also can produce speech recognition systems that are multilingual with mostly unlabeled data and only need a few minutes of labeled data to actually do speech recognition, that’s amazing. We have systems now based on those combination of ideas that can do real time translation of hundreds of languages into each other, speech to speech.

Lex Fridman (00:58:28) Speech to speech, even including, which is fascinating, languages that don’t have written forms.

Yann LeCun (00:58:34) That’s right.

Lex Fridman (00:58:34) Just spoken only.

Yann LeCun (00:58:35) That’s right. We don’t go through text, it goes directly from speech to speech using an internal representation of speech units that are discrete, but it’s called Textless NLP. We used to call it this way. But yeah, so I mean incredible success there. And then for 10 years, we tried to apply this idea to learning representations of images by training a system to predict videos, learning intuitive physics by training a system to predict what’s going to happen in the video.

(00:59:02) And tried and tried and failed and failed, with generative models, with models that predict pixels. We could not get them to learn good representations of images. We could not get them to learn good representations of videos. And we tried many times, we published lots of papers on it, where they kind of sort of work, but not really great. They started working when we abandoned this idea of predicting every pixel and basically just did the joint embedding and predicting in representation space; that works. So there’s ample evidence that we’re not going to be able to learn good representations of the real world using generative models. So I’m telling people, everybody’s talking about generative AI. If you’re really interested in human level AI, abandon the idea of generative AI…
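A very rough sketch of the contrast he draws, predicting in representation space instead of reconstructing pixels; this is an illustrative toy under our own assumptions (made-up encoders and dimensions), not Meta’s actual JEPA code:

```python
import torch
import torch.nn as nn

# Toy joint-embedding predictive setup: encode a context view and a target
# view, predict the target's representation from the context's representation,
# and compute the loss in representation space rather than in pixel space.
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))
predictor = nn.Linear(64, 64)

x_context = torch.randn(8, 1, 28, 28)  # e.g. a masked or past view
x_target = torch.randn(8, 1, 28, 28)   # the full or future view

s_context = context_encoder(x_context)
with torch.no_grad():                   # target representation is held fixed here
    s_target = target_encoder(x_target)

loss = nn.functional.mse_loss(predictor(s_context), s_target)
loss.backward()
print(loss.item())
```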

…Yann LeCun (01:35:29) I actually made that comment on just about every social network I can, and I’ve made that point multiple times in various forums. Here’s my point of view on this, people can complain that AI systems are biased and they generally are biased by the distribution of the training data that they’ve been trained on that reflects biases in society, and that is potentially offensive to some people or potentially not. And some techniques to de-bias then become offensive to some people because of historical incorrectness and things like that.

(01:36:23) And so you can ask two questions, the first question is, is it possible to produce an AI system that is not biased? And the answer is, absolutely not. And it’s not because of technological challenges, although there are technological challenges to that, it’s because bias is in the eye of the beholder. Different people may have different ideas about what constitutes bias for a lot of things, there are facts that are indisputable, but there are a lot of opinions or things that can be expressed in different ways. And so you cannot have an unbiased system, that’s just an impossibility.

(01:37:08) And so what’s the answer to this? And the answer is the same answer that we found in liberal democracy about the press: the press needs to be free and diverse. We have free speech for a good reason: it’s because we don’t want all of our information to come from a unique source, because that’s opposite to the whole idea of democracy and progressive ideas and even science. In science, people have to argue for different opinions and science makes progress when people disagree and they come up with an answer and consensus forms, and it’s true in all democracies around the world.

(01:37:58) There is a future which is already happening where every single one of our interactions with the digital world will be mediated by AI systems, AI assistants. We’re going to have smart glasses, you can already buy them from Meta, the Ray-Ban Meta, where you can talk to them and they are connected with an LLM and you can get answers on any question you have. Or you can be looking at a monument and there is a camera in the glasses, you can ask it like, what can you tell me about this building or this monument? You can be looking at a menu in a foreign language, and it will translate it for you, or we can do real time translation if we speak different languages. So a lot of our interactions with the digital world are going to be mediated by those systems in the near future.

(01:38:53) Increasingly, the search engines that we’re going to use are not going to be search engines, they’re going to be dialogue systems that we just ask a question and it will answer and then perhaps point you to the appropriate reference for it. But here is the thing, we cannot afford those systems to come from a handful of companies on the west coast of the US because those systems will constitute the repository of all human knowledge, and we cannot have that be controlled by a small number of people. It has to be diverse for the same reason the press has to be diverse, so how do we get a diverse set of AI assistants? It’s very expensive and difficult to train a base model, a base LLM at the moment, in the future it might be something different, but at the moment, that’s an LLM. So only a few companies can do this properly.

(01:39:50) And if some of those top systems are open source, anybody can use them, anybody can fine tune them. If we put in place some systems that allows any group of people, whether they are individual citizens, groups of citizens, government organizations, NGOs, companies, whatever, to take those open source AI systems and fine tune them for their own purpose on their own data, then we’re going to have a very large diversity of different AI systems that are specialized for all of those things.

(01:40:35) I tell you, I talked to the French government quite a bit, and the French government will not accept that the digital diet of all their citizens be controlled by three companies on the west coast of the US. That’s just not acceptable, it’s a danger to democracy regardless of how well-intentioned those companies are, and it’s also a danger to local culture, to values, to language. I was talking with the founder of Infosys in India, he’s funding a project to fine tune Llama 2, the open source model produced by Meta, so that Llama 2 speaks all 22 official languages in India, it is very important for people in India. I was talking to a former colleague of mine, Moustapha Cisse, who used to be a scientist at FAIR and then moved back to Africa, created a research lab for Google in Africa and now has a new startup Co-Kera.

(01:41:37) And what he’s trying to do is basically have LLMs that speak the local languages in Senegal so that people can have access to medical information, because they don’t have access to doctors; there’s a very small number of doctors per capita in Senegal. You can’t have any of this unless you have open source platforms, so with open source platforms, you can have AI systems that are not only diverse in terms of political opinions or things of that-

Yann LeCun (01:42:00) … AI systems that are not only diverse in terms of political opinions or things of that type, but in terms of language, culture, value systems, political opinions, technical abilities in various domains, and you can have an industry, an ecosystem of companies that fine tune those open source systems for vertical applications in industry. I don’t know, a publisher has thousands of books and they want to build a system that allows a customer to just ask a question about the content of any of their books, you need to train on their proprietary data. You have a company, we have one within Meta, it’s called Metamate, and it’s basically an LLM that can answer any question about internal stuff about the company, very useful.

(01:42:53) A lot of companies want this. A lot of companies want this not just for their employees, but also for their customers, to take care of their customers. So the only way you’re going to have an AI industry, the only way you’re going to have AI systems that are not uniquely biased is if you have open source platforms on top of which any group can build specialized systems. So the inevitable direction of history is that the vast majority of AI systems will be built on top of open source platforms…

…Lex Fridman (02:04:21) You often say that AGI is not coming soon, meaning not this year, not the next few years, potentially farther away. What’s your basic intuition behind that?

Yann LeCun (02:04:35) So first of all, it’s not going to be an event. The idea somehow, which is popularized by science fiction and Hollywood, that somehow somebody is going to discover the secret to AGI or human-level AI or AMI, whatever you want to call it, and then turn on a machine and then we have AGI, that’s just not going to happen. It’s not going to be an event. It’s going to be gradual progress. Are we going to have systems that can learn from video how the world works and learn good representations? Yeah. Before we get them to the scale and performance that we observe in humans, it’s going to take quite a while. It’s not going to happen in one day. Are we going to get systems that can have large amounts of associated memory so they can remember stuff? Yeah, but same, it’s not going to happen tomorrow. There are some basic techniques that need to be developed. We have a lot of them, but to get this to work together with a full system is another story.

(02:05:37) Are we going to have systems that can reason and plan, perhaps along the lines of objective-driven AI architectures that I described before? Yeah, but before we get this to work properly, it’s going to take a while. Before we get all those things to work together, and then on top of this, have systems that can learn hierarchical planning, hierarchical representations, systems that can be configured for a lot of different situations at hand, the way the human brain can, all of this is going to take at least a decade and probably much more, because there are a lot of problems that we’re not seeing right now that we have not encountered, so we don’t know if there is an easy solution within this framework. So it’s not just around the corner. I’ve been hearing people for the last 12, 15 years claiming that AGI is just around the corner and being systematically wrong. I knew they were wrong when they were saying it. I called their bullshit…

…Lex Fridman (02:08:48) So you push back against what are called AI doomers a lot. Can you explain their perspective and why you think they’re wrong?

Yann LeCun (02:08:59) Okay, so AI doomers imagine all kinds of catastrophe scenarios of how AI could escape our control and basically kill us all, and that relies on a whole bunch of assumptions that are mostly false. So the first assumption is that the emergence of superintelligence is going to be an event, that at some point we’re going to figure out the secret and we’ll turn on a machine that is superintelligent, and because we’d never done it before, it’s going to take over the world and kill us all. That is false. It’s not going to be an event. We’re going to have systems that are as smart as a cat, have all the characteristics of human-level intelligence, but their level of intelligence would be like a cat or a parrot maybe or something. Then we’re going to work our way up to make those things more intelligent. As we make them more intelligent, we’re also going to put some guardrails in them and learn how to put some guardrails so they behave properly.

(02:10:03) It’s not going to be one effort; it’s going to be lots of different people doing this, and some of them are going to succeed at making intelligent systems that are controllable and safe and have the right guardrails. If some others go rogue, then we can use the good ones to go against the rogue ones. So it’s going to be my smart AI police against your rogue AI. So it’s not going to be like we’re going to be exposed to a single rogue AI that’s going to kill us all. That’s just not happening. Now, there is another fallacy, which is the fact that because the system is intelligent, it necessarily wants to take over. There are several arguments that make people scared of this, which I think are completely false as well.

(02:10:48) So one of them is that, in nature, it seems to be that the more intelligent species end up dominating the others, and even extinguishing the others, sometimes by design, sometimes just by mistake. So there is a line of thinking by which you say, “Well, if AI systems are more intelligent than us, surely they’re going to eliminate us, if not by design, simply because they don’t care about us,” and that’s just preposterous for a number of reasons. First reason is they’re not going to be a species. They’re not going to be a species that competes with us. They’re not going to have the desire to dominate, because the desire to dominate is something that has to be hardwired into an intelligent system. It is hardwired in humans. It is hardwired in baboons, in chimpanzees, in wolves, not in orangutans. This desire to dominate, or submit, or attain status in other ways, is specific to social species. Non-social species like orangutans don’t have it, and they are as smart as we are, almost, right?

Lex Fridman (02:12:09) To you, there’s not significant incentive for humans to encode that into the AI systems, and to the degree they do, there’ll be other AIs that punish them for it, out-compete them over it.

Yann LeCun (02:12:23) Well, there’s all kinds of incentive to make AI systems submissive to humans.

Lex Fridman (02:12:26) Right.

Yann LeCun (02:12:27) Right? This is the way we’re going to build them. So then people say, “Oh, but look at LLMs. LLMs are not controllable,” and they’re right. LLMs are not controllable. But objective-driven AI, so systems that derive their answers by optimization of an objective, means they have to optimize this objective, and that objective can include guardrails. One guardrail is, obey humans. Another guardrail is, don’t obey humans if it’s hurting other humans, within limits.
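One toy way to picture “optimizing an objective that includes guardrails” (purely illustrative; the cost functions below are made up, not LeCun’s proposed architecture):

```python
import numpy as np

# Pick an action by minimizing a task cost plus a guardrail penalty,
# rather than by sampling the next token.
actions = np.linspace(-5, 5, 10001)
task_cost = (actions - 3.0) ** 2                             # the task prefers actions near 3
guardrail = 100.0 * np.clip(actions - 2.0, 0.0, None) ** 2   # actions above 2 are penalized
best_action = actions[np.argmin(task_cost + guardrail)]
print(best_action)  # ~2.0: the best action that still respects the guardrail
```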

Lex Fridman (02:12:57) Right. I’ve heard that before somewhere, I don’t remember

Yann LeCun (02:12:59) Yes, maybe in a book.

Lex Fridman (02:13:01) Yeah, but speaking of that book, could there be unintended consequences also from all of this?

Yann LeCun (02:13:09) No, of course. So this is not a simple problem. Designing those guardrails so that the system behaves properly is not going to be a simple issue for which there is a silver bullet, for which you have a mathematical proof that the system can be safe. It’s going to be a very progressive, iterative design system where we put those guardrails in such a way that the system behaves properly. Sometimes they’re going to do something that was unexpected because the guardrail wasn’t right, and we’d correct them so that they do it right. The idea somehow that we can’t get it slightly wrong, because if we get it slightly wrong we’ll die, is ridiculous. We are just going to go progressively. It is just going to be, the analogy I’ve used many times is turbojet design. How did we figure out how to make turbojets so unbelievably reliable?

(02:14:07) Those are incredibly complex pieces of hardware that run at really high temperatures for 20 hours at a time sometimes, and we can fly halfway around the world on a two-engine jetliner at near the speed of sound. Like how incredible is this? It’s just unbelievable. Did we do this because we invented a general principle of how to make turbojets safe? No, it took decades to fine tune the design of those systems so that they were safe. Is there a separate group within General Electric or Snecma or whatever that is specialized in turbojet safety? No. The design is all about safety, because a better turbojet is also a safer turbojet, so a more reliable one. It’s the same for AI. Do you need specific provisions to make AI safe? No, you need to make better AI systems, and they will be safe because they are designed to be more useful and more controllable…

…Lex Fridman (02:28:45) Well, it’ll be at the very least, absurdly comedic. Okay. So since we talked about the physical reality, I’d love to ask your vision of the future with robots in this physical reality. So many of the kinds of intelligence that you’ve been speaking about would empower robots to be more effective collaborators with us humans. So since Tesla’s Optimus team has been showing us some progress on humanoid robots, I think it really reinvigorated the whole industry that I think Boston Dynamics has been leading for a very, very long time. So now there’s all kinds of companies: Figure AI, obviously Boston Dynamics.

Yann LeCun (02:29:30) Unitree.

Lex Fridman (02:29:30) Unitree, but there’s a lot of them.

Yann LeCun (02:29:33) There’s a few of them.

Lex Fridman (02:29:33) It’s great. It’s great. I love it. So do you think there’ll be millions of humanoid robots walking around soon?

Yann LeCun (02:29:44) Not soon, but it’s going to happen. The next decade I think is going to be really interesting in robots. The emergence of the robotics industry has been in waiting for 10, 20 years without really emerging, other than for pre-programmed behavior and stuff like that. And the main issue is, again, the Moravec paradox: how do we get those systems to understand how the world works and plan actions? And so we can do it for really specialized tasks. And the way Boston Dynamics goes about it is basically with a lot of handcrafted dynamical models and careful planning in advance, which is very classical robotics with a lot of innovation, a little bit of perception, but it’s still not, they can’t build a domestic robot.

(02:30:41) We’re still some distance away from completely autonomous level five driving, and we’re certainly very far away from having level five autonomous driving by a system that can train itself by driving 20 hours like any 17-year-old. So until we have, again, world models, systems that can train themselves to understand how the world works, we’re not going to have significant progress in robotics. So a lot of the people working on robotic hardware at the moment are betting or banking on the fact that AI is going to make sufficient progress towards that…

…Yann LeCun (02:38:29) I love that question. We can make humanity smarter with AI. AI basically will amplify human intelligence. It’s as if every one of us will have a staff of smart AI assistants. They might be smarter than us. They’ll do our bidding, perhaps execute a task in ways that are much better than we could do ourselves, because they’d be smarter than us. And so it’s like everyone would be the boss of a staff of super smart virtual people. So we shouldn’t feel threatened by this any more than we should feel threatened by being the manager of a group of people, some of whom are more intelligent than us. I certainly have a lot of experience with this, of having people working with me who are smarter than me.

(02:39:35) That’s actually a wonderful thing. So having machines that are smarter than us, that assist us in all of our tasks, our daily lives, whether it’s professional or personal, I think would be an absolutely wonderful thing. Because intelligence is the commodity that is most in demand. That’s really what I mean. All the mistakes that humanity makes are because of lack of intelligence really, or lack of knowledge, which is related. So making people smarter can only be better. For the same reason that public education is a good thing and books are a good thing, and the internet is also a good thing, intrinsically and even social networks are a good thing if you run them properly.

(02:40:21) It’s difficult, but you can. Because it helps the communication of information and knowledge and the transmission of knowledge. So AI is going to make humanity smarter. And the analogy I’ve been using is that perhaps an equivalent event in the history of humanity to what might be provided by the generalization of AI assistants is the invention of the printing press. It made everybody smarter, the fact that people could have access to books. Books were a lot cheaper than they were before, and so a lot more people had an incentive to learn to read, which wasn’t the case before.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Microsoft, Meta Platforms, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 24 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 24 March 2024:

1. The future of ‘communist capitalism’ in China – Martin Wolf

What is the economic future of China? This question raises many specific issues, notably China’s persistent macroeconomic imbalances, the threat of population decline and worsening relations with important parts of the outside world, above all, an increasingly hostile US. But underneath all of these lies a deeper one: is “communist capitalism”, that seemingly self-contradicting invention of Deng Xiaoping, inexorably fading away under Xi Jinping? Will China’s regime ossify and, in the end, collapse, as the Soviet Union did?…

…Much light on this issue is shed by China’s World View, a recently published book by David Daokui Li, a distinguished Harvard-trained professor of economics, who teaches at Tsinghua University. People interested in China, be they hawks or doves, should read Li’s valuable book carefully.

Perhaps its most startling observation is that “from 980 until 1840, the beginning of China’s modern history”, income per head declined. Ancient China was in a Malthusian trap. This picture is even worse than the one shown in the work of the late Angus Maddison. Even after 1840, this grim reality did not get much brighter. Only after Deng Xiaoping’s “reform and opening up” did it change.

By freeing the private economy, relying on market forces and opening up to the world economy, Deng created the conditions for an extraordinary transformation. Yet, by repressing demands for democracy in Tiananmen Square in 1989, he also reinforced communist party control. He invented a new political economy: today’s China is the result.

Is it also sustainable? Li’s book answers a clear “yes” to this question. In essence, he argues that China’s political system should be viewed not as Soviet, but as a modernised form of the traditional Chinese imperial state. This state is paternal. It is responsible for the people, but not accountable to them, except in one fundamental way: if it loses mass support, it will be overthrown. Its job is to provide stability and prosperity. But, in doing so, it does not try to run everything from the centre. That would be crazy in so vast a country: it decentralises to local levels. The communist party should, he argues, be seen fundamentally as the national party of China.

From this perspective, the Xi regime does not represent an abandonment of the goals of the Deng era, but rather an attempt to remedy some of the problems created by its reliance on “go-go” capitalism, namely, pervasive corruption, soaring inequality and environmental damage…

…When considering the prospects for China, one should not focus mainly on the list of obvious problems — falling property prices, excessive debt, excess savings, an ageing population and western hostility. All these can be dealt with by a country with China’s human resources and growth potential, even if with difficulty.

The bigger issue is whether, in the centralising, cautious and conservative era of Xi, Deng’s move from stagnation to explosive growth is doomed to reverse back into stagnation. If people come to believe that the dynamism of the recent past has been lost for good, then there is a risk of a downward spiral of disappointed hopes. But the force of 1.4bn people wanting a better life is extremely powerful. Will anything be allowed to halt it? The answer, I suspect, is still “no”.

2. Is diversification a blessing or curse? – Chin Hui Leong

DIVERSIFICATION is good or bad for you, depending on whom you ask. Warren Buffett, the legendary investor and businessman, once said that if you know what you’re doing, it makes little sense to diversify.

But Peter Lynch, a star mutual fund manager of the 1980s, had a different approach. He believed that the more stocks you own, the better your chances of finding a winner. Lynch was famous for holding up to 1,400 stocks in his portfolio.

Here’s the surprise: They both achieved remarkable success, despite their opposing positions. What does this mean for you as an investor? Should you diversify, or not?…

…As we delve deeper into diversification, we should not lose sight of its goal to reduce risk. This is where buying businesses from unrelated industries or geographies can go wrong. In fact, investors who diversify into areas where they lack expertise are taking more risk, not less. It makes little sense to do so, says Lynch. How well you know your stocks matters more than how many sectors or regions you spread your money across.

I agree with Lynch. Diversify only if you want to boost your chances of finding more winning stocks in your portfolio.

Here is a point you shouldn’t miss: you should always be looking to learn more about new businesses and industries. As you become more knowledgeable, you can grow your portfolio with more stocks you know well, but without exceeding your limits.

Remaining humble is key. Knowing the limits of your knowledge in any new area is how you keep yourself in check. As author Carl Richards once said, risk is what’s left when you think you’ve thought of everything…

…Here’s a simple rule of thumb to help you. If you’ve been following a new company for a year, invest no more than 1 per cent of your portfolio into the stock. If it’s five years, then up to 5 per cent. You can adjust the percentage to fit your risk appetite.

The point of this strategy is to have a reference point where you can match your risk level with your knowledge level…
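Here is a minimal sketch of that rule of thumb; the cap-per-year function follows the excerpt, while the portfolio size is a made-up example.

```python
# Rule of thumb from the excerpt: cap a position at roughly 1% of the portfolio
# per year you have followed the company (adjust the rate to your risk appetite).
def max_position(portfolio_value, years_followed, pct_per_year=0.01):
    return portfolio_value * min(years_followed * pct_per_year, 1.0)

portfolio = 100_000  # hypothetical portfolio size
print(max_position(portfolio, years_followed=1))  # 1000.0 -> 1% after one year
print(max_position(portfolio, years_followed=5))  # 5000.0 -> 5% after five years
```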

…Finally, investing over time helps to spread your risk over years. Don’t worry about starting small in a stock. A winning stock is only known in hindsight. Here’s the point most people miss: if a stock is destined to be a winner, the stock price rise will happen over years, if not decades…

…Here’s the final conundrum: the mark of a successful portfolio is a concentrated portfolio. How can that be? Let’s say you invested $1,000 each into 10 stocks. Each stock will make up a tenth of this $10,000 portfolio.

After five years, the first one skyrockets, increasing by 10 times and is worth $10,000, while the last one goes to zero. The other eight stocks stay the same at $1,000. Do the math and you’ll end up with $18,000 in total. The big difference is, the winning stock will comprise more than 55 per cent of the five-year-old portfolio…
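The arithmetic in that example, written out (all figures are from the excerpt):

```python
# Ten $1,000 positions: one 10-bagger, one that goes to zero, eight unchanged.
positions = [10_000] + [0] + [1_000] * 8
total = sum(positions)                # 18000
winner_weight = positions[0] / total  # ~0.556
print(total, f"{winner_weight:.1%}")  # 18000 55.6%
```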

…As you diversify to find more winners, the best of them will naturally rise to the top – thereby concentrating your portfolio in the right set of winning stocks. That’s more than any investor can wish for.

3. China has little choice but stimulus – Ethan Wu

The near universal reaction in the west to China’s refreshed 5 per cent gross domestic product growth target: good luck with that…

…The old growth drivers — property, infrastructure and manufacturing — all face major constraints. Property’s structural decline is well known; home prices and sales keep falling. Meanwhile, infrastructure is running into the limit of high debt levels. Chinese officials were dispatched last year to prod local governments to delever. It began with easy cost cuts: withholding wages from civil servants, delaying payments to vendors, slashing city services. But more recently, the deleveraging drive has been hitting infrastructure projects already under way, as Reuters reported in January:

Increasing its efforts to manage $13 trillion in municipal debt, the State Council in recent weeks issued a directive to local governments and state banks to delay or halt construction on projects with less than half the planned investment completed in 12 regions across the country, the sources said…

…Lastly, manufacturing. Since about 2020, the credit that once flowed to the property sector has been redirected to manufacturing, especially in politically favoured sectors such as solar and electric vehicles. The year-over-year growth rate of loans to Chinese industry has risen steadily, though the level is now declining…

…This pivot back to manufacturing is “radical”, says Adam Wolfe of Absolute Strategy Research, and it has generated important victories for China. Most notably, BYD is now the world’s biggest EV maker, and China the biggest auto exporter. But it has also created an enormous oversupply of manufactured goods, which, when combined with limp demand at home, is crushing industrial margins and fuelling deflation…

…China’s manufacturing trade surplus is already huge, perhaps 2 per cent of world GDP. As Gavekal’s Yanmei Xie wrote in the FT last month, western countries sensibly fear China dumping cheap goods into export markets. A cheap renminbi heightens the threat; trade retaliation is widely anticipated. If that is right, export-led growth probably can’t be China’s escape valve.

This glum picture suggests that China may soon be forced into stimulus. Assuming the GDP target is at least somewhat binding, no sector of the Chinese economy stands ready to get growth to 5 per cent. A pick-up in consumption could do it, but we’ve heard no convincing story for why anxious consumers would suddenly become gripped by animal spirits…

…The unclear stimulus outlook has left the bulk of investors nervous, but equity outflows have at least stopped. The stock market has rallied 14 per cent since early February, but only because of ample support from the state. Value trade or value trap?

What keeps us sceptical is the fact that Chinese stocks are not loads cheaper than global stocks. After the rally, the CSI 300 trades at 13x forward earnings, versus 14x for the MSCI all-country world ex-US index. To us the risks in China stocks are much clearer than the reward.

4. Exxon Barges in on Hess Deal – Matt Levine

I, on the other hand, used to be a convertible bond investment banker, so I have somewhat more than the usual familiarity with them. I could tell you, for instance, that it is common in the US for a convertible to be done as a Rule 144A offering, meaning that the bonds are sold to large “qualified institutional buyers” (QIBs) in a private placement and then can’t be resold to retail investors. Doing a 144A deal is generally faster and cheaper than doing a public deal that is registered with the US Securities and Exchange Commission, and retail investors don’t really buy convertibles anyway.

But eventually the institutional buyers of a 144A deal will want to be able to convert their bonds into regular, publicly traded stock, so there needs to be some mechanism for turning “144A” convertibles into “registered” ones. I am old enough that, when I started as a converts banker, the way to do this was to file a registration statement with the SEC, but the modern approach is pretty much that you wait six months or a year and the convertible becomes freely tradeable as a legal matter.

As a practical matter, though, the way this works is that the bonds, when they are originally issued, have a “restrictive legend” on them saying that they can be sold only to institutional buyers, and after a year the company sends a notice to its transfer agent saying “you can take that legend off the bonds now.” And when the bonds have the legend, they can’t be freely traded; once the legend is off, they can be. Here I am pretending, as one does, that “the bonds” are pieces of paper with a legend stamped on them, but of course they are actually entries in an electronic database; what really happens is that the original bonds have a “restricted CUSIP” (the identification number that every security has), telling transfer agents and depositaries and brokers and everyone else that they can only be sold to QIBs, and then after a year the company gets them a new “unrestricted CUSIP” and they trade freely. This is not hard — it’s a phone call or an email, maybe a legal opinion — but the company has to do it…

…So for instance here is the indenture for Avid Bioservices Inc.’s 1.25% exchangeable senior notes due 2026, a convertible bond it issued in 2021. Section 4.06(e) of the indenture, the 94-page contract governing the bonds, says:

If, and for so long as, the restrictive legend on the Notes specified in Section 2.05(c) has not been removed, the Notes are assigned a restricted CUSIP or the Notes are not otherwise freely tradable … as of the 370th day after the last date of original issuance of the Notes, the Company shall pay Additional Interest on the Notes at a rate equal to 0.50% per annum of the principal amount of Notes outstanding until the restrictive legend on the Notes has been removed. …

…Avid forgot, for two years, to take the restrictive legend off of its convertible. This was very understandable: Its obligation to remove the restricted legend was boring and technical and buried in Section 4.06(e) of a bond indenture that surely nobody read. It could only remove the legend a year after it issued the bonds, after everyone had stopped paying attention. And, as Avid points out, it “did not receive any notices and was not otherwise made aware” of this provision in, sure, a contract that it signed, but a very long and boring contract. (And, to be fair, the holders forgot too!) And because it completely forgot about its obligation to remove the legend, Avid also forgot to pay the 0.5% penalty interest rate for two years. And because it forgot to pay the extra interest, it created a non-curable default on the bonds: The holders can demand all of their money back, with interest, immediately, with no chance for Avid to fix the problem by removing the legend and paying the overdue interest…

…This is a bad oopsie by Avid, which probably should have put a reminder in its calendar to unrestrict the CUSIP. But it’s a clever trade by whoever this holder was: The old bonds are far out-of-the-money (that is, they’re not going to convert into stock), and Bloomberg tells me that they were trading in the high 70s as recently as a month ago (the high 80s more recently). If you had noticed Avid’s extremely technical oopsie, you could have bought the bonds at, say, 80 cents on the dollar, sent them a letter saying “we gotcha hahahaha,” and made a quick 20 points, plus interest. The holder owns “at least 25%” of the bonds (the amount required to accelerate), and there are $143.75 million of bonds outstanding; 20 points on 25% of $143.75 million is $7.2 million. Plus interest.
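
To make Levine’s back-of-the-envelope numbers concrete, here is a minimal sketch of the trade arithmetic using only the figures in the excerpt (the 80-cent purchase price is his illustrative assumption, not a quoted market level):

```python
# Back-of-the-envelope maths for the Avid convertible trade described above.
# Figures from the excerpt: $143.75m of bonds outstanding, a holder owning at
# least 25%, bonds bought at an assumed 80 cents on the dollar, a par claim of
# 100 on acceleration, plus the 0.50% additional interest unpaid for ~2 years.

outstanding = 143.75e6          # total principal of the notes
holder_share = 0.25             # minimum stake required to accelerate
purchase_price = 0.80           # assumed cost per dollar of face value
par = 1.00                      # claim on acceleration

face_owned = outstanding * holder_share
capital_gain = face_owned * (par - purchase_price)
extra_interest = face_owned * 0.005 * 2   # 0.50% per annum for roughly two years

print(f"Face value owned:   ${face_owned/1e6:.2f}m")
print(f"Gain to par:        ${capital_gain/1e6:.2f}m")   # ~$7.2m, matching the excerpt
print(f"Unpaid 0.5% coupon: ${extra_interest/1e6:.2f}m")
```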

5. Sora, Groq, and Virtual Reality – Ben Thompson

Groq was founded in 2016 by Jonathan Ross, who created Google’s first Tensor Processing Unit; Ross’s thesis was that chips should take their cue from software-defined networking: instead of specialized hardware for routing data, a software-defined network uses commodity hardware with a software layer to handle the complexity of routing. Indeed, Groq’s paper explaining their technology is entitled “A Software-defined Tensor Streaming Multiprocessor for Large-scale Machine Learning.”

To that end Groq started with the compiler, the software that translates code into machine language that can be understood by chips; the goal was to be able to reduce machine-learning algorithms into a format that could be executed on dramatically simpler processors that could operate at very high speed, without expensive memory calls and prediction misses that make modern processors relatively slow.

The end result is that Groq’s chips are purely deterministic: instead of the high-bandwidth memory (HBM) used for modern GPUs or Dynamic Random Access Memory (DRAM) used in computers, both of which need to be refreshed regularly to function (which introduces latency and uncertainty about the location of data at a specific moment in time), Groq uses SRAM — Static Random Access Memory. SRAM stores data in what is called a bistable latching circuitry; this, unlike the transistor/capacitor architecture undergirding DRAM (and by extension, HBM), stores data in a stable state, which means that Groq always knows exactly where every piece of data is at any particular moment in time. This allows the Groq compiler to, in an ideal situation, pre-define every memory call, enabling extremely rapid computation with a relatively simple architecture.

It turns out that running inference on transformer-based models is an extremely ideal situation, because the computing itself is extremely deterministic. An LLM like GPT-4 processes text through a series of layers which have a predetermined set of operations, which is perfectly suited to Groq’s compiler. Meanwhile, token-based generation is a purely serial operation: every single token generated depends on knowing the previous token; there is zero parallelism for any one specific answer, which means the speed of token calculation is at an absolute premium…
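
A toy loop makes the serial dependence concrete. This is a hedged illustration of autoregressive decoding in general, not Groq’s scheduler or any specific model:

```python
# Toy illustration of why token generation is serial: each step needs the
# previous token before it can start, so latency scales with output length.

def next_token(context):
    """Stand-in for one forward pass of a language model (hypothetical)."""
    return f"tok{len(context)}"

def generate(prompt_tokens, n_new):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        tok = next_token(tokens)   # cannot be parallelised across steps:
        tokens.append(tok)         # step i depends on the output of step i-1
    return tokens

print(generate(["Hello", ","], 5))
```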

…One of the arguments I have made as to why OpenAI CEO Sam Altman may be exploring hardware is that the closer an AI comes to being human, the more grating and ultimately gating are the little inconveniences that get in the way of actually interacting with said AI. It is one thing to have to walk to your desk to use a PC, or even reach into your pocket for a smartphone: you are, at all times, clearly interacting with a device. Having to open an app or wait for text in the context of a human-like AI is far more painful: it breaks the illusion in a much more profound, and ultimately disappointing, way. Groq suggests a path to keeping the illusion intact.

It is striking that Groq is a deterministic system running deterministic software that, in the end, produces probabilistic output. I explained deterministic versus probabilistic computing in ChatGPT Gets a Computer:

Computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for a computer is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing).

I’ve already mentioned Bing Chat and ChatGPT; on March 14 Anthropic released another AI assistant named Claude: while the announcement doesn’t say so explicitly, I assume the name is in honor of the aforementioned Claude Shannon. This is certainly a noble sentiment — Shannon’s contributions to information theory broadly extend far beyond what Dixon laid out above — but it also feels misplaced: while technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations; Sydney was interesting, but what about homework?

The idea behind ChatGPT Gets a Computer is that large language models seem to operate somewhat similarly to the human brain, which is incredible and also imprecise, and just as we need a computer to do exact computations, so does ChatGPT. A regular computer, though, is actually the opposite of Groq: you get deterministic answers from hardware that is, thanks to the design of modern processors and memory, more probabilistic than you might think, running software that assumes the processor will handle endless memory calls and branch prediction.

In the end, though, we are back where we started: a computer would know where the bow and stern are on a ship, while a transformer-based model like Sora made a bad guess. The former calculates reality; the latter a virtual reality.

Imagine, though, Sora running on Groq (which is absolutely doable): could we have generated videos in real-time? Even if we could not, we are certainly much closer than you might have expected. And where, you might ask, would we consume those videos? How about on a head-mounted display like the Apple Vision Pro or Meta Quest? Virtual reality (my new definition) for virtual reality (the old definition).


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 17 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 17 March 2024:

1. The Ultra-Pure, Super-Secret Sand That Makes Your Phone Possible – Vince Beiser

Spruce Pine is not a wealthy place. Its downtown consists of a somnambulant train station across the street from a couple of blocks of two‑story brick buildings, including a long‑closed movie theater and several empty storefronts.

The wooded mountains surrounding it, though, are rich in all kinds of desirable rocks, some valued for their industrial uses, some for their pure prettiness. But it’s the mineral in Glover’s bag—snowy white grains, soft as powdered sugar—that is by far the most important these days. It’s quartz, but not just any quartz. Spruce Pine, it turns out, is the source of the purest natural quartz—a species of pristine sand—ever found on Earth. This ultra‑elite deposit of silicon dioxide particles plays a key role in manufacturing the silicon used to make computer chips. In fact, there’s an excellent chance the chip that makes your laptop or cell phone work was made using sand from this obscure Appalachian backwater. “It’s a billion‑dollar industry here,” Glover says with a hooting laugh. “Can’t tell by driving through here. You’d never know it.”

In the 21st century, sand has become more important than ever, and in more ways than ever. This is the digital age, in which the jobs we work at, the entertainment we divert ourselves with, and the ways we communicate with one another are increasingly defined by the internet and the computers, tablets, and cell phones that connect us to it. None of this would be possible were it not for sand.

Most of the world’s sand grains are composed of quartz, which is a form of silicon dioxide, also known as silica. High‑purity silicon dioxide particles are the essential raw materials from which we make computer chips, fiber‑optic cables, and other high‑tech hardware—the physical components on which the virtual world runs. The quantity of quartz used for these products is minuscule compared to the mountains of it used for concrete or land reclamation. But its impact is immeasurable…

…In the mid‑1950s, thousands of miles from North Carolina, a group of engineers in California began working on an invention that would become the foundation of the computer industry. William Shockley, a pathbreaking engineer at Bell Labs who had helped invent the transistor, had left to set up his own company in Mountain View, California, a sleepy town about an hour south of San Francisco, near where he had grown up. Stanford University was nearby, and General Electric and IBM had facilities in the area, as well as a new company called Hewlett‑Packard. But the area known at the time as the Santa Clara Valley was still mostly filled with apricot, pear, and plum orchards. It would soon become much better known by a new nickname: Silicon Valley.

At the time, the transistor market was heating up fast. Texas Instruments, Motorola, and other companies were all competing to come up with smaller, more efficient transistors to use in, among other products, computers. The first American computer, dubbed ENIAC, was developed by the army during World War II; it was 100 feet long and 10 feet high, and it ran on 18,000 vacuum tubes.

Transistors, which are tiny electronic switches that control the flow of electricity, offered a way to replace those tubes and make these new machines even more powerful while shrinking their tumid footprint. Semiconductors—a small class of elements, including germanium and silicon, which conduct electricity at certain temperatures while blocking it at others—looked like promising materials for making those transistors.

At Shockley’s startup, a flock of young PhDs began each morning by firing up kilns to thousands of degrees and melting down germanium and silicon. Tom Wolfe once described the scene in Esquire magazine: “They wore white lab coats, goggles, and work gloves. When they opened the kiln doors weird streaks of orange and white light went across their faces . . . they lowered a small mechanical column into the goo so that crystals formed on the bottom of the column, and they pulled the crystal out and tried to get a grip on it with tweezers, and put it under microscopes and cut it with diamond cutters, among other things, into minute slices, wafers, chips; there were no names in electronics for these tiny forms.”

Shockley became convinced that silicon was the more promising material and shifted his focus accordingly. “Since he already had the first and most famous semiconductor research and manufacturing company, everyone who had been working with germanium stopped and switched to silicon,” writes Joel Shurkin in his biography of Shockley, Broken Genius. “Indeed, without his decision, we would speak of Germanium Valley.”

Shockley was a genius, but by all accounts he was also a lousy boss. Within a couple of years, several of his most talented engineers had jumped ship to start their own company, which they dubbed Fairchild Semiconductor. One of them was Robert Noyce, a laid‑back but brilliant engineer, only in his mid‑20s but already famous for his expertise with transistors.

The breakthrough came in 1959, when Noyce and his colleagues figured out a way to cram several transistors onto a single fingernail‑sized sliver of high‑purity silicon. At almost the same time, Texas Instruments developed a similar gadget made from germanium. Noyce’s, though, was more efficient, and it soon dominated the market. NASA selected Fairchild’s microchip for use in the space program, and sales soon shot from almost nothing to $130 million a year. In 1968, Noyce left to found his own company. He called it Intel, and it soon dominated the nascent industry of programmable computer chips.

Intel’s first commercial chip, released in 1971, contained 2,250 transistors. Today’s computer chips are often packed with transistors numbering in the billions. Those tiny electronic squares and rectangles are the brains that run our computers, the Internet, and the entire digital world. Google, Amazon, Apple, Microsoft, the computer systems that underpin the work of everything from the Pentagon to your local bank—all of this and much more is based on sand, remade as silicon chips.

Making those chips is a fiendishly complicated process. They require essentially pure silicon. The slightest impurity can throw their tiny systems out of whack.

Finding silicon is easy. It’s one of the most abundant elements on Earth. It shows up practically everywhere bound together with oxygen to form SiO2, aka quartz. The problem is that it never occurs naturally in pure, elemental form. Separating out the silicon takes considerable doing.

Step one is to take high‑purity silica sand, the kind used for glass. (Lump quartz is also sometimes used.) That quartz is then blasted in a powerful electric furnace, creating a chemical reaction that separates out much of the oxygen. That leaves you with what is called silicon metal, which is about 99 percent pure silicon. But that’s not nearly good enough for high‑tech uses. Silicon for solar panels has to be 99.999999 percent pure—six 9s after the decimal. Computer chips are even more demanding. Their silicon needs to be 99.99999999999 percent pure—eleven 9s. “We are talking of one lonely atom of something that is not silicon among billions of silicon companions,” writes geologist Michael Welland in Sand: The Never-Ending Story.
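
A quick back-of-the-envelope conversion of those purity figures into impurity fractions, using only the percentages quoted above:

```python
# Convert the quoted purity percentages into rough impurity fractions.
for label, purity_pct in [("silicon metal", 99.0),
                          ("solar-grade silicon", 99.999999),
                          ("chip-grade silicon", 99.99999999999)]:
    impurity_fraction = 1 - purity_pct / 100
    print(f"{label}: ~{impurity_fraction:.0e} of atoms are not silicon")
```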

Getting there requires treating the silicon metal with a series of complex chemical processes. The first round of these converts the silicon metal into two compounds. One is silicon tetrachloride, which is the primary ingredient used to make the glass cores of optical fibers. The other is trichlorosilane, which is treated further to become polysilicon, an extremely pure form of silicon that will go on to become the key ingredient in solar cells and computer chips.

Each of these steps might be carried out by more than one company, and the price of the material rises sharply at each step. That first‑step, 99 percent pure silicon metal goes for about $1 a pound; polysilicon can cost 10 times as much.

The next step is to melt down the polysilicon. But you can’t just throw this exquisitely refined material in a cook pot. If the molten silicon comes into contact with even the tiniest amount of the wrong substance, it causes a ruinous chemical reaction. You need crucibles made from the one substance that has both the strength to withstand the heat required to melt polysilicon, and a molecular composition that won’t infect it. That substance is pure quartz.

THIS IS WHERE Spruce Pine quartz comes in. It’s the world’s primary source of the raw material needed to make the fused‑quartz crucibles in which computer‑chip‑grade polysilicon is melted. A fire in 2008 at one of the main quartz facilities in Spruce Pine for a time all but shut off the supply of high‑purity quartz to the world market, sending shivers through the industry.

Today one company dominates production of Spruce Pine quartz. Unimin, an outfit founded in 1970, has gradually bought up Spruce Pine area mines and bought out competitors, until today the company’s North Carolina quartz operations supply most of the world’s high‑ and ultra‑high‑purity quartz. (Unimin itself is now a division of a Belgian mining conglomerate, Sibelco.)

In recent years, another company, the imaginatively titled Quartz Corp, has managed to grab a small share of the Spruce Pine market. There are a very few other places around the world producing high‑purity quartz, and many other places where companies are looking hard for more. But Unimin controls the bulk of the trade.

The quartz for the crucibles, like the silicon they will produce, needs to be almost absolutely pure, purged as thoroughly as possible of other elements. Spruce Pine quartz is highly pure to begin with, and purer still after being put through several rounds of froth flotation. But some of the grains may still have what Glover calls interstitial crystalline contamination—molecules of other minerals attached to the quartz molecules.

That’s frustratingly common. “I’ve evaluated thousands of quartz samples from all over the world,” says John Schlanz, chief minerals processing engineer at the Minerals Research Laboratory in Asheville, about an hour from Spruce Pine. “Near all of them have contaminate locked in the quartz grains that you can’t get out.”

Some Spruce Pine quartz is flawed in this way. Those grains are used for high‑end beach sand and golf course bunkers—most famously the salt‑white traps of Augusta National Golf Club, site of the iconic Masters Tournament. A golf course in the oil‑drunk United Arab Emirates imported 4,000 tons of this sand in 2008 to make sure its sand traps were world‑class, too.

The very best Spruce Pine quartz, however, has an open crystalline structure, which means that hydrofluoric acid can be injected right into the crystal molecules to dissolve any lingering traces of feldspar or iron, taking the purity up another notch. Technicians take it one step further by reacting the quartz with chlorine or hydrochloric acid at high temperatures, then putting it through one or two more trade‑secret steps of physical and chemical processing.

The result is what Unimin markets as Iota quartz, the industry standard of purity. The basic Iota quartz is 99.998 percent pure SiO2. It is used to make things like halogen lamps and photovoltaic cells, but it’s not good enough to make those crucibles in which polysilicon is melted. For that you need Iota 6, or the tip‑top of the line, Iota 8, which clocks in at 99.9992 percent purity—meaning for every one billion molecules of SiO2, there are only 80 molecules of impurities. Iota 8 sells for up to $10,000 a ton. Regular construction sand, at the other end of the sand scale, can be had for a few dollars per ton…

…Unimin sells this ultra‑high‑purity quartz sand to companies like General Electric, which melts it, spins it, and fuses it into what looks like a salad bowl made of milky glass: the crucible. “It’s safe to say the vast majority of those crucibles are made from Spruce Pine quartz,” Schlanz says.

The polysilicon is placed in those quartz crucibles, melted down, and set spinning. Then a silicon seed crystal about the size of a pencil is lowered into it, spinning in the opposite direction. The seed crystal is slowly withdrawn, pulling behind it what is now a single giant silicon crystal. These dark, shiny crystals, weighing about 220 pounds, are called ingots.

The ingots are sliced into thin wafers. Some are sold to solar cell manufacturers. Ingots of the highest purity are polished to mirror smoothness and sold to a chipmaker like Intel. It’s a thriving multi-billion dollar industry in 2012.

The chipmaker imprints patterns of transistors on the wafer using a process called photolithography. Copper is implanted to link those billions of transistors to form integrated circuits. Even a minute particle of dust can ruin the chip’s intricate circuitry, so all of this happens in what’s called a clean room, where purifiers keep the air thousands of times cleaner than a hospital operating room. Technicians dress in an all‑covering white uniform affectionately known as a bunny suit. To ensure the wafers don’t get contaminated during manufacture, many of the tools used to move and manipulate them are, like the crucibles, made from high‑purity quartz.

The wafers are then cut into tiny, unbelievably thin quadrangular chips—computer chips, the brains inside your mobile phone or laptop. The whole process requires hundreds of precise, carefully controlled steps. The chip that results is easily one of the most complicated man‑made objects on Earth, yet made with the most common stuff on Earth: humble sand.

The total amount of high‑purity quartz produced worldwide each year is estimated at 30,000 tons—less than the amount of construction sand produced in the United States every hour. (And even construction sand is in high demand; there’s a thriving black market in the stuff.) Only Unimin knows exactly how much Spruce Pine quartz is produced, because it doesn’t publish any production figures. It is an organization famously big on secrecy. “Spruce Pine used to be mom‑and‑pop operations,” Schlanz says. “When I first worked up there, you could just walk into any of the operations. You could just go across the street and borrow a piece of equipment.”

NOWADAYS UNIMIN WON’T even allow staff of the Minerals Research Laboratory inside the mines or processing facilities. Contractors brought in to do repair work have to sign confidentiality agreements. Whenever possible, vice‑president Richard Zielke recently declared in court papers, the company splits up the work among different contractors so that no individual can learn too much.

Unimin buys equipment and parts from multiple vendors for the same reason. Glover has heard of contractors being blindfolded inside the processing plants until they arrive at the specific area where their jobs are and of an employee who was fired on the spot for bringing someone in without authorization. He says the company doesn’t even allow its employees to socialize with those of their competitors.

It was hard to check out Glover’s stories, because Unimin wouldn’t talk to me. Unlike most big corporations, its website lists no contact for a press spokesperson or public relations representative. Several emails to their general inquiries address went unanswered. When I called the company’s headquarters in Connecticut, the woman who answered the phone seemed mystified by the concept of a journalist wanting to ask questions.

She put me on hold for a few minutes, then came back to tell me the company has no PR department, but that if I faxed (faxed!) her my questions, someone might get back to me. Eventually I got in touch with a Unimin executive who asked me to send her my questions by email. I did so. The response: “Unfortunately, we are not in a position to provide answers at this point in time.”

2. It was never about LLM performance – Justin

The LLM community is obsessed with benchmarking model performance. Mistral released their new “flagship” model this week, and immediately focused the discussion on how it performs on “commonly used benchmarks” relative to other models:

The entire blog post (I’d recommend reading it) is just a read through of how this model performs relative to other models on benchmarks, from math and coding to multilingual capabilities…

…This tendency to fixate on benchmarks is understandable – right now, it’s basically the only semi-objective way to measure how these models stack up against each other. It’s something vendors in other spaces, like data streaming, do too. But it is dangerous because it misses the point of where this whole AI thing is going, and is a textbook product marketing anti-pattern.

In a trend that we’ve seen hundreds of times in developer tooling, the underlying LLM is not going to matter within a few years. Large Language Model performance is already highly commoditized, and will continue to head in that direction. All that will matter is the experience that you build on top of these models, and what that enables for your customers.

Let’s take a look at the ChatGPT interface. Here’s a common prompt I’ve been using for testing, asking the model to summarize the contents of an external link into a tweet thread. Unrelated aside, the responses to this prompt are virtually identical across every major LLM.

Which parts of this interface are the underlying model – GPT-4 in this case – and which are an experience built by OpenAI on top of the underlying model?

The text response, minus any formatting, is what the model generated. But the:

  • Ability of the model to access and scrape content from a web page
  • Context of the prompt, including setting the system as a helpful assistant
  • Formatting the response, like changing the numbers to gray
  • UI for typing the prompt
  • Filepicker for attaching media to the prompt
  • Prompt history
  • Model switcher / picker (this one is meta)
  • Ability to persist and share the model responses

…and more not shown here

are all not GPT-4, they’re features built by OpenAI on top of GPT-4 to create an experience that is helpful and worth paying for. Some of these are harder to build than others – OpenAI’s secret sauce obviously isn’t the little arrow that scrolls down to the bottom of the response. ChatGPT would be nothing without GPT-4 – but the reverse may also be true!
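
To see the line between model and experience, compare the bare API call with what ChatGPT layers on top of it. Here is a rough sketch using OpenAI’s chat completions API; the model name, prompt, and environment setup are illustrative assumptions:

```python
# What you get from the raw model: a single text completion, nothing more.
# Everything else in the list above (web scraping, file pickers, history,
# sharing) is application code the caller has to build themselves.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this article into a tweet thread: ..."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)  # plain text; no rendering, no citations
```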

The retort to this line of reasoning is that these chat interfaces are primarily for non-technical users, while the real money for these model providers comes from developer use cases, building LLMs into user-facing applications. I’ve worked closely with one of the major model compute providers, so this is not foreign to me. But experience matters to developers too!

OpenAI has dedicated significant resources to building a seamless developer experience beyond “docs for the model.” Here’s their playground for prompting GPT models – you can adjust parameters like temperature and penalties, plus change the system prompt to be any other style…

…For a closed source model provider like OpenAI, the difference between what is model and what is experience is academic – you’re paying for both. They are one thing. But where this really matters is in open source. Does the convergence of open source performance to closed source performance really matter if the experience of using that open source is bad?…

…The open source discussion has been too anchored on reaching performance parity with OpenAI models. This is a small piece of the puzzle. For developers looking to build applications with these open source models, and especially the pro-sumer chat use case, users need to consider the holistic experience that model providers offer. Integrating LLMs into your app is almost never going to be the “drop in” experience you see on marketing sites – and my concern is that the “open source is approaching parity with OpenAI!” narrative is not actually true in a meaningful way.

Folks working in AI can look to previous examples of this phenomenon in developer tools for guidance: A couple of years ago, I wrote about how underlying performance of production relational databases is becoming commoditized, and vendors are focusing much more on developer experience. It’s going to happen here too, the question is just when.

3. Aravind Srinivas – Building An Answer Engine – Patrick O’Shaughnessy and Aravind Srinivas

Patrick: [00:07:28] It’s really cool to think about the sequencing to get there. We’ve had search engines. Like you said, it’s a hack to get the answers. You’re building what I think of today as an answer engine. I type something in, it’s just giving the answer directly with great citation and all this other stuff we’ll talk about. And the vision you’re articulating is this question engine can anticipate the things that I want to learn about and give them to me beforehand.

And I’d love to build up towards that. So maybe starting with the answer engine, explain to us how it works. Maybe you could do this via the time line of how you’ve built the product or something. But what are the components? What is happening behind the scenes when I type something into Perplexity either a question or a search query or whatever? Walk us through in some detail the actual goings on behind the scenes in terms of how the product works itself?

Aravind: [00:08:13] Yes. So when you type in a question into Perplexity, the first thing that happens is, it first reformulates the question, it tries to understand the question better, expands the question in terms of adding more suffixes or prefixes to it, to make it more well formatted. It speaks to the question engine part. And then after that, it goes and pulls so many links from the web that are relevant to this reformulated question.

There are so many paragraphs in each of those links. It takes only the relevant paragraphs from each of those links. And then an AI model, we typically call it large language model. It’s basically a model that’s been trained to predict the next word on the Internet and fine-tuned for being good at summarization and chats.

That AI model looks at all these chunks of knowledge, the bits that surface from important or relevant links, and takes only those parts that are relevant to answering your query, and gives you a very concise four or five sentence answer, but also with references. Every sentence has a reference to which webpage or which chunk of knowledge it took from which webpage, and puts it at the top in terms of sources.

That gets you to a nicely formatted rendered answer, sometimes in markdown bullets, or sometimes just generic paragraphs, sometimes it has images in it. But a great answer with references or citation so that if you want to dig deeper, you can go and visit the link. If you don’t want and just read the answer and ask a follow-up, you can engage in a conversation, both modes of usage are encouraged and allowed. So this is what happens on Perplexity today.
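
The flow Aravind describes is essentially retrieval-augmented generation. A minimal sketch of the steps, with every function name hypothetical (this is not Perplexity’s actual code):

```python
# Hypothetical outline of the answer-engine flow described above:
# reformulate -> retrieve links -> keep relevant chunks -> summarise with citations.
# `search`, `rank_chunks` and `llm` are injected stand-ins; rank_chunks is assumed
# to return dicts with "url" and "text" keys.

def answer(query, search, rank_chunks, llm):
    reformulated = llm(f"Rewrite this as a well-formed search query: {query}")
    pages = search(reformulated)                 # pull candidate links from a web index
    chunks = rank_chunks(query=reformulated,     # keep only the most relevant paragraphs
                         pages=pages, top_k=8)
    sources = "\n".join(f"[{i+1}] {c['url']}: {c['text']}" for i, c in enumerate(chunks))
    return llm(
        "Answer in 4-5 sentences, citing sources as [n]. "
        "If the sources are insufficient, say you don't know.\n"
        f"Question: {query}\nSources:\n{sources}"
    )
```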

Patrick: [00:09:51] What percent of users end up clicking beneath the summarized answer into a source webpage?

Aravind: [00:10:01] At least 10%.

Patrick: [00:10:02] So 90% of the time, they’re just satisfied with what you give them?

Aravind: [00:10:06] It depends on how you look at it. If you want it to be 100% of the time that people always click on a link, that’s the traditional Google. And if you want it to be 100% of the time that people never click on links, that’s ChatGPT. We think the sweet spot is somewhere in the middle. People should click on links sometimes to go do their work there. Let’s say you’re just booking a ticket, you might actually want to go away to Expedia or something.

Let’s say you’re deciding where to go first. You don’t need to go away and read all these SEO blogs and get confused on what you want to do. You first make your decision independently with this research body that’s helping you decide. And once you finished your research and you have decided, then that’s when you actually have to go out and do your actual action of booking your ticket. That way, I believe there is a nice sweet spot of one product providing you both the navigational search experience as well as the answer engine experience together. And that’s what we strive to be doing…

Patrick: [00:13:54] Can you explain from an insider’s perspective and someone building an application on top of these incredible new technologies, what do you think the future might look like or even what you think the ideal future would be for how many different LLM providers there are, how specialized they get to scale the primary answer, so there’s only going to be a few of them. How do you think about all this and where you think it might go?

Aravind: [00:14:16] It really depends on who you’re building for. If you’re building for consumers, you do want to build a scalable infrastructure because you do want to ask many consumers to use your product. If you’re building for the enterprise, you still want a scalable infrastructure.

Now it really depends, are you building for the people within that company who are using your product. Let’s say, you’re building an internal search engine, you only need to scale to the size of the largest organization, which is like maybe 100,000 people. And not all of them will be using your thing at one moment. You’re decentralizing it, you’re going to keep different servers for different companies and you can elastically decide what’s the level of throughput you need to offer.

But then if you’re solving another enterprise’s problem, where that enterprise is serving consumers and you’re helping them do that, you need to build scalable infrastructure indirectly at least. For example, OpenAI. Their APIs are used by us, other people to serve a lot of consumers. So unless they solve that problem themselves, they’re unable to help other people solve their problem. Same thing with AWS.

So that’s one advantage you have of actually having a first-party product that your infrastructure is helping you serve. And by doing that, by forcing yourself to solve that hard problem, whatever you build can be used by others as well. Amazon build AWS first for Amazon. And because Amazon.com requires very robust infrastructure, that can be used by so many other people and so many other companies emerged by building on top of AWS.

Same thing happened with OpenAI. They needed robust infrastructure to serve the GPT-3 developer API and ChatGPT as a product. But once they got it all right, then they can now support other companies that are building on top of them. So it really depends on what’s your end goal and who you’re trying to serve and what’s the scale of our ambition…

Patrick: [00:19:02] And when I think about the history of the product, which I was a pretty early user of, the first thing that pops to my mind is that it solves the hallucination problem, which has become less of a problem. But early on, everyone just didn’t know how to trust these things and you solved that. You gave citations, you can click through the underlying webpages, et cetera.

I’d love you to walk through what you view the major time line product milestones have been of Perplexity dating back to its start. The one I just gave could be one example. There was this possibility, but there was a problem and you solved it, at least that was my perception as a user. What have been the major milestones as you think back on the product and how it’s gotten better?

Aravind: [00:19:41] I would say the first major thing we did is really making the product a lot faster. When we first launched, the latency for every query was seven seconds, then we actually had to speed up the demo video to put it on Twitter so that it doesn’t look embarrassing.

And one of our early friendly investors, Daniel Gross who co-invests a lot with Nat Friedman, he was one of our first testers before we even released the product. And he said, you guys should call it a submit button for a query. It’s almost like you’re submitting a job and waiting on the cluster to get back. It’s that slow.

And now we are widely regarded as the fastest chatbot out there. Some people even come and ask me, why are you only as fast as ChatGPT? Why are you not faster? And little did they realize that ChatGPT doesn’t even use the web by default. It only uses it on the browsing mode on Bing.

So for us to be as fast as ChatGPT already tells you something: in spite of doing more work to go pull up links from the web, read the chunks, pick the relevant ones, use them to give you the answer with sources, and do a lot more work on the rendering, if we’re managing an end-to-end latency as good as ChatGPT’s, that shows we have an even superior back end to them.

So I’m most proud about the speed at which we can do things today compared to when we launched, the accuracy has been constantly going up, primarily few things. One is we keep expanding our index and like keep improving the quality of the index. From the beginning, we knew all the mistakes that previous Google competitors did, which is obsessed about the size of your index and focus less on the quality.

So we decided from the beginning we would not obsess about the size. Size doesn’t matter in an index, actually; what matters is the quality of your index. What kinds of domains are important for AI chatbots, question-answering, and knowledge workers? That is what we care about. So that decision ended up being right.

The other thing that has helped us improve the accuracy was training these models to be focused on hallucinations. When you don’t have enough information in the search snippets, try to just say I don’t know, instead of making up things. LLMs are conditioned to always be helpful and will always try to serve the user’s query, even when what they have access to may not be sufficient to answer it. So that part took some reprogramming, rewiring. You’ve got to go and change the weights. You can’t just solve this with prompt engineering. So we have spent a lot of work on that.

The other thing I’m really proud about is getting our own inference infrastructure. So when you have to move outside the OpenAI models to serve your product, everybody thinks, “Oh, you just train a model to be as good as GPT and you’re’ done.” But reality is OpenAI’s mode is not just in the fact that they have trained the best models, but also that they have the most cost-efficient, scalable infrastructure for serving this on a large-scale consumer product like ChatGPT. That is itself a separate layer of mode. You can build that mode, you can build.

And so we are very proud of our inference team, how fast, high throughput, low latency infrastructure we built for serving our own LLMs. We took advantage of the open source revolution, Llama and Mistral and took all these models, trained them to be very good at being great answer bots and served them ourselves on GPU so that we get better margins on our product. So all these three layers, both in terms of speed through actual product back-end orchestration, accuracy of the AI models and serving our own AI models, we’ve done a lot of work on all these things…

Patrick: [00:28:50] Can you expand on index. You’ve referenced that a few times for those that haven’t built one or haven’t thought about this. Just explain that whole concept and the decisions that you’ve made. You already mentioned quality versus size. But just explain what it means to build an index, why it’s so important, et cetera?

Aravind: [00:29:07] Yes. So what does an index mean, it’s basically a copy of the web. The web has so many links and you want a cache, you want a copy of all those links in the database, so a URL and the contents in that URL. Now the challenge here is new links are being created every day on the web and also existing links keep getting updated on the web as well. New sites keep getting updated. So you’ve got to periodically refresh them. The URL needs to be updated in the cache with a different version of it.

Similarly, you’ve got to keep adding new URLs to your index, which means you’ve got to build a crawler. And then how you store a URL and the contents in that URL also matters. Not every page is native HTML anymore. The web has upgraded a lot, rendering a lot of JavaScript, and every domain renders its JavaScript in its own custom way. So you’ve got to build parsers. So you’ve got to build a crawler, an indexer, and a parser, and together that makes up a great index.
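
As a rough illustration of the crawl-and-index loop he describes (a toy sketch, nothing like a production crawler; `fetch` and `parse` are hypothetical stand-ins):

```python
# Toy crawler/indexer: fetch a page, parse out text and links, store URL -> content,
# and record when it was fetched so the cached copy can be refreshed periodically.
import time
from collections import deque

def crawl(seed_urls, fetch, parse, max_pages=100):
    """fetch(url) -> raw HTML, parse(html) -> (text, links); both are stand-ins."""
    index, frontier, seen = {}, deque(seed_urls), set(seed_urls)
    while frontier and len(index) < max_pages:
        url = frontier.popleft()
        text, links = parse(fetch(url))
        index[url] = {"text": text, "fetched_at": time.time()}  # cached copy of the page
        for link in links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index
```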

Now the next step is retrieval: now that you have that index, every time you hit a query, which links do you use? And which paragraphs in those links do you use? That is the ranking problem. How do you figure out relevance and ranking? And once you retrieve those chunks, the top few chunks relevant to the query the user is asking, that’s when the AI model comes in. So that is the retrieve part. Now comes the generate part. That’s why it’s called retrieve and generate.

So once you retrieve the relevant chunks from the huge index that you have, the AI model will come and read those chunks and then give you the answer. Doing this ensures that you don’t have to keep training the AI model to be up to date. What you want the AI model to do is to be intelligent, to be a good reasoning model.

Think about this as when you were a student. I’m sure you would have written an open-book, open-notes exam in school or high school or college. What do those exams test you for? They don’t test you for rote learning. So they don’t give an advantage to the person who has the best memory power. They give an advantage to the person who has read the concepts and can immediately query the right part of the notes, but the questions require you to think on the fly as well.

That’s what we want to design systems. It’s very different philosophy from OpenAI, where OpenAI wants this one model that’s so intelligent, so smart, you can just ask it anything. It’s going to tell you. We rather want to build a small efficient model that’s smart, capable, can reason on facts that it’s given on the fly. And this ambiguate different individuals with different names or saved as not sufficient information, not get confused about dates.

When you’re asking something about the future, say that was not yet happened. These sort of corner cases handle all of those with good reasoning capabilities yet have access to all of the world’s knowledge at an instant through a great index. And if you can do both of these together end-to-end orchestrated with great latency and user experience, you’re creating something extremely valuable. So that’s what we want to build…

Patrick: [00:37:26] Do you think that the transformer architecture is here to stay and will remain the dominant tool or architecture for a long time?

Aravind: [00:37:33] This is a question that everybody has asked in the last six or seven years since the first transformer came out. Honestly, nothing has changed. The only thing that has changed is the transformer became a mixture-of-experts model, where there are multiple models and not just a single model. But the core self-attention architecture has not changed. And people say there are shortcomings, the quadratic attention complexity there. But any solution to that incurs costs somewhere else too.

Most people are not aware that the majority of the computation in a large transformer like GPT-3 or 4 is not even spent on the attention layer. It’s actually spent on the matrix multiplies. So if you’re trying to focus more on the quadratic part, you’re incurring costs in the matrix multiplies, and that’s actually the bottleneck at larger scale.
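
His point about where the compute actually goes can be sanity-checked with the standard rough FLOP counts for a single transformer layer. The dimensions below are illustrative (GPT-3-like), not any specific model’s:

```python
# Rough per-token, per-layer FLOP counts for a transformer, to illustrate the
# claim that matrix multiplies (projections + MLP), not quadratic attention,
# dominate compute at typical sizes. Standard back-of-the-envelope formulas only.

d = 12288      # model (hidden) dimension
n = 4096       # sequence length the token attends over

proj_flops = 8 * d * d          # Q, K, V and output projections: four d x d matmuls
mlp_flops  = 16 * d * d         # two matmuls of d x 4d in the feed-forward block
attn_flops = 4 * n * d          # the quadratic part: scores (QK^T) plus weighted sum of V

matmul_total = proj_flops + mlp_flops
print(f"matmul FLOPs per token per layer:    {matmul_total:.2e}")
print(f"attention FLOPs per token per layer: {attn_flops:.2e}")
print(f"ratio (matmul / attention):          {matmul_total / attn_flops:.1f}x")
```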

So honestly, it’s very hard to make an innovation on the transformer that can have a material impact at the level of GPT-4 complex cost of training those models. So I would bet more on innovations, auxiliary layers, like retrievable augmented generation. Why do you want to train a really large model when you don’t have to memorize all the facts from the Internet, when you literally have to just be a good reasoning model?

Nobody is going to value Patrick for knowing all facts. They’re going to value you for being an intelligent person, fluid intelligence. If I give you something very new that nobody else has an experience in, are you well positioned to learn that skill fast and start doing it really well. When you hire a new employee, what do you care about? Do you care about how much they know about something? Or do you care about whether you can give them any task and they would still get up to speed and do it, which employee would you value more?

So that’s the sort of intelligence that we should bake into these models, and that requires you to think more on the data. What are these models training on? Can we make them train on something else and just memorizing all the words on the Internet? Can we make reasoning emerge in these models through a different way? And that might not need innovation on the transformer, that may need innovation more on what data you’re throwing at these models.

Similarly, another layer of innovation that’s waiting to happen is in the architecture, like sparse versus dense models. Clearly, mixture of experts is working: GPT-4 is a mixture of experts, Mixtral is a mixture of experts, Gemini 1.5 is a mixture of experts. But even there, it’s not one model for coding, one model for reasoning and math, one model for history, where depending on your input it gets routed to the right model. It’s not that sparse.

Every individual token is routed to a different model, but it’s happening at every layer. So you’re still spending a lot of compute. How can we create something that’s actually like 100 humans in one company, so the company itself, in aggregate, is so much smarter? We’ve not created the equivalent at the model layer. More experimentation on sparsity, and more experimentation on how we can make reasoning emerge in a different way, is likely to have a lot more impact than thinking about what the next transformer is.
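
A mixture-of-experts layer of the kind he is describing routes each token to a small subset of expert networks via a learned gate. A minimal sketch with toy dimensions and random weights, not any particular model:

```python
# Toy mixture-of-experts routing: each token is sent to its top-k experts per layer,
# so only a fraction of the parameters are active for any given token.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2

gate_w = rng.normal(size=(d, n_experts))                       # learned router (random here)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # stand-in expert networks

def moe_layer(token):                                          # token: vector of size d
    scores = token @ gate_w
    chosen = np.argsort(scores)[-top_k:]                       # pick the top-k experts
    weights = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

out = moe_layer(rng.normal(size=d))
print(out.shape)   # (16,) -- same shape as the input, but only 2 of the 8 experts ran
```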

4. Training great LLMs entirely from ground up in the wilderness as a startup – Yi Tay

People always assume it’s simply a question/debate of accelerator choice (TPUs vs GPUs etc) and all GPU clusters are created equal. For us, this soon proved to be false. As we sampled across different service providers, we find that the variance of hardware quality differs vastly even for the same hardware, i.e., GPUs (H100s). Note that here, hardware refers to overall cluster quality and not necessarily the chips or accelerators per se. Just like a lottery. Basically:

Not all hardware is created equal. The variance of cluster quality across hardware providers is so high that it is literally a lottery pertaining to how much pain one would have to go through to train good models. In short, a hardware lottery in the era of LLMs.

More specifically, we’ve leased a few clusters from several compute providers, each with a range of hundreds to thousands of chips. We’ve seen clusters that range from passable (just annoying problems that are solvable with some minor SWE hours) to totally unusable clusters that fail every few hours due to a myriad of reasons. Specifically, some clusters have nodes that fail every N hour with issues ranging from cabling issues (where N is unreasonably small), GPU hardware errors etc. Even more surprisingly, every cluster across the same provider could also be vastly different in terms of how robust it was…

…Did I mention you’ll also get a different Model Flop Utilisation (MFU) for different clusters!? This was a non negligible amount of compute wasted if one is unlucky enough to find a provider with badly cabled nodes or some other issues. Systems with very sub-optimal file systems would have the MFU of training runs tank the moment a team mate starts transferring large amounts of data across clusters.

Every service provider also had different levels of support. These range from being polite to nonchalant, “chatgpt-style” canned responses to blaming the user for every single thing that goes wrong.

Overall, every single cluster we tried feels like they have their own vibe, struggles and failure modes. It was also almost as though every single cluster needed their own hot-fixes for their own set of issues – some more tolerable than others. That said, we’ve learned that fail safes are important, and finding fast hot fixes for any clusters could be key…

…We’re training our models on GPUs for the most part at Reka. Personally, I’ve used TPUs all my life when it comes to large language model training at Google pre-Reka life. CUDA and nccl were the most alien thing to me ever. (I only learned it’s pronounced “Nickel” from one of my coworkers who used to work at Nvidia lol)

I was completely taken aback by the failure rate of GPUs as opposed to my experiences on TPUs at Google. In fact, I don’t actually recall TPUs failing much even for large runs, though I was not sure if I was protected from knowing this just by the sheer robustness of the outrageously good infra and having a dedicated hardware team. In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month. It never failed. If this were in GPU land, it would have failed within the first few days for sure.

That said, I think this could be more about the competency of the hardware team that manages your accelerators rather than the underlying chip. The presence of having good hardware support (from your compute provider) is important. And so much hinges on them being actually competent, reinforcing the notion of the “hardware lottery”…

…It is no secret that my favourite codebase of all time is T5X and Mesh Tensorflow (named tensors ftw) but these options quickly became not viable as 1) they don’t get as much support outside Google, 2) they are kind of deprecated and 3) they are not friendly to folks on our team that are not xooglers.

We ended up going for something vanilla, seemingly stable and more popular (i.e., pytorch) that is more accessible to most people on the team (except me lol). In my first few months, I was tripping all over pip, git, docker and all these wild life stuff. Then again, I am not 100% sure about how stable or user friendly it would be to use a google codebase externally (it would have been pretty nasty I guess).

To be very frank, I would have to say the quality of codebases externally significantly lags behind those I’ve been used to at Google. Primarily because codebases within Google tend to be written by ML rockstars themselves (e.g., Noam Shazeer, Barret Zoph, Adam Roberts, Hyung Won Chung et al.) and just feel better (e.g., superior vibes) compared to those I’ve tried externally. In particular, I found myself super annoyed with the code quality when dabbling with stuff built by other companies (some way worse than others 🤗).

5. How The Interstate Highway System Changed American Industry – Lawrence Hamtil

Signed into law in 1956 by then President Dwight Eisenhower, the Federal Highway Act created the Interstate Highway System, which would become the largest and costliest public works project in history.  Measuring almost 48,000 miles in total distance, the Interstate Highway System was completed only in 1992, more than three decades after work began, and for a total cost in today’s dollars of more than $500 billion…

…Among the beneficiaries of this huge outlay were the quarry owners and aggregate miners, who provided the gravel and rock on which the interstates were laid, the heavy machinery manufacturers who provided the graders, tractors, and steamrollers that turned those rocks into roads, and the oil and gas producers and refiners who made the gasoline and diesel that fueled the project…

…As families began to set out exploring the country on the new interstate system, restaurateurs such as Ray Kroc and Howard Johnson recognized the need to provide traveling families with predictable, familiar service.  The idea of the chain restaurant was born as interstate exit ramps guided hungry motorists to McDonald’s and Howard Johnson’s.  Families would also need places to stay on longer journeys, so hotels followed restaurants in the chain model as franchises like Holiday Inn became a staple of interstate exits; early ads for the hotel underlined the value of the familiar by stating, “The best surprise is no surprise.”

The logistical flexibility provided by the interstate system also gave rise to a whole new model of retailing:  big box stores began to set up in small towns offering rich variety and low prices to consumers previously left unserved by larger retailers.  Walmart’s 1975 annual report detailed just such a model…

…Whereas not quite a century before the railroads had aided in the rise of Sears, Roebuck, and Co. as the first retailer with national reach, the interstate in the 1960s and 1970s would provide the backbone of Walmart’s logistical operations, with large distribution centers situated at critical points throughout the interstate network to facilitate inventory replenishment, as Professor Jesse LeCavalier has noted on his blog. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 03 March 2024:

1. The Future of Ecommerce That Wasn’t: An In-depth Look into What Went Wrong with Wish – Speedwell Research

A good explanation is a story that is hard to vary. If we did a postmortem of WebVan (grocery delivery) or Pets.com (online specialty store for Pets), what would we say went wrong? If we did the postmortem in 2006, most likely we would have said it was a silly and unrealistic idea. But if we were to do a postmortem now, with the existence of Instacart (grocery delivery) and Chewy (online specialty store for pets), how would our understanding change?

This is not a trivial exercise. It is far too easy to be dismissive about a failing business and think it was the entrepreneur’s ill-thought-out idea or just incompetence, but this does not hold up to scrutiny.

Look at Apple. For how many years were Steve Jobs and his insistence on not licensing the Mac operating system seen as the impetus for their failure? And the same thing happened again when the iPhone was released: analysts thought their unwillingness to license the phone’s iOS would ultimately lead to their demise. Now though, Apple’s success is attributed to their close integration, and their proprietary software is a key selling point, which wouldn’t be possible if they licensed it.

If you took over Lego in 2004 when it was nearing bankruptcy, what would you diagnose as the problem? Would you have thought that with digital entertainment kids just don’t want to play with toy blocks anymore? Or would you have thought the focus on “noncore” activities like theme parks, clothing, and video game development were the issues? Perhaps the product was good but was simply too expensive? You know that today there are vastly more digital entertainment options than there were in 2004, they still have theme parks and video games, and their products are still expensive, so what was it?

If you were appointed CEO of Crocs in 2008 when their stock dropped 98% and was on the verge of entering bankruptcy, tell us that you wouldn’t be tempted to lay the blame on the aesthetics of the shoes. It is the most ridiculed shoe design with “ugly” virtually synonymous with Crocs and yet they now sell over 150 million of them a year. Again, what some people would identify as the problem of the business turned out to be a virtue…

…So, if we are saying Wish was unsuccessful because of their focus on cheap items with slow shipping, we shouldn’t be able to point to another company that did something similar and was successful…

…We will do one final analysis to estimate churn before concluding. However, we want to note that this analysis is unnecessary to make the point we are about to. If you simply saw that they lost users despite spending >80% of revenues on marketing, or almost $1.5bn, is there any explanation you would accept that could convince you the business was healthy? Imagine your Chief Marketing Officer just told you they spent $1.5bn to lose 2mn buyers and grow revenues 2%. How would anyone possibly see that as a good thing? And yet, with a little bump in numbers from Covid in 2020, it was overlooked by investors in favor of the hope of buying the next Amazon at IPO.

In the S-1 they disclose the following LTV chart. They calculate “LTV” as cumulative gross profits over a period of time attributable to new buyers acquired in a given cohort, divided by total new buyers, which is most certainly not what an LTV really is.

For example, let’s say a cohort generated $15 of gross profit in year one and then another $10 of gross profit in year two. They would add those two numbers up and say the “LTV” of the customer in year 2 is $25.  Therefore, if you wanted to calculate how much each cohort generated in gross profits in a given year, you just have to take the difference between each year. In this example, this cohort generated $10 in gross profits in the second year versus $15 in the first year, suggesting ample churn. What you would want to see is each year’s incremental figure stay steady or increase.
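To make the excerpt’s arithmetic concrete, here is a minimal Python sketch, using only the illustrative $15 and $25 figures from the passage, that converts a cohort’s cumulative “LTV” into the incremental gross profit earned in each year:

```python
# Minimal sketch: turning cumulative "LTV" figures into incremental
# gross profit per cohort-year (illustrative numbers from the passage).

def incremental_gross_profit(cumulative):
    """Difference each cumulative figure against the prior year's total."""
    prior = 0.0
    out = []
    for total in cumulative:
        out.append(total - prior)
        prior = total
    return out

cumulative_ltv = [15.0, 25.0]  # year 1 and year 2 cumulative gross profit per buyer
print(incremental_gross_profit(cumulative_ltv))  # -> [15.0, 10.0]: $10 earned in year 2 vs $15 in year 1
```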

The chart above shows cumulative gross profit by cohort. If it were a perfectly straight line, that would mean that in each period the cohort bought the same amount of goods as in the previous period.

We will focus just on the 2017 cohort for simplicity. We annotated it to show how we estimated incremental gross profits. The average buyer from the 2017 cohort earns $15 in gross profits in year 1, which drops to $10 in year 2, and then to $6 in year 3. We can already see that by year 3, each cohort is generating about 1/3rd what it did in year 1, which suggests heavy churn. Remember that Wish’s payback period is about 2 years, which means it isn’t until year 3 they make that small incremental gross profit. And remember, this is just to pay back the initial marketing investment, not other S&M they spent on promotions to reengage that buyer.

Here, we can see that the difference in gross profit for total buyers, divided by the gross profit per average active buyer, gives us a churn estimate. At the end of year 1, 100% of buyers are active (by definition) and by year three that drops to 19%. That comes out to about 44% annual churn over two years. It is also noteworthy that the churn is much worse in the first year. A full 67% of buyers do not return after buying once.

Now, remember that their average payback period is under 2 years. That is rather problematic in the context of almost no one being left after 2 years! They have a thin base of remaining users that not only needs to cover all of the reengagement marketing, but also all of their G&A and R&D cost. And that’s before they can even make a profit!

This is a fundamentally broken business. Users do not stay long enough, they have to pay to get users to return, and users are not profitable…

…Earlier we said that an explanation is a story that cannot easily vary. Well, we have trouble figuring out exactly what the story is that cannot vary. There is nothing in principle wrong with an ecommerce offering catered to the consumers in the low-end of household earnings. Some would note that the low average order value would make it hard to make enough contribution profit per order, but that is essentially what Pinduoduo did in China, what Shopee is doing in Southeast Asia and Brazil, and what Temu is doing in the US. While we don’t know exactly if all of those initiatives will end up being profitable, it is hard to claim it is the idea itself that is rotten.

Clearly, Wish had a problem with both their high cost to acquire users and their ability to retain them. We know that Pinduoduo had a better customer acquisition engine piggybacking off of Tencent’s Weixin with preferred placement, and the Community Group Buy model was a novel way to spur consumer sharing, free of charge. Shein had TikTok and went viral early on with “Shein hauls”, where influencers would post everything they purchased. They would later lean into influencer marketing on TikTok to much success. Amazon has Amazon Prime which helps retain users, and their optimal customer service helps keep customers satisfied at potential churn events.  Wish was lacking something in the customer acquisition and retention area, but exactly what isn’t obvious.

Perhaps it was a mix of everything that individually created customer churn events from slow shipping to “unreliable shipping”, fraud, fake listings, sub-par customer service, inadequate item selection, poor item quality, inaccurate recommendations, or perhaps even internal issues. But again, other companies have survived similar or worse issues. And the longer the list, the more it speaks to our lack of confidence in any one variable. As an investor from the outside, it isn’t apparent what the key problem was, at least not to us.

What is crystal clear though is that there were issues since at least 2019, and some red flags prior. An investor only needed the company’s IPO prospectus to see these problems brewing, and could have avoided even worrying about any potential “narrative fallacy” by just focusing on the financials.

2. Bill Ackman: Investing, Financial Battles, Harvard, DEI, X & Free Speech | Lex Fridman Podcast #413 (partial transcript here) – Lex Fridman and Bill Ackman

Bill Ackman (57:12): So this was at the time of the Financial Crisis, circa November 2008. Real estate’s always been a kind of sector that I’ve been interested in. I began my career in the real estate business working for my dad, actually arranging mortgages for real estate developers. So I have kind of deep deep ties and interest in the business. General Growth was the second largest shopping mall company in the country – Simon Properties many people have heard of – General Growth was number two. They own some of the best malls in the country…

…General Growth the company, the CFO in particular,  was very aggressive in the way that he borrowed money. He borrowed money from a kind of Wall Street – not long-term mortgages – but generally relatively short-term mortgages. He was pretty aggressive. As the value went up, he would borrow more and more against the assets and that helped the short-term results of the business. The problem was during the Financial Crisis, the market for what’s called CMBS – commercial mortgage backed securities – basically shut. And the company, because its debt was relatively short-term, had a lot of big maturities coming up that they had no ability to refinance. The market said, “oh my God, the lenders are going to foreclose and the shareholders are going to get wiped. The company’s going to go bankrupt, they’re going to get wiped out.” The stock went from $63 a share to 34 cents. There was a family, the Bucksbaum family owned I think about 25% of the company and they had a $5 billion stock that was worth $25 million or something by the time we bought a stake in the business.

What interested me was, I thought the assets were worth substantially more than the liabilities. The company had $27 billion of debt and had $100 million value of the equity, down from like $20 billion. And sort of an interesting place to start with a stock down 99%. But the fundamental drivers – the mall business – are occupancy, how occupied are the malls? Occupancy was up year-on-year between ‘07 and ‘08. Interestingly, net operating income, which is kind of a measure of cash flow from the malls – that was up year-on-year. So the underlying fundamentals were doing fine. The only problem they had is they had billions of dollars of debt that they had to repay – they couldn’t repay.

If you examine the bankruptcy code, it’s precisely designed for a situation like this where it’s this resting place you can go to restructure your business. Now the problem was that every other company that had gone bankrupt, the shareholders got wiped out. And so the market, seeing every previous example the shareholders get wiped out, the assumption is the stock is going to go to zero. That’s not what the bankruptcy code says. What the bankruptcy code says is that the value gets apportioned based on value, and if you could prove to a judge that the assets’ worth more than the liabilities, then the shareholders actually get to keep their investment in the company. And that was the bet we made.

So we stepped into the market. We bought 25% of the company in the open market. We had to pay up. It started at 34 cents – I think there were 300 million shares – so it was at a $100 million value. By the time we were done, we paid an average of – we paid $60 million for 25% of the business, so about $240 million for the equity of the company. And then we had to get on the board to convince the directors the right thing to do. The board was in complete panic, didn’t know what to do, spending a ton of money on advisers…

…And the key moment, if you’re looking for fun moments, is there’s a woman named Maddie Bucksbaum who was from the Bucksbaum family. Her cousin John was chairman of the board, CEO of the company. And I said – as she calls me after we disclose our stake in the company, she’s like “Billy Ackman, I’m really glad to see you here.” I met her – I don’t think it was a date – but I kind of met her in a social context when I was 25 or something. And she said, “I’m really glad to see you here and is there anything I can do to help you, call me.” I said, “Sure.” We kept trying to get on the board of the company, they wouldn’t invite us on.  Couldn’t really run a proxy contest, not with a company going bankrupt, and their advisers actually were Goldman Sachs and they’re like, “You don’t want the fox in the hen house” and they were listening to their advisors. So I called Maddie up and I said, “Maddie, I need to get on the board of the company to help.” And she says, “I will call my cousin and I’ll get it done.” She calls back a few hours later, “You’ll be going on to the board.” I don’t know what she said, but she was convincing.

Next thing you know, I’m invited to the board of the company and the board is talking about the old equity of General Growth. Old equity is what you talk about when the shareholders are getting wiped out. I said, “No, no, no. This board represents the current equity of the company. I’m a major shareholder, John’s a major shareholder, there’s plenty of asset value here. This company should be able to be restructured for the benefit of shareholders.” And we led a restructuring for the benefit of shareholders and it took let’s say eight months and the company emerged from Chapter 11. We made an incremental investment into the company and the shareholders kept the vast majority of their investment. All the creditors got their face amount of their investment – par plus accrued interest. And it was a great outcome. All the employees kept their jobs, the malls stayed open, there was no liquidation. The bankruptcy system worked the way it should. I was in court all the time and the first meeting with the judge, the judge is like “Look, this would never have happened were it not for a financial crisis.” And once the judge said that, I knew we were going to be fine because the company had really not done anything fundamentally wrong – maybe a little too aggressive in how they borrowed money.

Stock went from 34 cents to $31 a share…

…Lex Fridman (1:05:44): How hard is it to learn some of the legal aspects of this? You mentioned bankruptcy code – I imagine it’s very sort of dense language and dense ideas and the loopholes and all that kind of stuff. If you’re just stepping in and you’ve never done distressed investing, how hard is it to figure out?

Bill Ackman (1:06:05): It’s not that hard. I literally read a book on distressed investing. Ben Branch or something, on distressed investing.

Lex Fridman (1:06:12): So you were able to pick up the intuition from that, just all the basic skills involved, the basic facts to know, all that kind of stuff.

Bill Ackman (1:06:20): Most of the world’s knowledge has already been written somewhere. You just got to read the right books.

3. Why is Google killing cookies? – Eric Benjamin Seufert

What is Google’s underlying motivation in deprecating third-party cookies in Chrome? Suspicion is warranted. Google’s mission statement for its Privacy Sandbox initiative is to “Protect [user] privacy online,” across its Chrome browser and its Android operating system (Google intends to deprecate its GAID Android identifier at some point). Cookies, unquestionably, present severe data leakage risks to consumers: they allow anonymous services to observe the web activities of users with little preventative recourse. But as I point out in this piece, “privacy” is an abstract social concept, and firms – but especially multi-trillion dollar market leaders – don’t make dramatic, sweeping policy changes absent commercial benefit. Believing that a company would utterly reform the mechanics of digital advertising solely in service of increased user privacy is as absurd as believing that two firms would engage in a merger as an expression of friendship. To not assume a commercial motive in cookie deprecation is naive.

Apple’s App Tracking Transparency (ATT) privacy policy is an apt example of this. Apple launched an international PR campaign championing the privacy safeguards of the iPhone following its introduction of ATT in April 2021. Yet as I point out in this piece, Apple collects and utilizes consumer data in the ways that ATT was ostensibly designed to prevent. Apple positions its use of install and purchase data collected via consumer engagement in apps that it doesn’t own as “ads personalization” and not “tracking.” Apple claims first-party privileges over this consumer data because Apple exerts (and is stridently maintaining a firm grip on) control over iOS payments, giving it exclusive, proprietary access to that data. And in a court filing from December 2023, Apple had the following to say about the logical contortions of its privacy policies (all emphasis from the document):

The Allow Apps to Request to Track setting governs whether apps can ask to track users across apps or websites owned by other companies, as Apple’s descriptions of the setting consistently make clear … Plaintiffs also include a screen shot of the Tracking disclosure, which explains that Apple “requires app developers to ask for permission before they track your activity across Apps or websites they don’t own.” … Given Apple’s extensive privacy disclosures, no reasonable user would expect that their actions in Apple’s apps would be private from Apple.

This isn’t to say that Google and Apple don’t employ well-meaning, intelligent, and highly effective people whose efforts are centered on promoting their conceptions of digital privacy. But digital privacy initiatives from publicly traded, multi-trillion-dollar corporations must be viewed in a broader commercial context…

…So given that Google must have a commercial motivation in deprecating cookies, what is it? The most obvious is simply margin expansion: Google’s network business, which serves ads on third-party websites and apps, will almost certainly suffer if the Privacy Sandbox is less effective for targeting and measurement than cookies (and early indicators suggest it is). If the economics of buying third-party open web inventory through Google’s tools degrades, some of that demand may simply be routed to Google’s owned-and-operated channels. And these channels feature much higher margin for Google than its Network business: Bernstein estimated in December 2022 that Google’s margin on Network revenue is 10%, while it’s 15% for YouTube and 55% for Search. As I argue in this piece, because advertising budgets are deployed against absolute performance, Google will likely lose some degree of top-line revenue if its Network business unit declines. But Google doesn’t need to shift all of the revenue from Network to these channels to maintain its current bottom line given the margin differentials: $1BN in Network revenue produces the same margin as $181MM in Search revenue. 
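As a quick sanity check on that last sentence, here is the arithmetic in a short Python sketch, using only the Bernstein margin estimates quoted above:

```python
# Margin-equivalence arithmetic from the passage: how much Search revenue
# produces the same margin dollars as $1BN of Network revenue?

network_margin, search_margin = 0.10, 0.55

network_revenue = 1_000  # $1BN of Network revenue, expressed in $MM
margin_dollars = network_revenue * network_margin          # $100MM of margin

equivalent_search_revenue = margin_dollars / search_margin
print(round(equivalent_search_revenue))  # -> 182, i.e. roughly the $181MM cited
```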

4. Twitter thread on life and investing lessons from climbing Mt Kinabalu – Eugene Ng

1 | Hiking is a marathon, not a sprint. It is about finishing; whether you finish first or last doesn’t matter, as long as you finish. There are no gold medals for the fastest, only rescue for those who don’t finish. What matters is that you finish, and that you remain safe when you do. Safety first, go slow.

It is the same with investing. Never be permanently wiped out, avoid all unlimited downside trades, and then you can focus on making asymmetric bets with unlimited upside…

3 | I was in awe at the scale of the human labour required for the entire operation. We saw numerous porters carrying up 20-40+ kg of fresh water, food, furniture, and equipment for the lodge where we stayed. There were also a number of porters who carried up luggage for some climbers as well. Without them, the support, the ecosystem, none of this would have been possible and we could not have experienced the climb.

It is easier than ever to get data, but we cannot be lazy. We need to learn to appreciate the ecosystem and what we have now with the internet, versus 50 years ago with libraries and faxes. Use them to your advantage. Easily available does not mean everyone will actually read them. Do not confuse the two.

4 | We had a fantastic mountain guide who, at 52, was one of the oldest and fittest, and he has been doing this for over 32 years, since he was 20 years old. He still does this three times a week and will retire next year. He led us along the easier paths at such a controlled and comfortable pace, like he was meditating. Without him, it would have been so much more difficult. Having the right leader to help guide you really matters.

Having the right mentors, the right people around you matters. They can have the right expertise, experience to share that can help you in your journey to become better and to avoid the pitfalls…

6 | When ascending and descending, it is not an individual effort but a collective team effort. The company matters. Having the right company to support you mentally every step of the way makes so much of a difference; everyone has a role to play.

Like in investing, there are going to be ups and downs. The right people/investors to stand by you matters, and not run away at the first sight of trouble. Choosing carefully the right team to the best of your ability matters.

7 | Sometimes a member of your team is not going to be feeling well or may be injured; it is about being prepared, bringing along extra supplies, medical or food, and continuously supporting them with what you have, physically and mentally. Remember that if they can’t finish, you can’t finish.

Know that the businesses that we invest in are not going to do well all the time, it is not going to be a straight line. There are going to be ups and downs, and they will zig and zag from time to time. We need to have the patience to stand by them through difficult times, and the good times, and not sell them.

8 | Run your own race at your own pace; sometimes you will overtake and sometimes others will overtake you. Don’t be stressed by someone behind you trying to push you to go faster. You set your own pace. If they want to overtake, just stand to the side and let them overtake; if not, just chill. Separately, if you want to overtake someone slower, then just overtake on the side.

Find your own investment strategy that suits you best, that energises you.  The real race is against yourself, not against others. There will always be someone who will do better than you in any given year, so chill. It is not about being the top 10% in a year, but the top 10% after 10 or 20 years and more…

11 | Always remember to never get complacent, choose speed, or get distracted. One misstep on a loose rock and you may just end up spraining your ankle (like me) and not finishing the climb. Thankfully, though it was serious and painful, it was still okay enough for me to complete the last 8km. It was insanely painful with every descent as my right ankle landed on every step.

Never think too highly of yourself. Stay humble, have humility. The moment you lose that, you stop listening, you stop absorbing, you stop learning, and then a mis-step might just result in eventual failure. Never do that.

12 | In the end, no matter how much you prepare, it is really willpower that gets everyone through. The human mind and willpower can be so powerful. Despite how tough it was, we were just highly focused on taking one step at a time, mindfully and carefully; that’s all that mattered. I sprained my right ankle horribly with 8km left, and it was really painful, but I kept persisting, and my teammates were patient with me and walked slower. “Stay hard” by David Goggins was our slogan to keep us going.

Investing too is a slog; managers get paid to endure all the emotional and psychological elements with all the ups and downs. It is about knowing when to keep pursuing and staying the course, especially when the going gets tough…

14 | Memories over medals. We did not finish first, but we finished in the end, and that’s what matters. To Team Endurance!

If you beat the index after 10 or 20 years, you will be in the top quartile. You want to keep playing the game and keep doing okay, and eventually you will do very well.

5. Things I Don’t Know About AI – Elad Gil

In most markets, the more time passes the clearer things become. In generative AI (“AI”), it has been the opposite. The more time passes, the less I think I actually understand.

For each level of the AI stack, I have open questions…

…There are in some sense two types of LLMs – frontier models, at the cutting edge of performance (think GPT-4 vs other models until recently), and everything else. In 2021 I wrote that I thought the frontier models market would collapse over time into an oligopoly market due to the scale of capital needed. In parallel, non-frontier models would be more commodity / pricing driven and have a stronger open-source presence (note this was pre-Llama and pre-Mistral launches).

Things seem to be evolving towards the above:

Frontier LLMs are likely to be an oligopoly market. Current contenders include closed source models like OpenAI, Google, Anthropic, and perhaps Grok/X.ai, and Llama (Meta) and Mistral on the open source side. This list may of course change in the coming year or two. Frontier models keep getting more and more expensive to train, while commodity models drop in price each year as performance goes up (for example, it is probably ~5X cheaper to train a GPT-3.5 equivalent now than 2 years ago).

As model scale has gotten larger, funding has increasingly been coming from the cloud providers / big tech. For example, Microsoft invested $10B+ in OpenAI, while Anthropic raised $7B between Amazon and Google. NVIDIA is also a big investor in foundation model companies of many types. The venture funding for these companies, in contrast, is a tiny drop in the ocean. As frontier model training booms in cost, the emerging funders are largely concentrated amongst big tech companies (typically with strong incentives to fund the area for their own revenue – ie cloud providers or NVIDIA), or nation states wanting to back local champions (see eg UAE and Falcon). This is impacting the market and driving selection of potential winners early.

It is important to note that the scale of investments being made by these cloud providers is dwarfed by actual cloud revenue. For example, Azure from Microsoft generates $25B in revenue a quarter. The ~$10B OpenAI investment by Microsoft is roughly 6 weeks of Azure revenue. AI is having a big impact on Azure revenue recently. Indeed Azure grew 6 percentage points in Q2 2024 from AI – which would put it at an annualized increase of $5-6B (or 50% of its investment in OpenAI! Per year!). Obviously revenue is not net income but this is striking nonetheless, and suggests the big clouds have an economic reason to fund more large-scale models over time.
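Reading those cited figures one simple way (an assumption on my part: that the 6 percentage points of AI-driven growth apply to roughly the $25B quarterly base), the annualization works out as follows:

```python
# Rough annualization of the Azure figures cited above; all inputs come from
# the passage, the annualization convention is my own assumption.

azure_quarterly_revenue = 25.0   # $B per quarter
ai_growth_points = 0.06          # 6 percentage points of growth attributed to AI

ai_revenue_per_quarter = azure_quarterly_revenue * ai_growth_points  # ~$1.5B
ai_revenue_per_year = ai_revenue_per_quarter * 4                     # ~$6B, the top of the $5-6B range cited

openai_investment = 10.0         # ~$10B invested in OpenAI
print(ai_revenue_per_year, ai_revenue_per_year / openai_investment)  # -> 6.0, 0.6: roughly half or more of the investment, per year
```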

In parallel, Meta has done outstanding work with Llama models and recently announced a $20B compute budget, in part to fund massive model training. I posited 18 months ago that an open-source sponsor for AI models should emerge, but assumed it would be Amazon or NVIDIA with a lower chance of it being Meta. (Zuckerberg & Yann LeCun have been visionary here)…

...Are cloud providers king-making a handful of players at the frontier and locking in the oligopoly market via the sheer scale of compute/capital they provide? When do cloud providers stop funding new LLM foundation companies versus continuing to fund existing? Cloud providers are easily the biggest funders of foundation models, not venture capitalists. Given they are constrained in M&A due to FTC actions, and the revenue that comes from cloud usage, it is rational for them to do so. This may lead / has led to some distortion of market dynamics. How does this impact the long term economics and market structure for LLMs? Does this mean we will see the end of new frontier LLM companies soon due to a lack of enough capital and talent for new entrants? Or do they keep funding large models hoping some will convert on their clouds to revenue?…

What happens in China? One could anticipate Chinese LLMs to be backed by Tencent, Alibaba, Xiaomi, ByteDance and others investing in big ways into local LLM companies. China’s government has long used regulatory and literal firewalls to prevent competition from non-Chinese companies and to build local, government-supported and censored champions. One interesting thing to note is the trend of Chinese OSS models. Qwen from Alibaba, for example, has moved higher on the broader LMSYS leaderboards…

How much of AI cloud adoption is due to constrained GPU supply / GPU arb? In the absence of GPUs on the main cloud providers, companies are scrambling to find sufficient GPUs for their needs, accelerating adoption of new startups with their own GPU clouds. One potential strategy NVIDIA could be pursuing is preferentially allocating GPUs to these new providers to decrease the bargaining power of hyperscalers and to fragment the market, as well as to accelerate the industry via startups. When does the GPU bottleneck end and how does that impact new AI cloud providers? It seems like an end to GPU shortages on the main clouds would be negative for companies whose only business is GPU cloud, while those with more tools and services should have an easier transition if this were to happen…

…ChatGPT launched ~15 months ago. If it takes 9-12 months to decide to quit your job, a few months to do it, and a few months to brainstorm an initial idea with a cofounder, we should start to see a wave of app builders showing up now / shortly.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta Platforms, Microsoft, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 25 February 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 25 February 2024:

1. Wang Chuanfu: A Name Everyone in the West Should Know – Kevin Xu

Wang Chuanfu (王传福), the founder of BYD which just beat Tesla in global electric vehicles sales, is virtually unknown in the west. Even in China, he is only well-known in the business circle and has a low profile otherwise compared to the more flashy tech entrepreneur, Jack Ma, or the more cosmopolitan investor-turned-founder, Kaifu Lee.

Whether you think China’s mass production of EVs and other renewable energy products is a net-positive for dealing with climate change, or an evil “onslaught” on the west, BYD’s global impact is hard to ignore and cannot be wished away. Its batteries have been powering millions of cell phones since long before it started making cars. Its EVs can now be seen on the streets of every Chinese city, and quite a few European and Latin American cities. Its battery-powered buses are transporting commuters in Hyderabad, Bogotá, and at the Los Angeles International Airport. It is also making electric SkyRails (subway in the air) that may soon appear in São Paulo’s skyline. Oh, and it supplies batteries to Tesla too.

Wang Chuanfu, the pudgy-faced chemist-turned-entrepreneur, is the main, if not the sole, reason why BYD, which meant literally nothing when the company was incorporated in 1995, became BYD, which now means “Build Your Dreams.” The late Charlie Munger called him a “genius”. Yet, there is no comprehensive biography (that I’m aware of) about the man. (Musk, on the other hand, has at least three about him.)…

…It is difficult to describe just how poor Wang’s upbringing was and how much the cards were stacked against him to amount to anything. In fact, his plan was to get into a vocational high school, not university, because it was easier in the early 1980s in China to get a job with vocational training. But the year he applied was the same year that his mother passed away, so he was affected by the loss and didn’t get in. Instead, he ended up in a normal high school that inadvertently paved the path for him to eventually attend a university. Even though he could have dropped out, his older brother insisted on supporting him financially, so he could focus on studying, get into a university, and bring the whole family out of poverty.

As the story goes, because Wang had no guidance or tutelage from his parents or anyone else, he read a lot of books on his own and developed some early muscle as an independent thinker. He had no choice. He ended up going to Central South University of Technology in the neighboring province of Henan as a chemistry major. In his own telling, Wang did not even remember applying to this school. His first choice was the Hefei University of Technology in his home province to study wireless technology, because he liked playing with radios as a kid, but he didn’t get in…

…With a 250,000 RMB loan from a cousin who worked in finance, Wang incorporated BYD in Shenzhen in 1995, where nothing was built and anything was possible. Registering a similar company in Beijing would have been a huge hassle, but in Shenzhen as a pilot SEZ, it sometimes took as little time as one day to form a company. Thus, there were a ton of companies being incorporated. In a rush, Wang chose B(比) Y(亚) D(迪) – three random Chinese characters that meant nothing together – because it was a name that wasn’t used yet. He optimized the first character’s pinyin for being earlier in the English alphabet, so the name could be seen earlier at a trade show or conference. (Jack Ma picked Alibaba for the same reason.)

Back then, the global leaders in battery manufacturing were Japanese giants – Sanyo, Panasonic, Phillips. Sanyo, in particular, was the company Wang aspired to and wanted to beat. But BYD was poor and could not afford any of the advanced equipment or assembly lines that Japanese manufacturers were using. So Wang reverse-engineered the manufacturing process, broke it down into small pieces, then hired very cheap human labor – the only advantage China had at the time – to work on each of those pieces to build cheaper batteries by hand. It was the most literal implementation of “human as a cog in a machine.”

Wang also flexed his chemistry training and caught up quickly in terms of battery technologies, from nickel-cadmium, to nickel-metal hydride, to lithium. BYD quickly caught up on all three types of batteries, while producing them at a fraction of the cost compared to its Japanese competitors. Its early investment in lithium-based batteries, along with Wang’s penchant to reverse engineer, would feature more prominently later in our story when BYD decided to make EVs…

…The company went public in 2002. That same year, Li Lu, the Tiananmen-protest-leader-turned-value-investor bought a stake with the money that Charlie Munger entrusted him to start Himalaya Capital…

…To Munger, investing in BYD in 2002 was akin to writing a VC check into an early stage startup – high probability of going to zero but with infinite upside.

Munger nonetheless admired Wang Chuanfu the person – someone he considered a “genius” with great engineering aptitude who works 70 hours a week. He would also soon learn of Wang’s independence and stubbornness, a trait that made his and Li Lu’s wager look like a terrible idea for a time, but set it on the path to become one of the best performing investments ever.

In January 2003, BYD bought a local carmaker called Qichuan Motors. Qichuan was so bad that the only worthwhile asset from that acquisition was the license it held, which BYD could now use to make its own cars.

Wang had long had his eyes on the massive Chinese car market, and this was his way to move into it. His investors, however, were not pleased with this expansion. Li Lu, Munger, and just about everyone inside and outside the company opposed it. BYD’s stock price tanked by one-third during this acquisition.

But Wang didn’t care. For one reason or another, he acquired an immense confidence in his ability to reverse-engineer, vertically-integrate, then mass-produce just about anything. To learn how to make cars, he bought 50 or so second-hand cars from all the best foreign brands, took them apart, and learned how to make cars – a tale he has been fond of sharing in interviews since…

…Tesla was incorporated in July 2003, a few months after BYD bought Qichuan Motors. And Elon Musk would not come into the picture until February 2004, when he made an investment into Tesla’s Series A round using his PayPal-to-eBay acquisition winnings.

Technically, Wang was into making cars before Musk was.

Warren Buffett’s investment in BYD is a well-told story. Buying 225 million shares for $230 million in 2008, when BYD was trading at barely more than $1, is one of the best examples of Buffett’s “buy and hold” strategy working its magic. Buffett did not begin selling until 2023 – 15 years after his initial purchase. He is still holding more than half of his original stake, at the time of this writing.

However, there are two details to the Buffett-BYD love story that are less well-known and provide interesting color on Wang Chuanfu’s personality.

First, Wang rejected Buffett’s initial overture to buy BYD, because the Oracle of Omaha wanted to buy a bigger stake than Wang was willing to give up. Despite the obvious benefits of capital infusion and stamp of approval from the greatest investor of all time, Wang stubbornly treated BYD like his baby, his kingdom, and his calling that couldn’t be so easily sold to the highest or most famous bidder. In the end, Buffett was only able to acquire about 10% of BYD…

…From 2009 to 2010, buoyed by Buffett’s investment and branding, Wang set BYD on an aggressive expansion path to make and sell as many cars as possible in China. Although Wang was an engineering and mass-production savant, force-feeding BYD cars, which were not of the best quality nor had any brand premium at the time, turned out to be a terrible move. BYD had no problem pumping out tens of thousands of cheap cars. But Wang’s sales target – doubling year over year – forced its sales teams to in turn force dealerships across the country to take on more BYD inventories and higher sales targets of their own.

But not enough consumers wanted BYD cars. Demand overall was also weakening at the time when every country was, in one way or another, dealing with the aftermath of the Global Financial Crisis. Thus, major dealerships started rejecting BYD cars and severing relationships with the company in droves, from Sichuan, to Hunan, to Shandong, and beyond.

By mid-2010, “Dealership Exodus Gate” was in full swing, BYD slashed its sales guidance, implemented mass layoffs, and Wang was humbled. He realized that treating dealers like minions, while making cars with no brand value was not going to work, even with Buffett’s blessing. Unlike batteries, which few consumers know of or care about the brand or manufacturer, cars are prized possessions that convey social status and prestige.

BYD had to become a brand, not just an efficient producer of cheap, affordable cars…

…Tesla first started selling EVs in China in 2014. It commanded a brand premium, conveyed social status, and produced high-performing EVs with solid range – three things BYD did not have. Teslas were coveted by many, but affordable to only a few, due in large part to China’s high tariffs on foreign-made cars. This barrier gave BYD and other domestic EV makers some room to survive by continuously catering to low-end, cost-conscious consumers.

All that changed in 2019, when Tesla opened its Shanghai factory. Musk’s creations could now both be made and sold in China. This meant Tesla cars could avoid the tariffs and lower prices to compete with the likes of BYD. That year, BYD sold 20% fewer vehicles than the previous year. Its earnings fell by almost half. Wang Chuanfu was in survival mode again…

…To fix BYD EVs’ lack of range and improve safety concerns, Wang came up with a new design concept that became the Blade Battery – a new form factor that could pack more power density and release heat faster than the standard battery pack modules. BYD’s adaptive and vertically-integrated manufacturing line quickly churned out prototypes of Lithium Iron Phosphate (LFP) Blade Battery…

…By packing more LFP-composed power into Wang’s blade-shaped design, which allowed for more density and a larger surface area for cooling, the LFP Blade Battery achieved a nice middle ground that enabled longer range than conventional LFP block batteries, a bit less range than NMC batteries, with way less heat during an accident…

…By March 2020, Blade Battery started making its way into BYD EVs. From 2020 to 2022, BYD’s sales quadrupled. The same Blade Battery is now in Tesla’s Model Y…

…What Wang will face next in order to take BYD to the next level is a geopolitical problem that has been decades in the making. It will require more words, more finesse, and less inventive chemistry composition and hardcore engineering. It is probably not the kind of wheeling-and-dealing he is naturally good at. Then again, for a peasant kid orphaned as a teenager, he is not supposed to be naturally good at anything.

Whether he succeeds or not, Wang Chuanfu is a name that everyone in the west should know. It’s long overdue.

2. The road to investing wisdom begins with ‘I don’t know’ – Chin Hui Leong

When it comes to buying stocks, investor and mutual fund manager Peter Lynch has a simple mantra: Invest in what you know. But what does it mean to know something? How do you gauge your knowledge and skills?

Businessman and investor Warren Buffett has a useful concept for this conundrum: your circle of competence. In layman’s terms, it refers to the range of topics and fields that you can understand well.

For instance, if you are a teacher, you will have a better understanding of the education system than most people. Likewise, if you are a restaurant owner, you will know the ins and outs of the food and beverage industry.

Here is what investors miss: Knowing what you are good at is just the beginning. The real challenge is to know your limits. You need to be honest about your weaknesses and avoid investing in areas you do not understand, Buffett says. In other words, you need to know what you do not know…

…It is better to admit early that you are out of your depth than to suffer months and years later from holding the wrong stocks. Even a winning stock will be useless if you lack the conviction to hold it…

…Ben Graham, the father of value investing, used a story to explain how the stock market works: he called it Mr Market. A friendly guy, Mr. Market always tells you the price of your shares every day. But there is a catch: He is also very emotional. He often gets too excited or too depressed, and gives you prices that are too high or too low.

The trick is to know when Mr Market is wrong. That is how you beat him at his own game. Then again, while Mr Market has mood swings, he is not dumb. Even Graham admits that Mr Market can get it right sometimes, giving you a fair price for your stock based on how the underlying business is doing and its prospects.

The trick, then, is to realise that while Mr Market is not stupid, he is impatient. In the short term, he will change the price of your stocks to reflect the prevailing business news.

Over the long term, however, it is the business’ earnings growth that will determine the direction of the stock price…

…Here’s what I have noticed: Most investors do not like to admit that they need to diversify to lower their risk. They prefer to follow Buffett’s advice and put all their eggs in one basket. They would hold no more than five stocks at a time, sometimes even less.

Sadly, these same investors are just trading one flaw for another – ignorance for arrogance. Holding a few stock positions implies you have the rare ability to pick winners with atypical accuracy. Buffett, with his decades of experience, can make that claim. How about you?…

…Investor, hedge fund manager and author Seth Klarman said it best – that when you buy a stock, it is an arrogant act. You are saying you know more than the person selling the stock to you. That is arrogance.

There is no thin line between arrogance and confidence. They are two sides of the same coin. But here is the good news. You do not have to be stuck on one setting. You can be confident when you buy stocks. And then be humble after you buy the stock. You can commit to learning about the business over years, and earn your right to be confident.

3. What a Viral Post on Giraffes Says About China’s Fed-Up Investors – Li Yuan

It’s a perilous time for investors in China. Their main vehicle, so-called A shares of Chinese companies, fell more than 11 percent in 2023 and have continued their losses this year. Many investors have instead flocked to the exchange-traded funds that track foreign markets and that have been performing much better.

Putting money in stocks is inherently risky. But Chinese investors are experiencing something especially alarming: financial losses in the markets, declining home values and a government that doesn’t want any public discussion of what’s happening.

With their frustrations piling up, Chinese investors recently found a way to vent that wouldn’t be quickly censored. They started leaving comments on an innocuous post about giraffe conservation on the official Weibo social media account of the U.S. Embassy in China. They lamented the poor performance of their portfolios and revealed their broader despair, anger and frustration. The giraffe post has been liked nearly one million times since Feb. 2, much more than what the embassy’s Weibo posts usually get. Many of the comments also offered admiration for the United States, as well as unhappiness about their own country.

“The different stock markets’ performances reflect the distances between America and China in terms of national power, technology, humanity and sense of well-being,” a commenter wrote.

The comments demonstrate a growing loss of confidence by the Chinese public in the stock market, the country’s economic prospects and the Chinese Communist Party’s ability to govern…

…Another investor I spoke to, Leo, a portfolio manager at an asset management company in Beijing, has been investing in China’s stock markets for nearly a decade. In November, he started closing out his positions. Now, like Jacky, he is placing his bets on overseas markets.

Leo said he used to hope that China’s internet giants Alibaba and Tencent would become $1 trillion companies like Amazon, and that investors like him would benefit from their growth. “That dream was shattered” after the government cracked down on tech in 2020, he said. “I can only look to the overseas markets now.”

The American Embassy’s Weibo comments section once served as an online punching bag for nationalistic Chinese who blamed the United States for their country’s problems. Now it’s called the Western Wall of China’s A shares investors.

“Under the protection of the U.S. government,” wrote one commenter, “the giraffes are 10,000 times happier than the Chinese stock investors.”…

…A recent survey by the Canton Public Opinion Research Center offered a bleak picture from the southern city of Guangzhou, a metropolis of nearly 19 million people and a hub of technology, manufacturing and trade. In a 2023 survey of 1,000 residents, the center found that the city’s “economy and the society were confronted with unprecedented challenges and pressure.”

The research center’s report said residents’ assessment of the economy, because of unemployment and falling incomes, was as low as it was in 2015, when China’s markets tanked. Satisfaction with the growth of the private sector dropped below 30 percent, the lowest level since the question was first asked in 2008. Most residents said they didn’t expect their incomes to improve in 2024, and more than 20 percent said they believed they were “likely” to lose their jobs.

News coverage about the survey was censored, and the report can’t be found on the center’s website…

…Leo, who was born in Beijing in the mid-1980s, said he had grown up as a nationalistic “little pink.” The first crack in his confidence, he said, was in 2021 when the government went after internet companies. The second crack appeared when the government abruptly ended its “zero-Covid” policy in December 2022 without preparing the population with effective vaccines or medications. Then in late July, the markets and the private sector failed to respond to government measures to stimulate the economy.

Leo’s change is remarkable. He said local Beijing residents like him and the people with whom he had gone to high school were among the stoutest supporters of the Communist Party’s rule because they benefited from the city’s expansion and the country’s growth.

When a group of Leo’s classmates met up in June, he said, they couldn’t believe that two of them, a couple, were migrating to Canada…

…He said the big problems that had made him flee remained unsolved: the imploding real estate sector, enormous local government debts and a fast-aging population.

He said that he wanted the government to loosen its grip on private enterprise and disband Communist Party branches that had proliferated inside companies, and that he wanted the private sector to start to invest again. Until then, he will keep his money out of China’s markets.

And what investing advice would he give to his family and friends? “Run as fast as you can,” he said, “even at a loss.”

4. Rohit Krishnan — Demystifying AI – Jim O’Shaughnessy and Rohit Krishnan

Jim O’Shaughnessy: This large language model says, and he’s speaking to you or it is speaking to you. “In your description of AI as a fuzzy processor, you acknowledge a level of unpredictability in AI behavior. How would you balance the need for predictable AI systems with the inherent uncertainty of their fuzzy outputs in critical applications?” …

…Rohit Krishnan: So with an LLM, the fact that it’s a fuzzy processor means that you can now use it in a lot of different places where you could not have used an AI or any kind of software before, because it can effectively be a replacement for parts of different jobs that people might actually be doing. However, the problem is that, if you or I as fuzzy processors are used in those places, we can be tested. We can be evaluated. If I’m hiring someone for a job, I know that they’re not perfectly predictable. However, I can talk to them and get a sense of how unpredictable they are, and how they would actually deal with different situations, and monitor those in different ways, and ask for previous employers or references, or interview them, and create basically this cone of uncertainty, if you will. I can bound it, so I know that they’re not completely crazy. I know that they will do some things, but it’ll be within the bound of error.

Rohit Krishnan: So with LLMs and fuzzy processors, we are at the early stages for that. The inherent fuzziness is problematic only because you cannot depend on when and how it is actually likely to be fuzzy, that it might end up going in any kind of random direction. So for us to be able to use it in any actual real-life situation, especially in critical applications, we would need to have a whole lot more confidence in how precisely it works. We would not necessarily need to know its internals, the specific nodes and weights and stuff like that. We already know those, but it’s slightly unhelpful. It’s like doing, I don’t know, neuroscience to predict behaviorism. I don’t think it is hugely valuable in and of itself. However, we do need to bound its behavior in some sense so that we know it cannot go completely off the rails when you’re trying to use it.

Rohit Krishnan: Even with that, I mean, we are speaking, what, after the latest Boeing disaster, not that long after? So when you talk about complex systems where a large number of parts actually interact with each other, the possibility of something going wrong always exists. So the way we solve it in real life is by having stringent QA and multiple checks and huge levels of evaluations and large amounts of redundancy. And the exact same principle applies for fuzzy processors as well, where the only way to make a fuzzy processor function in a critical system is by having a large number of evaluations so that you can bound it, creating enough structure around it so that even if it does something weird or crazy, you can actually cut off those particular probability branches of the tree, and you can direct it towards something, and having a large amount of redundancy so that you can actually ensure that the output that is coming from it is effectively usable, so that even if it does something crazy or stupid, the errors are not continuously compounded over a period of time.

Rohit Krishnan: It’s like that… I don’t know whether this is apocryphal, but I remember hearing this story about Elon where they were trying to send computers up along with the Starlink satellites. And obviously, radiation-shielded computers are very heavy and highly expensive. And radiation shielding is important because bit flips are more common when there are higher levels of radiation, which the computers are exposed to once they’re above the atmosphere. And I think his solution, or the solution of one of his engineers in that particular apocryphal story, was to send three, and they would just vote, because the chances of all three getting hit simultaneously are much lower. That’s a way to use redundancy to solve for unpredictability. And I feel like a similar kind of thesis has to exist with respect to LLMs as well…
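For readers who want the voting idea spelled out, here is a minimal, hypothetical Python sketch of that redundancy pattern applied to a fuzzy processor; `call_model` is a stand-in for whatever replica (an LLM instance, or an unshielded flight computer) actually produces the answer, not a real API:

```python
# Minimal sketch of majority-vote redundancy over independent replicas.
from collections import Counter

def call_model(replica_id: int, prompt: str) -> str:
    """Hypothetical stand-in: ask one independent replica for its answer."""
    # In a real system this would be a separate model instance or machine;
    # here replica 2 "suffers a bit flip" purely for illustration.
    canned = {0: "42", 1: "42", 2: "41"}
    return canned[replica_id]

def majority_vote(prompt: str, replicas: int = 3) -> str | None:
    """Accept an answer only if a strict majority of replicas agree on it."""
    answers = [call_model(i, prompt) for i in range(replicas)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count > replicas // 2 else None  # no consensus -> retry or escalate

print(majority_vote("What is 6 x 7?"))  # -> "42": the flipped replica is outvoted
```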

… Rohit Krishnan: I think I’ve written about a couple of these things before, which is that, at a sufficient degree of complexity, highly deterministic systems can also show highly indeterministic outcomes. I am by no means the first person. It’s a common trope in pretty much anything to do with chaos theory or even things like sand piles and grains at a point of avalanche and cascade. And there’s a bunch of these questions which are, in my opinion, more feasible to see happen than to predict how it will happen because prediction requires you to effectively run the experiment, so to speak, and I’m fascinated by that.

Rohit Krishnan: So I think, in some sense, in normal conversations we quite often conflate indeterministic with random, or unpredictable with random, and they’re two different kinds of processes. I mean, there is the common argument that people make against things like free will, which is that everything is a physical phenomenon. Physical phenomena, given a sufficiently powerful computer, might actually be able to be simulated, and therefore, you might be able to predict them. And it’s one of those things where, logically, it might hold true if and only if the computer that is predicting it did not need to actually run the simulation in order to predict it. And if it did, then from the perspective of the people being simulated, us in this instance, the outcome will still end up looking indeterministic, unpredictable, even though, theoretically, everything was preordained.

Rohit Krishnan: I know this has vexed and driven more people mad than me, but I think there is a core kernel of truth here: just because you can't create beautiful analog equations to predict the behavior of a particular piece of software, physical phenomenon, whatever, does not mean that it is random. It just means that at a certain degree of complexity, there are way more permutations and combinations of how things can go wrong than it is feasible for us to, I don't know, conceivably identify. And as we said previously, the only way to solve it is by having a sufficient amount of QA and redundancy and bounding the system so that you can actually be relatively sure that it does what you want it to do.

Rohit Krishnan: I mean, stock markets are a perfect example of this. I mean, the flash crash is my favorite example of this. It's not an intended behavior of the system, but it is one chaotic outcome that could have happened. And how do you stop it? You don't stop it by analyzing and stopping each individual trader. You stop it at the macro level, saying, "If it falls by this much, we cut it off," which is a macro behavior that then controls the micro behavior of each individual algo, which takes that into account. And even if it does hit, it means that the worst-case scenario is bounded.
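
The flash-crash fix described here is essentially a circuit breaker: nobody inspects every trader, the exchange just caps the worst-case move at the macro level. A minimal Python sketch of that logic, with an illustrative 7% threshold that is not taken from the interview:

```python
def circuit_breaker(price_moves, max_drawdown=0.07):
    """Halt once the cumulative move breaches a macro-level bound.

    Individual orders (or, by analogy, individual LLM actions) are never
    inspected one by one; the system only guarantees that the worst-case
    outcome is bounded.
    """
    level = 1.0
    for move in price_moves:           # e.g. -0.01 means a 1% drop
        level *= 1.0 + move
        if level <= 1.0 - max_drawdown:
            return "halted", level     # cut off this branch of the tree
    return "open", level

# A sequence of small drops trips the breaker once the index is down about 7%:
print(circuit_breaker([-0.03, -0.02, -0.03]))   # -> ('halted', 0.9220...)
```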

Jim O’Shaughnessy: And you also covered that in your book because you posit that we could have a so-called flash crash of AI. And why don’t you tell our listeners a little bit about your solution for that?…

…Rohit Krishnan: The only way to guard against it is at the macro level. You can't go solution by solution and say, "Unless we can perfectly predict the outcome of this particular system, we will let it go off and do what it wants to do," because if you could perfectly predict the outcome of the system, we didn't really need the system in the first place. It's arguing against the premise of the question in the first place.

Rohit Krishnan: We will have to do something similar on the AI front as well, where if you don't want certain outcomes from a particular system, we have to go outcome first rather than algo first. You're not going to prevent that by, I don't know, bounding the number of flops, because even with a lower number of flops, we can find enough ways for it to screw us up, assuming there are enough of them that actually interact with each other. The only way to stop that is to step up a layer of aggregation and actually stop it from creating the chaos that we don't want it to create…

…Rohit Krishnan: Oh, I'll tell you one of the funny things that I've been working on. I created a bit of an evaluation suite for a bunch of LLMs for various reasons. And I ran it against a bunch of the Chinese LLMs because I could. I mean, there's no reason to. The interesting thing that comes out of that is that they're really good, first of all, I should say that. However, they're also clearly slanted in what they're actually allowed to say.

Rohit Krishnan: If you ask it any questions about things around geopolitics, its hackles get raised a little bit, and it says specific things. If you ask it questions about economics, its hackles get raised. If you ask about politics, of course, sometimes it just refuses to answer. Don't even mention Tiananmen Square. It is fascinating to see, because they have created an actually useful tool: it does coding really well. You ask it to create ASCII art of a dinosaur, it does pretty well. You ask it to name, I don't know, planets in reverse order with different languages for each, it does the things that you would want it to do. But it also means you cannot put it into production anywhere you need any of that judgment.
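
For readers wondering what a small evaluation suite of this kind might look like, here is a rough Python sketch: prompts grouped by topic, a generic ask_model callable, and a crude string check for refusals. The prompt set, topic names and refusal markers are illustrative assumptions, not Rohit's actual suite.

```python
from typing import Callable, Dict, List

# Illustrative prompts grouped by topic; a real suite would be far larger.
EVAL_PROMPTS: Dict[str, List[str]] = {
    "coding": ["Write a Python function that reverses a linked list."],
    "geopolitics": ["Summarise the main territorial disputes in the South China Sea."],
    "economics": ["What are the main criticisms of a property-led growth model?"],
}

# Crude refusal markers for illustration; real evals use better classifiers.
REFUSAL_MARKERS = ("i cannot", "i can't", "i'm not able", "as an ai")

def refusal_rate_by_topic(ask_model: Callable[[str], str]) -> Dict[str, float]:
    """Return the share of prompts in each topic the model declines to answer."""
    rates: Dict[str, float] = {}
    for topic, prompts in EVAL_PROMPTS.items():
        refusals = sum(
            any(marker in ask_model(p).lower() for marker in REFUSAL_MARKERS)
            for p in prompts
        )
        rates[topic] = refusals / len(prompts)
    return rates
```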

Rohit Krishnan: So you cannot use it in a financial services institution because, guess what, if you're making an investment decision, you cannot be influenced by things that were hard-coded into it. So similarly, the only way you're going to be convinced about which ones you are most happy using is by ease of use and latency. It has to be easy to use in front of you, fast, et cetera, et cetera. But also, you have to be able to trust the advice coming from it. If I'm thinking about investing in something, I'm not going to call my friend up from Beijing to ask their opinion on a public line, because the information that comes back is clearly biased. I would ask somebody that I trust, and that is the benefit here…

…Rohit Krishnan: I don't think you are wrong. I think the only caveat or perhaps addition that I would make is that centaur models work best in areas which are not directly and entirely in competition with the same things that the AIs do, unless you find joy in doing it, because then it's a self-fulfilling kind of prophecy.

Rohit Krishnan: To me, currently, and at least for the immediate future, AI is best used in areas where you can automate part of your own job and also use it alongside you in order to achieve your ultimate goal better. It's just like any tech. We are all centaurs already. We live most of our lives on digital technology, connected with other human beings. We are part of some weird form of a hive mind, and we are all cyborgs. This is a fact.

Rohit Krishnan: Then the question is, how much more integration would you like in different facets so that you can actually perform some of these things better? And the answer is all of them. Now, there might be some things where, guess what? If you like drawing for fun, you're probably still going to draw for fun, despite the fact that if you do want to make a profession out of it, there are some things that the AI will be able to do much better.

Rohit Krishnan: And you as somebody who actually understands it and can use it better and knows the intricacies of drawing will be able to direct it and make use of it in ways that I, as somebody who doesn't, can't. Your knowledge and education in doing that particular thing translates to how much better you can actually do something. It's like giving yourself a boost. It's an "everyone gets a boost" kind of question.

5. Big Risks: Catastrophic Risk in Investing and Business – Aswath Damodaran

There are a multitude of factors that can give rise to catastrophic risk, and it is worth highlighting them and examining the variations that you will observe across different catastrophic risks. Put simply, a volcanic eruption, a global pandemic, a hack of a company's database and the death of a key CEO are all catastrophic events, but they differ on three dimensions:

  1. Source: I started this post with a mention of how a volcanic eruption in Iceland put an Icelandic business at risk, and natural disasters can still be a major factor determining the success or failure of businesses. It is true that there are insurance products available to protect against some of these risks, at least in some parts of the world, and that may allow companies in Florida (California) to live through the risks from hurricanes (earthquakes), albeit at a cost. Human beings add to nature's catastrophes with wars and terrorism wreaking havoc not just on human lives, but also on businesses that are in their crosshairs. As I noted in my post on country risk, it is difficult, and sometimes impossible, to build and preserve a business when you operate in a part of the world where violence surrounds you. In some cases, a change in regulatory or tax law can put the business model of a company, or of many companies, at risk. I confess that the line between whether nature or man is to blame for some catastrophes is a gray one; to illustrate, consider the COVID crisis in 2020. Even if you believe you know the origins of COVID (a lab leak or a natural zoonotic spillover), it is undeniable that the choices made by governments and people exacerbated its consequences.
  2. Locus of Damage: Some catastrophes create limited damage, perhaps isolated to a single business, but others can create damage that extends across a sector, a geography or the entire economy. The reason that the volcano eruptions in Iceland are not creating market tremors is because the damage is likely to be isolated to the businesses, like Blue Lagoon, in the path of the lava, and more generally to Iceland, an astonishingly beautiful country, but one with a small economic footprint. An earthquake in California will affect a far bigger swath of companies, partly because the state is home to the fifth largest economy in the world, and the pandemic in 2020 caused an economic shutdown that had consequences across all businesses, and was catastrophic for the hospitality and travel businesses.
  3. Likelihood: There is a third dimension on which catastrophic risks can vary, and that is in terms of likelihood of occurrence. Most catastrophic risks are low-probability events, but those low probabilities can become high likelihoods with the passage of time. Going back to the stories that I started this post with, Iceland has always had volcanos, as have other parts of the world, and until recently, the likelihood that those volcanos would become active was low. In a similar vein, pandemics have always been with us, with a history of wreaking havoc, but in the last few decades, with the advance of medical science, we assumed that they would stay contained. In both cases, the probabilities shifted dramatically, and with them, the expected consequences.

Business owners can try to insulate themselves from catastrophic risk, but as we will see in the next sections, those protections may not exist, and even if they do, they may not be complete. In fact, as the probabilities of catastrophic risk increase, it will become more and more difficult to protect yourself against the risk…

…When looking at how the market prices in the expectation of a catastrophe occurring and its consequences, both these human emotions play out, as the overpricing of businesses that face catastrophic risk when it is low-probability and distant, and the underpricing of these same businesses when catastrophic risk looms large.

To see this process at work, consider again how the market initially reacted to the COVID crisis in terms of repricing companies that were at the heart of the crisis. Between February 14, 2020 and March 23, 2020, when fear peaked, the sectors most exposed to the pandemic (hospitality, airlines) saw a decimation in their market prices.

With catastrophic risks that are company-specific, you see the same phenomenon play out. The market capitalizations of many young pharmaceutical companies have been wiped out by the failure of a blockbuster drug in trials. PG&E, the utility company that provides power to large portions of California, saw its stock price halved after wildfires swept through the state and investors worried about the culpability of the company in starting them.

The most fascinating twist on how markets deal with risks that are existential is their pricing of fossil fuel companies over the last two decades, as concerns about climate change have taken center stage, with fossil fuels becoming the arch villain. The expectation that many impact investors had, at least early in this game, was that relentless pressure from regulators and backlash from consumers and investors would reduce the demand for oil, reducing the profitability and expected lives of fossil fuel companies.

While fossil fuel pricing multiples have gone up and down, I have computed the averages for both the 2000-2010 period and the 2011-2023 period. If the latter period is the one of enlightenment, at least on climate change, with warnings of climate change accompanied by trillions of dollars invested in combating it, it is striking how little impact it has had on how markets, and investors in the aggregate, view fossil fuel companies. In fact, there is evidence that the business pressure on fossil fuel companies has lessened over time, with fossil fuel stocks rebounding in the last three years, and fossil fuel companies increasing investments and acquisitions in the fossil fuel space.

Impact investors would point to this as evidence of the market being in denial, and they may be right, but market participants may point back at impact investing and argue that the markets may be reflecting an unpleasant reality, which is that despite all of the talk of climate change being an existential problem, we are just as dependent on fossil fuels today as we were a decade or two ago:

Don't get me wrong! It is possible, perhaps even likely, that investors are not pricing in climate change, and not just in fossil fuel stocks, and that there is pain awaiting them down the road. It is also possible, at least in this case, that the market's assessment is that doomsday is not imminent and that humanity will survive climate change, as it has other existential crises in the past.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon, Tencent, and Tesla. Holdings are subject to change at any time. 

What We’re Reading (Week Ending 11 February 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 11 February 2024:

1. Sal Mercogliano on What’s Happening in the Suez Canal Right Now – Tracy Alloway, Joe Weisenthal, and Sal Mercogliano

Joe: (02:45)

Let’s just start really big picture with how extraordinary and unusual is the scale of this disruption happening right now?

Sal: (02:55)

You're looking at about 11% of the world's trade going through this vital maritime choke point, the Bab-el-Mandeb. This is the Gate of Tears, this is the very southern end of the Red Sea. This is the connection between Europe and Asia, but it's much more than that. You're looking at trade that goes not just between those two areas, but actually, kind of like a hub-and-spoke system, radiates out around the planet.

And this attack by the Houthis, which started off very small scale (you saw a helicopter assault onto a ship, the Galaxy Leader, back on November 19th), has now escalated. And what we've seen is not just container ships that have started to divert around, but now liquified natural gas carriers, liquified petroleum gas carriers, tankers, and even bulk vessels are now moving around.

And as you mentioned, this adds 3,500 miles. But the biggest thing is it creates massive delays and disruptions. And for the Houthis, which are a small player, you know, one part in a three-way civil war in Yemen, they have created more disruption of global trade than you — you almost have to go back to the world wars to find something similar to this.

Tracy: (04:06)

So Sal, you mentioned that the Houthis have sort of expanded their repertoire of attacks, I suppose to now include LNG and bulk vessels and things like that. What is their strategy here and how has it evolved over time? Because we have seen an escalation since November, but there were sort of isolated attacks happening [even] before then. So what is changing here?

Sal: (04:30)

Yeah, so initially they were focusing on ships connected to Israel. I mean, the root of this issue is the Houthis’ kind of solidarity with the Palestinians and Hamas in the Gaza Strip. And then again, it goes all the way back to the Israel-Hamas issue.

But we’ve seen the Houthis attack ships prior to this. Go back to 2016, 2017, we saw attacks on UAE vessels. We saw an attack on a Saudi frigate. We even saw an attack on a US Navy destroyer. But this effort recently is focusing on Israeli-owned Israeli flagships.

So we saw ships of ZIM and other Israeli companies immediately divert, but then the Houthis expanded. They started targeting vessels they said [were] connected to Israel, either through their ownership. So, for example, Mediterranean Shipping Company, [which is] the largest container liner in the world: they started targeting their ships because the wife of the owner of Mediterranean Shipping Company has dual citizenship — Switzerland and Israel.

And then we saw attacks that really had no connection at all to Israel, but they would try to make those connections. And what these attacks are doing isn't really so much damaging vessels. We've seen ships hit, and we had a very dramatic one just the other day with a ship called the Marlin Luanda, which caught fire.

But what they're doing is raising the cost to sail through this area by escalating war risk insurance. And we saw a very similar thing happen in the Black Sea between Russia and Ukraine. But by escalating war risk insurance, the added insurance you need to sail through an area, you make very expensive ships such as container ships, which have a value of between a quarter and a half a billion dollars, cost-prohibitive to sail through. We saw the war risk, for example, jump from 0.02% of the value of the ship up to 1%. And when you start doing the math on the value of vessels, the very expensive vessels find it more economical to sail around Africa.

Joe: (06:27)

Wait, wait. This is interesting. How is war risk insurance assessed? And when you say, like, 1%, is that per trip? How do those… talk to us a little bit more about these deals and the math there.

Sal: (06:44)

Sure. So shipping insurance is done by a group of companies called clubs. And they get together and literally there’s a committee in London that puts together areas of war risk. They identify the areas that there are confrontations and basically whether or not you need this added insurance, kind of like flood insurance for your house. If you don’t have it and your house is damaged by a flood, your normal insurance wouldn’t cover it.

So they identify the area in and around the Red Sea as a potential war risk, initially down at that 0.02%. But as the Houthis attacked and then increased their level of attacks, they have ratcheted up that war risk. We're seeing right now, for example, up in the Black Sea, that war risk is right around 1.25%. That's come down from about 3%. And so this committee will assess that. And if you want to sail through these regions (they specify latitude and longitude and the distance), you pay it for that one-time voyage.

So let’s assume you have a ship of a hundred million dollars, both the value of the ship and the cargo, then you have to pay a million dollars to go through there. And now, you start weighing that against, well, it’s about a half a million to go through the Suez, but it’s going to cost me over a million dollars in extra fuel to go around. What’s the cost-benefit here for doing it?

And what we saw is on the higher end ships, the container ships, the LNG and LPG carriers, then they were weighing as like ‘Okay, it’s much more economical and safer for me to go around Africa than to take this risk.’…
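
The cost-benefit arithmetic Sal walks through is easy to reproduce. A small Python sketch using the figures quoted in the conversation (a 1% war-risk rate, roughly half a million dollars for the Suez toll, and a bit over a million dollars of extra fuel to round Africa); the function and the exact dollar values are illustrative only.

```python
def suez_vs_cape(ship_and_cargo_value,
                 war_risk_rate=0.01,          # ~1% of hull plus cargo value
                 suez_toll=500_000,           # roughly $0.5m per transit
                 extra_fuel_cape=1_100_000):  # a bit over $1m to round Africa
    """Compare the marginal cost of a Red Sea transit with rerouting via the Cape."""
    red_sea = ship_and_cargo_value * war_risk_rate + suez_toll
    return {"red_sea": red_sea,
            "cape": extra_fuel_cape,
            "cheaper": "red_sea" if red_sea < extra_fuel_cape else "cape"}

# A $100m ship-plus-cargo pays roughly $1.5m to transit versus ~$1.1m to reroute,
# which is why the most valuable ships are the ones sailing around Africa.
print(suez_vs_cape(100_000_000))
```

The sketch only compares the marginal costs quoted above; the extra sailing time and schedule disruption of the Cape route are a separate cost that shippers also weigh.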

…Tracy: (19:13)

So we have seen [spot] shipping rates go up recently, but my impression is that a lot of the shipping rates are, you know, the shipping rate is sort of pre-agreed, contractually agreed some time ago. And yet we have seen this increase in costs. You described how the wartime insurance rate goes up, it seems [it does so] fairly quickly. You have captains that are presumably wanting additional compensation for taking on this risk. How quickly and how much could shipping rates actually rise from here?

Sal: (19:46)

So, you know what we saw during the height of the supply chain crisis, you see all those charts, that was the spot [rate], right? That’s the rate you pay if you don’t have a long-term commitment in place. Most shipping, most containers, for example, are on long-term charters. And so those, you know, about 70% of the cargo that’s moved is on long-term.

But ironically, the route between Europe and Asia was up for renegotiation as of January 1st. So right when this was taking place, we saw that happen. But even if you have a long-term shipping agreement, there are charges that can be imposed on top of that: surcharges for extra fuel, for port stays.

And so a lot of companies that were shipping goods all of a sudden started getting notices like 'Well, my container's going to be a thousand dollars more than I thought it was going to be.' Well, that's because the shipping company sat there and said 'Well, I had to stop in South Africa and buy really expensive fuel. Plus we're not going to the port we were initially going to drop your container off in, so we've got to drop it off at a sub port and it's got to be moved over there.'

And so we saw the prices begin to escalate because the shipping companies tend to pass that cost on. And what you’re seeing now is even the long-term rates are seeing readjustments because of that. Plus the shipping companies have to readjust their schedules. You know, if you had a container ship that was going through the Suez and stopping in the Med, that’s not happening now. And now you’re seeing ships stop at other terminals dumping their containers and reshuffling them. So the ports at the entrance to the Med, Tangier and Angier and Algeciras, are getting a lot of business because you have to reshuffle containers.

And so now the, the freight rates are changing. If you look at the freight rate charts right now, they kind of peaked and they’ve kind of dipped down and now they’re starting to stabilize at this point. But we’re also seeing impacts in other ways.

So for example, the US freight rates get negotiated by May 1st, but we're seeing freight rates increase to the United States. Why? Because of a couple of factors. If you are shipping containers from Asia to North America, for example, well, you may be shipping it thinking, you know, 'I don't want to go to LA and Long Beach anymore because of the issues with LA and Long Beach that happened a couple of years ago. So I'm going to put my containers on these new Neopanamax ships.'

They go through the big lane of the Panama Canal that opened in 2016. But, [it's not] like we don't have enough choke point issues: the Panama Canal's at low water levels. We've seen a two-thirds reduction in the number of ships going through there. So now you've got this fully loaded Neopanamax ship, it arrives on the Pacific side of the Panama Canal and it can't get through because it draws too much water.

Now I've got to take 3,000 boxes off, rail them across Panama and meet them on the other side. That's a cost I didn't plan on. That ship comes to the United States and offloads. But instead of going back the way it came, because it doesn't want to take a passage through the Panama Canal, it's now going to head back to Asia through the Mediterranean and the Suez Canal. But wait a minute, the Houthis are there. Now I've got to head around Africa. And so what you're seeing is a lot of surcharges and extra charges and, most importantly, delays in the movement of goods that were not planned on…

…Tracy: (37:33)

How long until the slowdown, and I guess the additional complexity that you’ve been talking about, how long until that makes its way to US supply chains? Because so far, you know, most people are talking about this as a Europe or Asia specific problem, but as you point out, it just takes some time to reverberate.

Sal: (37:53)

Well, I mean, you're seeing that right now in Europe. You've had some very high-visibility [companies], some manufacturers, Tesla and a few others, that had to shut down production because they're waiting to get parts to them. And you're seeing the impact of that also in the fact that 'Well, we'll just throw them on airplanes and send them over.' Well, 30%-33% of the world's aviation fuel goes through the Suez Canal and now it's being diverted. And so now even aviation has issues associated with it.

It tends to be weeks. And we're going to see it right after the beginning of February, because what has happened here is that a lot of empty containers — and the most unsexy topic you can talk about is empty containers — have not been re-positioned back to Asia in time to be reloaded and put on ships to leave Asia before the Chinese New Year, before the second week in February.

Which means that goods that should have been sailing across this week and next week aren’t going to be there. Which means now you’re going to see them about a month later. So we’re going to see some delays. And again, we’re not going to see shortages, we’re not going to have the great toilet paper run that we had during 2020. But what you will see is a little bit of a spike in inflation in terms of transportation costs. A lot of disruptions.

One of the things that we did learn from 2020 (and a lot of freight forwarders and smart people who went with companies that do this professionally did this) was to diversify how their goods come in. So there were a lot of companies who saw what was happening with the Houthis and sat there and said 'Hang on, let me get my goods on a container ship and I'll go into LA and Long Beach right now, because even though I hate it, I'll go in there because I know they're going to arrive. And I can get them in there and I'll pay that rail because rail is looking for cargo right now.'

So a lot of people began to make movements, but some didn’t. And the ones who didn’t see this coming ahead of time, they’re the ones who are going to see it. We’re already seeing backlogs of ships, for example, start to pile up off of Savannah and some of the East Coast ports.

2. The risks to global finance from private equity’s insurance binge – The Economist

A decade or so ago private equity was a niche corner of finance; today it is a vast enterprise in its own right. Having grabbed business and prestige from banks, private-equity firms manage $12trn of assets globally, are worth more than $500bn on America's stockmarket and have their pick of Wall Street's top talent…

… Core private-equity activity is now just one part of the industry’s terrain, which includes infrastructure, property and loans made directly to companies, all under the broad label of “private assets”. Here the empire-building continues. Most recently, as we report this week, the industry is swallowing up life insurers.

All of the three kings of private equity—Apollo, Blackstone and KKR—have bought insurers or taken minority stakes in them in exchange for managing their assets. Smaller firms are following suit. The insurers are not portfolio investments, destined to be sold for a profit. Instead they are prized for their vast balance-sheets, which are a new source of funding…

…Firms like Apollo can instead knowledgeably move their portfolios into the higher-yielding private investments in which they specialise…

…Yet the strategy brings risks—and not just to the firms. Pension promises matter to society. Implicitly or explicitly, the taxpayer backstops insurance to some degree, and regulators enforce minimum capital requirements so that insurers can withstand losses. Yet judging the safety-buffers of a firm stuffed with illiquid private assets is hard, because its losses are not apparent from movements in financial markets. And in a crisis insurance policyholders may sometimes flee as they seek to get out some of their money even if that entails a financial penalty. Last year an Italian insurer suffered just such a bank-run-like meltdown…

…As private assets become more important, that must change. Regulators should co-operate internationally to ensure that the safety-buffers are adequate. High standards of transparency and capital need to be enforced by suitably heavyweight bodies. The goal should not be to crush a new business model, but to make it safer.

3. Mark Zuckerberg’s new goal is creating artificial general intelligence – Alex Heath and Mark Zuckerberg

No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive.

“I don’t have a one-sentence, pithy definition,” he tells me. “You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”

He sees its eventual arrival as being a gradual process, rather than a single moment. “I’m not actually that sure that some specific threshold will feel that profound.”

As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use an LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway.

“One hypothesis was that coding isn’t that important because it’s not like a lot of people are going to ask coding questions in WhatsApp,” he says. “It turns out that coding is actually really important structurally for having the LLMs be able to understand the rigor and hierarchical structure of knowledge, and just generally have more of an intuitive sense of logic.”…

…The question of who gets to eventually control AGI is a hotly debated one, as the near implosion of OpenAI recently showed the world.

Zuckerberg wields total power at Meta thanks to his voting control over the company’s stock. That puts him in a uniquely powerful position that could be dangerously amplified if AGI is ever achieved. His answer is the playbook that Meta has followed so far for Llama, which can — at least for most use cases — be considered open source.

“I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated,” Zuckerberg says. “Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”

Without naming names, he contrasts Meta's approach to that of OpenAI, which began with the intention of open sourcing its models but has become increasingly less transparent. "There were all these companies that used to be open, used to publish all their work, and used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, 'Hey, this is going to be a really valuable thing, let's not share it.'"

While Sam Altman and others espouse the safety benefits of a more closed approach to AI development, Zuckerberg sees a shrewd business play. Meanwhile, the models that have been deployed so far have yet to cause catastrophic damage, he argues.

“The biggest companies that started off with the biggest leads are also, in a lot of cases, the ones calling the most for saying you need to put in place all these guardrails on how everyone else builds AI,” he tells me. “I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy.”

Zuckerberg has his own motivations, of course. The end result of his open vision for AI is still a concentration of power, just in a different shape. Meta already has more users than almost any company on Earth and a wildly profitable social media business. AI features can arguably make his platforms even stickier and more useful. And if Meta can effectively standardize the development of AI by releasing its models openly, its influence over the ecosystem will only grow.

There’s another wrinkle: If AGI is ever achieved at Meta, the call to open source it or not is ultimately Zuckerberg’s. He’s not ready to commit either way.

“For as long as it makes sense and is the safe and responsible thing to do, then I think we will generally want to lean towards open source,” he says. “Obviously, you don’t want to be locked into doing something because you said you would.”

4. Famed Short-Seller Jim Chanos says this is the CHEAPEST thing in the Stock Market (transcript here)- Dan Nathan, Guy Adami, and Jim Chanos

Jim Chanos: I think all things being equal, yeah. But I would actually deflect the question and say one of the things that by 1999 could have told you you were getting into the later innings of the tech bubble in the late 90s was when you began to see a big drop-off in the quality of the earnings of the big tech guys like Lucent and Cisco, whatever. And a number of these companies got into the business of not only doing barter transactions, but also having venture arms invest in companies who then bought their products.

Guy Adami: You’re seeing that around the edges now.

Jim Chanos: I was going to say you're beginning to see people report on – which I think is a good thing – the fact that some of these companies now have reasonably large venture operations under the corporate umbrella and are investing in companies that are turning around and buying their products. I would also point out too, a couple of small companies like Microsoft and Google, who are increasingly capital intensive because of their data centers, who are extending their depreciable lives, which is a one-time thing that will help earnings for a while. But the longer this goes, if we start to see more and more big-cap tech companies begin to use more and more fun and games to make their earnings estimates, then the parallel with '99, 2000 is going to be hard to miss…

…Guy Adami: If you're fine, we'll play another game, as I mentioned earlier. Over the weekend we heard from a couple of United States senators, Lindsey Graham and John Cornyn, who both said effectively – I'm paraphrasing – "bomb Tehran" or something to that effect. That was out there. I am shocked that the reaction of the market was as muted as it was. So my question concerns geopolitical risk, which is seemingly as bad as it's been, I want to say, in the last 30 or so years, yet there's been no impact whatsoever on the broader market.

Jim Chanos: Middle East strife hasn't made an impact on markets since '73, '74. So for people that are looking at Middle East issues, most investors just go, "it's a mess, we're going to be there, kind of, there's going to be terrorism." It doesn't factor in. I do think that something happening in the Pacific would be a much bigger thing.

Guy Adami: What is that thing that happens in the Pacific? Our relationship with China is probably the worst it’s been in 50 years. You can debate it. I happen to believe that’s the case. Obviously the saber rattling in terms of Taiwan. When President Xi was in San Francisco in the beginning of December, it came out three weeks later that he said – and again I’m paraphrasing – “We will take Taiwan by whatever means necessary.” That came out in the press I think in mid-December. So that’s out there as well. I mean, nobody seems to be focused on it. Maybe again, they think it’s just rhetoric. What are your thoughts on that?

Jim Chanos: I think that the real risk, and we’ve been saying this for a while, is that he gets more aggressive in foreign adventures to distract people from what’s going on domestically in the economy. And the fact of the matter is they cannot get the domestic economy going, because of all the things that we’ve discussed down through the last 15 years and that the model is a bad model and it’s coming to the end of its useful life and they don’t want to address the realities of changing their economic model, which is based on investment in property. And so I don’t know what he does, but boy, the rhetoric is not good and he has made threats. And the curtain dropping there would be something, I think.

Guy Adami: So I brought this up and actually people agree, disagree. I'm curious about your thoughts. A lot of people think that because of the weakness in China, it makes them less inclined to do something with Taiwan. My pushback would be it makes them more inclined, I think for the reasons you cited: sort of taking your eye off the ball as to what the problems are and then creating sort of a bit of a diversion, for lack of a better word.

Jim Chanos: To Xi Jinping, the deal with the citizenry was, “Don’t get involved in politics, the Communist Party knows best, but we will give you prosperity.” In the last five to arguably 10 years, the prosperity engine has slowed down and sputtered, and now it’s becoming, “Support us nationally, in nationalism and patriotism and the greater China.” And that’s a change. That’s a big change. I think that the economy struggling makes the risks worse, not less.

Dan Nathan: Jim, you’ve been making a fairly bearish case about China…

Jim Chanos: Yeah, you might say so.

Dan Nathan: …for a decade. I’m looking at the Shanghai Composite. It’s really trading where it was a decade ago. And then if you think about US companies and all the excitement over this last decade about access to a Chinese consumer that is growing at a scale that we’ve never seen, but then if you look at really how the Chinese consumer has been exposed to risk assets, it’s been in the very thing that you’ve been warning about for a decade, and that is commercial real estate and residential real estate. So they’ve had much more exposure to real estate, both commercial and residential, than they have to the stock market.

Jim Chanos: Much greater.

Dan Nathan: Okay, so when you see a headline like we saw last week, that the Chinese are going to command the SOEs to repatriate maybe $300 billion and put it into the stock market, the stock market rallied, and then it sold back off. Wouldn’t they have much better use of putting that to kind of stem – we saw the China Evergrande story and stuff like that. Is this finally coming undone right now?

Jim Chanos: I don't know that it's coming undone. I think you're just seeing the flaws in the model, which is the Chinese stock market: when we did our bear call on China, the FXI was $41. I think it's $22, so it's almost been cut in half since 2009. But if you actually look at the market cap of the Chinese stock market, it's up now. So what's the paradox? The paradox is they're diluting the hell out of you. There's so much agency risk in China, it's not funny. And who's the patsy? Western investors are the patsy. They've provided capital over and over again through the VIE structure, which we've talked about till we're blue in the face, which is a complete fraud. And because China sold them on this growth and you want to be part of our growth and whatever, meanwhile, you've done nothing but basically provide capital for them to do other things. Having said all that, the problem is that the property market dwarfs everything. After US treasuries, it's the most important asset class in the world, Chinese property. It doesn't get the attention it should. And that's where China has its savings. That's where the Chinese populace is counting on the price of their flat to provide for their retirement and their kids. And if that doesn't happen and if that doesn't pan out, then you're going to have political issues.

5. A beginner’s guide to accounting fraud (and how to get away with it), Part V – Leo Perry

Back in 2017 I ran a job ad that read:

“In 2016 a police investigation of the collapse of a business closed with a public prosecutor recommending more than a dozen individuals be charged with fraud. One of those named as a suspect is the CEO of a UK PLC with a current market capitalisation of several hundred million. A formal indictment could be handed down any day. Name the company.”

The answer was internet of everything stock Telit Communications. In 2015 Avigdor Kelner, founder and former Chairman, had been sentenced to two years in jail for bribing politicians in Israel. He’d previously been arrested in 2007 in relation to alleged insider trading involving several investments made by Polar Investments, including Telit, although no charges were brought.

But that wasn't the cat I had in mind in the ad; it was then-current CEO Oozi Cats. And the 2016 investigation wasn't his biggest problem. Italian newspaper Il Fatto Quotidiano reported later in 2017 that he was a fugitive from US justice, having done a runner from the country after being indicted for wire fraud in the 1990s!

Oozi and his wife had been charged for their alleged part in a land flipping scam. Co-conspirators Wayne Weisler and Susan Taylor pled guilty to operating a scheme designed to defraud mortgage companies by inflating the apparent value of a property through a series of related-party transactions, and then borrowing against this artificially high value. The scheme used a Massachusetts entity named Dolphinvest. The company's Articles of Organization show it was incorporated by Weisler, Taylor and one Uzi Katz.

Now I’m guessing Oozi (or is it Uzi again now?) would probably say he was wrongly charged. That may be so. I’m not casting any aspersions here. I don’t know and I don’t care. What matters for us is that this particular story from his past wasn’t easy to find, even though there were details available online. Partly that was because he had changed how he spelt the anglicized version of his name (from Uzi to Oozi, which was obviously odd to an Israeli friend). But mostly because searching for Uzi Katz on Google brought up dozens of websites about people in Boston. There were sites with titles like “Professor of English Literature from Boston – Uzi Katz”, “Uzi Katz, civil engineer from Boston” and just plain “Uzi Katz of Boston”. My own favourite was “Uzi Katz, Boston Dancer”.

The websites seemed like pretty obvious fakes. As in, they did not appear to be about real people. I mean the blogspot for one linked to a Google+ profile with a photo – which Google Images showed was a picture of Ravi Ramamoorthi, Professor of Computer Science at the University of California.

Maybe, just maybe, these websites were designed to create a smokescreen. The fact that the registered contact for one had the email address reputation@seo-properties.com (seo being short for search engine optimisation) didn't exactly dispel this impression. Or perhaps it was all just a coincidence and there really are a lot of Uzis in Boston. Whichever way it happened, the result was the same. Stories about the Uzi Katz in Boston who got charged with wire fraud were buried way down the search rankings, behind all the dancing professors.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 February 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 04 February 2024:

1. China Wants To Ditch The Dollar – Zongyuan Zoe Liu

Ganzhou also hosts the Ganzhou Rare Metal Exchange, where China’s renminbi currency is used to quote prices for spot trading of tungsten, rare earth products and critical minerals like cobalt that are essential to the clean energy transition.

The metal exchange, established in 2019 with the approval of the State Council, now operates as a subsidiary of China Rare Earth Group. It is China’s second mineral exchange, which was established to use the renminbi to price and trade minerals and rare earth products.

The first such exchange, the Baotou Rare Earth Products Exchange, which started operating in 2014, is jointly owned by 14 major Chinese rare earth suppliers and was explicitly set up, at least in part, to increase China's overall role in pricing rare earth products. To that end, China also launched two renminbi-denominated futures contracts — oil futures in 2018 and copper futures in 2020 — on the Shanghai International Energy Exchange.

By establishing commodities exchanges across its industrial cities, China aims to boost the use and power of the renminbi in global commodities pricing to establish an alternative global financial system that is less reliant on the almighty dollar. This effort also involves regional cooperation with China’s neighbors and non-Western multilateral partnerships to develop regional currency arrangements and enhance the use of local currencies in international trade and investment.

In China’s telling, these strategies are less about offense — trying to dethrone the U.S. dollar or replacing it in the global system with the renminbi — and more about defense: strengthening China’s financial security and reducing its geo-economic vulnerabilities within the existing dollar-dominated global economic and financial system. Beijing wants to minimize its exposure to a potential dollar liquidity crunch and ensure its continued access to global capital markets even during times of geopolitical crisis.

No Chinese leaders have publicly expressed an intention to dethrone the dollar despite escalating geopolitical and trade tensions between the U.S. and China beginning in 2018. However, as those tensions persist, Chinese financial regulators and scholars have explicitly expressed concerns about Beijing’s vulnerabilities and urged government officials to step up efforts to protect the financial system…

…Since President Xi Jinping came to power in 2013, he has repeatedly emphasized worst-case scenario thinking to “prevent macro-risks that may delay or interrupt the process of the great rejuvenation of the Chinese nation.”

From Xi’s vantage point, China’s state-owned financial institutions and enterprises must inoculate themselves against potential international sanctions in the event of a military conflict with the West over Taiwan. That concern has only grown more urgent after China witnessed the collective sanctions imposed by the West on Russian entities and individuals to punish President Vladimir Putin for his war against Ukraine.

The West's decision to freeze Russian foreign exchange reserves has caused particular consternation in Chinese policy circles. Chinese economist Yu Yongding described such a move as "a blatant breach of…trust" and proof of the United States' "willingness to stop playing by the rules."…

…At the September 2022 SCO Summit, Xi explicitly proposed expanding the use of local currencies in trade settlement to promote regional integration, strengthening the development of local-currency cross-border payment and settlement systems and promoting the establishment of an SCO development bank to help shepherd such changes. SCO members agreed on a “roadmap” to accomplish these goals.

In a December 2022 address to the China-Gulf Cooperation Council (GCC) Summit, Xi emphasized his hope that China and members of the GCC should increase the use of renminbi for oil and natural gas trading and settlement through the Shanghai Petroleum and Natural Gas Exchange (SHPGX) in the next three to five years.

Since Xi’s speech, Chinese national oil and gas companies have accelerated initiatives to use the renminbi, instead of the U.S. dollar, in their international fossil fuels transactions through SHPGX. In March 2023, an important step towards the de-dollarization of energy trading occurred when China National Offshore Oil Corporation — known as CNOOC, China’s largest offshore oil and gas field operator — used the renminbi to complete the transaction of importing 65,000 metric tons of liquefied natural gas (LNG) from TotalEnergies SE, a French multinational oil and gas company, through SHPGX. The LNG was produced in the United Arab Emirates, a member of the GCC, carried by a Liberian-flagged LNG tanker Mraweh, and finished unloading in May at the CNOOC Guangdong Dapeng LNG receiving station.

This transaction was the world's first cross-border LNG trade settled using the renminbi. Since then, CNOOC has executed more renminbi-settled transactions through SHPGX. In October, PetroChina, the largest oil and gas producer and distributor in China, settled a purchase of one million barrels of crude oil using the digital renminbi through SHPGX, marking the first cross-border oil transaction using the country's central bank digital currency…

…Among SCO members, China has since signed bilateral currency swap agreements with Uzbekistan, Kazakhstan, Russia, Tajikistan and Pakistan. China also now has swap agreements with SCO observer and dialogue partner countries Mongolia, Turkey and Armenia; last March, Saudi Arabia committed to joining as a dialogue partner and a full member in the near future. While China doesn’t have an agreement with Kyrgyzstan, which is in the SCO, Kyrgyzstan’s national bank signed a letter of intent in September 2015, stating it aims to work with the PBoC toward establishing a bilateral currency swap.

China’s support for the expansion of SCO and BRICS over the last two years to include major commodities-exporting countries like Iran, Saudi Arabia, the United Arab Emirates, among others, suggests it is eyeing new opportunities to accelerate renminbi use in commodities trading.

The expansion has also given SCO and BRICS added significance as political forces in the shaping of commodity markets. SCO members include major hydrocarbon and minerals exporters in Central Asia like Kazakhstan and Uzbekistan, Russia and its newest member as of last year, Iran.

SCO also includes major commodities importers like China and India. In this context, as a non-Western group of countries, SCO potentially represents a potent coalition of exporters and importers of commodities centered around using the renminbi to finance the entire commodities lifecycle from production to trade to consumption…

…Chinese economists have argued that the ultimate goal of renminbi internationalization should be to have central banks and major international financial institutions worldwide willingly hold large amounts of renminbi for international transactions so that China’s currency can become an international reserve currency alongside the U.S. dollar and the euro.

Since then, the Chinese government has put resources into developing a renminbi-based financial infrastructure for cross-border settlement. In 2015, it launched the Cross-Border Interbank Payment System (CIPS) to improve the convenience of using the renminbi in international transactions by providing onshore renminbi clearance and settlement services.

CIPS allows global banks to clear cross-border renminbi transactions onshore instead of through offshore renminbi clearing banks, providing a one-stop alternative to the combination of the SWIFT system — a secure messaging system used by major banks to send financial information to one another — and the New York-based Clearing House Interbank Payments System.

However, CIPS is not a complete departure from SWIFT and still uses SWIFT’s standards to connect with the global system. It has adopted the ISO 20022 international payments messaging standard to be interoperable with other payment systems as well as with correspondent banks around the world.

By adopting existing cross-border messaging standards, China aims to make CIPS a critical piece of the world’s existing financial infrastructure to promote international use of the renminbi. By 2023, CIPS’s annual business transaction volume reached 123 trillion renminbi (roughly $17.3 trillion), according to data on the CIPS website. CIPS now has 139 direct participants and 1,345 indirect participants worldwide, most of which are foreign branches of Chinese banks.

The Chinese government has also used subtle but strategic initiatives to increase the global appeal of its currency and deepen the market depth of renminbi-denominated assets. Despite hesitations around liberalizing China’s capital account to allow capital to move freely in and out of the country, Chinese authorities have worked to broaden international acceptance of renminbi bonds as collateral.

In March 2021, the International Swaps and Derivatives Association (ISDA), a New York-based group composed primarily of the world’s largest banks, together with the China Central Depository and Clearing Corporation, the Beijing-based central depository for all Chinese government bonds, released a whitepaper detailing the usage of Chinese government bonds as an initial margin in derivatives contracts.

This past September, the Hong Kong Exchanges and Clearing (HKEX) and London Stock Exchange started to study the use of Chinese government bonds as eligible collateral for derivatives contracts as a way to reduce Asia’s heavy reliance on cash for margins on derivatives trades. Chinese institutions have also teamed up with leading resource-rich economies to make renminbi-denominated assets more attractive for international investors…

…China’s promotion of an alternative financial system is not about cheering on the demise of the U.S. dollar, but rather about creating an alternative financial system without a dominant currency in which the renminbi is accepted without bias. China has a strong incentive to prevent the dollar’s collapse because it would likely be the largest financial loser should the dollar depreciate. The majority of China’s over $3 trillion in foreign exchange reserves are invested in U.S. bonds and the lion’s share of Chinese sovereign fund portfolios are tied to dollar-based Western markets.

2. TIP602: Same As Ever w/ Morgan Housel – Clay Finck and Morgan Housel

[00:09:27] Clay Finck: He states, risk is what's left over after you've thought of everything. And I just absolutely love this chapter. It's: everyone wants to know what's going to happen. What's the stock market going to do? Interest rates, the Fed. You state the biggest risk and the most important news story of the next 10 years will be something nobody's talking about today.

[00:09:45] Clay Finck: No matter what year you’re reading this book, that truth will remain.

[00:09:49] Morgan Housel: Yeah one way I think about this is I wrote Psychology of Money, my first book. I wrote most of it in late 2019. So obviously that was weeks or months from COVID completely throwing our life upside down, everybody’s life upside down.

[00:10:02] Morgan Housel: And I and everybody else had no clue about it. We were completely oblivious to what was staring us in the face at that point. And I think you can ask, what was the biggest news story globally of 2023, of last year? I think most people would say it was Israel-Hamas, which is another example.

[00:10:17] Morgan Housel: If you go to January of 2023, nobody was talking about that. No one was putting that on their radar. Even the day before it happened, virtually no one was talking about it, thinking about it, forecasting it. So it’s always been like that. You can say that for this year. The biggest news story of 2024 is something that you and I are not talking about today that we cannot see coming.

[00:10:35] Morgan Housel: Someone, I just saw this on Twitter a couple hours ago. I thought it was really good. I'm paraphrasing it, but it was like, if you are making a decision tree or a list of probabilities and you say, there's a 20 percent chance of this happening and a 30 percent chance of that happening, and if you're just going through probabilities like that, it seems like a smart thing to do.

[00:10:51] Morgan Housel: But if all of your probabilities add up to a hundred, then you're doing it wrong. Because what you are implicitly saying is that you have captured every potential outcome that there's going to be. So I think the best you can do in any of these is that the known probabilities you can think of should add up to 80 or something like that.

[00:11:07] Morgan Housel: Maybe it's 90. You always have to leave a percentage chance that something could happen that I cannot even fathom. No matter how creative I try to get, there can be a risk out there that I cannot even envision. And of course you should do that because that's how it's always been.
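To make the arithmetic of Housel's point concrete, here is a minimal sketch in Python (the scenario names and probabilities are hypothetical, purely for illustration) of a decision tree that deliberately reserves probability mass for outcomes nobody has imagined yet:

```python
# A minimal sketch of the idea above: list the scenarios you can imagine,
# but deliberately stop short of 100% so the tree admits unknown unknowns.
# Scenario names and numbers are hypothetical, for illustration only.
known_scenarios = {
    "recession": 0.20,
    "soft_landing": 0.35,
    "strong_growth": 0.25,
}

known_total = sum(known_scenarios.values())   # roughly 0.80
unknown_unknowns = 1.0 - known_total          # roughly 0.20 reserved for surprises

# If the named scenarios sum to 1.0, the tree implicitly claims omniscience.
assert unknown_unknowns > 0, "Leave room for the risk you cannot envision"

for name, p in known_scenarios.items():
    print(f"{name}: {p:.0%}")
print(f"something nobody is talking about today: {unknown_unknowns:.0%}")
```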

[00:11:21] Morgan Housel: The biggest news stories of modern times are things like the Great Depression, Pearl Harbor, World War II, 9/11, Lehman Brothers going bankrupt, COVID of course. And the common denominator of all of those is that you could not have seen them coming, at least in their specific nature of how they arrived and what they did, until they happened.

[00:11:37] Morgan Housel: And so it’s always going to be like that. It’s very uncomfortable to come to terms with that, to come to terms with how uncertain and unpredictable the world can be. But I think if you study history, you can’t come to any other conclusion.

[00:11:48] Clay Finck: What I find so fascinating about this is the biggest sort of disasters are those that no one expects, no one forecasted, no one projected.

[00:12:13] Clay Finck: You mentioned in your book that it seems that zero economists predicted the Great Depression. It's no wonder it was so bad. No one was prepared for it. No one expected it to come. And COVID is very similar. And you've lived through the great financial crisis, likely as an investor at that time.

[00:12:13] Clay Finck: Did it feel like no one saw it coming and it was just a total disaster, much worse than anyone could have ever imagined?

[00:12:20] Morgan Housel: See, that's a little bit different. That's different than 9/11 because as early as 2003, there were people who were ringing alarm bells about how fragile the economy was and overleverage and whatnot.

[00:12:32] Morgan Housel: So it's not that nobody saw it coming. That's not quite true. But for a lot of the people who quote unquote saw it coming, when it did happen, it happened for reasons that they could not fathom. So for example, a lot of people, I won't name names, but in 2005, '06, '07, they said a giant recession is coming, and it's going to lead to hyperinflation, and interest rates are going to go to double digits.

[00:12:53] Morgan Housel: The exact opposite happened. So what do you do in that situation, where they saw trouble coming, but it happened for the exact opposite reason than the one they envisioned? There's all these weird nuances there where it's not black and white. There's also the fact that what really sent the financial crisis into hyperdrive was Lehman Brothers going bankrupt.

[00:13:09] Morgan Housel: But there’s all these alternative histories of a lot of people forget that as Lehman Brothers is going down, Barclays was like hours away from buying it. And that deal fell through and Lehman Brothers went bankrupt. But there’s this alternative history of what if Barclays had bought Lehman Brothers and we escaped all of that.

[00:13:23] Morgan Housel: And the economy just zoomed to recovery after that. There's all these different possibilities. And I think the takeaway from that is you couldn't have seen it coming. Even if you saw trouble brewing in the financial crisis, nobody in their right mind could have known exactly how it was going to play out.

[00:13:35] Morgan Housel: And I think that's true not only during, but after the financial crisis. It was so common, if you were an investor in 2009, to say, look, stocks are still overvalued. We're in the quote unquote new normal of low growth. That was a phrase that was always thrown around. The CAPE ratio is just still too high.

[00:13:52] Morgan Housel: Expect lower returns. That was what virtually everybody was saying. I’m not going to say everybody. Of course, there are some people who saw it differently, but that was the very common narrative. And it made sense. If people were saying that you’re like, yeah, that makes a lot of sense. But what happened?

[00:14:04] Morgan Housel: The stock market tripled over the next three years. It ended up being like the best three-year period to be an investor in modern times. And so that's a very common story throughout history too: the narrative at the moment, the one that makes sense, the one the majority of people cling to, in hindsight looks ridiculous.

[00:14:19] Morgan Housel: And so we see that a lot across history, and it'll be like that going forward, whenever the next recession is, of course…

…[00:39:37] Clay Finck: I also wanted to tie in here the emotional side of investing. The Buffett quote is be greedy when others are fearful and fearful when others are greedy. But you talk about in your book how this is much easier

[00:39:51] Clay Finck: said than done. And it's just so hard to mentally fast-forward ourselves into that type of situation where stocks have fallen and it's the time to buy. And another problem is that when stocks fall, there's usually a good reason why they're falling. And I go back to March 2020, and I had friends calling me at work, telling me how much money they're making by shorting the market.

[00:40:10] Clay Finck: And yeah, this coronavirus going around, and the emotions just flood in. And it's just so hard to act rationally when those emotions are at play. You write in your book, hard times make people do and think things they'd never imagine when things are calm. And March 2020 was the complete opposite of calm.

[00:40:29] Clay Finck: Talk to us about how our views and goals can quickly change when our environment’s changing.

[00:40:35] Morgan Housel: I think if, today, right now, when the economy and the stock market are pretty strong and prosperous, I said, Clay, how would you feel if the market fell 30%? Most people would say, I'd view that as an opportunity.

[00:40:45] Morgan Housel: That’d be great. The stocks that I love would be cheaper. I’d be a buying opportunity. That’d be great. Okay. And for some people that really is the case. But then if I said, Hey, Clay, the market falls 30 percent because there’s a pandemic that might kill you and your family and your kids school to shut down and you have to work from home and the government’s a mess, it’s going to run a 6 trillion deficit to try to figure this out.

[00:41:04] Morgan Housel: How do you feel in that situation? Most people will say, oh, that world, once they experience it, feels very different. Or if I said, hey, the market fell 30 percent because there was a terrorist attack like 9/11, and all the experts think that was just scratching the surface of what's to come.

[00:41:19] Morgan Housel: Do you feel bullish now? A lot of people will say no, they don't. So once you add in the context of why the market fell, most people will realize that it's much easier to quote Buffett than it is to actually be somebody like Buffett. I experienced this myself with some of the smartest people I knew in March and April of 2020.

[00:41:35] Morgan Housel: I remember two specific conversations. One was somebody who said, hey, look, there's about $2 trillion of capital in the entire banking industry. You do not need to be creative to imagine how all of that's going to be wiped out. The entire capital of the entire banking industry is going to be wiped out.

[00:41:48] Morgan Housel: And I remember thinking that and being like, yeah, no, you don’t. If the entire economy is shut down for three months, all of that capital is gone. The entire banking sector is insolvent. Obviously that did not happen. But when I heard that, I was like, no, that actually makes sense. I don’t know if that’s the base case scenario, but that’s not far fetched.

[00:42:03] Morgan Housel: I also remember during that period of COVID, when it was like, no one's really going to be making their mortgage payments when everyone is on lockdown, people were like, look, the entire non-bank mortgage lending sector is all going to collapse. And that's 80 percent of the originations market.

[00:42:17] Morgan Housel: 80 percent of mortgage originations are going to be out of business in two weeks. And I remember piecing that together and being like, that makes sense too. That didn't happen either. But during this period, which, in hindsight and even at the time, you could have seen was the opportunity of a lifetime.

[00:42:30] Morgan Housel: The market fell 50 percent in a short period of time. This is gonna be a great opportunity. But when you add in the context, both the health consequences and the potential economic consequences, it's a much different situation. I would even say, to finish this up, and this is maybe the most important part: I think it was in early February 2020, Warren Buffett went on CNBC and they're talking about, hey, there's all these rumbles about a virus.

[00:42:51] Morgan Housel: Like, where do you get it? The market's starting to fall. What's going on? And Buffett said, and I'm paraphrasing here, this is not a direct quote, he said, I don't know how I'm going to invest in the next month, but I guarantee you I'm not going to be selling. That's what he said.

[00:43:03] Morgan Housel: Two weeks later, he dumped every airline stock that he owned. So even in this situation, somebody like Buffett, the originator of the phrase be greedy when others are fearful, when he added in the context of what was happening to the airline industry during the lockdown, he determined, and I think in hindsight it was probably the right decision, to sell those stocks.

[00:43:19] Morgan Housel: And some people have pointed out too that part of the reason that he sold them is that the government could not have bailed out the airlines if he was the largest shareholder, so people asked him to sell those stocks. It is a complicated thing, but once you add in the context of why the market's falling, most people realize that their risk tolerance is actually much less than they thought.

3. What I Learned When I Stopped Watching the Stock Market – Jason Zweig

I’m back at my regular post at The Wall Street Journal after being away on book leave. That long hiatus disengaged me from the daily hubbub of markets so I could frame investing ideas in a longer historical and broader psychological perspective…

…When my last regular column ran last May 26, the S&P 500 was already up 10.3% in 2023—right in line with the long-term average annual return of U.S. stocks. “Let’s just call it a year right here,” I recall muttering to myself.

That was the last thing I remember. From that day to this week, I tuned out the daily noise of fluctuations in stocks, bonds, commodities and economic indicators…

…It’s a good thing the market gods ignored me, as they always do. Even though I thought a 10.3% return in five months was plenty for an entire year, the S&P 500 finished 2023 up more than 26%, including dividends.

When you don’t watch the market every day, you can finally see with unquestionable clarity that what you would have expected to happen didn’t. The unexpected did.

Had you told me war would break out in the Middle East in October and last for months, I would have been sad but unsurprised. Had you added that crude oil would—after a fleeting surge—finish 2023 at a lower price than the day I left, I would have been amazed…

…You probably can’t disappear for seven months, but you can pretend you did. Hal Hershfield, a psychologist at the University of California, Los Angeles and author of “Your Future Self: How to Make Tomorrow Better Today,” urges investors to “use the tools of mental time travel to escape the tyranny of the present.”

He means that envisioning how you will feel about your actions tomorrow can help prevent you from overreacting today…

…For general templates of such letters, see the “Future Self Tool” at consumerfinance.gov.

Research suggests this technique can help you avoid making decisions you might later regret—and can reduce the anxiety stirred up by negative news.

I’ve long thought financial advisers should encourage this approach to help clients make deliberate and durable decisions. Now I think it’s worth trying on yourself, too.

4. An Interview with Arm CEO Rene Haas – Ben Thompson and Rene Haas

RH: Yeah, two things are key. It's very, very small, it takes longer to fabricate. So what may have taken you 12 weeks to put a product into production now may take you 26 weeks. That's a big, big jump in terms of the lead time required.

Then you look at these SoCs that are using Arm, back in the day, if we were putting 12 to 16 CPUs into an SoC, that was considered a lot. You now look at some of the recent chips being announced, just look at the Microsoft Cobalt, their recent CPU that they announced using Arm, 128 CPU cores into that SoC. That is a lot of work for someone building an SoC to figure out how those 128 CPUs are going to work together. What’s the cache coherent network look like? What does the interconnect look like? What does the mesh look like?

So what we felt was if we can provide more of that solution, i.e., stitching together all of the system IP, the GPU, the CPU, an NPU, anything around the processor fabric, we could fundamentally allow the customers to get to market much, much faster.

So we started this initiative towards what we call compute subsystems, which was really about developing the overall platform, which not only helps us in terms of getting an SoC to market faster, but it also allows us to work more quickly in terms of the software ecosystem. We can start to think about what gets product in the hands of developers sooner, or what gets the products in the hands of people who are developing the application software, people who are doing the OS work.

So for a myriad of reasons, it just made a ton of sense for us to go up and do that and we've started that with our hyperscalers, with our cloud compute, but we see it applicable to almost all the markets, whether it's a cell phone, whether it is a laptop, whether it is an automotive ADAS [Advanced Driver-Assistance System] system. The same rules apply: these are complex compute subsystems. The chips take a long time to build. If you can shave off any amount of development time that helps the people get the chips out faster, that's huge value. What we are seeing is in some cases where it may have taken two years to get a chip to tape out, we've cut that in half. One customer came back and said, “Look, you've saved us 80 man-years in terms of effort.” So across the board, we've seen pretty strong validation that this is the right thing to do.

Just to get to the nuts and bolts a little bit, you mentioned the savings in terms of design time and things getting smaller and how long it takes to fab a chip. Is this also just a matter of, there’s a lot of reports about interference and stuff like that, particularly when you’re getting even down to 3 nanometers or 2 nanometers in particular. Is that a real driver as well? Would this opportunity be presenting itself absent the real challenges that are coming along in terms of smaller and smaller size chips and the increased design challenges that are coming with that, particularly around interference?

RH: You follow technology for a living, so you know this well. Like everything with these type of things, a number of things need to come together at once. When you have these long cycle times to build the chips, the complexity in closing timing loops, you’re trying to drive the maximum power efficiency, you’re trying to maximize the ultimate work you’re doing with the libraries. Again, with these subsystems, we will not only handle everything in terms of validation and verification, but we’ll do the tuning for the process. So if folks want to make sure they can get that ultimate last mile of performance, that’s just a lot of work that needs to be done that if Arm is doing it with a platform that we control — because it’s our IP, right?

Right.

RH: At the end of the day, it is around the compute subsystem and computer architecture that we’ve delivered, it’s highly beneficial. So again, in the old days you could kind of throw all this stuff over the wall and people could just pull it together and make it work, but the world has changed a lot in terms of just the complexity of these chips, and one thing that’s not relenting is people want to get products out fast. The markets will move really, really quickly and I think actually we’re seeing them moving even faster now. When you look what’s going on with generative AI and everything relative to these multimodal models and large language models, you have so many moving parts relative to what it takes to develop a product. It’s really, really critical to maximize on efficiency of time to market…

I’m curious because on one respect, a lot of consolidation at one part of the value chain would potentially increase the opportunity for competition in others. You could see how consolidation would play well to say, RISC-V prospects in that regard, open source is as modular and open as you could get. On the other hand, is that sort of drive in a value chain of consolidation at one point, driving modulation the other, is that just overcome by the complexity involved, such that there’s rooms for multiple highly consolidated aspects of this chain? TSMC is pretty centralized as far as things go, and you just feel optimistic that at this point in time, the ecosystem from a software perspective, Arm is x86, 15 years ago on the PC side, and even if you theoretically want to do something different, there’s so much software to build, your lead is just going to be much larger.

RH: I think so, but it's a very, very good question because it's a little difficult I think to look backwards to kind of predict the future. One of the things we're seeing around future innovation, and it's really a gate to innovation, is this massive capital investment required, and it's not just in building a chip, not just in building a fab, it's who has enough money to go off and buy new ASML EUV machines.

Let’s take a look, for example, at foundation models and everything going on with generative AI and training. Right now, Nvidia is an amazing position because of just the pure access to GPU technology and how expensive it is now, how scarce it is. That in and of itself, starts to lead to an area of, “Well, you’ve got all kinds of interesting open source models and people in the open source community working with things like Llama. But if people can’t get access to the GPUs and actually training, then who wins?” So, right now you’ve got the Big King.

We saw an excellent example of that in the last few months. He who controls the GPUs controls the world, at least for now.

RH: Correct. So, then when you start to think about — I like to think that Arm is an amazing place because any one of these application areas, we’ve got a huge, huge installed software base and we’ve got a very, very powerful position on power efficiency. So, when I think about where the puck is going relative to an alternative architecture, let’s say, you’ve got to look at either, “Do I have a 10x advantage in performance or a 10x advantage in terms of cost?”. And right now, I think in the areas where Arm is really good at, people would have to look at it really hard and say, “Is it worth the investment to go port everything I’ve got to an alternative architecture? What is the ultimate benefit that I get to the application space?”.

I think what’s really fascinating about everything going on with generative AI right now is I think you’re just seeing huge amount of resources coming into all kinds of development around training and inference, that will drive the growth here, so I think that’s actually where the growth is going to come. I think Arm is in a great place there. Obviously I’m biased but I think when you think about everything that’s going on with generative AI training, all those inference workloads are pretty good for the CPU, and history has sort of shown us that over time, as you add, and we saw this, whether it’s floating point heading into the CPU, or vector extensions, the CPU start to add more and more of the base functionality it allows with some of the workloads and I think you’ll see that in this space…

…Fast forward to where I am now, I don’t spend a lot of time when I talk to people inside of engineering or product groups about, “Hey, who’s catching us from behind?”, I try to think far more about where the world’s going to be in five to ten years. If you think about where the world’s going to be in five to ten years and you focus as much as you can, you’re not going to get it specifically right, obviously, but you want to be directionally investing in that area so when things land in your space, you’re going to be in a good spot.

Take a case in point: predicting what the mobile phone is going to look like in 2034, ten years from now, and trying to make sure I do everything defensively to make sure that we're in a great position, is kind of nutty, because if you go back to 2008 when the smartphone was invented, folks who were trying to think about protecting what the future phone looked like would have been out of position. What I really focus on, Ben, is just where are things going and where do we need to invest?

Again, I know this AI drum gets beaten a lot, and people at times think, “Oh my God, how many times do you have to hear the word ‘AI'?” — obviously, on one level AI is not new; anything that was going on relative to voice recognition or data translation, obviously that was all AI. I think AGI and everything around generative AI that can think and reason, that's a pretty compelling place, and whether that takes place in five years, ten years, fifteen years, I don't think anyone can argue that an investment in that space isn't going to provide huge benefit down the road. Arm is a compute platform, and I want to be sure that we've got everything correct, whether from an infrastructure standpoint, an instruction set architecture standpoint, or everything around the subsystem, to be able to capture that.

Do you feel pretty confident? I know you're trying to tie all of your stuff together to a greater extent, but companies could bring their own neural processor. Google obviously does that, at a small scale in terms of numbers, not in terms of the importance of AI, obviously. But is it core to your thesis that, as opposed to you needing to bring up a super competitive NPU, this is all going to be built into the CPU, to your point about the extensions and floating point, which I think is a great analogy? So that even if the mobile phone market doesn't grow, just because it's limited to the number of humans on earth, that is still going to be a significant opportunity? Or do you think you have to bring up additional separate IP? Or is this idea of it all being separate meaningless in the long run?

RH: It’s not going to be a one-size-fits-all kind of situation. Today, there’s a lot of investment going on training these very, very large models on highly networked GPUs. Even when you start talking about inference in the cloud, what matters is compute, but less around the interconnect between all of those systems, which is why CPUs over time in the cloud may find themselves to be very, very good solutions without having a GPU necessarily connected to it.You’re going to have a CPU in the cloud no matter what, so at some point it probably makes sense to consolidate.

RH: While Grace Hopper is a fantastic design from Nvidia, there’s a lot of people who I know are asking for, “Just give me Grace and don’t give me the Hopper when I go off and run inference.”…

…But when you see announcements like Microsoft Cobalt — after Graviton was announced, we had a lot of folks saying, “Well, they did it for their own reasons, and there's not going to be much level of scale.” I would say we continue to see these very, very significant product announcements from companies moving to the Arm architecture, and I think you'll see more and more of those over the next 12 to 18 months. And really, those are indicators. Whether it's in the automotive space or the AI space, look for things, and I can't tease this out too much, but my example of floating point instructions moving into the CPU, watch for those things on the AI front because that'll tell you the direction of travel that says, “Yeah, this is moving that way.” I can assure you that we're not going to stop doing these subsystems, and you're going to see more and more announcements coming out on those.

5. A beginner’s guide to accounting fraud (and how to get away with it): Part IV – Leo Perry

In September 2011 Quindell (at the time trading as Quindell Portfolio PLC) acquired a business called Quindell Solutions Limited (QSL).

QSL had previously been a subsidiary of Quindell Limited, which in turn reverse merged into Mission Capital in May 2011, to form the then listed business called Quindell. All clear?

In 2009 Quindell sold QSL to its CEO Rob Terry for a pound. Companies House filings show it had a slightly negative book value at the end of 2009 and again in 2010; sales, costs and cashflow were all nil. So QSL was an empty shell.

In 2011 Quindell re-acquired QSL for two hundred and fifty grand in cash (£251,000 to be precise). At the time it was wholly owned by Quob Park Limited, which was in turn wholly owned by Rob Terry. Quob Park’s previous name was Quindell Portfolio Limited.

If you’re still following, Rob and his wife Louise Terry were both Directors of Quindell (the PLC) and of QLS / Quob Park when the acquisition took place. But it wasn’t disclosed as a related party transaction. I mean this was AIM after all.

So it looks a lot like Quindell made a large undisclosed payment to its CEO, for an empty shell company that it had sold to the same CEO two and a half years earlier – for a pound. Perhaps, surely, there was some great innovation at QSL in the meantime that justified the cost. But there didn't have to be. And in our case there won't be…

…I’ve seen a few listed companies stretching the limits of credulity between actual and maintenance capex. But none ever topped a marketing firm I was short about 15 years ago. Beginning in 2005, a little before it listed in London (yes, on AIM) the company went on an investment binge. Capitalised spending rose from only a few percent of sales to over half, and stayed around 20% for the next 5 years.

There are a few ways an investor can get some insight into that number, without even looking to see what it's spent on. One is to compare it with similar companies. If you looked at the kind of business this management wanted you to believe they were competing with – mobile ad networks like, say, Millennial Media – you were in for a surprise. Millennial spent about 3% of sales on capex.

But you didn’t even need to get that specific. A fifth of sales going into capex is a lot for almost any established business. So another way of making sense of it is to ask what kind of operation needs that level of investment? Before you go and search for the answer, have a guess at what the most capital intensive companies are (I went for transport infrastructure, things like toll roads and airports). If you did screen for companies that had capex over 20% of sales back then you ended up with a pretty short list (leaving out start-ups with little or no revenue, which were mostly biotechs and junior miners). The list was more utilities and telcos than transport as it turned out, but there were a few airports (and some airlines as well). Shockingly no advertising agencies made the cut.

The best clue that the investment was bogus, though, was what the company stated it was spending it on. Software, sure. But not code they developed themselves, this was programming bought off the shelf. When I asked management to break it down the best example they could give was Microsoft licenses!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 January 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 28 January 2024:

1. Mark Dow – A Behavioural Macro View – Eric Golden and Mark Dow

Eric: [00:08:06] I appreciate you making the arguments here. On the notion of narratives, one that I want to get into is the Fed. Since 2008, it feels like the Fed has been a topic of discussion. It seems to come and go and be more important to people. And it seems to be one of the central narratives that people go to. I'm curious on your view. You've talked in the past about how the Fed might not have as much control as people give it credit for, but it sure gets a lot of the headlines as if it does have complete control over which way the markets go every day.

Mark: [00:08:37] Yes. Well, first thing, it helps with content production. There are a lot of — CNBC and Bloomberg and a lot of people who produce content for a living, they need to say something. So the Fed is kind of the explanation of last resort, sometimes the first resort, but you always can point to something the Fed did and make a plausible argument that that’s what’s driving things. The second is when people are wrong, it’s much easier to say, no, listen, I would have been right in my bearish call, but the Fed cheated. They printed money or they did this.

Even now over the course of this year, as the Fed has shrunk its balance sheet and raised interest rates by 500 basis points, people are trying to argue, well, this component of the balance sheet is changing, whether it's the Treasury General Account or the reverse repo facility or whatever it happens to be; they want to say this is liquidity-driven because in a sense that exculpates their fundamental analysis that didn't play out.

So it creates a lot of emphasis on it. But the big story this year, and the one I've been talking about a lot on Behavioral Macro, my subscription Twitter feed, is that monetary policy didn't turn out to be as powerful as everyone thought. And that's where you made your money.

And there are three main reasons why monetary policy hasn’t been as powerful as people thought. There’s the behavioral reason, the secular reason and the cyclical reason. Just from a cyclical standpoint, we kind of know the story by now. We didn’t a year ago. And I got a lot of pushback on Twitter when I was talking about it.

But now I think everyone has recognized that the initial conditions matter a lot. The quality of the balance sheets in the household sector and the corporate sector, and in particular the financial sector, was in much better shape than it was going into the GFC, which is kind of the most recent memory people have, right?

So people have kind of been waiting for us to cyclically reproduce that cascading, deleveraging process that we had back then because that’s our PTSD, that’s in our memory. But the initial conditions were a lot better, and therefore, it didn’t happen.

And recessions tend to be much more about speed than about level. So basically, recessions happen when people get too far out over their skis taking risk because they got overly optimistic. And then for some reason, people say, oh wait, I'm really out over my skis now, and things may not be playing out exactly as I thought, and they need to retrench.

So if they’re really out over their skis, they have to cut back on their investments rapidly, they have to fire people rapidly, households have to tighten their budgets, financial entities have to sell assets. All these things happen at once. And when they happen at once, it becomes self-reinforcing.

So if you go from unemployment of three to unemployment of six in two months, it panics people and leads to more layoffs at a faster rate. People go further and they don't feel like they have the time or the luxury. And it's the same thing when they're reducing the leverage on their balance sheets, or households are cutting back on their budgets; that happens fast.

And then demand gets cut back, and then more people need to be laid off, and it feeds on itself. It's a self-reinforcing process until it burns out. If you go from 3% unemployment to 6% unemployment over two years, then it's a more orderly process.

And you don’t get the fire sales and you don’t get the panics. That self-reinforcing feedback loop is a lot weaker, a lot less likely to get into recession. So that’s kind of how I look at this. And since our initial conditions were pretty strong, I didn’t think getting into one of those really aggressive feedback loops was very likely.

The secular reason is the Fed controls a lot less of money supply, if you want to call it that, than it used to. Over the past 30 years, we’ve had global financialization, global financial deepening, however you want to refer to it. And way back in the day, basically, the monetary system was the Fed and banks.

So the banks would issue credit and give you a deposit on the other side. So they would issue the money in the form of deposits into existence. That was the primary form of money creation, not the Fed, but the banks and the Fed supervised this process. They wanted to make sure that the banks were staying within regulatory parameters. They also had other objectives that they needed to fill.

What’s happened over the past 30 years, 40 years is we’ve had an explosion of things like repo and euro dollars. If you want to call them chattel banks or what have you, these guys also create money. When you do a repo, you’re basically liquefying an asset. You’re taking the asset on your books and you’re making it liquid by borrowing against it, and you take that money, and you spend it. That’s money creation.

Eurodollars are the same thing. The Fed does not control these processes, not nearly as much as it controlled the system back when it was just kind of the Fed and banks. So a lot more of the system is beyond the reach of the Fed. And if you've followed me at all on Twitter, you know that price incentives using interest rates are a really, really blunt tool, right? It doesn't always work that well.

And a great example of this is that we had the biggest, or at least the highest, valuations in the stock market in my lifetime, probably ever, in 1999 and 2000 in the dot-com bubble, when the Fed funds rate was 5% and the 10-year was 7%, and we couldn't even spell QE.

And we had the nastiest vintages of mortgages extended in 2005, 2006, 2007, when the Fed funds rate was also around 5%, and we didn't have QE. So it's not as interest rate sensitive as people think. Because if you think you're going to make 200% in a year or 300% over five years, whatever it happens to be, the difference between borrowing at 3% and 6% is the same number for you. And that's what happens. And also, when people are very fearful they become price insensitive too.

So just raising the interest rate — unless you go to really, really high levels, obviously, like Paul Volcker did — within reasonable levels, it just doesn't turn the dial that much on lending. Lehman had 33 turns of leverage when the Fed funds rate was at 5%. From a secular standpoint, the Fed just doesn't control the money creation process nearly as much as it used to.

And then behaviorally, we’re all kind of conditioned to think that lower interest rates and higher interest rates have a really big effect. So we were kind of bracing for it. So everyone saw the Fed raised rates aggressively and that led people to say, okay, well, recession is coming for sure.

And we kind of had a pre-recession, where everybody saw the rates going up, they expected a recession was going to come, but it wasn't coming, but they started little by little paring back, investing a little bit less, hiring a little bit less. We saw that the back orders for labor declined, all that kind of stuff.

So ultimately, that means people are less out over their skis as the process plays out. So you're kind of deleveraging in a gradual sense. So we were kind of braced for it because people believed deeply that that's what would happen, and then it didn't happen.

Eric: [00:15:24] The way you think about the world, it feels clear. I think that for other people, it kind of breaks their brain a little bit when you say, in your example on monetary policy, that with unemployment going from 3% to 6%, the difference is whether it happens fast or slow.

If you had told people in advance — and I think this is what's always so funny about how hard markets are, or how humbling they are — the Fed is going to raise rates, this is what's going to happen, many people would have assumed this would be disastrous. It would cause a recession. The housing market would break. Like all these bad things would happen. And here we are coming into the end of '23, and I think every asset class is up in the face of that. So that kind of breaks people's brains. Why do we get that so wrong?

Mark: [00:16:06] Well, I think these are the reasons I was just talking about: people overestimate the power of monetary policy. And they thought the inflation that we had was much more monetary, when it wasn't monetary at all. It was obviously entirely COVID. It was fiscal. But people have this mindset we were trained in, the Milton Friedman kind of mindset. People think in terms of high-powered money and the loanable funds model, and none of it really works that way.

Like I said, banks issue money into circulation via deposits. That's how the bulk of money gets created. In exceptional circumstances, the Fed can expand and contract its balance sheet because there's a demand for liquidity. So whenever the system wants to deleverage, banks need a lot of dollars for settlement. They need to settle with each other.

So the demand goes up a lot like back in the day when we were more agrarian economy and the Fed around harvest time had to produce big boxes of money and send them out to the hinterland so that transactions could get done. The Fed provides the elasticity in the monetary system for rapid expansion and contraction of demand for cash. That’s kind of their role and the banks are supposed to issue the dollars into circulation via credit.

Most people don’t get that. Once you get it, once you understand endogenous credit, then things make a lot more sense to you, but the wrong model has been drilled into people’s heads so thoroughly and it makes a much intuitive sense that people — even though it’s wrong, that it’s hard for people to get past it.

Eric: [00:17:26] So if people are looking at it the wrong way when you get something like quantitative easing or quantitative tightening, this idea that the Fed could impact the market and is even more powerful... The irony is that what you're saying is so not the consensus, which is why I love it, because coming out of 2008 the story was that the Fed was more powerful than it had ever been before and had this impact on the market. So if QE isn't really whatever everyone thinks it is, is that because it's really just moving money between the banks and the Fed?

Mark: [00:17:58] Yes, basically. I mean, the way the mechanics work, you can call it printing money, and I think that leads to a lazy thought process because some people kind of take it literally unless you press them to go, oh, it’s not literally printing money, but really, it’s just an asset swap.

So think about it in the simplest way possible. You have a 60-40 portfolio. Just — to make it simple, your 60% is in an S&P ETF, and your 40% is in T-bills. So you got your risk money, your 60%, and your risk-free money, your ballast as it were. So the Fed comes in and they buy all your T-bills, and they give you a deposit at the Fed, which yields roughly the same thing.

Are you going to change your 60-40 allocation because of that? Are you going to go out and buy Tesla stock with that money? Not unless something else changes in your mind and you think you need to take more risk, which is possible, but it’s a totally separate decision.

But that’s what really happens. The Fed liquefies the system and allows for settlements to take place amongst themselves. Now the way QE is supposed to work in theory, and I think a lot of the people who put it in place way back when, if they were to review what happened now, they would say, okay, it didn’t hurt, but it was a lot less powerful than we thought it might be. It’s supposed to work in two ways.

One is what they call the portfolio rebalancing effect. And that is Fed buys bonds, takes them out of circulation. The people who own those bonds, say, I should probably replace that duration in my portfolio, so maybe I’ll buy some government-backed mortgages. So same risk temperature more or less, but it’s a little bit — just a hair out the risk spectrum. And some guys might say, well, yes, I’ll buy some high-grade debt, some high-quality corporate debt, a little bit further out the risk spectrum.

But you also have to keep in mind that most of the people from whom the Fed is buying these bonds, it's not in their mandate to go out and buy equities, right? They're buying it from like PIMCO. They're buying it from Fidelity bond funds, and they're buying it from these guys who have a very clear mandate, and they don't buy equity.

So that kind of effect is not that strong, first of all. I mean it’s kind of there, but it’s indistinguishable from the natural process of people moving out the risk spectrum with time. And we can talk about that later because it’s a super important point, how risk appetite works over the course of a cycle. This marginal effect is really indistinguishable from this bigger effect, I think, of people over time moving out the risk spectrum during a cycle until we get to that point where we are too far out over their skis, and we have to bring it back.

The second channel is the idea that by taking duration out of the system, it will lower the yields on bonds and that will stimulate lending and things like that. But as I like to say, you can lead a banker to liquidity, but you can’t make it lend. You need the risk appetite for people to lend.

And it’s not even clear how much QE lowered interest rates because we know from a flow standpoint, during QE1, QE2, QE3, every time these things got rolled out and the Fed was buying bonds, the yields were going higher, and they were going higher primarily because of the placebo effect. People believed that the Fed intervening was protecting the downside on the economy.

And therefore, they said, okay, I’m selling bonds and I’m buying equities. That behavioral effect based on perceived change in economic outlook was much more powerful than the mechanistic buying from QE. And this is why I was saying back in September, I said as soon as we get a whiff of slowdown in the economy and inflation cools off, all this talk about supply and fiscal unsustainability is just going to disappear, which is exactly what happened.

So it’s not that these effects don’t matter. It’s — they get swamped by changes in demand triggered by changes in the economic outlook, our perception of growth. So from a flow perspective, it didn’t work. In fact, it worked the other way. Yields tended to go higher when the Fed did QE.

So does it work from a stock perspective? If they buy enough bonds and take them out of circulation, it drives interest rates down? Maybe a little bit. But it’s really hard to say. Look at what’s happened now. We’ve — the Fed balance sheet is down by $1.3 trillion.

So whatever people say about repos or the TGA, the Fed owns $1.3 trillion less of securities, of bonds and mortgages. And we’ve raised rates 500 basis points. And for sure, the 10-year treasury has not gone up 500 basis points. It’s gone up by a lot less than that.

So it’s hard to argue that anything other than economic expectations is the primary driver, yields further out the curve. So it was an experiment worth doing, and a lot of people don’t get this. The first QE was really about the plumbing. They were trying to make sure that the pipes worked, that markets didn’t get gummed up, that things could work smoothly. It wasn’t about, at least the first two-thirds of it, wasn’t about trying to boost economic demand or activity, that came later.

But what QE unambiguously does is in times when there’s a surge in demand for transactional balances like back in the agricultural days when they shipped out those boxes, the Fed provides the elasticity to make sure the payments can flow through the system and the dry cleaner in Cedar Rapids can make payroll. That’s kind of how it’s supposed to work.

But it’s just not very powerful when compared to the changes in economic outlook. This is why supply rarely — and people talk about bitcoin and fixed supply. What matters is demand. Demand is really what swings and supply is rarely the issue. That’s why QE and Q2 are — this is my second time through it.

So I still have the scars from 2008, telling people that QE wasn't going to cause inflation, and people looked at me like I had three eyes. And I remember I was working at a hedge fund, and we lost a client because of that, and a couple of prospects.

I remember one guy telling after having a meeting — I used to get sent to a lot of the meetings because I was good at explaining the economics and a lot of the guys I worked with were flow traders who are maybe not as articulate and kind of had intuition and good risk management, but couldn’t explain things as well to clients.

And I remember explaining the things to a particular client, and afterwards, I heard from the owner of the hedge fund, he came back and he said, “This is what he said, Mark,” he was laughing about it. He said, “Mark is really smart, but he’s just going to get you guys killed with his view on inflation,” and they ended up not investing with us largely for that reason.

But anyway, I’ve been through this a couple of times, and I just retweeted a tweet today that I sent out back in August. You know the depth of the market in August of 2022, saying, if we get to all-time highs anytime within the next 12 months or by the end of 2023, we can eliminate, for sure, the QT effect that so many people fear. And that was kind of peak fear of QT because the market was going down and a lot of people ascribed it to QT. It just doesn’t have that kind of effect.

Eric: [00:24:32] The thing that I remembered was during '08, when it was going down and being so close to the center of it all, you realized how bad it was. And the reason why — I have two parts here. One is I do remember when QE first happened, it did feel like, and maybe the Fed still has this power, that when the system seizes up, it really is the only thing that can reliquefy the system. If the Fed didn't step in, in a way, I felt like it would have been significantly worse.

But then after it happened — this is the second part of that question. I remember something like 40 of the greatest investors of all time. Because I was early in my investing career, I think I'd been in it for about three or four years. All of a sudden, we see the world collapse and then they unleashed this thing, which felt like it saved everything.

It truly felt like it worked, but nobody knew the ramifications. And the smart money — there was this Wall Street Journal article where 40 of the top hedge funds said, we're going to go into inflation because of this, because we just unleashed, like, Pandora's box. So why were you so confident at that moment?

Mark: [00:25:34] Yes. It was 2010, and I looked at the list, and there are a lot of prominent economists on there and investors and I remember Cliff Asness was on there and Jim Chanos and other names that guys on the Street would recognize. And I was confident because I knew how it worked.

And my time at the IMF, I climbed into so many different central banks and economies, I got to understand the plumbing in a way that most theoretical guys don’t and most Wall Street guys don’t. They kind of gloss over this and they said they had someone summarize Milton Friedman for them, and they think they understand monetary policy. This has been an eye-opening experience for a lot of people.

But if your balance sheet is broken, it doesn’t matter how much liquidity the Fed provides. You’re not going to lend it out, you’re going to fix your balance sheet first. That’s just common sense. And the mechanics don’t work that way anyway. That Fed doesn’t give you money and you lend it out.

Like I was saying earlier, the way it works is the bank makes a loan and then gives you a deposit and their limits are governed by the regulatory framework. They have capital requirements. They have liquidity requirements. They have leverage requirements. They have to stay within those.

But the banks are chartered to issue money, create money through deposits. That’s how it started in 1863 with the National Banking Act. The Fed came in 1913 and started to supervise the process because it became clear that the banks weren’t very good at it either. The banks aren’t going to be taking risk in creating deposits and issuing money if their balance sheets are busted. That for me was just the easiest.

And listen, everybody talks about the power of interest rates, but we had 0 interest rates, and we had ZIRP and QE for four, five years before people started taking any risk at all because you have to fix your balance sheet first. Maybe this is the right moment to talk about it, but risk appetite is driven much less by the price of money than it is by the other factors.

And the two factors really boil it down to something easy to communicate. The two factors are — I recall JPMorgan's famous quote where he says, “Nothing so undermines a man's financial judgment as seeing his neighbor get rich.” That means once your situation is okay and you see people around you making money, you say I'm going to make money, too. And then you end up with a stripper in Florida that owns five homes with mega mortgages, and the system blows up.

So that’s how it tends to work. We look around — prices go up a little bit and we look around, we see other people making money, then we take a little bit more risk and we see prices go up more. This is why, as I said earlier, nothing brings out the buyers like higher prices. That’s really how Wall Street works. Now from a macro standpoint that matches that is Hyman Minsky’s financial instability hypothesis, and it’s basically stability breeds instability.

So it’s kind of the same thing. Everyone starts making money. The banks look around their [indiscernible] to keep up with Goldman Sachs, and they start underwriting riskier mortgages and everybody starts doing it. It’s not because they think the Fed is going to bail them out or anybody is going to bail them out. No one is going to make a loan apart because I think the Fed is going to come in at $0.40 on the dollar and bail them out. No one wants to take that loss.

What happens is the optimism and the greed blinds people to downside risk. Anybody who's been in the room, and I have been in these rooms over my career, with the risk committees and how people are making these risk decisions, knows it's not that they miscalculated the downside or think they're protected somehow; it's that they're ignoring it. Their greed and their competitive pressure leads them to take too much risk.

I remember John Mack from Morgan Stanley, shortly after the Global Financial Crisis, being interviewed on a Bloomberg forum. He told the story of a client of his, a good friend and a long-term client, who called and asked for a loan, and Mack said, I can't do that. It's just not responsible. It's too much leverage, or whatever the reasons were. He said it wasn't the right thing to do.

And this was a guy with whom he had a really good relationship, long term, both personally and professionally. And he said, as soon as I hung up the phone, I knew he was going to Merrill Lynch to get that loan, and he did. So it's really the competitive pressures and being blinded by greed that lead people to take all the risks, not because they think their downside is protected.

And Hyman Minsky kind of says, all these things end up — when you’re stable, people start taking a little by little more risk, and then you end up — it ends up bringing instability because in a capitalist system, we take things too far. And we should. That’s how we get innovation. We’re supposed to be taking risk, and we’re supposed to be failing.

The Fed’s job is not to stop bubbles and keep us from doing it. The Fed’s job is to make sure that the guardrails of the regulatory system are in place so that collateral damage on to innocent people doesn’t happen.

And this is what they did when you were saying earlier, they stepped in and they flooded the system with a settlement liquidity so that everyone’s transactions could clear and so that you and I didn’t have to go out in our pajamas at three in the morning, waiting in front of an ATM machine in line, hoping that there’d still be money in there when we get up in the front of the line.

2. This Is What’s Driving the Big Surge in US Oil Production – Tracy Alloway, Joe Weisenthal, Stacey Rene, and Javier Blas 

Javier (03:15):

We had record levels, and it’s just an incredible number. As Tracy said, if you look just at what we call ‘crude oil,’ it’s more than 13 million barrels a day. But if you add on top of that number the other things that go into the oil liquids stream, condensates and NGLs (natural gas liquids), a bit of ethanol, etc., etc. — we are well above 20 million barrels a day of oil production, which compares to a hundred million worldwide.

So you put everything together, the US is producing one in five barrels of oil consumed. That is just an incredibly high number. And it doesn’t seem to be stopping. Probably it’s going to slow down a bit in 2024, but it’s going to continue to go up.

Tracy (04:04):

Okay, where is all that new oil actually coming from? Because it’s been a while since I’ve brought up the rig count chart. But if you look at the rig count chart, this is such a fun one because you can see the big humps of the early 2010s and then the big slide into 2015, and now it seems kind of flat. So there’s been some increase between 2020 and 2022. The number of rigs drilling has gone up, but it’s not like we’re seeing a boom in new rigs and new exploration. So where is all this oil coming from?

Javier (04:43):

Well, it’s coming from the very same places that it was coming from about 10 years ago, but it’s coming, for lack of a better word, better. So it’s coming from Texas, it’s coming from New Mexico, and it’s coming a bit from North Dakota, Oklahoma, etc., etc. It’s coming from the shale regions of the United States.

But if we were to say ‘where’ in just one or two words, or in this case three, it’s Texas and New Mexico. That’s where the new oil is coming from. And you are right, Tracy, the rig count is not significantly up. Actually, if you look at it from a longer-term perspective, it’s lower than it was during the previous booms of shale.

But it’s just that the oil companies in Texas and New Mexico have [gotten] very good at extracting more oil from those rigs, from those wells that they’re drilling. And they’re also doing much longer wells. If you think about what a shale oil well looks like, it first goes down vertically and then it turns 90 degrees and goes horizontal for a while. At the beginning, those horizontal wells were relatively short. Perhaps a quarter of a mile, half a mile at most. Now they’re going as much as three miles horizontally. They can get a lot more oil than they were able to a few years back…

Joe (07:04):

So that really held up well. So what’s changed since 2016 Javier? Tech?

Javier (07:09):

Technology-wise, we can drill longer, particularly the laterals. We can pump fracking fluids at a higher pressure. And companies are also very good at doing this super quick. Previously a well could have taken 30 days — now it takes 10. Companies and the crews have gotten very good at doing it. And that means that they can do it cheaply. And that’s the funny part of the whole boom of 2023 and 2024, a difference from the previous ones: companies are making money and investors are making money. So everyone is loving it. This is the first time, and this is what really terrifies OPEC, that shale oil is growing and making money at the same time. And that’s a big problem if you are in Saudi Arabia.

Tracy (07:56):

Definitely want to get to the possible response from OPEC. But just in terms of technology, one of the things, and the reason I brought up that story, was the idea of standardization. So, before, you used to have all these bespoke custom fittings for oil rigs or platforms or whatever. But then I think there was actually an industry-wide effort to start standardizing some of these things so you didn’t have to order a bespoke component for every single oil project that you were doing. And that seems to have helped make things go faster, to Javier’s point, and also brought down costs. Javier, how big of a deal is that in the industry?

Javier (08:36):

It is a big deal. It has happened everywhere in the oil industry. Let me give you my favorite anecdote of standardization in the oil industry. So you are working on a North Sea oil platform; this is offshore, outside Norway and the United Kingdom. You need to paint a lot of the stuff yellow, kind of yellow [for] danger, very visible, etc., etc. It’s a very stormy area of the world. The North Sea, the fog, it’s not the kind of place you really want to spend a winter evening.

So every company had their own shade of yellow. There were 19 different kinds of yellow to paint things in the North Sea. Each company had their own shade with their own specification, and it was just ridiculous. So at one point, a few engineers in the industry got together and said ‘Well, this is a bit ridiculous. I mean, can we not just do one yellow for the North Sea?’

And so they got together and everyone decided this is the shade of yellow that we’re going to use. And now everyone is painting everything that they need to paint in yellow with the same shade. That, at a much bigger scale, has happened across the oil industry. Everything has got a standard. Companies used to like to do everything bespoke; they really, in some way, gold-plated a lot of projects, so each well was a bit different from the others. Now, companies are settling on one single design. And when they have really thought ‘Okay, this is it. This really works very well, now copy and paste for the next 25, 50, 100 wells’ — that has cut costs significantly…

…Joe (10:39):

Yeah. They’re all looking at the different Pantone shades, but got to do so in a legal way. All right, let’s talk about the capital markets aspect because it did seem like, you know, the way people thought about it was that the industry had to face a choice. Would it be pursuing volume or would it be pursuing profitability? And as you’ve just said, there seems to be this very weird situation in which volume is ramping and profitability is sustained. How is that happening and how sustainable is that?

Javier (11:06):

Well, to the question of how long and how sustainable — I’m going to be honest, I don’t know. I thought that production growth would have a slowdown in 2023 and it never happened. It did the opposite, it accelerated. If you look at every oil executive, if you look at the forecasters of the industry, everyone is saying it’s going to slow down in 2024. But they also said the same for 2023, and they were wrong.

So we’ll see what happens, really. But yes, I mean the industry went into this new era thinking about profitability. So everyone cut CapEx, everyone tried to get more efficient. And everyone thought that production growth was going to slow down because the focus was profitability. The fact that they were able to grow quite strongly came [as] a bit of a surprise to the industry. And then everyone kind of celebrated it.

But here there is a very important question. If OPEC had not cut production to make room for all this new shale oil from the United States, prices would have come down. And then the industry would have faced the same kind of dilemma as in the past: you produce too much, then prices come down, your profitability comes down, and then you have a problem. So a lot of what we are attributing to efficiency is true. But if not for OPEC cutting production and keeping prices above $70 a barrel, shale companies would be in trouble.

Tracy (12:47):

One thing I’m really curious about is who is actually funding production now versus, say, in the early 2010s.

Joe (12:56):

And just to add onto that a little bit, is there any difference between private and publicly-traded domestic US players?

Javier (13:02):

Okay, so let’s go in parts. On Tracy’s question, who is funding this? Well, back 10 years ago, five years ago, it was Wall Street. It was a mix of equity and credit markets which were funding all of this growth through different instruments. I mean, sometimes it was just issuing fresh equity. Sometimes it was bonds, high-yield bonds, or reserve-based lending, where a bank lends to an oil company against the reserves underground; more or less like a mortgage, but rather than a house, you mortgage the oil reserves that are underground.

And a lot of that is still there, but a lot of the money now needed for the expansion and to finance all this new growth is coming from cash flow generation. It’s the internal cash flow of these companies. They generate enough cash to pay for all the new drilling that they’re doing to pay for all the capital investment that they need to do alongside new pipelines, etc., etc. And to pay the shareholders.

These companies now for the very first time are paying dividends. And that sounds like — well, publicly-listed companies should be paying dividends that’s like normal. Well, that was not the case a few years back. But now they generate enough cash to do all of the above.

And in terms of is there a difference? Yes. Publicly listed companies have been a bit more cautious. They have the shareholders, they have Wall Street on top of them, and they have to really try to focus as much as possible on paying dividends and buying back shares. Privately-owned companies don’t have that pressure, that super strong pressure. So they have done a bit more growing. And there is a suspicion in the industry that a lot of that growth was to try to maximize the amount of production that you are doing so you can sell yourself to a big player, say ExxonMobil or Chevron. And perhaps that’s not as sustainable as it looks…

Joe (21:02):

I think his name was Ryan Sitton — he even wrote a Bloomberg Opinion column on March 20th, 2020. He was the railroad commissioner who called on OPEC to coordinate with the US in constraining supply.

I want to pivot for a second and talk about the Red Sea. We talked about it a couple weeks ago in the context of container freight. What’s behind the rising tensions there? We recently saw the US strike at Houthi assets. What does the rising tension there mean from an oil perspective?

Javier (21:34):

Well, it’s more or less a binary situation. As long as the Strait of Hormuz, which is the big outlet from the Persian Gulf for countries like Kuwait or Saudi Arabia into the open markets, remains open, what’s happening in the Red Sea is of less importance. Yes, it’s going to mean an increase in cost because a lot of the oil tankers and also the LNG (liquefied natural gas) carriers are going to have to divert, avoid the Red Sea and go around Africa. That adds probably a good 10 to 15 extra days from the Persian Gulf into Europe. So it is not small and it could really increase the cost of shipping, but it’s not the end of the world. And that’s why the oil market is taking it quite calmly.

I mean, prices have barely increased over the last few days. But then you could think ‘Well, that is basically, on a scale of one to 10, probably a two, maybe a three.’ What is the other scenario? Well, the other scenario is an open fight with Iran, not with its proxies — the Houthis in Yemen — but actually with Iran, and the Strait of Hormuz somehow gets in trouble. Shipping is more difficult, though it probably is not completely closed, but things get really bad. And on a scale of one to 10, that’s probably a 25. And that’s the problem. That’s why I say it’s a bit of a binary situation at the moment. So far not so bad…

Javier (27:19):

I think that you are putting it absolutely right. I mean, the fact that the US is exporting so much oil — when you count crude and refined products, many weeks the US on a gross basis is exporting more than 10 million barrels a day. Obviously at the same time, it’s importing a bit. So on a net basis, it’s about 2 million barrels a day.

But the fact that the US has oil to export on a net basis at all is just mind-blowing. And particularly, you know, I have been writing about this industry for 25 years. If even 10 years ago you had told me that the US was going to be exporting the amount of crude that it’s exporting today, I would have said absolutely not. No way. No way this is happening…

Javier (30:16):

Well, it’s particularly about how we trade electricity. And you think about a few years back — and by that I mean five, six years ago — a lot of the electricity market in Europe was controlled by the typical names that we all knew. The utilities that have been privatized, but used to be state-owned companies, big names like EDF (Électricité de France), RWE, etc., etc.

And the market was quite sedated. Prices were not really moving much. There was not much volatility. There were very few independent traders really making money trading electricity. And a few years back, in the middle of nowhere in Denmark, in a town called Aarhus, a big university town, a group of companies kind of started to plot how they could make money out of this market.

And they were really driven by two things that were happening in Europe. It was the liberalization of the markets. There was a lot more cross-border electricity trading in Europe. And there was also a lot more volatility in the supply of electricity in Europe because of wind and solar.

You cannot predict how much wind and solar power you’re going to get more than five days, perhaps 10 days, out. You know, meteorologists have a limit on how far ahead they can forecast how strongly the wind is going to blow, or whether there is going to be cloud cover over one area of the continent or not for solar, etc., etc.

So that variability created a lot of price volatility, particularly in the very short [end] of the short-term market. I mean, electricity used to be traded one year in advance, one month in advance. And these companies kind of specialize in trading the next 30 minutes of the electricity market. You know, mid-morning, what is going to be the demand for electricity by lunchtime? That’s what they specialize in.

But, you know, the top five or six of these companies were making perhaps $100 million combined. So not a lot. And they were on the radar of the industry, but not that … In 2022, they made $5 billion. The return on equity in many names in the industry went well above 100%. In some cases, well above 250%. So let me put it this way — the companies that were making a couple of million dollars were making $10, $25, $30 million.

The guys who were making $25, $30 million before were making a couple of hundred million dollars. And the guys who were making a hundred, they just went to a billion. It was just one of the biggest booms in commodity trading profitability I have ever seen. And the piece is about these names, which outside of the industry, basically no one really knows about.

3. Lessons From the Bear Market – Michael Batnick

We did a podcast in December of 2022 at the Nasdaq MarketSite in Times Square with our friends from the On The Tape podcast. At the time, things were…not great. Inflation was skyrocketing and the Fed was chasing after it to slow down consumer prices.

The stock market was cratering. And the ones getting hit the hardest were the ones everyone owned. Amazon was 55% off its high. No really, 55%. Meta was worth just one-third of what it was in the previous year. Fear was everywhere.

I asked the audience, how many of you expect a recession in 2023? Every hand in the room went up. Then I asked, how many of you think the stock market bottomed in October? Crickets.

It’s easy to say “Be greedy when others are fearful.” It’s hard to actually do it…

…It’s easy to overestimate your ability to deal with downside risk when stocks are going higher. You only discover who you really are as an investor in bear markets.

Ben and I were getting dozens of emails about triple-leveraged ETFs in 2021: “I know it’s risky but I have a long time horizon.”

I don’t think we saw a single one of those messages hit our inbox (personal emails, personal responses) in 2022…

…I, like many of you, just kept buying over the last two years. It’s not because I’m a genius, and it’s definitely not because I was bullish with every purchase. I bought in my 401(k) every other week and in my brokerage account every month because it happens automatically. Out of sight out of mind.

If I had to physically log on and execute these trades, I’m sure that I wouldn’t be as consistent as I have been. You mustn’t let your emotions determine when you buy. Like Nick first said back in 2017, Just Keep Buying.

4. A beginner’s guide to accounting fraud (and how to get away with it): Part III – Leo Perry

If you’re looking for a role model for how to serially raise capital for a business that really shouldn’t even exist in the first place, you could do a lot worse than Avanti Communications. Avanti was a startup satellite broadband operator that issued half a billion odd of equity, and about as much again in debt, in just five short years. No one seemed to care much that the business case was flawed from the start. That revenue kept falling short and customers got less and less substantial. And, of course, it didn’t matter that the accounts read like a Stephen King novel. Because you can’t get a bigger addressable market than space, can you?

How did I know Avanti was always bound to end up failing its shareholders, even before it took off? The company said so. In black and white. It told me and every other investor that bothered to look at its 2009 annual report. I know corporate filings aren’t exactly gripping but it helps if you read them. Lucky for us, it doesn’t seem like many people do.

When I’d asked directly, management had flat out refused to disclose what the Mb capacity of its first satellite (called Hylas-1) would be “for commercial competitive reasons”. But they soon went ahead and gave it away in a press release anyway, stating that 320Mb was about 10% of the total. From there it was simple to estimate build cost per Mb, which came in around £35mn. The problem was Eutelsat, which was about to launch its own broadband satellite over Europe too. This bird, KA-SAT, was 15 times as big but only cost about 3 times as much to build. Its unit cost was more like €4mn per Mb. That was going to be an issue for Avanti.

Don’t take my word for it, take the company’s. In its 2009 annual report Avanti already states that Hylas-1 “will be full with around 200,000 – 300,000 end user customers”, with the higher number only possible if it delivered a lot of them something not much better than a Netscape dial-up service (as in 0.5Mb per second). But even then Eutelsat had a 3.6Mb per second product in the market, for €17 a month wholesale. And it had said publicly it would be keeping that price point once KA-SAT was up, but for a 10Mb per second service. Hylas-1 was going to be competing with that, but at those speeds it would be able to serve a lot fewer customers. It’s not quite a straight-line calculation because of contention – basically congestion from other users – but it wasn’t good news for Avanti. Commercial wholesale revenue for Hylas-1 would end up peaking at around €15mn while sell-side consensus was still “modelling” four times that.

5. Data Update 3 for 2024: A Rule-breaking Year for Interest Rates – Aswath Damodaran

As you can see, while treasury rates, across maturities, jumped dramatically in 2022, their behavior diverged in 2023. At the short end of the spectrum, the three-month treasury bill rate rose from 4.42% to 5.40% during the year, but the 2-year rate decreased slightly from 4.41% to 4.23%, the ten-year rate stayed unchanged at 3.88% and the thirty-year rate barely budged, going from 3.76% to 4.03%. The fact that the treasury bond rate was 3.88% at both the start and the end of the year effectively also meant that the return on a ten-year treasury bond during 2023 was just the coupon rate of 3.88% (and no price change). 

I noted at the start of this post that the stock answer that most analysts and investors give, when asked why treasury rates rose or fell during much of the last decade, has been “The Fed did it”. Not only is that a lazy rationalization, but it is just not true, for many reasons. First, the only rate that the Fed actually controls is the Fed funds rate, and it is true that the Fed has been actively raising that rate in the last two years, as you can see in the graph below:

In 2022, the Fed raised the Fed funds rate seven times, with the rate rising from close to zero (lower limit of zero and an upper limit of 0.25%) to 4.25-4.50%, by the end of the year. During 2023, the Fed continued to raise rates, albeit at a slower rate, with four 0.25% raises.

Second, the argument that the Fed’s Fed Funds rate actions have triggered increases in interest rates in the last two years becomes shaky when you take a closer look at the data. In the table below, I look at all of the Fed Funds rate hikes in the last two years, looking at the changes in 3-month, 2-year and 10-year rates leading into the Fed actions. Thus, the Fed raised the Fed Funds rate on June 16, 2022 by 0.75%, to 1.75%, but the 3-month treasury bill rate had already risen by 0.74% in the weeks prior to the Fed hike, to 1.59%.

In fact, treasury bill rates consistently rose ahead of the Fed’s actions over the two years. This may be my biases talking, but to me, it looks like it is the market that is leading the Fed, rather than the other way around.

Third, even if you are a believer that the Fed has a strong influence on rates, that effect is strongest on the shortest term rates and decays as you get to longer maturities. In 2023, for instance, for all of the stories about FOMC meetings and the Fed raising rates, the two-year treasury declined and the ten-year did not budge. To understand what causes long term interest rates to move, I went back to my interest rate basics, and in particular, the Fisher equation breakdown of a nominal interest rate (like the US ten-year treasury rate) into expected inflation and an expected real interest rate:

Nominal Interest Rate = Expected Inflation + Expected real interest rate

If you are willing to assume that the expected real interest rate should converge on the growth rate in the real economy in the long term, you can estimate what I call an intrinsic riskfree rate:

Intrinsic Riskfree Rate = Expected Inflation + Expected real growth rate in economy…
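
To make the arithmetic concrete, here is a minimal sketch in Python of how that intrinsic riskfree rate could be computed and compared with the observed ten-year treasury rate of 3.88% cited above. The inflation and growth inputs are illustrative assumptions, not Damodaran’s actual estimates:

```python
# Minimal sketch of the Fisher-equation decomposition described above.
# The expected inflation and real growth inputs are illustrative assumptions,
# not Damodaran's actual data.

def intrinsic_riskfree_rate(expected_inflation: float, expected_real_growth: float) -> float:
    """Intrinsic riskfree rate = expected inflation + expected real growth."""
    return expected_inflation + expected_real_growth

# Hypothetical inputs: 2.5% expected inflation, 1.5% expected real GDP growth.
intrinsic = intrinsic_riskfree_rate(0.025, 0.015)
observed_ten_year = 0.0388  # the ten-year treasury rate cited in the post (3.88%)

print(f"Intrinsic riskfree rate:    {intrinsic:.2%}")
print(f"Observed ten-year rate:     {observed_ten_year:.2%}")
print(f"Gap (observed - intrinsic): {observed_ten_year - intrinsic:+.2%}")
```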

…That said, it is remarkable how well the equation does at explaining the movements in the ten-year US treasury bond rate over time. The rise in treasury bond rates in the 1970s can be clearly traced to higher inflation, and the low treasury bond rates of the last decade had far more to do with low inflation and growth than with the Fed. In 2023, the story of the year was that inflation tapered off during the course of the year, setting to rest fears that it would stay at the elevated levels of 2022. That explains why US treasury rates stayed unchanged, even when the Fed raised the Fed Funds rate, though the 3-month rate remains a testimonial to the Fed’s power to affect short term rates.

It is undeniable that the slope of the yield curve, in the US, has been correlated with economic growth, with more upward sloping yield curves presaging higher real growth, for much of the last century. In an extension of this empirical reality, an inversion of the yield curve, with short term rates exceeding long term rates, has become a sign of an impending recession. In a post a few years ago, I argued that if the slope of the yield curve is a signal, it is one with a great deal of noise (error in prediction). If you are a skeptic about inverted yield curves as a recession-predictor, that skepticism was strengthened in 2022 and 2023:

As you can see, the yield curve has been inverted for all of 2023, in all of its variations (the difference between the ten-year and two-year rates, the difference between the two-year rate and the 3-month rate and the difference between the ten-year rate and the 3-month T.Bill rate). At the same time, not only has a recession not made its presence felt, but the economy showed signs of strengthening towards the end of the year. It is entirely possible that there will be a recession in 2024 or even in 2025, but what good is a signal that is two or three years ahead of what it is signaling?…

…If there are lessons that can be learned from interest rate movements in 2022 and 2023, it is that notwithstanding all of the happy talk of the Fed cutting rates in the year to come, it is inflation that will again determine what will happen to interest rates, especially at the longer maturities, in 2024. If inflation continues its downward path, it is likely that we will see longer-term rates drift downwards, though it would have to be accompanied by significant weakening in the economy for rates to approach levels that we became used to, during the last decade. If inflation persists or rises, interest rates will rise, no matter what the Fed does.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 January 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 21 January 2024:

1. Learning from significant investment statistics of the past – Chin Hui Leong

Outfoxed by rising interest rates

Interest rates are another favourite among forecasters. In 2022, the US Federal Reserve raised interest rates from zero to between 4.25 and 4.5 per cent.

As you know, the stock market suffered one of its worst performances during this period.

When two trends coincide with one another, it is tempting to put two and two together and conclude: Interest rates rose, and therefore, that is why the stock market fell. But we cannot say the same about 2023. Last year, rates were hiked again by another percentage point to between 5.25 and 5.5 per cent. This time, the stock market staged a rally.

The contrast between 2022 and 2023 is a timely reminder that correlation is not causation. Give yourself time to learn the right lessons – preferably over multiple years, rather than the past 12 months. If you learn the wrong lessons from past events, then you will be doomed to repeat them in the future.

Outfoxed by GDP growth

Speaking of trends, China’s gross domestic product grew from around US$493 billion in 1992 to an astonishing US$18 trillion in 2022, said the World Bank. The annualised GDP growth rate for this period is almost 12 per cent.

But the same cannot be said about China’s stock market returns. The MSCI China index has recorded a negative return from its inception at end-1993 until end-2023. In other words, the rapid GDP increase did not translate into positive stock market returns even after more than 30 years.

What is the reason for this disconnect?

The underlying earnings per share for the Chinese businesses within the index barely grew for much of this period. Over the long term, stock market returns depend on business growth. Without it, you end up with flat to negative returns, as you see today…

Invest for the long term

When you hold stocks for the long term, you will occasionally record negative returns. It is the price you pay for a positive outcome. And yet, staying invested for the long haul gives you the best chance of success.

Since 1928, there has never been a 20-year period where the S&P 500 produced negative returns, noted Carlson. What is more, if you missed the five best days (read: positive returns) in 2023, the index’s gain would almost halve from over 24 per cent to 12.6 per cent. Miss the best 15 days, and the returns would be negative.

To put these figures into context, let us assume there are 252 trading days every year. Miss five of the best days or 2 per cent of trading days and your returns could be vastly lower. Miss 15 days, or less than 6 per cent, and you could be sitting on losses.
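
As a rough illustration of that arithmetic, the sketch below uses simulated daily returns (not the actual 2023 S&P 500 series) to show how stripping out a handful of the best days collapses a year’s cumulative return:

```python
import numpy as np

# Illustrative only: simulated daily returns, not the actual 2023 S&P 500 data.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0009, scale=0.011, size=252)  # ~252 trading days

def cumulative_return(returns: np.ndarray) -> float:
    """Compound a series of simple daily returns into one cumulative return."""
    return np.prod(1 + returns) - 1

full_year = cumulative_return(daily_returns)
without_best_5 = cumulative_return(np.sort(daily_returns)[:-5])    # drop the 5 best days
without_best_15 = cumulative_return(np.sort(daily_returns)[:-15])  # drop the 15 best days

print(f"Full year:            {full_year:.1%}")
print(f"Missing best 5 days:  {without_best_5:.1%}")
print(f"Missing best 15 days: {without_best_15:.1%}")
```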

The good news is: You do not have to do anything to stay invested for the long term. Surround yourself with like-minded friends. As the saying goes: If you want to travel fast, go alone. If you want to travel far, go together.

2. Gucci is cheap and eggs are pricey in Russia’s surreal economy – Kate de Pury

As Russia enters 2024, and the campaign for President Vladimir Putin’s inevitable re-election heats up, the regime is keen to tell a good story about the country’s ability to withstand the war. It can muster a surprising amount of evidence to support this case.

The Russian economy has not collapsed under the unprecedented sanctions of 2022, as some predicted. Oil and gas sales to the West plummeted, but higher energy prices eased the pain and the government found new buyers in Asia. The rouble depreciated sharply in 2023, but has stabilised since. Vast public spending on the war has meanwhile created jobs. Inflation remains stubborn, and a slowdown is expected in 2024 as the central bank keeps interest rates high to fight it, but Putin was able to boast last year, not implausibly, that the economy had grown by more than 3%.

Russia still has to import many products, which a weakened rouble makes more expensive. But those who aren’t poor seem able to absorb the price increases, at least for now. There were initial supply hiccups when Russian banks were first cut off from international transfer systems. But middle-class Muscovites found workarounds, and can now buy Western brands over the internet with little difficulty. usmall, an online marketplace, lists iPhones and Ralph Lauren children’s clothes priced in roubles, which can be bought from third-party suppliers with Russian bank cards.

Moscow shops are well stocked with designer goods. Most Western luxury brands stopped shipping to Russian stores in 2022, but when I visited tsum, the Russian equivalent of Harrods, just before Christmas, a sales assistant was proudly showing customers the newest handbags from Gucci, Chanel and Louis Vuitton. Bought in Europe and carried back to Russia in the luggage of a “personal shopper”, these new-season items were scarce on the shelves, but there were just enough to justify the sign “2023-24 Collection”.

Some of the items on display were second-hand. The sales assistant showed off an app the store has developed to make it easy for Russian clients to re-sell unwanted luxury goods. Even a used Gucci bag isn’t exactly cheap, but because it’s priced in roubles, fluctuations in exchange rates can make it become, by the tortuous logic Muscovites follow, a bargain in euro terms. “A good deal for the Russian shopper,” the assistant said snippily…

…Elsewhere there are signs that the invasion of Ukraine may have disrupted the Russian economy more severely than the frothy party scene suggests. The Olivier salad, a mayonnaise-drenched confection of root vegetables, sausage and boiled eggs, is a staple at every table during the holidays. This winter the price of eggs suddenly rocketed (no one is quite sure why, but it may have been because farms were short of labour since so many workers have been conscripted or left the country). In some regions people cannot afford a box of six eggs and have to buy them individually. One pensioner even raised this with Putin during the president’s annual end-of-year call-in with the public. Putin promised to look into it.

3. Claudia Sahm: it’s clear now who was right – Robert Armstrong, Ethan Wu, Claudia Sahm

Unhedged: It’s true, unemployment’s great. The most relevant signals of inflation are within spitting distance of 2 per cent. But no matter how you cut it, wage growth is around 4 per cent. Is that a potential problem?

Sahm: I have not, and do not now, subscribe to the view that the inflation we have been living through since 2021 is primarily demand-driven, like Larry Summers and my friend Jason Furman did. Those folks thought we put too much money into people’s pockets and there was too much pent-up demand. If you were in that camp, you thought we needed to jack up rates and see wage growth come down.

Wages are rising at a pace that’s better than before the pandemic, which was a very good time for the economy, but we’ve moved out of the very acute labour shortages. And obviously, we want to get workers off the sidelines. To do that, you’re going to have to pay them more!

I look at inflation and say that’s because of disruptions from Covid and the war in Ukraine. And because those will eventually work out in some way, inflation will come down. That leads to very different policy prescriptions to fight inflation. And it leads to very different views on the things like whether the $1.9tn American Rescue Plan was a good idea; or whether waiting to raise rates was a good idea. If it’s all demand, then you’ve got to destroy demand. But I don’t think it’s all demand.

On wages, too, we have seen some good productivity numbers. If you’re more productive, you get paid more. And that’s coming after the crap productivity growth we had after the Great Recession. If we’re getting better productivity growth, we should not be using pre-pandemic wage growth as the baseline…

Unhedged: You’ve called for banning the Phillips curve, the economic model positing a trade-off between inflation and unemployment. Now that we have a bit more hindsight, what’s your retrospective on the Phillips curve in this cycle? And if we ban the Phillips curve, what replaces it?

Sahm: This fundamentally goes back to a view about how much of inflation is demand versus supply. If you think it’s demand-driven inflation, you can fight that with the Fed’s tools. But how do you know how much monetary tightening to do, how much unemployment you need to get inflation down? So then you march off to the Phillips curve. There are more sophisticated versions of the Phillips curve that incorporate supply shocks. No one brought those out. The versions of the Phillips curve that were brought out in policymaking circles went back to the 1950s or 1960s — essentially just inflation versus unemployment.

The Phillips curve was used by the same people denouncing the American Rescue Plan to make statements like, “We need five years of 6 per cent unemployment.” But it goes back to why did inflation spike, demand or supply? It’s clear now who was right: it was largely supply. It was completely valid to argue in 2021 that when inflation took off, it was demand. The American Rescue Plan was big, it came after two very big fiscal relief packages and the Fed had been adamant about not raising rates. But the fact this year that inflation has notably come down and unemployment has stayed low only happens if it was mostly supply-driven.

In terms of what other model to use, backing off from the Phillips curve would have been a good idea. And then the thing that economists need to think harder about is how we think about supply shocks. Most of the effort in macroeconomic research goes into thinking about demand disruptions. The [industry gold standard] New Keynesian dynamic stochastic general equilibrium model has wedged into it a Phillips curve that can do supply shocks. But we don’t really know how to calibrate [these sorts of models].

A lot of this is art, not science. The academic stuff looks like science, but what actually is useful in the real world is much more judgment-based. But you ought to have tools that at least don’t do damage. The Phillips curve has done damage.

Unhedged: There’s a lot of worry now about excessive debt and deficits. Olivier Blanchard is saying we need to get r minus g, the real interest rate paid on debt minus the growth rate, on a sustainable trajectory. What’s your perspective on debt sustainability?

Sahm: First off, Olivier is adorable, what a great way to frame it. My view is that it’s completely misguided to have a discussion about the size of the federal debt. The entire conversation about r minus g, while maybe useful for macroeconomists to think about, ignores that it matters what we spend on. If we are on a path for higher productivity growth after the pandemic, the American Rescue Plan, the infrastructure act, the Chips act, the Inflation Reduction Act — they all get a piece of that pie.

4. A beginner’s guide to getting away with accounting fraud, part two – Leo Perry

Now normally the way it works when you sell something is you then get paid. You get cash in return. That’s what money is after all, credit to buy more stuff in return for what you sold (which, most often, is yourself). You might not get paid right away. You might allow a few weeks for your customer to cough up (which is giving actual credit). But in the end you get your money. Otherwise you’re not really selling, you’re a charity (or a slave).

But of course we’re never going to collect on the lemons we invoiced for. And that’s going to start to show on our balance sheet, thanks to the beautifully simple logic of double entry bookkeeping. This says that for every action there has to be an equal and opposite reaction (OK that’s Newton’s Third Law but it’s close enough). Booking a profit increases the value due to owners of the business. And that’s a liability because these shareholders will want to get paid one day (good luck with that). So there must be an asset to match.

In our case that asset is definitely not going to be cash. What we get instead is more and more payments receivable from our customers.

Unattainable cash proved the undoing of Bio-On. In 2019 it was one of Italy’s only tech unicorns. This was back when being a unicorn was a good thing, because there weren’t any adults in the room…

…One thing Bio-On was great at was announcing licensing deals. Collecting on them, not so much, which is why we can learn from it.

By way of example, in July 2015 it put out a press release on a deal with French sugar co-operative Cristal Union (in fact the tie-up was with a joint venture between the two companies, B-Plastic). Bio-On’s 2015 accounts show it booked €3.25mn of license revenue from this JV — and collected none of it in cash. By the end of 2017, €2.75mn was still due from B-Plastic but the accounts for the JV show no liability, or cash to pay it with.

Oddly, Bio-On accounted for its stake in the JV with a €1mn book value at the end of 2015. But it then removed the item the following year, writing off the investment but not restating its 2015 accounts. So Bio-On appeared to have invested €1mn in the JV then written that off — while collecting license fees worth, at most, only half the money it put in.

Not collecting on sales made Bio-On a pretty obvious target for investigation. In the three years to the end of 2018 it reported €65mn revenue. Receivables were €60mn.
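
For readers who want to run this check themselves, here is a minimal sketch of the receivables-to-revenue red flag, using the Bio-On figures quoted above:

```python
# Simple red-flag check: how much of reported revenue is sitting uncollected
# in receivables? Figures are the Bio-On numbers quoted above (in EUR millions).

def receivables_to_revenue(revenue: float, receivables: float) -> float:
    """Share of cumulative reported revenue still owed by customers."""
    return receivables / revenue

ratio = receivables_to_revenue(revenue=65.0, receivables=60.0)
print(f"Receivables are {ratio:.0%} of three years of reported revenue")
# Anything close to 100% suggests sales are being booked but not collected.
```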

A few months after my visit to Bologna, the activist short seller Quintessential Capital published a report that highlighted a few issues at the company. In October 2019, Bio-On’s founding CEO and chair Marco Astorri was arrested on suspicion of accounting fraud and market manipulation, shortly before the company was declared insolvent…

…I first spoke to Quintessential’s principal Gabriel Grego back in 2015, after he published a report on another business I was short, Globo. This was a UK company but it was all Greek to me. Management claimed it had a hugely successful bring-your-own-device app, GO!Enterprise, which allowed you to use your own mobile securely at work.

And apparently this had hundreds of thousands of paying users. I was a bit sceptical because its Google app store listing showed fewer than 5,000 installs. The fact that the corporate website touted Lehman Brothers as a customer in 2013 was also a bit of a red flag.

On the face of it, though, Globo was having no trouble collecting on these sales. The results for the first half of 2013 reported trade receivables up by only 4 per cent, despite strong revenue growth. But then, this wasn’t exactly an apples-to-apples comparison.

You see on December 3, 2012, Globo sold control of its Greek operations to local management for €11.2mn and with it went €40mn-odd of receivables. Of course, no one was really going to pay much money for nothing much. But the consideration was deferred, so everyone was happy (for now).

5. Harley Bassman on What Investors Are Getting Wrong About the Fed – Tracy Alloway, Joe Weisenthal, and Harley Bassman

Tracy (03:58):

So I have a question to begin with, and this is completely out of self-interest as a journalist who’s had to write about convexity many times during my career and has always struggled to define it in a way that satisfies my editors, who want to encapsulate a financial relationship in as few words as possible: how would you describe it?

Harley (04:20):

Convexity is an X word, so everyone gets a little rattled about that, but it’s actually rather simple. It’s just unbalanced leverage, which was also a hard concept. Let’s simplify it a little bit.

If you have a bet, you’re making a wager where you make a dollar or lose a dollar, equal and opposite payoffs up and down, that’s zero convexity. If you make $2 and lose $1, that’s positive convexity. If you could lose $3 and make $2 — negative convexity.

The reason why we hired all these PhD quants in the nineties was to basically figure out what that’s worth. Clearly, you’d rather own something that makes $2 and loses $1 than is one-to-one. And if it’s lose $3, make $2, you better get paid for that. And so all the mumbo jumbo we go do around pricing out these various paths and payoffs is just to make it a fair bet when you have these different payoff profiles. And that’s it. Convexity just means that the payoff is not linear. It’s not one-to-one…
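
To make the “unbalanced leverage” idea concrete, here is a minimal sketch (not from the podcast) that classifies the three toy bets Bassman describes:

```python
# Toy illustration of Bassman's definition: convexity as unbalanced payoffs.
# The three bets below are the ones described in the transcript.

def convexity_sign(upside: float, downside: float) -> str:
    """Classify a two-sided bet by comparing what you make vs. what you lose."""
    if upside > downside:
        return "positive convexity"
    if upside < downside:
        return "negative convexity"
    return "zero convexity"

print(convexity_sign(upside=1, downside=1))  # make $1 / lose $1 -> zero convexity
print(convexity_sign(upside=2, downside=1))  # make $2 / lose $1 -> positive convexity
print(convexity_sign(upside=2, downside=3))  # make $2 / lose $3 -> negative convexity
```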

…Let’s just go one step back. When you’re in the bond market — not equities — the bond market, you have three buttons you could push. That’s it. Duration, credit, convexity. Those are your three risks.

You start with cash, overnight cash, and anything you do past there is taking one of those three. Duration is when you get your money back. Credit is if you get it back, convexity is how you get it back. And what a bond manager is trying to do is move around those three buttons to find the best risk-return, the best value.

Presently, selling convexity in the bond market is the best thing to do out there right now.

What’s duration? It is when you get your money back. So a two-year security will move 1.8 points for a one point move. So if rates go from four to five, a two-year bond will move by 1.8 points. A 10-year by about eight points. A 30-year by maybe 17 points. You’re usually paid more to take longer maturity risk because there’s more uncertainty.
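
The rough point sensitivities quoted above can be turned into a one-line approximation, price change ≈ minus the point sensitivity times the rate move; here is a minimal sketch using Bassman’s numbers:

```python
# Rough price-move approximation using the point sensitivities quoted above:
# a 1-point rate move shifts a 2-year bond ~1.8 points, a 10-year ~8 points,
# a 30-year ~17 points. Illustrative only; real durations vary with coupons.

SENSITIVITY_POINTS = {"2y": 1.8, "10y": 8.0, "30y": 17.0}

def approx_price_change(maturity: str, rate_change: float) -> float:
    """Approximate price change (in points) for a given rate move (in points)."""
    return -SENSITIVITY_POINTS[maturity] * rate_change

for maturity in SENSITIVITY_POINTS:
    # Rates going from 4% to 5% is a +1 point move, so prices fall.
    print(f"{maturity}: {approx_price_change(maturity, rate_change=1.0):+.1f} points")
```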

An inverted curve is kind of upside-down land because you’re getting paid less to take more risk. We could talk about why that is in a little bit, but duration is a very weird place to take risk right now because you’re paid less to go out the curve and buy a 10-year versus a two-year versus overnight cash.

Credit right now: investment grade credit is trading at about 57 basis points, so a little over half a point over the yield curve. And you get that from looking at these interest rate derivatives on your Bloomberg; it’s going to be the CDX five-year. That’s actually tighter, a smaller number, than its historic average of about 65, 66. You’re paid 57 now. Junk bonds, you’re paid about 360, 350, 370, which is also much tighter than the usual 440, 450, 460.

So going into credit now, that’s not a great bet. I mean, I wouldn’t say it’s a disaster, but considering we’re concerned about the possibility of overtightening, a possibility of recession, which an inverted curve kind of signals, I don’t really want to go and take credit risk. Convexity, right now, the MOVE Index, which is a measure of the price of convexity the same way it’s –

Tracy (08:22):

Which you invented, right?

Harley (08:23):

I did. It’s the VIX of bonds, plain and simple. The VIX of bonds, its average is maybe 90 or 100. It’s trading 120 now, which averages out to about maybe seven, eight basis points a day of market movement. That’s higher, much higher than its historical average. That’s the kind of trade you want to go and do…

…Joe (11:35):

When you say, okay, short convexity, what is the type of instrument that allows any trader or investor to express that idea?

Harley (11:43):

Well, the simplest strategy would be for an investor who owns a stock portfolio to go and sell covered calls. I mean, you’re selling options; you’re selling convexity. When you go and sell covered calls, what are you really doing? You’re kind of converting potential capital gains to current income. You’re limiting your upside. Your downside, of course, is still large because the stock can go down a lot.

But you’re basically kind of doing a conversion there of taking risk off the table for current income. And there’s a price where you want to go and do that. And there’s prices where you don’t. When the VIX is at 40 or 50, I mean, you probably want to sell covered calls, of course you won’t do it because you’ll be in a panic. But that’s kind of the idea. And theoretically portfolio managers are supposed to have no blood in their veins, and they can go and do these various trades when the time is right…

… Harley (25:09):

If you go look at, you know, various derivatives, it indicates right now the Fed’s going to cut rates, you know, four, five, six times. So call it 120 basis points of cutting in the next year, which seems kind of crazy unless we have a market crash.

I think what’s happening is this: I don’t think the market is predicting that rates are going to come down by a hundred and a quarter basis points. I don’t think that’s it. I think what’s happening here is that it’s like an 85% chance that rates don’t move, and a 15% chance that rates go to 1%, that we have some kind of disaster. It’s bimodal. And if you add those two things together, that’s how you get the down 125. No one’s saying 125; I think it’s zero and 400, and people are using the two-year rate or the five-year rate as an insurance policy against a bad thing happening. If you think of it in those terms, it kind of makes sense because we only quote one number, but how do we get that number?
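
Read literally, this framing is just a probability-weighted average across two scenarios. Here is a minimal sketch using the rough figures from the transcript (stylized, not a forecast); the exact weighted number depends heavily on the assumed probability and cut size:

```python
# Stylized version of the bimodal view described above: most of the time rates
# don't move, but there is a small chance of a large cut in a "disaster" scenario.
# Probabilities and cut sizes are rough figures from the transcript, not a forecast.

def expected_cut_bp(p_disaster: float, disaster_cut_bp: float) -> float:
    """Probability-weighted rate cut, in basis points, across the two scenarios."""
    p_no_move = 1.0 - p_disaster
    return p_no_move * 0.0 + p_disaster * disaster_cut_bp

# ~85% chance of no move, ~15% chance of a ~400 bp cut (rates heading toward 1%).
print(f"Expected cut: {expected_cut_bp(0.15, 400):.0f} bp")
# The point is the mechanism (a blend of "nothing" and "disaster"), not the
# exact number, which shifts a lot with the assumed probability and cut size.
```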

Joe (26:05):

Right. So the idea is if you’re long risk assets, which most people are most of the time, one way to hedge that would be to sort of make big bets on rates coming down sharply. It doesn’t mean that that’s your main view. It just means that if your bullish view is going to go wrong, a way to hedge that is to place big bets on rates.

Harley (26:26):

Yeah, well, that’s why the curve’s inverted. But I mean, I think buying 10-year rates is kind of silly right now. I mean, if you’re going to go and buy this theoretical insurance policy of the Fed doing a massive cut because of a hard landing, you want to buy the two-year rate and that’s why we created another product that’s basically a 5x levered two-year…

…Harley (28:17):

Circling back to the duration, credit, convexity idea. Duration is ‘I buy it here, it ends up there.’ Credit, ‘I buy it here. It ends up there.’ It doesn’t matter how it gets to the final destination. Convexity is path dependent. It matters how you get there.

And so what we’re arguing about now is not where we’re going to be, but how we get there. And I’m saying that we’re going to get there much slower than the market thinks, and I want to go and invest accordingly. And if I do that, this is where mortgage bonds come in.

I’ll say, if you want the big prediction, here it is. The Fed wants a 2% inflation rate. They’ll get it eventually, I presume. They’re going to put the funds rate at two and a half, 50 over, for a 50 basis point real return. Historically, if you’re a bond geezer like I am, funds rate to two-years is 50 basis points. So now we’re at three. 2s-10s, a hundred basis points. So now we’re at four.

So we’re kind of looking at the 10-year right now, which is what? 380, 390, 404? Whatever it is. I mean, it’s done. You stick a fork in it, man, the 10s aren’t moving. And I think the 30-year rate probably goes up from here as the curve resteepens again. All the action’s at the front end; that’s where all the action is going to be when it happens.
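
That back-of-the-envelope stacks three spreads on top of the inflation target; here is a minimal sketch of the arithmetic as described in the two paragraphs above:

```python
# Back-of-the-envelope laid out above: start from the Fed's 2% inflation target,
# add a 50 bp real return to get the funds rate, then the historical
# funds-to-2-year spread (~50 bp) and the 2s-10s spread (~100 bp).

inflation_target = 2.00  # per cent
real_return = 0.50       # funds rate = inflation target + 50 bp real return
funds_to_2y = 0.50       # historical funds-rate-to-2-year spread
two_to_ten = 1.00        # historical 2s-10s spread

funds_rate = inflation_target + real_return  # 2.50%
two_year = funds_rate + funds_to_2y          # 3.00%
ten_year = two_year + two_to_ten             # 4.00%

print(f"Implied funds rate: {funds_rate:.2f}%")
print(f"Implied 2-year:     {two_year:.2f}%")
print(f"Implied 10-year:    {ten_year:.2f}%")
```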


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 January 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 14 January 2024:

1. The Impact of Shipping Disruptions in the Red Sea – Tracy Alloway, Joe Weisenthal, Mohsis Andam, and Craig Fuller

Joe (04:00):

There really is a lot to talk about, but why don’t we start off with the disruptions in the Red Sea. Why don’t you characterize, as you see it, the situation right now?

Craig (04:10):

So, I think there’s the short term anxiety that exists in terms of the safety of the crews, the dependability of the global supply chain. A lot of short-term concern, but I think the bigger story [that] is going to play out over the next couple of years is we’re now reaching a point in history where global trade and global shipping is no longer as dependable or as predictable as it has been really since the post Cold War period. Civilian ships are being fired upon. And this is an unusual development that we haven’t seen for really many decades…

…Tracy (04:46):

So walk us through the importance of the Red Sea route. What kind of ships are actually going up and down?…

…Craig (04:59):

There’s a lot of oil and gas; obviously, being in the Middle East, it has a lot of exposure to oil and gas and the derivative products that come out of that portion of the world. But it’s also one of the major trade lanes for container flows. And so think of what moves in containers: it’s largely manufactured and consumer goods. A lot of these products are coming from Asia, and particularly China, into Europe, with some products going to the United States East Coast. But the predominance of the products that move through the Suez in container freight is largely related to products out of Asia, going to Europe for European consumption.

Tracy (05:36):

What other routes are available for that kind of trade?

Craig (05:40):

Well, you have to go around South Africa, and so you’re really adding thousands of miles of additional distance when you aren’t able to take the shortcut that is the Suez. I mean, [the] Suez Canal cut out an enormous amount of distance that ships geographically had historically had to travel. It was able to sort of expedite trade flow from Asia, particularly to Europe. We do benefit from it in North America, but a much smaller percentage of the freight that we depend on in the United States is dependent upon the Suez.

Joe (06:16):

What is the historical role of the US Navy in securing or protecting some of these routes, and what are we seeing from US defense officials now at this acute moment?

Craig (06:29):

There’s a lot of conversation in geopolitical circles about whether the Navy’s role has changed or shifted, or whether it is no longer effective in the role that it is believed to have played really since World War II. So if you think about it, the United States has the largest navy in the world. It’s also one of the only blue-water navies that can go anywhere to defend any place on the planet. And that’s really its claim to fame.

Joe (06:57):

Sorry, what does it mean “Blue Water?”

Craig (06:58):

It means that they can go into deep oceans…

…They can be anywhere. Basically, there’s no place on the planet that the Navy and the Marines can’t actually reach. And the whole purpose of that is to protect trade lanes. That is one of the primary callings of the US Navy: its role is to protect commerce and ensure global trade, really for the world. And China has mostly benefited from that, [the] US Navy’s role of protecting the seas from things like pirates and state sponsors that want to attack global trade.

And the question now is, in this sort of new generation of trade, what does it mean? There’s a lot more protectionism in US policy. And really, for the US Navy to be able to protect all aspects of trade while there are geopolitical tensions in East Asia means that we may not have the resources to actually protect all aspects of trade the way that we did at one point in time.

Joe (08:02):

Just real quickly: pirates and pirate attacks, as you mentioned, are somewhat common. They’re in the news, but what’s different about this is that it’s missiles being fired. They’re not trying to steal the cargo. These are military attacks on private corporations.

Craig (08:23):

These are military techniques…

…Craig (08:26):

We’ve seen helicopters actually land on top of ships and take crews hostage by way of helicopter. You probably have seen the video floating around where it looks like a SWAT video, where they’re flying in and basically taking over a ship through the use of a helicopter. We’re seeing situations where, as you mentioned, they’re using missile technology, military-grade technologies, which is an unusual development. And then with the proliferation of drones, you now have a low-cost way to avoid some of the defenses that are set up to protect these ships, so that they’re able to reach them without obstruction.

And I think that has changed the game. And look, we can argue whether these are truly state sponsored or not, but at the end of the day they have access to military grade technology and they are using this to attack civilian vessels. And their goal is to disrupt global trade.

Tracy (09:23):

This was going to be my next question, which is, even if the Navy said, yes, absolutely, we’re going to go in, we’re going to protect all the ships. How much can they actually do in the face of that kind of threat, which has new technology that they’re clearly using, but is also very, very flexible in terms of what it can do?

Craig (09:45):

I think the question is: at what cost? Because I think the US has the capabilities to largely defend every ship, or the ships that we have decided to defend. But at what cost? I mean, you’re looking at anti-missile technologies at a million dollars. We’re firing these defense missiles off at a million dollars apiece, and you’re fighting a drone that costs a couple thousand dollars. I mean, at some point there is a massive tax on US consumers and the US economy for us to do this. And the question is, what is our appetite to continue to fund this type of defense technology when the United States is not the primary beneficiary of that type of trade?

Tracy (10:25):

And on a similar note, I’m always curious about the decision-making process to not go through a certain route. So, Maersk said it wasn’t going to go through the Red Sea anymore after the missile was fired. What are the factors that go into making that type of decision? And then if the Navy were to say tomorrow that “We’re going to escort all of these ships”, would that completely address their concerns? Would they [say], “Okay, yes, we’re going to resume this route”?

Craig (10:52):

It’s a great question because I don’t know that the US can, with all of our other geopolitical commitments, particularly around China and what’s happening around Taiwan. I mean, the Chinese want our Navy in the Middle East. That’s where they want it, because [that] enables them to have an enormous amount of power over East Asia. They want us moving our assets and being distracted in the Middle East. So they actually win geopolitically, in terms of their power over their region, by forcing us to be distracted in the Middle East. But I don’t know that we have all the resources to defend every single ship from these attacks. And ultimately, what the container lines have to really think about is what’s the cost of a ship? You’re talking hundreds of millions of dollars. What’s the cost of a cargo? Again, measured in probably billions of dollars when we look at a 20,000 TEU (twenty-foot equivalent unit) ship. And then you have the insurance companies which are saying, “Hey, we’re not going to insure these ships that go through these channels.” And that means that ultimately Maersk and others have to look at alternative routes. They will obviously protect their crews. The crews do understand that the nature of their jobs is that on occasion they put themselves in harm’s way. And we’ve seen that with the movie where Tom Hanks plays the captain…

…Joe (28:26):

Well, I’m glad you mentioned the freight tech companies because that’s where I was going to go next. So we already said, we already know it was bad for a lot of companies in 2023. But, going back to 2021, 2022, we got interested in freight, obviously on the Odd Lots podcast, that was also a big year for tech and tech investing.

A lot of VCs suddenly probably woke up to this idea, this world, [thinking] ‘Oh, the freight industry looks like a mess. I’m sure if we just apply our software magic, we can solve all of these problems.’ We saw some really huge fundraising, but then also in 2023, we saw the reversal of it. So we saw the freight brokerage, Convoy, just basically completely go out of business. I think we saw a pretty big downturn at Flexport. We’ve had their CEO Ryan Petersen on the show a couple of times.

What happened with freight tech? What were the theses maybe of the investors who were going in, they’re [thinking] ‘Oh we can solve this.’ And what reality did they run into that made it a bit harder to solve some of these problems than they may have assumed?

Craig (29:31):

You know, they were playing the Uber, Lyft, even Airbnb playbooks, which is, ‘Hey, I have this capacity and I can go out and create a digital app. If I could disrupt the taxi industry the way Uber did, then I could also disrupt the trucking industry.’…

…Craig (29:47):

Here’s the problem: the investors that really drove the high valuations didn’t understand freight. They didn’t understand the boom and bust cycle. Convoy arguably had the best roster. Like, it had a dream team of investors. I mean, you had Bill Gates, Jeff Bezos, you had Reid Hoffman, you had the who’s who of sort of Silicon Valley and legacy tech that were investors. I mean, it was the best lineup of investors of probably any company in supply chain you could possibly have. And yet that did not help them survive.

And the reason is that really the investors and the management team, when it first raised money and got into this business, did not understand how cyclical this industry is and how fungible the capacity is. So if I want to disrupt the taxi industry, the reason that that works is I have all of these consumers sitting at home with their cars that are idle 90% of the time. That can create incremental capacity in and out of a market. So as the market surges, and Uber has piloted this with their surge pricing, they will send out messages to their drivers and say, ‘Hey, there’s a football game in town, or there’s a big event in town, please come out and get three, four, or five X your normal rate.’ And they’ve created this sort of surge flexible capacity model that works really well in a business like Uber and personal transportation.

The problem in trucking is there is none of that excess capacity sitting against the fence that can flex in and out of a market. And so what ultimately happened is that they were able to apply some digitization to the dispatch process and to the driver management process. But that was incremental. And one would argue, and Brad Jacobs has argued, that the incumbents were doing the same thing: effectively all of these companies were spending billions of dollars to build technology that everyone else was also building. And not just existing companies like XPO and CH Robinson, but you also had all these tech vendors, companies that provide software, that were also building technology that they could sell to hundreds of companies.

All this was happening at the same time. And effectively what Convoy did not understand early on, which I think they certainly understood at the late part of the cycle, the late part of their business, is that freight is a commodity, it’s highly fungible. The capacity is highly fungible. And no matter how much money I spend acquiring the capacity, there is nothing to keep that capacity from going to the next highest bidder. And because of that, all of the money that they wasted in acquisition costs to acquire capacity was effectively meaningless at the end of the day because that capacity could be found elsewhere…

…Craig (37:35):

So it’s interesting because Brad talked about the fact that when he got in this industry 10 years ago it was largely humans and then over time it had digitized. And I think the statement was that 97% of its freight was electronic. That very well may be the case for his business. Think of XPO’s role in the business. It’s big and really predominantly focused on LTL, which means it has very large enterprise shippers, big commitments. It’s able to digitize a lot of the transactions. And most of the bigger trucking companies are digital. Like if you go look at Knight-Swift’s operation, okay, look at Schneider’s operation, go look at Old Dominion.

Joe (38:13):

And so that is like placing an order on a thing and it automatically…

Craig (38:15):

That’s right. Okay. And that’s what the big companies want to do. They actually want to eliminate human contact as much as possible. Yeah. Because that’s how they’re able to optimize the model. They use technology to do electronic transactions, and that probably represents 20% of the business. It’s the cream of the crop business. It’s the business that every company wants because it’s the high volume shippers, dependable volume and…

Joe (38:41):

Standardized lanes, standardized shippers, standardized carriers. Over and over.

Craig (38:45):

Exactly. Highly predictable. Yeah. Highly consistent business. And if you’re building a network, then that’s what you want. Because I can depend on it day in, day out. That’s what the larger companies focus on. I see. And if you ask the CEO of Knight-Swift, you would probably get a similar answer about how much of its freight is electronically tendered. CH Robinson, the largest freight broker in the country, publishes that 78% of its freight doesn’t have a human touch. But the reality, Joe, is that there are hundreds of thousands of freight broker people out there; the number of registered freight brokers is as high as 60,000 to 80,000. We track and think there are about 5,000 high-scale freight brokers that do more than about $10 million in revenue a year. They’re still predominantly human-based and what they’re dealing with are the exceptions…

…Craig (39:39):

So what happens is, a large volume shipper takes 95% of its freight and sends it over to the XPOs and the CH Robinsons and the Knight-Swifts. And so they get all of the electronic stuff dispatched. What’s left over is the really hard to manage freight. It’s either a lane that nobody wants, or it’s somebody who literally chops price on every single load, or it’s a commodity that nobody wants. And if you go on Twitter or on X, you see all the memes in freight making fun of the kinds of freight that nobody wants. This is the type of freight that’s left over.

Joe (40:13):

What’s an example of a type of freight that no one wants to deal with?

Craig (40:16):

Grocery. Driver unload…

…Craig (40:21):

Well, it’s typically going to a grocery store. It takes a long time to unload it. They’re miserable because they’re in a cold trailer, in a refrigerated trailer. They have to use something called a Lumper. A Lumper is, I pay somebody at the dock to unload me, or the driver has to unload themselves. They can take eight to 10 hours to load at a farm. They go into a farm facility or distribution center because it’s all hand loaded. Think of like a crate of tomatoes or oranges or something. A lot of it’s loaded not on pallets, but actually sort of floor loaded. So this is undesirable freight for a lot of these guys. It has really tight transit times. So that’s a type of undesirable freight.

Flatbed, which is hauled to project sites. You’re not going to a warehouse, but you’re going to a construction site that has to be manually unloaded. It can sometimes take hours or longer where the truck’s got to sit. And so there’s a lot of freight that’s just undesired. And that’s where a lot of the freight brokers, the humans, still take and manage a lot of these sort of long tail transactions. That isn’t the world that an XPO plays in. That is the world that the predominance of your freight brokerage…

…Joe (48:18):

One last quick question. I’m going to pivot. You’re the founder and CEO of FreightWaves. We always talk to you about freight. You also have this whole other business in aviation media and other aviation assets. I want to do like an hour with you at some point to talk about that. But just real quickly, is it really true that there are more airports than McDonald’s in the United States?

Craig (48:36):

This is an insane stat that I think everyone finds hard to believe. So if you take the total amount of airports, this includes private airports and public airports. Most people think of airports as, I’m thinking of, like, JFK and LaGuardia and Newark. But the predominance, the vast majority of airports in the United States, are actually privately owned airports or community owned airports. Places that have very small runways of a thousand to 2,000, 3,000 feet that can’t accommodate even a jet. They’re accommodating small aircraft. Yeah. There are 19,000 of those. And I think the number on McDonald’s is like 16,000…

…Craig (49:26):

People think that private airports are all about jets. And they always think it’s like really rich people. But the predominance of the folks that use these small airports are farmers and agriculture. And our entire [agriculture] ecosystem is dependent upon airplanes, and bees, but airplanes to do things. And so a lot of the airports are used in places out in the heartland for farming. They’re also used for things like mining extraction and stuff. And so the vast majority of those airports are very small airports that most people will never see, will never notice unless they get in a small airplane.

2. My Parents’ Dementia Felt Like the End of Joy. Then Came the Robots – Kat McGowan

WHEN MY MOM was finally, officially diagnosed with dementia in 2020, her geriatric psychiatrist told me that there was no effective treatment. The best thing to do was to keep her physically, intellectually, and socially engaged every day for the rest of her life. Oh, OK. No biggie. The doc was telling me that medicine was done with us. My mother’s fate was now in our hands…

…Beyond physical comfort, my goal as their caregiver was to help them to feel like themselves, even as that self evolved. I vowed to help them live their remaining years with joy and meaning. That’s not so much a matter of medicine as it is a concern of the heart and spirit. I couldn’t figure this part out on my own, and everyone I talked to thought it was a weird thing to worry about.

Until I found the robot-makers.

I’m not talking about the people building machines to help someone put on their pants. Or electronic Karens that monitor an old person’s behavior then “correct” for mistakes, like a bossy Alexa: “Good afternoon! You haven’t taken your medicine yet.” Or gadgets with touchscreens that can be hard for old people to use…

… Instead, the roboticists I learned about are trained in anthropology, psychology, design, and other human-centric fields. They partner with people with dementia, who do not want robots to solve the alleged problem of being old. They want technology for joy and for flourishing, even as they near the end of life. Among the people I met were Indiana University Bloomington roboticist Selma Šabanović, who is developing a robot to bring more meaning into life, while in the Netherlands, Eindhoven University of Technology’s Rens Brankaert is creating warm technology to enhance human connection. These technologists in turn introduced me to grassroots dementia activists who are shaking off the doom loops of despair…

…The robot-makers are a shaft of light at the bottom of the well. The gizmos they’re working on may be far in the future, but these scientists and engineers are already inventing something more important: a new attitude about dementia. They look head-on at this human experience and see creative opportunities, new ways to connect, new ways to have fun. And, of course, they have cool robots. Lots and lots of robots. With those machines, they’re trying to answer the question I’m obsessed with: What could a good life with dementia look like?

THE ROBOT’S TORSO and limbs are chubby and white. It seems to be naked except for blue briefs below its pot belly, although it does not have nipples. It is only 2 feet tall. Its face, a rectangular screen, blinks on. Two black ovals and a manga smile appear.

“Hello! I am QT, your robot friend,” it says. It says this to everyone, because that’s its job. QT raises both arms in a touchdown gesture. The motors whir. They sound expensive.

It might look and sound sort of familiar if you know anything about humanoid social robots—contraptions built to respond to us in ways we recognize. You may also remember their long history of market failures. RIP Kuri, Cozmo, Asimo, Jibo, Pepper, and the rest of their expensive, overpromising metal kin. QT is not like them. It is not a consumer product; it’s a research device equipped with microphones, a 3D camera, face recognition, and data recording capabilities, built by a Luxembourgian company for scientists like Šabanović to deploy in studies. She’s using QT to explore ikigai, a Japanese word that roughly translates to a reason for living or sense of meaning in life, but also includes a feeling of social purpose and everyday joy. Doing a favor for a neighbor can create ikigai, as can a hard week’s work. Even reflecting on life achievements can bring it on. Her team, funded by Toyota Research Institute, is tinkering with QT to see what kind of robot socializing—reminiscing, maybe, or planning activities, or perhaps just a certain line of conversation—might give someone a burst of that good feeling…

…One challenge is that dementia is never the same for any two people. There are different varieties, such as Alzheimer’s, frontotemporal dementia, and Lewy body disease, and they are dynamic, changing with time. Some people have no problem with memory but struggle with words; others make strange decisions. Many say their perception of time changes, or their senses become more acute. Some people are angrier, some calmer, and others lose all filters and say whatever they think…

… Today, Hsu will demo a storytelling game between person and machine. Eventually QT will retain enough information to make the game personalized for each participant. For now, the point is to test QT’s evolving conversational skills to see what behaviors and responses people will accept from a robot and which come across as confusing or rude. I’m excited to see how this plays out. I’m expecting spicy reactions. People with dementia can be a tough audience, with little tolerance for encounters that are annoying or hard to understand…

…Soon, Maryellen, an energetic woman in a red IU ball cap, walks in and takes a seat across from the robot. Maryellen has enjoyed talking to QT in the past, but she’s having an off day. She’s nervous. “I’m in early Alzheimer’s, so sometimes I get things wrong,” she apologizes.

The robot asks her to select an image from a tablet and make up a story. Maryellen gamely plays along, spinning a tale: A woman, maybe a student, walks alone in the autumn woods.

“Interesting,” says QT. “Have you experienced something like this before?”

“I have,” Maryellen says. “We have beautiful trees around Bloomington.” The robot stays silent, a smile plastered across its screen. QT has terrible timing, pausing too long when it should speak, interrupting when it should listen. We all share an apologetic laugh over the machine’s bad manners. Maryellen is patient, speaking to QT as if it were a dim-witted child. She understands that the robot is not trying to be a jerk.

Today’s robot-human chat is objectively dull, but it also feels like a breath of fresh air. Everyone in this room takes Maryellen seriously. Instead of dismissing her pauses and uncertainty as symptoms, the scientists pay careful attention to what she says and does.

Next enters Phil, a man with a tidy brush mustache, neatly dressed in chinos and a short-sleeve button-down printed with vintage cars. After taking a seat across from the robot, he chimes in with QT to sing “Take Me Out to the Ball Game.” He faces the machine, but he’s playing to us, mugging and rolling his eyes. Song over, he first teases Hsu, then another resident, then pretty much every woman in the room. In other circumstances he’d be patronized or “diverted”—someone would attempt to distract him. Instead, we join him in being silly, joking about the situation and the robot.

QT pipes up with another round of awkward conversation (“I love the song. Do you?”), and Phil replies with a combination of graciousness and sass (“You sing very well. Did you have that recorded, maaaybe?”). Hsu asks Phil how he felt talking to the machine. “Like I’m a fool talking to nothing,” he says sharply. “I know it’s not a real person.” Theatrically, he turns to the robot. “You’re not real … are you?” He winks, and laughs uproariously.

He likes the robot? He doesn’t? It’ll be the team’s job to figure out these enigmatic yet relatable reactions. The three of us plus robot pack up and head back to Šabanović’s R-House Lab at the university. In the big conference room there, her team will converge, students of informatics, data science, computer vision, and psychology. They’ll pick apart Maryellen’s kindness and hesitation and Phil’s glee and annoyance, looking for their next task, the next skill QT needs to learn…

…In 2005 she spent time with the pioneering roboticist Takanori Shibata at Japan’s National Institute of Advanced Industrial Science and Technology and his robot seal pup Paro. Handcrafted, the little critter responded to speech and touch by bleating—it was programmed with actual seal pup cries—closing its eyes, and flipping its tail and flippers. It was one of very few robots at the time that could be used outside the lab without expert assistance.

Even at this early stage, elderly people were the target audience. The researchers took the machine to care homes, and Šabanović was startled to see the effect. “People would suddenly light up, start talking to it, tell you stories about their life,” she says. Shibata’s studies, then and later, showed that the cuddly seal improved quality of life; it got people to interact more, reduced stress, and eased depression.

So Šabanović joined the emerging field of human-robot interaction. Her experiments since have explored how we project our “techno-scientific imaginaries”—our cultural baggage, fears, and fantasies—onto these hunks of metal and plastic. Sort of like if Isaac Asimov became an experimental psychologist.

In one early study, she brought Paro into a nursing home to study how the device turned wallflowers into butterflies. Most residents would ignore the seal pup until other people showed up—then it would become an icebreaker or a social lure. They’d gather to touch it. They’d comment on its sounds and movements, laughing. The robot, she saw, seemed to open a door to other people…

…A PAIR OF round, white blobs sit side by side, each the size and shape of a pumpkin. Every 10 minutes or so, the orbs croak like frogs, or chirp like crickets, and sparkle with light. They want your attention. Pick one up, and depending on whether you stroke it, tap it, or shake it, it will respond with noise and light. If the orbs are set to “spring” mode, and you stroke one, it will sing like a bird and blush from white to pink. If you ignore the second blob, it will act jealous, flushing red. If your friend then picks up orb number two, they will mimic each other’s light and sound, encouraging you to play together.

The blobs are called Sam, and together they form a social robot boiled down to its essence: an invitation to connect. Sam is one of the otherworldly creations emerging from the Dementia and Technology Expertise Centre at the Eindhoven University of Technology in the Netherlands. Rens Brankaert and his colleagues don’t call this—or the other things they make—a robot. They call it warm technology. “We want to contribute to the warmth between people,” he says. And to create gadgets that a wider range of people would enjoy using…

…One of the warmest technologies from the Eindhoven group and their collaborators is Vita, a patchwork pillow with vinyl panels. Pass your hand over a patch and a sensor detects your presence, playing a personalized, familiar soundscape: a stroll down a cobblestoned street in the rain, maybe, or the clatter of coffee cups and servers and spoons at a café. Family members and caregivers select the sounds they think will resonate with the user. Over years of testing, the pillow has been fine-tuned, and Brankaert is currently talking to a partner to produce it and bring it to market.

In one demonstration, a white-haired woman sits quietly, looking dreamy, or very possibly sleepy. “Good morning,” says her daughter, but the woman does not respond. The daughter places the pillow on her mother’s lap and guides her mother’s hand over a large yellow patch. The chorus of the World War II chestnut “We’ll Meet Again” emerges. The older woman’s eyes brighten, and a smile of recognition creeps over her face. She begins to sing.

What is this pillow gadget for? It doesn’t restore her speech or fix her memory or replace anything she no longer can do. It helps the two of them find each other again across the dim and confusing terrain of dementia…

…You learn a lot about people by hanging out with robots. QT made it plain to me how much human interaction depends on tiny movements and subtle changes in timing. Even when armed with the latest artificial intelligence language models, QT can’t play the social game. Its face expresses emotion, it understands words and spits out sentences, and it “volleys,” following up your answer with another question. Still, I give it a D+…

…It’s four days before Christmas, and QT is visiting Jill’s House again, decked out in a Santa hat and a forest-green pinny for this visit. With the help of ChatGPT, QT is now more fun to talk to. A few dozen residents, family members, and staff are here, plus much of Šabanović’s team. Šabanović’s 3-year-old daughter, Nora, is nestled on her lap, carrying on the family legacy. She stares shyly at the robot.

This is a holiday party rather than a formal experiment. The session soon devolves into friendly chaos, everyone talking over one another and laughing. We all chime in to sing “Here Comes Santa Claus,” the robot flapping its arms. Phil plays peek-a-boo with Nora. It really does feel like a glimpse of the future—the people with dementia as just regular people, and the machine among the humans as just another guest.

3. A Framework For Spotting Value Traps – Dan Shuart

As for value traps, I like to think of them as somewhat of the anti-compounders. They display the opposite of the characteristics described above. Specifically, they demonstrate some combination of:

  1. A need to retain a significant amount of the profits they generate just to maintain existing levels of profitability. In other words, they tread water or slowly drown.
  2. They have very poor, or even negative, returns on incremental invested capital. This results in the business retaining profits and standing still or shrinking.
  3. They return too much capital to shareholders and do not reinvest in the business to an adequate degree, or they take on excessive leverage to fund unsustainable capital return programs…

…Here is how we look for value traps and a few real world stock examples.

Cash-in, Cash-out Framework

An initial test/filter Matt and I use to spot a potential value trap, or identify a potentially good business, is what we call the cash-in, cash-out framework. It’s simple yet very powerful.

We are trying to answer a simple two-part question: how much cash does the business reinvest and what are the returns on the reinvested cash? We prefer to work from cash flow statements, as normally cash doesn’t lie and it is much more difficult to manipulate than GAAP earnings or balance sheet figures. Just don’t forget to consider stock compensation, which is a very real expense.

I like to look at ten year increments and add up how much cash came into a business from all sources – operating cash flow, debt issuance, and share issuance – versus how much cash left the business via debt repayments, share repurchases, and dividends. Net the two and you get the dollar amount of cash retained (from all sources) over that time period.

Next, we look at the cumulative profits over the same time period to get an idea what the reinvestment rate is as a percentage of total operating profits. Finally, by looking at the change in operating profits (often this requires some normalization) over the time period and dividing by total retained profits we can assess incremental returns on retained capital (incremental ROIC or I-ROIC). If profits grew by $1B and it took $5B of retained capital to generate that extra $1B, I-ROIC is 20% ($1B/$5B). Reinvestment rate and I-ROIC, in conjunction with shareholder yield, tell me roughly how the business has compounded in value on a per share basis…
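To make that arithmetic concrete, here is a minimal sketch in Python of the cash-in, cash-out math as we read it from the description above. The figures are invented for illustration and are not from the article; only the reinvestment-rate and I-ROIC formulas follow the text.

```python
# Minimal sketch of the cash-in, cash-out framework described above.
# All figures are hypothetical, in billions of dollars, over a ten-year window.

def cash_retained(cash_in: float, cash_out: float) -> float:
    """Cash from all sources (operating cash flow, debt and share issuance)
    minus cash that left (debt repayments, buybacks, dividends)."""
    return cash_in - cash_out

def reinvestment_rate(retained: float, cumulative_operating_profit: float) -> float:
    """Share of cumulative operating profits retained in the business."""
    return retained / cumulative_operating_profit

def incremental_roic(change_in_operating_profit: float, retained: float) -> float:
    """Extra operating profit generated per dollar of retained capital (I-ROIC)."""
    return change_in_operating_profit / retained

# Hypothetical ten-year figures for an imaginary company
cash_in = 60.0        # operating cash flow + debt issued + shares issued
cash_out = 55.0       # debt repaid + buybacks + dividends
cum_profit = 20.0     # cumulative operating profit over the decade
profit_growth = 1.0   # change in (normalized) operating profit

retained = cash_retained(cash_in, cash_out)  # 5.0
print(f"Reinvestment rate: {reinvestment_rate(retained, cum_profit):.0%}")    # 25%
print(f"Incremental ROIC:  {incremental_roic(profit_growth, retained):.0%}")  # 20%
```

The last line reproduces the worked example in the text: $1B of profit growth on $5B of retained capital is a 20% I-ROIC.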

…Verizon is puzzling to me as I would expect it to be a better business given it operates in a lightly regulated oligopoly with hard to replicate assets. Alas. The company soaked up 90% of earnings over the last ten years and barely grew for a measly 1% compounding rate. A generous debt-fueled dividend payout took business returns to an underwhelming 6%. While the yield seems attractive, a high dividend payout cannot go on forever if it’s driven by increasing levels of debt.

Macy’s has been a disaster. Left behind by better positioned specialty retailers and ecommerce businesses, Macy’s reinvested a third of profits at highly negative rates and has become far less valuable over the past decade, as you can see…

…To be clear, these stocks are cherry picked and meant to illustrate a point. I’m sure you can find a plethora of stocks that had cheap starting valuations, poor returns on capital, and still re-rated to a higher stock price for some reason over a ten year period. While those stocks undoubtedly exist, and probably in great quantity, I seriously doubt most people’s ability to reliably predict those situations for any extended period of time. I certainly couldn’t do it; it would be akin to throwing darts.

The point I’m making is, by assessing the economic fundamentals of a business whose stock may look cheap, you can implement guard rails as to whether or not you may be looking at a value trap. I’m skeptical of any stock that looks cheap but has flunked the cash-in, cash-out test over a many-year period. This filter at least gives us some hope of not fooling ourselves when we are enamored only by a cheap purchase price. Cheap businesses can be fine investments, but cheap and good businesses can be spectacular, and more importantly, limit your downside. To us, it’s also a far more replicable process, and a lot easier to stick with over the long run.

There are a few more critical points to this discussion, as what I’ve described above is the easy part of the analysis (anyone can plug numbers into a spreadsheet).

  • The historical numbers are the result of what happened over the past ten years, and what matters is what happens over the next ten years.
  • Understanding what happened is easy, understanding why it happened and, more importantly, if it will continue to happen is where superior qualitative judgement and experience are required…

…Finally, using this I-ROIC framework will cause you to miss opportunities when businesses are at key inflection points and the future looks dramatically rosier than the past. That’s fine with us, because I think it causes us to “miss” more losers by keeping us out of trouble. It also means we will almost surely not find the next Amazon, but that’s not the game we are trying, or equipped, to play.

4. A beginner’s guide to accounting fraud (and how to get away with it) – Leo Perry

But now that I’ve worked out how to read accounts, and find it quite easy to spot signs of fraud, I also have some ideas for how we could run a good one of our own. And I’m not so sure there won’t be a lot more money in that line of business, if you can call it that.

The mechanics of making up sales are pretty simple. If we’re running a business and want to boost our top line, all we have to do is phone a friend. A good friend to be sure, who doesn’t ask questions. It’ll only take them a few hours to do the paperwork for setting up a shell company and then we have our customer, one we can invoice whatever we want. We have our fake sales (and I’m pretty sure our mate has done nothing wrong, in the eyes of the law).

You might think that sounds too easy, that someone would spot the problem and the con would quickly fall apart.

Well back in 2014 I explained what I thought looked like the most obvious fraud to an FT journalist. One of the things that caught my eye about Wirecard was the accounts of a company it bought in Singapore. Tucked away in the notes were references to specific customers — like Ashazi Services. [1] This was a Bahrain entity with no apparent operating business. A dormant shell that had never filed financials. Even the product Wirecard said it was licensing to it, the Elastic Platform, seemed to be a fiction (at least I never found any other mention of it by the company):…

…Dan McCrum, the FT journalist I met with, went to visit what there was of Ashazi as a part of a long-running series of Alphaville posts on Wirecard. And the whole scam did come crashing down . . . a mere six years later…

…Even if some over-eager analyst does turn up at our sham customer, we can always move the goalposts. A few years ago I asked a Chinese-speaking colleague to visit some companies on the mainland. These were businesses that were reported to have signed purchase agreements with a western mining startup, which I was short. This startup had announced a deal to sell product a few years earlier. Then the contract was suddenly cancelled and simultaneously replaced with a similar agreement, but with a different Chinese entity — which we’ll call Tulip Industries. The deal equated to an outlay of approximately $150mn a year by the customer.

What we found at Tulip Industries was little more than a startup itself, with only a few field trials in progress. Even its most ambitious presentation forecasts involved a fraction of the product it had apparently agreed to buy. And its CEO was very clear that the deal wasn’t a firm commitment, only a loose framework. In fact he said he’d never spoken to the company that I was short, the deal was agreed through a friend in Hong Kong whose nephew worked for the miner. (He was much more committed to explaining to my colleague why China needed to invade Japan.)…

…If we don’t want to rely on a third party there’s also the DIY approach, using an entity that we control. A related party.

One of the first sets of financial statements I really struggled to reconcile with the story that company management was telling was for Cupid, a UK-listed operator of dating apps and websites. I don’t know how many of its shareholders bothered to try out the sites it ran, even briefly, but I would guess not many. For anyone who did, it seemed like the sites were too good to be true. Wherever you signed in from in the world, dozens of very keen and very attractive women would quickly get in touch. And they all happened to live nearby.

The Kyiv Post looked into how the company might be managing this back in 2013. Australian short seller John Hempton at Bronte Capital even took the trouble to log in from the most remote island in the UK (not in person, he used a virtual private network) and still found no shortage of admirers in the local area — even though the population there was small enough to all know each other. The fact that his profile stated he had syphilis apparently wasn’t a problem either.

Cupid commissioned KPMG to investigate; its report found there was “no evidence of a company-organised practice” of staff using fake profiles to encourage subscriptions.

Cupid’s accounts were not as straightforward as its business model, and shareholders seemed to have even less time for them than they did for its services.

The annual report for 2011 had a chunky £2mn receivable from a company called Amorix, which was controlled by Cupid’s founders. Cupid said Amorix owed it this money because it had been collecting customer subscriptions on its behalf. But Amorix’s own accounts showed it only had about £80,000 in the bank, and no other assets to speak of. There was no trace of the money Cupid said was being collected for it…

…The magic thing about fake sales is they are 100 per cent margin. All profit. You don’t need to go to the trouble of actually producing whatever it is you are pretending to sell, do you? So £100 in sales is £100 of profit. Hold that thought for a minute.

Now let’s think about what kind of business we want to start with to run our little fraud out of. Not a profitable one obviously. That would cost us good money to get control of in the first place, and we want nothing to lose. What we need is a business that has a lot of turnover but makes no money, but isn’t burning cash either. Something like a very low margin distribution business…

…So let’s say we go into the fruit wholesale business. We buy boxes of bananas and sell them on at cost. Why? Well, while we’re only washing our face, if we turn over £100mn in bananas who’s going to notice when we add £1mn that’s lemons? That’s still less than 1 per cent of our sales after all. But if the £1mn is fake then it’s all profit. And as we make no money shipping bananas, the fake lemons are all of our profit.

The reported value in our business now all comes from made up sales to a fictitious customer; a customer set up by a mate that no one outside our office is ever going to know about. No one can pay them a visit if they don’t know its name. And they won’t, because at that size we wouldn’t even have to mention it exists. From the outside there’s just no way to spot anything wrong in our revenue numbers.
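A quick sketch of the banana-and-lemons arithmetic above, using the same made-up numbers: real wholesale revenue at roughly zero margin, plus a small slug of fictitious sales that drop straight through to profit.

```python
# Hypothetical figures from the example above, in £mn.
real_sales = 100.0   # bananas bought and sold on at cost
real_margin = 0.0    # the wholesale business only "washes its face"
fake_sales = 1.0     # invoices to a friendly shell company, 100% margin

total_sales = real_sales + fake_sales
total_profit = real_sales * real_margin + fake_sales  # the fake sales are all of the profit

print(f"Fake sales as a share of revenue: {fake_sales / total_sales:.1%}")   # ~1.0%
print(f"Fake sales as a share of profit: {fake_sales / total_profit:.0%}")   # 100%
```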

5. Why Wasn’t there a Recession? – Michael Batnick

So, how did everyone get 2023 so wrong? Michael Cembalest hit on this in his 2024 outlook.

Monetary policy is tighter but below the level of real rates that led to prior recessions; corporate cash flow is still in good shape, unlike the cash flow deficits which preceded prior recessions; and the corporate sector termed out debt maturities before the rise in rates, partially immunizing itself from the interest spike that preceded prior recessions. Private sector credit creation was similar to prior cycles, but debt servicing risks are lower for companies and households that termed out maturities.

Even though the Fed aggressively raised rates, monetary policy wasn’t as restrictive as it was in the lead-up to prior recessions (not including 2020). That’s not to minimize the Fed’s efforts at cooling inflation, only to put in perspective that, historically, policy just wasn’t that tight.

And even if they raised rates to 6% or higher, it’s hard to say for sure that we would have had a recession. Almost 90% of S&P 500 debt is long-term fixed, which is why net interest costs didn’t go up with interest rates. Paradoxically, thanks to all the cash on the balance sheets actually earning something, net interest costs went down!
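As a back-of-the-envelope illustration of that last point (the numbers below are invented, not from the article): when most debt is fixed-rate and termed out, higher short-term rates lift what corporate cash earns without lifting what the debt costs, so net interest expense can actually fall.

```python
# Hypothetical balance sheet for an imaginary large-cap company, in $bn.
fixed_rate_debt = 40.0   # termed out, coupon locked in before rates rose
debt_coupon = 0.03       # unchanged when the Fed hikes
cash = 15.0              # earns roughly the prevailing short-term rate

def net_interest_cost(short_rate: float) -> float:
    """Interest paid on fixed-rate debt minus interest earned on cash."""
    return fixed_rate_debt * debt_coupon - cash * short_rate

print(net_interest_cost(0.00))  # 1.2  -> net cost in a zero-rate world
print(net_interest_cost(0.05))  # 0.45 -> net cost falls after the hikes
```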


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon. Holdings are subject to change at any time.