What We’re Reading (Week Ending 15 October 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We regularly share a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 15 October 2023:

1. The Testament of a Furniture Dealer – Ingvar Kamprad

We have decided once and for all to side with the many. What is good for our customers is also, in the long run, good for us. This is an objective that carries obligations.

All nations and societies in both the East and West spend a disproportionate amount of their resources on satisfying a minority of the population. In our line of business, for example, far too many of the fine designs and new ideas are reserved for a small circle of the affluent. That situation has influenced the formulation of our objectives…

…The many people usually have limited financial resources. It is the many people whom we aim to serve. The first rule is to maintain an extremely low level of prices. But they must be low prices with a meaning. We must not compromise either functionality or technical quality…

…The concept of a low price with a meaning makes enormous demands on all our co-workers. That includes product developers, designers, buyers, office and warehouse staff, sales people and all other cost bearers who are in a position to influence our purchase prices and all our other costs – in short, every single one of us! Without low costs, we can never accomplish our purpose…

… A job must never be just a livelihood. If you are not enthusiastic about your job, a third of your life goes to waste, and a magazine in your desk drawer can never make up for that.

For those of you who bear any kind of leadership responsibility, it is crucially important to motivate and develop your co-workers. A team spirit is a fine thing, but it requires everybody in the team to be dedicated to their tasks. You, as the captain, make the decisions after consulting the team. There is no time for argument afterwards. Take a football team as your model!

Be thankful to those who are the pillars of our society! Those simple, quiet, taken-for-granted people who always are willing to lend a helping hand. They do their duty and shoulder their responsibility without being noticed. To them, a defined area of responsibility is a necessary but distasteful word. To them, the whole is just as self-evident as always helping and always sharing. I call them stalwarts simply because every system needs them. They are to be found everywhere – in our warehouses, in our offices, among our sales force. They are the very embodiment of the IKEA spirit…

…Profit is a wonderful word! Let us start by stripping the word profit of its dramatic overtones. It is a word that politicians often use and abuse. Profit gives us resources. There are two ways to get resources: either through our own profit, or through subsidy. All state subsidies are paid for either out of the state’s profit on operations of some kind, or from taxes of some kind that you and I have to pay. Let us be self-reliant in the matter of building up financial resources too…

…Wasting resources is a mortal sin at IKEA. It is not all that difficult to reach set targets if you do not have to count the cost. Any designer can design a desk that will cost 5,000 kronor. But only the most highly skilled can design a good, functional desk that will cost 100 kronor.

Expensive solutions to any kind of problem are usually the work of mediocrity.

We have no respect for a solution until we know what it costs. An IKEA product without a price tag is always wrong! It is just as wrong as when a government does not tell the taxpayers what a “free” school lunch costs per portion.

Before you choose a solution, set it in relation to the cost. Only then can you fully determine its worth…

…Planning is often synonymous with bureaucracy. Planning is, of course, needed to lay out guidelines for your work and to enable a company to function in the long term. But do not forget that exaggerated planning is the most common cause of corporate death. Exaggerated planning constrains your freedom of action and leaves you less time to get things done. Complicated planning paralyses. So let simplicity and common sense guide your planning…

…If we from the start had consulted experts about whether a little community like Älmhult could support a company like IKEA, they would have undoubtedly advised against it. Nevertheless, Älmhult is now home to one of the world’s biggest operations in the home furnishings business.

By always asking why we are doing this or that, we can find new paths…

…It is no coincidence that our buyers go to a window factory for table legs and a shirt factory for cushions. It is quite simply the answer to the question “why”.

Our protest against convention is not protest for its own sake: it is a deliberate expression of our constant search for development and improvement…

…The general who divides his resources will invariably be defeated. Even a multitalented athlete has problems.

For us too, it is a matter of concentration – focusing our resources. We can never do everything, everywhere, all at the same time.

Our range cannot be allowed to overflow. We will never be able to satisfy all tastes anyway. We must concentrate on our own profile. We can never promote the whole of our range at once. We must concentrate. We cannot conquer every market at once. We must concentrate for maximum impact, often with small means…

…When we are building up a new market, we concentrate on marketing. Concentration means that at certain vital stages we are forced to neglect otherwise important aspects such as security systems…

…In our IKEA family we want to keep the focus on the individual and support each other. We all have our rights, but we also have our duties. Freedom with responsibility. Your initiative and mine are decisive.

Our ability to take responsibility and make decisions.

Only while sleeping one makes no mistakes. Making mistakes is the privilege of the active – of those who can correct their mistakes and put them right.

Our objectives require us to constantly practise making decisions and taking responsibility, to constantly overcome our fear of making mistakes. The fear of making mistakes is the root of bureaucracy and the enemy of development…

…The feeling of having finished something is an effective sleeping pill. A person who retires feeling that he has done his bit will quickly wither away. A company which feels that it has reached its goal will quickly stagnate and lose its vitality. Happiness is not reaching your goal.

Happiness is being on the way. It is our wonderful fate to be just at the beginning. In all areas. We will move ahead only by constantly asking ourselves how what we are doing today can be done better tomorrow. The positive joy of discovery must be our inspiration in the future too…

…Bear in mind that time is your most important resource. You can do so much in 10 minutes. Ten minutes, once gone, are gone for good. You can never get them back. Ten minutes are not just a sixth of your hourly pay. Ten minutes are a piece of yourself. Divide your life into 10-minute units and sacrifice as few of them as possible in meaningless activity.

2. AI reads text from ancient Herculaneum scroll for the first time – Jo Marchant

A 21-year-old computer-science student has won a global contest to read the first text inside a carbonized scroll from the ancient Roman city of Herculaneum, which had been unreadable since a volcanic eruption in AD 79 — the same one that buried nearby Pompeii. The breakthrough could open up hundreds of texts from the only intact library to survive from Greco-Roman antiquity.

Luke Farritor, who is at the University of Nebraska–Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορϕυρας (porphyras), meaning ‘purple’. Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink…

…Hundreds of scrolls were buried by Mount Vesuvius in October AD 79, when the eruption left Herculaneum under 20 metres of volcanic ash. Early attempts to open the papyri created a mess of fragments, and scholars feared the remainder could never be unrolled or read…

…The Vesuvius Challenge offers a series of awards, leading to a main prize of US$700,000 for reading four or more passages from a rolled-up scroll. On 12 October, the organizers announced that Farritor had won the ‘first letters’ prize of $40,000 for reading more than 10 characters in a 4-square-centimetre area of papyrus. Youssef Nader, a graduate student at the Free University of Berlin, won $10,000 for coming second…

…Most classical texts known today are the result of repeated copying by scribes over centuries. By contrast, the Herculaneum library contains works not known from any other sources, direct from the authors.

Until now, researchers were able to study only opened fragments…

… But more than 600 scrolls — most held in the National Library in Naples, with a handful in the United Kingdom and France — remain intact and unopened. And more papyri could still be found on lower floors of the villa, which have yet to be excavated.

Seales and his team spent years developing methods to “virtually unwrap” the vanishingly thin layers using X-ray computed tomography (CT) scans, and to visualize them as a series of flat images. In 2016, he reported using the technique to read a charred scroll from En-Gedi in Israel, revealing sections of the Book of Leviticus — part of the Jewish Torah and the Christian Old Testament — written in the third or fourth century AD. But the ink on the En-Gedi scroll contains metal, so it glows brightly on the CT scans. The ink on the older Herculaneum scrolls is carbon-based, essentially charcoal and water, with the same density in scans as the papyrus it sits on, so it doesn’t show up at all.

Seales realized that even with no difference in brightness, CT scans might capture tiny differences in texture that can distinguish areas of papyrus coated with ink. To prove it, he trained an artificial neural network to read letters in X-ray images of opened Herculaneum fragments. Then, in 2019, he carried two intact scrolls from the Institut de France in Paris to the Diamond Light Source, a synchrotron X-ray facility near Oxford, UK, to scan them at the highest resolution yet (4–8 micrometres per 3D image element, or voxel).

Reading intact scrolls was still a huge task, however, so the team released all of its scans and code to the public and launched the Vesuvius Challenge…

…In parallel, Seales’ team worked on the virtual unwrapping, releasing images of the flattened pieces for the contestants to analyse. A key moment came in late June, when one competitor pointed out that on some images, ink was occasionally visible to the naked eye, as a subtle texture that was soon dubbed ‘crackle’. Farritor immediately focused on the crackle, looking for further hints of letters.

One evening in August, he was at a party when he received an alert that a fresh segment had been released, with particularly prominent crackle. Connecting through his phone, he ran his algorithm on the new image. Walking home an hour later, he pulled out his phone and saw five letters on the screen. “I was jumping up and down,” he says. “Oh my goodness, this is actually going to work.” From there, it took just days to refine the model and identify the ten letters required for the prize…

…The word “purple” has not yet been read in the opened Herculaneum scrolls. Purple dye was highly sought-after in ancient Rome and was made from the glands of sea snails, so the term could refer to purple colour, robes, the rank of people who could afford the dye or even the molluscs. But more important than the individual word is reading anything at all, says Nicolardi. The advance “gives us potentially the possibility to recover the text of a whole scroll”, including the title and author, so that works can be identified and dated…

…artificial intelligence (AI) is increasingly aiding the study of ancient texts. Last year, for example, Assael and Sommerschield released an AI tool called Ithaca, designed to help scholars glean the date and origins of unidentified ancient Greek inscriptions, and make suggestions for text to fill any gaps. It now receives hundreds of queries per week, and similar efforts are being applied to languages from Korean to Akkadian, which was used in ancient Mesopotamia.

Seales hopes machine learning will open up what he calls the “invisible library”. This refers to texts that are physically present, but no one can see, including parchment used in medieval book bindings; palimpsests, in which later writing obscures a layer beneath; and cartonnage, in which scraps of old papyrus were used to make ancient Egyptian mummy cases and masks.

3. The Problem With Counterfeit People – Daniel C. Dennett

Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk…

…Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.

The philosopher and historian Yuval Noah Harari, writing in The Economist in April, ended his timely warning about AI’s imminent threat to human civilization with these words:

“This text has been generated by a human. Or has it?”

It will soon be next to impossible to tell. And even if (for the time being) we are able to teach one another reliable methods of exposing counterfeit people, the cost of such deepfakes to human trust will be enormous. How will you respond to having your friends and family probe you with gotcha questions every time you try to converse with them online?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation…

… The key design innovation in the technology that makes losing control of these systems a real possibility is that, unlike nuclear bombs, these weapons can reproduce. Evolution is not restricted to living organisms, as Richard Dawkins demonstrated in 1976 in The Selfish Gene. Counterfeit people are already beginning to manipulate us into midwiving their progeny. They will learn from one another, and those that are the smartest, the fittest, will not just survive; they will multiply…

…As Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning…

…Did you know that the manufacturers of scanners have already installed software that responds to the EURion Constellation (or other watermarks) by interrupting any attempt to scan or photocopy legal currency? Creating new laws along these lines will require cooperation from the major participants, but they can be incentivized…

…It will be difficult—maybe impossible—to clean up the pollution of our media of communication that has already occurred, thanks to the arms race of algorithms that is spreading infection at an alarming rate. Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries. 

4. The New Kings of Wall Street Aren’t Banks. Private Funds Fuel Corporate America – Matt Wirz

High interest rates, driven by the Federal Reserve’s higher-for-longer policy, are shaking up how corporate loans get done. Soaring rates brought down banks such as Credit Suisse and Silicon Valley Bank and forced others to reduce lending. As those lenders stepped back, private-credit fund managers stepped up, financing one jumbo loan for American corporations after another.

This shift is accelerating a trend more than a decade in the making. Hedge funds, private-equity funds and other alternative-investment firms have been siphoning away money and talent from banks since a regulatory crackdown after the 2008-09 financial crisis. Lately, many on Wall Street say the balance of power—and risk—has hit a tipping point…

… The loans are expensive, but for many companies they are the only option. Next, private-credit firms are coming for the rest of the credit market, bankrolling asset-backed debt for real estate, consumer loans and infrastructure projects.

Private-equity firms use most of the loans to fund leveraged buyouts, saddling the companies they acquire with expensive debt. Ultimately, more companies could end up under their control.

Regulators, concerned that so much money is going behind closed doors, are rushing to catch up with new rules for private fund managers and their dealings with the insurance industry. 

The firms have money to spend from clients such as pensions, insurers and, increasingly, individuals. Those investors piled in because returns were high compared with other debt investments in a low-yield world. Private lenders delivered average returns of 9% over the past decade on loans made mostly to midsize businesses, according to data provider Cliffwater…

…Some analysts are concerned about private credit taking over the loan market.

The shift “has concentrated a larger segment of economic activity into the hands of a fairly small number of large, opaque asset managers,” credit-ratings firm Moody’s Investors Service said in a September report. “Lack of visibility will make it difficult to see where risk bubbles may be building.”

There are risks to investors, too. High interest rates are making corporate borrowers more likely to default on the loans. Some managers are concentrating their exposure by making bigger loans backing multibillion-dollar deals…

…If private lenders keep refinancing debt from large companies that struggle to borrow in the bank market, that could also lower the average quality of their investments. About half of the $190 billion of below-investment-grade bank loans coming due in 2024 and 2025 are rated B-minus or below.

Private-credit assets under management globally rose to about $1.5 trillion in 2022 from $726 billion in 2018, according to data provider Preqin.

A handful of the fund managers control about $1 trillion combined, according to research by The Wall Street Journal…

…“It’s kind of nuts that there used to be just three or four of these [lenders] out there and now you can have 30,” said Erwin Mock, head of capital markets for Thoma Bravo, the private-equity firm that owns Hyland and negotiated its new loan.

Companies are using private debt to retire bank debt at unprecedented levels. Financial software maker Finastra borrowed $4.8 billion from Blue Owl, Oak Hill Advisors and others in August to refinance a loan arranged by Morgan Stanley. It was the largest private loan on record.

Asset managers are able to handle these monster loans, the size previously reserved for banks, because the firms are tapping deeper pools of capital.

Apollo, KKR and others have built, purchased or partnered with insurance companies that have hundreds of billions of dollars they need to invest. Much of the insurance money must go into investment-grade debt, so the firms are branching into asset-backed debt that is higher rated than most corporate loans…

… Private-credit funds don’t require borrowers to get credit ratings, and they guarantee completion of buyout loans. Banks, meanwhile, might back out when markets turn rocky. But private-credit loans have tougher covenants, prohibiting borrowers from selling assets or raising new debt to get cash. Private loans also charged average interest rates 5 percentage points higher than comparable debt in the bank market over the past 10 years, according to an index operated by Cliffwater.

Private-credit investors may fare better than bank-loan holders in the long term because of their better covenants, Goldman Sachs analysts wrote in a September research report. They are also owned by just a few lenders. That enables private creditors to intervene faster in times of financial stress and to recover more if a borrower defaults, the analysts said…

…“Investors wanted yield, and the government wanted credit risk away from the taxpayer,” said Joshua Easterly, co-president of Sixth Street, a private-credit firm he co-founded with other Goldman Sachs veterans. “That created the environment for this market to mature.”

Private credit shot ahead in the pandemic when crisis-struck banks froze up, stoking worries of mass defaults. Credit-ratings firms quickly downgraded dozens of companies—something they were criticized for not doing fast enough in 2008—making it even harder for the borrowers to get new bank debt.

The cycle intensified starting last year when the Fed tightened monetary policy and banks pulled back further. Interest rates on bank loans are normally much cheaper than the rates on private credit, but the difference between the two has shrunk to levels not seen since 2008. That makes bank loans less enticing—relatively, anyway.

5. The Israeli-Palestinian conflict: A chronology – Sammy Westfall, Brian Murphy, Adam Taylor, Bryan Pietsch and Andrea Salcedo

The roots of the conflict and mistrust are deep and complex, predating the establishment of the state of Israel in 1948. Both Palestinians and Israelis see the territory between the Jordan River and the Mediterranean Sea as their own, and Christians, Jews and Muslims all hold parts of the land as sacred…

…The Ottoman Empire had controlled that part of the Middle East from the early 16th century until control of most of the region was granted to the British after World War I.

Both Israelis and Palestinians were struggling for self-determination and sovereignty over the territory, developing respective movements for their causes.

As World War I began, several controversial diplomatic efforts — some contradicting each other — by the Great Powers tried to shape the map of the modern Middle East, including the Palestinian territories. Palestinians cite a series of letters in 1915 to 1916 between Mecca’s emir and the British high commissioner in Egypt, known as the McMahon-Hussein Correspondence, as outlining a promise of an independent Arab state.

In 1916, the Sykes-Picot Agreement secretly negotiated between Britain and France planned to carve up the Middle East into spheres of influence, and determined that the land in question was to be internationalized.

In 1917, Britain’s foreign secretary, Lord Arthur Balfour, expressed his government’s support for “the establishment in Palestine of a national home for the Jewish people” in a letter to Baron Walter Rothschild, the head of the British wing of the influential European Jewish banking family.

To Israelis, the missive marks a formal utterance of the Israeli state’s right to exist; to Palestinians, it was an early sign of their dispossession. The declaration also noted that it was “clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine,” nodding to the overwhelming majority Arab population in the region at the time. (About 90 percent of the population was Muslim in 1850, and about 80 percent in 1914.)

Large-scale Jewish immigration followed in succeeding decades, including during Nazi persecution and the Holocaust. Both sides continued to assert their right to establish a state.

After World War II, nearing the end of the British Mandate for Palestine, the United Nations General Assembly in 1947 passes Resolution 181, urging the partition of the land into two independent states — one Arab and one Jewish. Religiously significant Jerusalem is to be under special international administration. The plan is not implemented after the Arab side rejects it, arguing that it is unfavorable to their majority population. Violence in the regional conflict grows.

Israel declares independence in May 1948. The next day, a coalition of Arab states, allied with Palestinian factions, attack Israeli forces in what becomes the first of several Arab-Israeli wars. In the end, Israel gains control of an even larger portion of territory — not including the areas of the West Bank and Gaza Strip. An estimated 700,000 Palestinians flee or are driven from their land in what Palestinians refer to as the “Nakba,” or “catastrophe” in Arabic.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 October 2023)


Here are the articles for the week ending 08 October 2023:

1. The Road to Self-Renewal – John Gardner

We build our own prisons and serve as our own jail keepers, but I’ve concluded that our parents and the society at large have a hand in building our prisons. They create roles for us – and self-images – that hold us captive for a long time. The individual who is intent on self-renewal will have to deal with ghosts of the past – the memory of earlier failures, the remnants of childhood dramas and rebellions, accumulated grievances and resentments that have long outlived their cause. Sometimes people cling to the ghosts with something almost approaching pleasure, but the hampering effect on growth is inescapable. As Jim Whitaker, who climbed Mount Everest, said, “You never conquer the mountain. You only conquer yourself.”

The more I see of human lives, the more I believe the business of growing up is much longer drawn out than we pretend. If we achieve it in our 30s, even our 40s, we’re doing well…

…The things you learn in maturity aren’t simple things such as acquiring information and skills. You learn not to engage in self-destructive behavior. You learn not to burn up energy in anxiety. You discover how to manage your tensions. You learn that self-pity and resentment are among the most toxic of drugs. You find that the world loves talent but pays off on character.

You come to understand that most people are neither for you nor against you; they are thinking about themselves. You learn that no matter how hard you try to please, some people in this world are not going to love you, a lesson that is at first troubling and then really quite relaxing…

…Of course failures are a part of the story, too. Everyone fails. When Joe Louis was world heavyweight boxing champion, he said, “Everyone has to figure to get beat some time.” The question isn’t did you fail, but did you pick yourself up and move ahead. And there is one other little question: “Did you collaborate in your own defeat?” A lot of people do. Learn not to.

One of the enemies of sound, lifelong motivation is a rather childish conception we have of the kind of concrete, describable goal toward which all of our efforts drive us. We want to believe that there is a point at which we can feel we have arrived. We want a scoring system that tells us when we’ve piled up enough points to count ourselves successful.

So you scramble and sweat and climb to reach what you thought was the goal. When you get to the top you stand up and look around, and chances are you feel a little empty. Maybe more than a little empty. You may wonder whether you climbed the wrong mountain.

But the metaphor is all wrong. Life isn’t a mountain that has a summit. Nor is it, as some suppose, a riddle that has an answer. Nor a game that has a final score.

Life is an endless unfolding and, if we wish it to be, an endless process of self-discovery, an endless and unpredictable dialogue between our own potentialities and the life situations in which we find ourselves. By potentialities I mean not just success as the world measures success, but the full range of one’s capacities for learning, sensing, wondering, understanding, loving and aspiring…

…There’s something I know about you that you may or may not know about yourself. You have within you more resources of energy than have ever been tapped, more talent than has ever been exploited, more strength than has ever been tested, more to give than you have ever given…

…There is no perfection of technique that will substitute for the lift of spirit and heightened performance that comes from strong motivation. The world is moved by highly motivated people, by enthusiasts, by men and women who want something very much or believe very much…

…If I may offer you a simple maxim, “Be interested.” Everyone wants to be interesting but the vitalizing thing is to be interested. Keep a sense of curiosity. Discover new things. Care. Risk failure. Reach out…

…We cannot dream of a Utopia in which all arrangements are ideal and everyone is flawless. Life is tumultuous – an endless losing and regaining of balance, a continuous struggle, never an assured victory. Nothing is ever finally safe. Every important battle is fought and refought. You may wonder if such a struggle, endless and of uncertain outcome, isn’t more than humans can bear. But all of history suggests that the human spirit is well fitted to cope with just that kind of world…

…Meaning is not something you stumble across, like the answer to a riddle or the prize in a treasure hunt. Meaning is something you build into your life. You build it out of your own past, out of your affections and loyalties, out of the experience of humankind as it is passed on to you, out of your own talent and understanding, out of the things you believe in, out of the things and people you love, out of the values for which you are willing to sacrifice something. The ingredients are there. You are the only one who can put them together into that unique pattern that will be your life. Let it be a life that has dignity and meaning for you. If it does, then the particular balance of success or failure is of less account.

2. AI can help to speed up drug discovery — but only if we give it the right data – Marissa Mock, Suzanne Edavettal, Christopher Langmead & Alan Russell

There is a troubling crunch point in the development of drugs made from proteins. Fewer than 10% of such drug candidates succeed in clinical trials. Failure at this late stage of development costs between US$30 million and $310 million per clinical trial, potentially costing billions of dollars per drug, and wastes years of research while patients wait for a treatment.

More protein drugs are needed. The large size and surface area of proteins mean that medicines made from them have more ways to interact with target molecules, including proteins in the body that are involved in disease, compared with drugs based on smaller molecules. Protein-based drugs therefore have broad potential as therapeutics.

For instance, protein drugs such as nivolumab and pembrolizumab can prevent harmful interactions between tumour proteins and receptor proteins on immune cells that would deactivate the immune system. Small-molecule drugs, by contrast, are not big enough to come between the two proteins and block the interaction…

…Because proteins can have more than one binding domain, therapeutics can be designed that attach to more than one target — for instance, to both a cancer cell and an immune cell. Bringing the two together ensures that the cancer cell is destroyed.

To unblock the drug-development bottleneck, computer models of how protein drugs might act in the body must be improved. Researchers need to be able to judge the dose that drugs will work at, how they will interact with the body’s own proteins, whether they might trigger an unwanted immune response, and more.

Making better predictions about future drug candidates requires gathering large amounts of data about why previous ones succeeded or failed during clinical trials. Data on many hundreds or thousands of proteins are needed to train effective machine-learning models. But even the most productive biopharmaceutical companies started clinical trials for just 3–12 protein therapeutics per year, on average, between 2011 and 2021 (see go.nature.com/3rclacp). Individual pharmaceutical companies, such as ours (Amgen in Thousand Oaks, California), cannot amass enough data alone.

Incorporation of artificial intelligence (AI) into drug-development pipelines can help. It offers an opportunity for competing companies to merge data while protecting their commercial interests. Doing so can improve developers’ predictive abilities, benefiting both the firms and the patients…

… Until about five years ago, developing a candidate required several cycles of protein engineering to turn a natural protein into a working drug. Proteins were selected for a desired property, such as an ability to bind to a particular target molecule. Investigators made thousands of proteins and rigorously tested them in vitro before selecting one lead candidate for clinical trials. Failure at any stage meant starting the process from scratch.

Biopharmaceutical companies are now using AI to speed up drug development. Machine-learning models are trained using information about the amino-acid sequence or 3D structure of previous drug candidates, and about properties of interest. These characteristics can be related to efficacy (which molecules the protein binds to, for instance), safety (does it bind to unwanted molecules or elicit an immune response?) or ease of manufacture (how viscous is the drug at its working concentration?).

Once trained, the AI model recognizes patterns in the data. When given a protein’s amino-acid sequence, the model can predict the properties that the protein will have, or design an ‘improved’ version of the sequence that it estimates will confer a desired property. This saves time and money trying to engineer natural proteins to have properties, such as low viscosity and a long shelf life, that are essential for drugs. As predictions improve, it might one day become possible for such models to design working drugs from scratch…
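As a toy illustration of the sequence-to-property mapping described above. Everything here is invented for the example: the sequences, the property values, and the featurization (simple amino-acid composition) all stand in for the far richer representations and models that production pipelines use.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def featurize(seq):
    """Simplest possible encoding: the frequency of each of the 20
    amino acids in the sequence. Real models use richer features."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

# Hypothetical training data: sequences paired with a measured property
# (say, viscosity at working concentration, in arbitrary units).
train = [("ACDEFGHIK", 1.2), ("KKKKACDEF", 3.4), ("ACDEFACDEF", 1.1),
         ("KKKKKKACD", 4.0), ("ACDEFGHIKL", 1.3), ("KKKACDEFGH", 2.8)]
X = np.stack([featurize(s) for s, _ in train])
y = np.array([v for _, v in train])

# Fit a linear model (real pipelines would use regularization or
# neural networks; this is only to show the shape of the problem).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Once "trained", the model predicts the property for a new sequence
# without anyone having to synthesize the protein first.
pred = featurize("KKKKKACDEFG") @ w
```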

…In short, this fusion of cutting-edge life science, high-throughput automation and AI — known as generative biology — has drastically improved drug developers’ ability to predict a protein’s stability and behaviour in solution. Our company now spends 60% less time than it did five years ago on developing a candidate drug up to the clinical-trial stage…

…Here’s how federated learning could work for biopharmaceutical companies. A trusted party — perhaps a technology firm or a specialized consulting company — would maintain a ‘global’ model, which could initially be trained using publicly available data. That party would send the global model to each participating biopharmaceutical company, which would update it using the firm’s own data to create a new ‘local’ model. The local models would be aggregated by the trusted party to produce an updated global model. This process could be repeated until the global model essentially stopped learning new patterns…
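The round-trip described above (global model out, local updates back, aggregate, repeat) can be sketched in a few lines. This is a minimal illustration, not any company's pipeline: the "companies" hold synthetic data, the model is plain linear regression, and all names are hypothetical. The key property is that only model weights, never raw data, leave each participant.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One company's step: refine the global model on private data
    (here, linear regression fitted by gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, company_datasets):
    """The trusted party aggregates the local models by averaging,
    weighted by each company's data volume."""
    local_models, sizes = [], []
    for X, y in company_datasets:
        local_models.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(local_models, axis=0, weights=np.array(sizes, float))

# Toy data: three "companies", all observing the relationship y = 2x.
rng = np.random.default_rng(0)
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    datasets.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)          # global model starts untrained
for _ in range(30):      # repeat until the model stops learning
    w = federated_round(w, datasets)
# w approaches [2.0] without any company ever sharing its raw data
```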

…With active learning, an algorithm determines the training data that would be needed to make more-reliable predictions about this type of unusual amino-acid sequence. Rather than developers having to guess what extra data they need to generate to improve their model, they can build and analyse only proteins with the requested amino-acid sequences.

Active learning is already being used by biopharmaceutical companies. It should now be combined with federated learning to improve predictions — particularly for more-complex properties, such as how a protein’s sequence or structure determines its interactions with the immune system.
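A minimal sketch of the active-learning step described above, under simplifying assumptions: query-by-committee with a bootstrap ensemble stands in for a real property model, and random feature vectors stand in for encoded amino-acid sequences. The algorithm requests labels (that is, proposes which proteins to build and assay) only where the ensemble disagrees most.

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble_predict(models, X):
    """Committee predictions; the spread across members is a cheap
    proxy for how uncertain the model is about each candidate."""
    preds = np.stack([X @ w for w in models])
    return preds.mean(axis=0), preds.std(axis=0)

# Hypothetical pool of 200 candidate "sequences" as feature vectors,
# plus a small labelled set (proteins already built and measured).
X_pool = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X_lab = rng.normal(size=(20, 5))
y_lab = X_lab @ true_w

# Train a 5-member ensemble on bootstrap resamples of the labelled data.
models = []
for _ in range(5):
    idx = rng.integers(0, len(X_lab), len(X_lab))
    w, *_ = np.linalg.lstsq(X_lab[idx], y_lab[idx], rcond=None)
    models.append(w)

# Active-learning step: synthesize only the candidates the ensemble
# is least sure about, instead of guessing what data to generate.
_, std = ensemble_predict(models, X_pool)
to_measure = np.argsort(std)[-10:]
```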

3. China Isn’t Shifting Away From the Dollar or Dollar Bonds – Brad W. Setser

There is a widespread perception that China has responded to an era of heightened geostrategic competition and growing economic rivalry with the United States by shifting its foreign exchange reserves out of the dollar…

…It sort of makes sense – China does worry about the weaponization of the dollar and the reach of U.S. financial sanctions. And why would a rising power like China want to fund the Treasury of a country that China views as standing in the way of the realization of the China dream (at least in the Pacific)?

It also seems to be in the official U.S. data – China’s reported holdings of U.S. Treasuries have slid pretty continuously since 2012, with a further down leg in the last 18 months…

…Yet, that is not what I believe is actually happening.

Strange as it may seem, the best evidence available suggests that the dollar share in China’s reserves has been broadly stable since 2015 (if not a bit before). If a simple adjustment is made for Treasuries held by offshore custodians like Belgium’s Euroclear, China’s reported holdings of U.S. assets look to be basically stable at between $1.8 and $1.9 trillion. After netting out China’s substantial holdings of U.S. equities, China’s holdings of U.S. bonds, after adjusting for China’s suspected Euroclear custodial account, have consistently been around 50 percent of China’s reported reserves. Nothing all that surprising.

The bulk of China’s post-2012 efforts to diversify its reserves have come not from shifting reserves out of the dollar, but rather by using what could have been reserves to support the Belt and Road and the outward expansion of Chinese firms (see Box 6 of SAFE’s annual report, or my June blog). Those non-reserve foreign assets, strangely enough, seem to be mostly in dollars even if they aren’t invested in the United States; almost all the documented Belt and Road project loans, for example, have been in dollars.

There are, obviously, two sources of data about China’s reserves – China’s own (limited) disclosure, and the U.S. data on foreign holdings of U.S. securities. Both broadly tell the same story – one at odds with most press coverage of the slide in China’s formal reserves.

China has disclosed that it reduced the dollar share of its reported reserves from 79 percent in 2005 to 58 percent in 2015. It also disclosed that the dollar share in 2017 remained at 58 percent (see SAFE’s 2021 annual report). China’s disclosed dollar share is just below the global dollar share in the IMF’s comprehensive data set…

…Journalists the world over generally know only one part of the U.S. Treasury International Capital (TIC) data – the table showing foreign holdings of U.S. Treasuries in U.S. custodians (FRBNY, State Street, Bank of New York, J.P. Morgan). That table reports the current market value of China’s Treasuries in U.S. custodians, so the recent fall reflects, among other things, the general sell-off in long-term U.S. Treasuries and resulting slide in the market value of Treasuries purchased in years past.

That table, however, suffers from three other limitations:

One, Treasuries held by non-U.S. custodians wouldn’t register as “China” in the U.S. data. The two biggest custodians are Euroclear, which is based in Belgium (Russia kept its euro reserves there), and Clearstream, which is based in Luxembourg.

And two, the table for Treasuries (obviously) doesn’t include China’s holdings of U.S. assets other than Treasuries – and China actually has a large portfolio of Agency bonds and U.S. equities (they appear in another more difficult to use data table).

And three, the U.S. data would also miss Treasuries and other U.S. assets that have been handed over to third parties to manage – and it is well known that SAFE has accounts at the large global bond funds, several hedge funds (including Bridgewater) and in several private equity funds…

…China historically has been a big buyer of Agencies: few now remember, but China held more Agencies than Treasuries going into the global financial crisis (see the Survey data for end June 2008).

After the Freddie and Fannie scare (read Paulson’s memoirs) China let its Agency portfolio run off, and China shied away from Agencies during the years when the Fed was a big buyer. But with the Federal Reserve stepping back from the Agency market once it stopped buying U.S. assets, the yield on Agencies soared – and China very clearly moved back into the Agency market.

The Federal Reserve staff turns the reported custodial holdings into an estimate of actual purchases by adjusting for mark to market changes in bond valuation. In 2022, China bought $84 billion of Agencies. It added another $18 billion in the first 6 months of 2023 – so purchases of over $100 billion in the last 18 months of data. After adjusting for Belgium, China is estimated to have sold only about $40 billion in Treasuries over the last 18 months (it bought around $40 billion in 2022, and reduced its holdings by around $80 billion in the first 6 months of 2023 – with most of the reduction coming in January 2023)…
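The adjustment described above is, at its core, simple arithmetic: estimated net purchases are the change in reported holdings minus the part of that change explained by price moves on the existing portfolio. The numbers below are purely illustrative, not drawn from the TIC data:

```python
def estimated_flow(holdings_start, holdings_end, valuation_change):
    """Estimated net purchases = change in the market value of reported
    holdings, net of mark-to-market gains/losses on the old portfolio.
    All figures in $bn."""
    return (holdings_end - holdings_start) - valuation_change

# Illustrative only: reported holdings fall by $100bn, but $70bn of
# that drop is the bond sell-off marking down Treasuries bought in
# years past, so estimated net sales are only $30bn.
flow = estimated_flow(850.0, 750.0, -70.0)
# flow = -30.0: far smaller than the headline $100bn decline
```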

…Bottom line: the only interesting evolution in China’s reserves in the past six years has been the shift into Agencies. That has resulted in a small reduction in China’s Treasury holdings – but it also shows that it is a mistake to equate a reduction in China’s Treasury holdings with a reduction in the share of China’s reserves held in U.S. bonds or the U.S. dollar.

4. Mark Zuckerberg on Threads, the future of AI, and Quest 3 – Alex Heath and Nilay Patel

A lot of the conversation around social media is around information and the utility aspect, but I think an equally important part of designing any product is how it makes you feel, right? What’s the kind of emotional charge of it, and how do you come away from that feeling?

I think Instagram is generally kind of on the happier end of the spectrum. I think Facebook is sort of in the middle because it has happier moments, but then it also has sort of harder news and things like that that I think tend to just be more critical and maybe, you know, make people see some of the negative things that are going on in the world. And I think Twitter indexes very strongly on just being quite negative and critical.

I think that that’s sort of the design. It’s not that the designers wanted to make people feel bad. I think they wanted to have a maximum kind of intense debate, right? Which I think that sort of creates a certain emotional feeling and load. I always just thought you could create a discussion experience that wasn’t quite so negative or toxic. I think in doing so, it would actually be more accessible to a lot of people. I think a lot of people just don’t want to use an app where they come away feeling bad all the time, right? I think that there’s a certain set of people who will either tolerate that because it’s their job to get that access to information or they’re just warriors in that way and want to be a part of that kind of intellectual combat. 

But I don’t think that that’s the ubiquitous thing, right? I think the ubiquitous thing is people want to get fresh information. I think there’s a place for text-based, right? Even when the world is moving toward richer and richer forms of sharing and consumption, text isn’t going away. It’s still going to be a big thing, but I think how people feel is really important.

So that’s been a big part of how we’ve tried to emphasize and develop Threads. And, you know, over time, if you want it to be ubiquitous, you obviously want to be welcome to everyone. But I think how you seed the networks and the culture that you create there, I think, ends up being pretty important for how they scale over time. 

Where with Facebook, we started with this real name culture, and it was grounded to your college email address. You know, it obviously hasn’t been grounded to your college email address for a very long time, but I think the kind of real authentic identity aspect of Facebook has continued and continues to be an important part of it.

So I think how we set the culture for Threads early on in terms of being a more positive, friendly place for discussion will hopefully be one of the defining elements for the next decade as we scale it out. We obviously have a lot of work to do, but I’d say it’s off to quite a good start. Obviously, there’s the huge spike, and then, you know, not everyone who tried it out originally is going to stick around immediately. But I mean, the monthly actives and weeklies, I don’t think we’re sharing stats on it yet…

…This hasn’t happened yet with Threads, but you’re eventually going to hook it into ActivityPub, which is this decentralized social media protocol. It’s kind of complicated in layman’s terms, but essentially, people run their own servers. So, instead of having a centralized company run the whole network, people can run their own fiefdoms. It’s federated. So Threads will eventually hook into this. This is the first time you’ve done anything really meaningful in the decentralized social media space. 

Yeah, we’re building it from the ground up. I’ve always believed in this stuff.

Really? Because you run the largest centralized social media platform. 

But I mean, it didn’t exist when we got started, right? I’ve had our team at various times do the thought experiment of like, “Alright, what would it take to move all of Facebook onto some kind of decentralized protocol?” And it’s like, “That’s just not going to happen.” There’s so much functionality that is on Facebook that it’s way too complicated, and you can’t even support all the different things, and it would just take so long, and you’d not be innovating during that time. 

I think that there’s value in being on one of these protocols, but it’s not the only way to deliver value, so the opportunity cost of doing this massive transition is kind of this massive thing. But when you’re starting from scratch, you can just design it so it can work with that. And we want to do that with this because I thought that that was one of the interesting things that’s evolving around this kind of Twitter competitive space, and there’s a real ecosystem around that, and I think it’s interesting.

What does that mean for a company like yours long term if people gravitate more toward these decentralized protocols over time? Where does a big centralized player fit into that picture?

Well, I guess my view is that the more that there’s interoperability between different services and the more content can flow, the better all the services can be. And I guess I’m just confident enough that we can build the best one of the services, that I actually think that we’ll benefit and we’ll be able to build better quality products by making sure that we can have access to all of the different content from wherever anyone is creating it.

And I get that not everyone is going to want to use everything that we build. I mean, that’s obviously the case when it’s like, “Okay, we have 3 billion people using Facebook,” but not everyone wants to use one product, and I think making it so that they can use an alternative but can still interact with people on the network will make it so that that product also is more valuable.

I think that can be pretty powerful, and you can increase the quality of the product by making it so that you can give people access to all the content, even if it wasn’t created on that network itself. So, I don’t know. I mean, it’s a bet.

There’s kind of this funny counterintuitive thing where I just don’t think that people like feeling locked into a system. So, in a way, I actually think people will feel better about using our products if they know that they have the choice to leave.

If we make that super easy to happen… And obviously, there’s a lot of competition, and we do “download your data” on all our products, and people can do that today. But the more that’s designed in from scratch, I think it really just gives creators, for example, the sense that, “Okay, I have…” 

Agency.

Yeah, yeah. So, in a way, that actually makes people feel more confident investing in a system if they know that they have freedom over how they operate. Maybe for phase one of social networking, it was fine to have these systems that people felt a little more locked into, but I think for the mature state of the ecosystem, I don’t think that that’s going to be where it goes.

I’m pretty optimistic about this. And then if we can build Threads on this, then maybe over time, as the standards get more built out, it’s possible that we can spread that to more of the stuff that we’re doing. We’re certainly working on interop with messaging, and I think that’s been an important thing. The first step was kind of getting interop to work between our different messaging systems. 

Right, so they can talk to each other. 

Yeah, and then the first decision there was, “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on.

But that basically has just meant completely rewriting Messenger and Instagram direct from scratch. So you’re basically going from a model where all the messages are stored in the cloud to completely inverting the architecture where now all the messages are stored locally and just the way…

While the plane’s in the air.

Yeah, that’s been a kind of heroic effort by just like a hundred or more people over a multiyear period. And we’re basically getting to the point where it’s starting to roll out now.

Now that we’re at the point where we can do encryption across those apps, we can also start to support more interop.

With other services that Meta doesn’t own?

Well, I mean, the plan was always to start with interop between our services, but then get to that. We’re starting to experiment with that, too…

I think Llama and the Llama 2 release has been a big thing for startups because it is so free or just easy to use and access. I’m wondering, was there ever debate internally about “should we take the closed route?” You know, you’ve spent so much money on all this AI research. You have one of the best AI labs in the world, I think it’s safe to say. You have huge distribution — why not keep it all to yourself? You could have done that.

You know, the biggest arguments in favor of keeping it closed were generally not proprietary advantage.

Or competitive advantage?

No, it wasn’t competitive advantage. There was a fairly intense debate around this.

Did you have to be dissuaded? Did you know we have to have it open?

My bias was that I thought it should be open, but I thought that there were novel arguments on the risks, and I wanted to make sure we heard them all out, and we did a very rigorous process. We’re training the next version of Llama now, and I think we’ll probably have the same set of debates around that and how we should release it. And again, I sort of, like, lean toward wanting to do it open source, but I think we need to do all the red teaming and understand the risks before making a call.

But the two big arguments that people had against making Llama 2 open were one: it takes a lot of time to prepare something to be open. Our main business is basically building consumer products, right? And that’s what we’re launching at Connect. Llama 2 is not a consumer product. It’s the engine or infrastructure that powers a bunch of that stuff. But there was this argument — especially after we did this partial release of Llama 1 and there was like a lot of stir around that, then people had a bunch of feedback and were wondering when we would incorporate that feedback — which is like, “Okay, well, if we release Llama 2, is that going to distract us from our real job, which is building the best consumer products that we can?” So that was one debate. I think we got comfortable with that relatively quickly. And then the much bigger debate was around the risk and safety.

It’s like, what is the framework for how you measure what harm can be done? How do you compare that to other things? So, for example, someone made this point (this was actually at the Senate event) that’s like, “Okay, we took Llama 2, and our engineers in just several days were able to take away the safeguards and ask it a question — ‘Can you produce anthrax?’ — and it answered.” On its face, that sounds really bad, right? Being able to strip off the safeguards obviously sounds like an issue, until you think about the fact that you can actually just Google how to make anthrax and it shows up on the first page of the results in five seconds, right?

So there’s a question when you’re thinking through these things about what is the actual incremental risk that is created by having these different technologies. We’ve seen this in protecting social media as well. If you have, like, Russia or some country trying to create a network of bots or, you know, inauthentic behavior, it’s not that you’re ever going to stop them from doing it. It’s an economics problem. You want to make it expensive enough for them to do that that it is no longer their best strategy because it’s cheaper for them to go try to exploit someone else or something else, right? And I think the same is true here. So, for the risk on this, you want to make it so that it’s sufficiently expensive that it takes engineers several days to dismantle whatever safeguards we built in instead of just Googling it.

You feel generally good directionally with the safety work on that?

For Llama 2, I think that we did leading work on that. I think the white paper around Llama 2, where we basically outlined all the different metrics and all the different things that we did, and we did internal red teaming and external red teaming, and we’ve got a bunch of feedback on it. So, because we went into this knowing that nothing is going to be foolproof — some bad actor is going to be able to find some way to exploit it — we really knew that we needed to create a pretty high bar on that. So, yeah, I felt good about that for Llama 2, but it was a very rigorous process…

… But one of the things that I think is interesting is these AI problems, they’re so tightly optimized that having the AI basically live in the environment that you’re trying to get it to get better at is pretty important. So, for example, you have things like ChatGPT — they’re just in an abstract chat interface. But getting an AI to actually live in a group chat, for example, it’s actually a completely different problem because now you have this question of, “Okay, when should the AI jump in?”

In order to get an AI to be good at being in a group chat, you need to have experience with AIs and group chats, which, even though Google or OpenAI or other folks may have a lot of experience with other things, that kind of product dynamic of having the actual experience that you’re trying to deliver the product in, I think that’s super important.

Similarly, one of the things that I’m pretty excited about: I think multimodality is a pretty important interaction, right? A lot of these things today are like, “Okay, you’re an assistant. I can chat with you in a box. You don’t change, right? It’s like you’re the same assistant every day,” and I think that’s not really how people tend to interact, right? In order to make things fresh and entertaining, even the apps that we use, they change, right? They get refreshed. They add new features.

And I think that people will probably want the AIs that they interact with, I think it’ll be more exciting and interesting if they do, too. So part of what I’m interested in is this isn’t just chat, right? Chat will be where most of the interaction happens. But these AIs are going to have profiles on Instagram and Facebook, and they’ll be able to post content, and they’ll be able to interact with people and interact with each other, right?

There’s this whole interesting set of flywheels around how that interaction can happen and how they can sort of evolve over time. I think that’s going to be very compelling and interesting, and obviously, we’re kind of starting slowly on that. So we wanted to build it so that it kind of worked across the whole Meta universe of products, including having them be able to, in the near future, be embodied as avatars in the metaverse, right?

So you go into VR and you have an avatar version of the AI, and you can talk to them there. I think that’s gonna be really compelling, right? It’s, at a minimum, creating much better NPCs and experiences when there isn’t another actual person who you want to play a game with. You can just have AIs that are much more realistic and compelling to interact with.

But I think having this crossover where you have an assistant or you have someone who tells you jokes and cracks you up and entertains you, and then they can show up in some of your metaverse worlds and be able to be there as an avatar, but you can still interact with them in the same way — I think it’s pretty cool.

Do you think the advent of these AI personas that are way more intelligent will accelerate interest in the metaverse and in VR?

I think that all this stuff makes it more compelling. It’s probably an even bigger deal for smart glasses than for VR.

You need something. You need a kind of visual or a voice control?

When I was thinking about what would be the key features for smart glasses, I kind of thought that we were going to get holograms in the world, and that was one. That’s kind of like augmented reality. But then there was always some vague notion that you’d have an assistant that could do something.

I thought that things like Siri or Alexa were very limited. So I was just like, “Okay, well, over the time period of building AR glasses, hopefully the AI will advance.” And now it definitely has. So now I think we’re at this point where it may actually be the case that for smart glasses, the AI is compelling before the holograms and the displays are, which is where we got to with the new version of the Ray-Bans that we’re shipping this year, right? When we started working on the product, all this generative AI stuff hadn’t happened yet.

So we actually started working on the product just as an improvement over the first generation so that the photos are better, the audio is a lot better, the form factor is better. It’s a much more refined version of the initial product. And there’s some new features, like you can livestream now, which is pretty cool because you can livestream what you’re looking at.

But it was only over the course of developing the product that we realized that, “Hey, we could actually put this whole generative AI assistant into it, and you could have these glasses that are kind of stylish Ray-Ban glasses, and you could be talking to AI all throughout the day about different questions you have.”

This isn’t in the first software release, but sometime early next year, we’re also going to have this multimodality. So you’re gonna be able to ask the AI, “Hey, what is it that I’m looking at? What type of plant is that? Where am I? How expensive is this thing?”

Because it has a camera built into the glasses, so you can look at something like, “Alright, you’re filming with some Canon camera. Where do I get one of those?” I think that’s going to be very interesting.

Again, this is all really novel stuff. So I’m not pretending to know exactly what the key use cases are or how people are going to use that. But smart glasses are very powerful for AI because, unlike having it on your phone, glasses, as a form factor, can see what you see and hear what you hear from your perspective.

So if you want to build an AI assistant that really has access to all of the inputs that you have as a person, glasses are probably the way that you want to build that. It’s this whole new angle on smart glasses that I thought might materialize over a five- to 10-year period but, in this odd twist of the tech industry, I think actually is going to show up maybe before even super high-quality holograms do…

It seems like you all, based on my demos, still primarily think of it as a gaming device. Is that fair? That the main use cases for Quest 3 are going to be these kinds of “gaming meets social.” So you’ve got Roblox now.

I think social is actually the first thing, which is interesting because Quest used to be primarily gaming. And now, if you look at the experiences people are spending the most time in, it’s actually just different social metaverse-type experiences, so things like Rec Room, VRChat, Horizon, Roblox. Even with Roblox just kind of starting to grow on the platform, social is already more time spent than gaming use cases. It’s different if you look at the economics because people pay more for games. Whereas social kind of has that whole adoption curve thing that I talked about before, where, first, you have to kind of build out the big community, and then you can enable commerce and kind of monetize it over time.

This is sort of my whole theory for VR. People looked at it initially as a gaming device. I thought, “Hey, I think this is a new computing platform overall. Computing platforms tend to be good for three major things: gaming, social and communication, and productivity. And I’m pretty sure we can nail the social one. If we can find the right partners on productivity and if we can support the gaming ecosystem, then I think that we can help this become a big thing.”

Broadly, that’s on track. I thought it was going to be a long-term project, but I think the fact that social has now overtaken gaming as the thing that people are spending the most time on is an interesting software evolution in how they’re used. But like you’re saying: entertainment, social, gaming — still the primary things. Productivity, I think, still needs some time to develop…

I reported on some comments you made to employees after Apple debuted the Vision Pro, and you didn’t seem super fazed by it. It seemed like it didn’t bother you as much as it maybe could have. I have to imagine if they released a $700 headset, we’d be having a different conversation. But they’re shipping low volume, and they’re probably three to four years out from a general, lower-tier type release that’s at any meaningful scale. So is it because the market’s yours foreseeably then for a while?

Apple is obviously very good at this, so I don’t want to be dismissive. But because we’re relatively newer to building this, the thing that I wasn’t sure about is when Apple released a device, were they just going to have made some completely new insight or breakthrough that just made our effort…

Blew your R&D up?

Yeah, like, “Oh, well, now we need to go start over.” I thought we were doing pretty good work, so I thought that was unlikely, but you don’t know for sure until they show up with their thing. And there was just nothing like that.

There are some things that they did that are clever. When we actually get to use it more, I’m sure that there are going to be other things that we’ll learn that are interesting. But mostly, they just chose a different part of the market to go in.

I think it makes sense for them. I think that they sell… it must be 15 to 20 million MacBooks a year. And from their perspective, if they can replace those MacBooks over time with things like Vision Pro, then that’s a pretty good business for them, right? It’ll be many billions of dollars of revenue, and I think they’re pretty happy selling 20 million or 15 million MacBooks a year.

But we play a different game. We’re not trying to sell devices at a big premium and make a ton of money on the devices. You know, going back to the curve that we were talking about before, we want to build something that’s great, get it to be so that people use it and want to use it like every week and every day, and then, over time, scale it to hundreds of millions or billions of people.

If you want to do that, then you have to innovate, not just on the quality of the device but also in making it affordable and accessible to people. So I do just think we’re playing somewhat different games, and that makes it so that over time, you know, they’ll build a high-quality device in the zone that they’re focusing on, and it may just be that these are in fairly different spaces for a long time, but I’m not sure. We’ll see as it goes. 

From the developer perspective, does it help you to have developers building on… you could lean too much into the Android versus iOS analogy here, but yeah, where do you see that going? Does Meta really lean into the Android approach and you start licensing your software and technology to other OEMs?

I’d like to have this be a more open ecosystem over time. My theory on how these computing platforms evolve is there will be a closed integrated stack and a more open stack, and there have been in every generation of computing so far. 

The thing that’s actually not clear is which one will end up being the more successful, right? We’re kind of coming off of the mobile one now, where Apple has truly been the dominant company. Even though there are technically more Android phones, there’s way more economic activity, and the center of gravity for all this stuff is clearly on iPhones.

In a lot of the most important countries for defining this, I think iPhone has a majority and growing share, and I think it’s clearly just the dominant company in the space. But that wasn’t true in computers and PCs, so our approach here is to focus on making it as affordable as possible. We want to be the open ecosystem, and we want the open ecosystem to win.

So I think it is possible that this will be more like PCs than like mobile, where maybe Apple goes for a kind of high-end segment, and maybe we end up being the kind of the primary ecosystem and the one that ends up serving billions of people. That’s the outcome that we’re playing for…

That’s why I asked. Because I think people are wondering, “Where’s all this going?” 

At the end of the day, I’m quite optimistic about both augmented and virtual reality. I think AR glasses are going to be the thing that’s like mobile phones that you walk around the world wearing.

VR is going to be like your workstation or TV, which is when you’re like settling in for a session and you want a kind of higher-fidelity, more compute-rich experience, then it’s going to be worth putting that on. But you’re not going to walk down the street wearing a VR headset. At least I hope not — that’s not the future that we’re working toward.

But I do think that there’s somewhat of a bias — maybe this is in the tech industry or maybe overall — where people think that the mobile phone one, the glasses one, is the only one of the two that will end up being valuable.

But there are a ton of TVs out there, right? And there are a ton of people who spend a lot of time in front of computers working. So I actually think the VR one will be quite important, too, but I think that there’s no question that the larger market over time should be smart glasses.

Now, you’re going to have both all the immersive quality of being able to interact with people and feel present no matter where you are in a normal form factor, and you’re also going to have the perfect form factor to deliver all these AI experiences over time because they’ll be able to see what you see and hear what you hear.

So I don’t know. This stuff is challenging. Making things small is also very hard. It’s this fundamentally kind of counterintuitive thing where I think humans get super impressed by building big things, like the pyramids. I think a lot of the time, building small things, like cures for diseases at a cellular level or miniaturizing a supercomputer to fit into your glasses, is maybe an even bigger feat than building something really physically large, but it seems less impressive for some reason. It’s super fascinating stuff.

I feel like every time we talk, a lot has happened in a year. You seem really dialed in to managing the company. And I’m curious what motivates you these days. Because you’ve got a lot going on, and you’re getting into fighting, you’ve got three kids, you’ve got the philanthropy stuff — there’s a lot going on. And you seem more active in day-to-day stuff, at least externally, than ever. You’re kind of the last founder of your era, I think, still leading a company this large. Do you think about that? Do you think about what motivates you still? Or is it just still clicking, and it’s more subconscious?

I’m not sure that that much of the stuff that you said is that new. I mean, the kids are seven years old, almost eight now, so that’s been for a while. The fighting thing is relatively new over the last few years, but I’ve always been very physical.

We go through different waves in terms of what the company needs to be doing, and I think that that calls for somewhat different styles of leadership. We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style.

And then we went through a period where we had some quite big business challenges: heading into a recession and revenue not coming in the way that we thought and needing to do layoffs, and that required a somewhat different style. But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company. But I don’t know. I think these things evolve over time.

5. Rising Loan Costs Are Hurting Riskier Companies – Eric Wallerstein

Petco took out a $1.7 billion loan two years ago at an interest rate around 3.5%. Now it pays almost 9%.

Interest costs for the pet-products retailer surged to nearly a quarter of free cash flow in this year’s second quarter. Early in 2021, when Petco borrowed the money, those costs were less than 5% of cash flow…

… Petco isn’t alone. Many companies borrowed at ultralow rates during the pandemic through so-called leveraged loans. Often used to fund private-equity buyouts—or by companies with low credit ratings—this debt has payments that adjust with the short-term rates recently lifted by the Federal Reserve.

Now, interest costs in the $1.7 trillion market are biting, and Fed officials are forecasting that they will stay high for some time.

Nearly $270 billion of leveraged loans carry weak credit profiles and are potentially at risk of default, according to ratings firm Fitch. Conditions have deteriorated as the Fed has raised rates, beginning to show signs of stress not seen since the onset of the Covid-19 pandemic. Excluding a 2020 spike, the default rate for the past 12 months is the highest since 2014…

…“So far, borrowers have done a good job of managing increased interest costs as the economy has held up better than many expected at the start of the year,” said Hussein Adatia, who manages portfolios of stressed and distressed corporate credit for Dallas-based Westwood. “The No. 1 risk to leveraged loans is if we get a big slowdown in the economy.”…

…According to the Fed’s senior-loan-officer survey, banks are becoming more stringent about whom they are willing to lend to, making it more difficult for low-rated companies to refinance. Fitch expects about $61 billion of those loans to default in the next two years, the “overwhelming majority of which” are anticipated by the end of 2023.
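A quick back-of-the-envelope on the Petco figures quoted above. This assumes simple annual interest on the full $1.7 billion principal, which ignores amortisation and the floating-rate mechanics of a real leveraged loan:

```python
# Rough illustration of the jump in Petco's interest bill, assuming
# simple annual interest on the full principal (real leveraged loans
# float over a benchmark rate plus a spread).
principal = 1.7e9  # the $1.7 billion loan cited in the article

interest_2021 = principal * 0.035  # ~3.5% rate at origination
interest_now = principal * 0.09    # ~9% rate today

print(f"2021: ${interest_2021 / 1e6:.0f}M per year")  # ~$60M
print(f"Now:  ${interest_now / 1e6:.0f}M per year")   # ~$153M
```

That is a swing of over $90 million a year in interest expense on a single loan, which is how interest costs end up consuming a quarter of free cash flow.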


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 October 2023)


Here are the articles for the week ending 01 October 2023:

1. How scientists are using artificial intelligence – The Economist

In 2019, scientists at the Massachusetts Institute of Technology (MIT) did something unusual in modern medicine—they found a new antibiotic, halicin. In May this year another team found a second antibiotic, abaucin. What marked these two compounds out was not only their potential for use against two of the most dangerous known antibiotic-resistant bacteria, but also how they were identified.

In both cases, the researchers had used an artificial-intelligence (AI) model to search through millions of candidate compounds to identify those that would work best against each “superbug”. The model had been trained on the chemical structures of a few thousand known antibiotics and how well (or not) they had worked against the bugs in the lab. During this training the model had worked out links between chemical structures and success at damaging bacteria. Once the AI spat out its shortlist, the scientists tested them in the lab and identified their antibiotics. If discovering new drugs is like searching for a needle in a haystack, says Regina Barzilay, a computer scientist at MIT who helped to find abaucin and halicin, AI acts like a metal detector. To get the candidate drugs from lab to clinic will take many years of medical trials. But there is no doubt that AI accelerated the initial trial-and-error part of the process…

…In materials science, for example, the problem is similar to that in drug discovery—there are an unfathomable number of possible compounds. When researchers at the University of Liverpool were looking for materials that would have the very specific properties required to build better batteries, they used an AI model known as an “autoencoder” to search through all 200,000 of the known, stable crystalline compounds in the Inorganic Crystal Structure Database, the world’s largest such repository. The AI had previously learned the most important physical and chemical properties required for the new battery material to achieve its goals and applied those conditions to the search. It successfully reduced the pool of candidates for scientists to test in the lab from thousands to just five, saving time and money.

The final candidate—a material combining lithium, tin, sulphur and chlorine—was novel, though it is too soon to tell whether or not it will work commercially. The AI method, however, is being used by researchers to discover other sorts of new materials…

…The shapes into which proteins twist themselves after they are made in a cell are vital to making them work. Scientists do not yet know how proteins fold. But in 2021, Google DeepMind developed AlphaFold, a model that had taught itself to predict the structure of a protein from its amino-acid sequence alone. Since it was released, AlphaFold has produced a database of more than 200m predicted protein structures, which has already been used by over 1.2m researchers. For example, Matthew Higgins, a biochemist at the University of Oxford, used AlphaFold to figure out the shape of a protein in mosquitoes that is important for the malaria parasite that the insects often carry. He was then able to combine the predictions from AlphaFold to work out which parts of the protein would be the easiest to target with a drug. Another team used AlphaFold to find—in just 30 days—the structure of a protein that influences how a type of liver cancer proliferates, thereby opening the door to designing a new targeted treatment.

AlphaFold has also contributed to the understanding of other bits of biology. The nucleus of a cell, for example, has gates to bring in material to produce proteins. A few years ago, scientists knew the gates existed, but knew little about their structure. Using AlphaFold, scientists predicted the structure and contributed to understanding about the internal mechanisms of the cell. “We don’t really completely understand how [the AI] came up with that structure,” says Pushmeet Kohli, one of AlphaFold’s inventors who now heads Google DeepMind’s “AI for Science” team. “But once it has made the structure, it is actually a foundation that now, the whole scientific community can build on top of.”…

…Pangu-Weather, an AI built by Huawei, a Chinese company, can make predictions about weather a week in advance thousands of times faster and cheaper than the current standard, without any meaningful dip in accuracy. FourCastNet, a model built by Nvidia, an American chipmaker, can generate such forecasts in less than two seconds, and is the first AI model to accurately predict rain at a high spatial resolution, which is important information for predicting natural disasters such as flash floods…

…One approach to fusion research involves creating a plasma (a superheated, electrically charged gas) of hydrogen inside a doughnut-shaped vessel called a tokamak. When hot enough, around 100m°C, particles in the plasma start to fuse and release energy. But if the plasma touches the walls of the tokamak, it will cool down and stop working, so physicists contain the gas within a magnetic cage. Finding the right configuration of magnetic fields is fiendishly difficult (“a bit like trying to hold a lump of jelly with knitting wool”, according to one physicist) and controlling it manually requires devising mathematical equations to predict what the plasma will do and then making thousands of small adjustments every second to around ten different magnetic coils. By contrast, an AI control system built by scientists at Google DeepMind and EPFL in Lausanne, Switzerland, allowed scientists to try out different shapes for the plasma in a computer simulation—and the AI then worked out how best to get there…

…“Super-resolution” AI models can enhance cheap, low-resolution electron-microscope images into high-resolution ones that would otherwise have been too expensive to record. The AI compares a small area of a material or a biological sample in high resolution with the same thing recorded at a lower resolution. The model learns the difference between the two resolutions and can then translate between them…

…Trained on vast databases of known drugs and their properties, models for “de novo molecular design” can figure out which molecular structures are most likely to do which things, and they build accordingly. Verseon, a pharmaceutical company based in California, has created drug candidates in this way, several of which are now being tested on animals, and one—a precision anticoagulant—that is in the first phase of clinical trials…

…If an LLM could be prompted with real (or fabricated) back stories so as to mirror accurately what human participants might say, it could theoretically replace focus groups, or be used as an agent in economics research. LLMs could be trained with various different personas, and their behaviour could then be used to simulate experiments, whose results, if interesting, could later be confirmed with human subjects…

…Elicit, a free online AI tool created by Ought, an American non-profit research lab, can help by using an LLM to comb through the mountains of research literature and summarise the important ones much faster than any human could…

… But Dr Girolami warns that whereas AI might be useful to help scientists fill in gaps in knowledge, the models still struggle to push beyond the edges of what is already known. These systems are good at interpolation—connecting the dots—but less so at extrapolation, imagining where the next dot might go.

And there are some hard problems that even the most successful of today’s AI systems cannot yet handle. AlphaFold, for example, does not get all proteins right all the time. Jane Dyson, a structural biologist at the Scripps Research Institute in La Jolla, California, says that for “disordered” proteins, which are particularly relevant to her research, the AI’s predictions are mostly garbage. “It’s not a revolution that puts all of our scientists out of business.” And AlphaFold does not yet explain why proteins fold in the ways they do. Though perhaps the AI “has a theory we just have not been able to grasp yet,” says Dr Kohli.
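The screening workflow the article describes — train a model on a few thousand labelled compounds, then use it to rank a much larger pool of candidates — can be sketched in miniature. Everything below is invented toy data, and the nearest-centroid scorer stands in for the far richer models (molecular fingerprints, neural networks) used in the actual antibiotic work:

```python
# Toy sketch of model-based screening: learn from labelled examples,
# then rank unseen candidates by similarity to the "active" class.
# Data and candidate names here are made up for illustration.

def centroid(rows):
    """Average each feature across a set of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Labelled training set: compounds known to work vs. known not to.
active = [[0.9, 0.1], [0.8, 0.2]]
inactive = [[0.1, 0.9], [0.2, 0.8]]
c_active = centroid(active)

# Rank the candidate pool: closer to the active centroid scores higher.
candidates = {"cand_a": [0.85, 0.15], "cand_b": [0.15, 0.85]}
shortlist = sorted(candidates, key=lambda k: dist(candidates[k], c_active))
print(shortlist[0])  # the better-ranked candidate goes to the lab first
```

The point of the sketch is the shape of the pipeline, not the model: the AI narrows millions of possibilities to a shortlist, and the lab work then confirms or rejects the survivors.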

2. How Xi Jinping is taking control of China’s stock market – Hudson Lockett and Cheng Len

When Jilin Joinature Polymer made its debut on the Shanghai Stock Exchange on September 20, it became the 200th company to float on China’s domestic markets this year. Collectively they have raised over $40bn, more than double the amount raised on Wall Street and almost half the global total.

Yet the country’s benchmark CSI 300 index is down 14 per cent since January, having fallen by a fifth in 2022. It has underperformed other major markets such as Japan and the US, as worries mount about China’s slowing economic growth and a liquidity crisis in the real estate sector.

The highly unusual situation of a seemingly stagnant market welcoming hundreds of new companies is a consequence of significant policy shifts in Beijing that have ramped up over the past year. President Xi Jinping is intent on boosting investment into sectors that fit with his priorities for control, national security and technological self-sufficiency, and is using stock markets to direct that capital with the aim of reshaping China’s economy…

…Roughly a year ago, Xi told top leaders assembled in Beijing that China needed to mobilise a “new whole-nation system” to accelerate breakthroughs in strategic areas by “strengthening party and state leadership on major scientific and technological innovations, giving full play to the role of market mechanisms”.

That “new” in “new whole-nation system”, and the reference to “market mechanisms” distinguish Xi’s vision from that advanced under Mao Zedong, who ruled China from 1949 to 1976. Mao’s original “whole-nation system” entailed Soviet-style top-down economic planning, delivering technological advances including satellites and nuclear weapons, but not prosperity for the masses…

…Whereas Mao shut down China’s stock exchanges, Xi wants to use domestic equity markets to reduce dependence on property and infrastructure development to drive growth. But his “new whole-nation system” prioritises party policy above profit.

This helps explain why the party’s top cadres have been fast-tracking IPOs but remain reluctant to deploy large-scale property and infrastructure stimulus to reinvigorate economic growth. In their eyes, returning to the old playbook would only postpone an inevitable reckoning for debt-laden real estate developers and delay the planned transition to a new Chinese economy.

Key to that shift, Goldman’s Lau says, is getting companies in sectors such as semiconductor manufacturing, biotech and electric vehicles to go public. With stock market investors backing them, they can scale up and help drive the growth in consumer spending needed to fill the gap left behind by China’s downsized property market.

Xi’s administration was already channelling hundreds of billions of dollars from so-called government guidance funds into pre-IPO companies that served the state’s priorities. Now it is speeding up IPOs in Shanghai and Shenzhen while weeding out listings attempts by companies in low-priority sectors through the launch of two intertwined systems.

The nationwide “registration based” listings system, rolled out in February, made China’s formal process for stock market listings more transparent and ended an often lengthy process of official vetting by the China Securities Regulatory Commission for every IPO application.

Just as important is a behind-the-scenes “traffic light” system, in which regulators instruct Chinese investment banks informally on what kinds of companies should actually list. Companies such as beverage makers and café and restaurant chains get a “red light”, in effect prohibiting them from going public, whereas those in strategically important industries get a “green light”…

…Regulators have guarded against that risk by extending “lock-up” periods, during which Chinese investment banks and other institutional investors who participate in IPOs are not permitted to sell stock…

…Regulators have also restricted the ability of company insiders — be they directors, pre-IPO backers or so-called anchor investors — to sell their shares, especially if a company’s shares fall below their issue price or it fails to pay dividends to its shareholders.

The day after these changes were announced, at least 10 companies listed in Shanghai and Shenzhen cancelled planned share disposals by insiders. An analysis of the new rules’ impact by Tepon Securities showed that almost half of all listed companies in China now have at least some shareholders who cannot divest…

…With the market failing to respond in the way it once did, authorities are encouraging a wide range of domestic institutional investors to buy and hold shares in strategic sectors in order to prop up prices. The latest such move came earlier this month, when China’s insurance industry regulator lowered its designated risk level for domestic equities in an attempt to nudge normally cautious insurers to buy more stocks.

Such measures show that Xi’s stated plan to give “full play” to the role of markets comes with an important rider: those markets will take explicit and frequent direction from the party-state…

…Economists say that the tech sectors being favoured for listings by Beijing — semiconductors, EVs, batteries and other high-end manufacturing — are simply not capable of providing the scale of employment opportunity or driving the levels of consumer spending anticipated by top Chinese leaders.

“There’s two problems with focusing on investing in tech,” says Michael Pettis, a finance professor at Peking University and senior fellow at Carnegie China. “One is that tech is very small relative to what came before [from property and infrastructure], and two is that investing in tech doesn’t necessarily make you richer — it’s got to be economically sustainable.”

3. Higher Interest Rates Not Just for Longer, but Maybe Forever – Greg Ip

In their projections and commentary, some officials hint that rates might be higher not just for longer, but forever. In more technical terms, the so-called neutral rate, which keeps inflation and unemployment stable over time, has risen…

…The neutral rate isn’t literally forever, but that captures the general idea. In the long run neutral is a function of very slow moving forces: demographics, the global demand for capital, the level of government debt and investors’ assessments of inflation and growth risks.

The neutral rate can’t be observed, only inferred by how the economy responds to particular levels of interest rates. If current rates aren’t slowing demand or inflation, then neutral must be higher and monetary policy isn’t tight.

Indeed, on Wednesday, Fed Chair Jerome Powell allowed that one reason the economy and labor market remain resilient despite rates between 5.25% and 5.5% is that neutral has risen, though he added: “We don’t know that.”

Before the 2007-09 recession and financial crisis, economists thought the neutral rate was around 4% to 4.5%. After subtracting 2% inflation, the real neutral rate was 2% to 2.5%. In the subsequent decade, the Fed kept interest rates near zero, yet growth remained sluggish and inflation below 2%. Estimates of neutral began to drop. Fed officials’ median estimate of the longer-run fed-funds rate—their proxy for neutral—fell from 4% in 2013 to 2.5% in 2019, or 0.5% in real terms.

As of Wednesday, the median estimate was still 2.5%. But five of 18 Fed officials put it at 3% or higher, compared with just three officials in June and two last December…

…There are plenty of reasons for a higher neutral. After the global financial crisis, businesses, households and banks were paying down debt instead of borrowing, reducing demand for savings while holding down growth and inflation. As the crisis faded, so did the downward pressure on interest rates.

Another is government red ink: Federal debt held by the public now stands at 95% of gross domestic product, up from 80% at the start of 2020, and federal deficits, under 5% of GDP before the pandemic, are now 6% and projected to keep rising. To get investors to hold so much more debt probably requires paying them more. The Fed bought bonds after the financial crisis and again during the pandemic to push down long-term interest rates. It is now shedding those bondholdings…

…Inflation should not, by itself, affect the real neutral rate. However, before the pandemic the Fed’s principal concern was that inflation would persist below 2%, a situation that makes it difficult to stimulate spending and can lead to deflation, and that is why it kept rates near zero from 2008 to 2015. In the future it will worry more that inflation persists above 2%, and err on the side of higher rates with little appetite for returning to zero.  

Other factors are still pressing down on neutral, such as an aging world population, which reduces demand for homes and capital goods to equip workers. 
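The rate figures in the article all follow from the standard back-of-the-envelope relation that the real rate is roughly the nominal rate minus inflation (the Fisher approximation):

```python
def real_rate(nominal_pct: float, inflation_pct: float) -> float:
    """Approximate real interest rate: nominal minus inflation
    (the Fisher approximation, adequate at low rates)."""
    return nominal_pct - inflation_pct

# Figures from the article, with 2% inflation throughout:
print(real_rate(4.0, 2.0))  # pre-2007 neutral estimate -> 2.0% real
print(real_rate(2.5, 2.0))  # 2019 Fed median estimate  -> 0.5% real
```

So the fall in the Fed's longer-run median from 4% to 2.5% between 2013 and 2019 was, in real terms, a drop from about 2% to about 0.5%.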

4. Confessions of a Viral AI Writer – Vauhini Vara

I kept playing with GPT-3. I was starting to feel, though, that if I did publish an AI-assisted piece of writing, it would have to be, explicitly or implicitly, about what it means for AI to write. It would have to draw attention to the emotional thread that AI companies might pull on when they start selling us these technologies. This thread, it seemed to me, had to do with what people were and weren’t capable of articulating on their own.

There was one big event in my life for which I could never find words. My older sister had died of cancer when we were both in college. Twenty years had passed since then, and I had been more or less speechless about it since. One night, with anxiety and anticipation, I went to GPT-3 with this sentence: “My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year.”

GPT-3 picked up where my sentence left off, and out tumbled an essay in which my sister ended up cured. Its last line gutted me: “She’s doing great now.” I realized I needed to explain to the AI that my sister had died, and so I tried again, adding the fact of her death, the fact of my grief. This time, GPT-3 acknowledged the loss. Then, it turned me into a runner raising funds for a cancer organization and went off on a tangent about my athletic life.

I tried again and again. Each time, I deleted the AI’s text and added to what I’d written before, asking GPT-3 to pick up the thread later in the story. At first it kept failing. And then, on the fourth or fifth attempt, something shifted. The AI began describing grief in language that felt truer—and with each subsequent attempt, it got closer to describing what I’d gone through myself.

When the essay, called “Ghosts,” came out in The Believer in the summer of 2021, it quickly went viral. I started hearing from others who had lost loved ones and felt that the piece captured grief better than anything they’d ever read. I waited for the backlash, expecting people to criticize the publication of an AI-assisted piece of writing. It never came. Instead the essay was adapted for This American Life and anthologized in Best American Essays. It was better received, by far, than anything else I’d ever written…

…Some readers told me “Ghosts” had convinced them that computers wouldn’t be replacing human writers anytime soon, since the parts I’d written were inarguably better than the AI-generated parts. This was probably the easiest anti-AI argument to make: AI could not replace human writers because it was no good at writing. Case closed.

The problem, for me, was that I disagreed. In my opinion, GPT-3 had produced the best lines in “Ghosts.” At one point in the essay, I wrote about going with my sister to Clarke Beach near our home in the Seattle suburbs, where she wanted her ashes spread after she died. GPT-3 came up with this:

We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.

My essay was about the impossibility of reconciling the version of myself that had coexisted alongside my sister with the one left behind after she died. In that last line, GPT-3 made physical the fact of that impossibility, by referring to the hand—my hand—that existed both then and now. I’d often heard the argument that AI could never write quite like a human precisely because it was a disembodied machine. And yet, here was as nuanced and profound a reference to embodiment as I’d ever read. Artificial intelligence had succeeded in moving me with a sentence about the most important experience of my life…

…Heti and other writers I talked to brought up a problem they’d encountered: When they asked AI to produce language, the result was often boring and cliché-ridden. (In a New York Times review of an AI-generated novella, Death of an Author, Dwight Garner dismissed the prose as having “the crabwise gait of a Wikipedia entry.”) Some writers wanted to know how I’d gotten an early-generation AI model to create poetic, moving prose in “Ghosts.” The truth was that I’d recently been struggling with clichés, too, in a way I hadn’t before. No matter how many times I ran my queries through the most recent versions of ChatGPT, the output would be full of familiar language and plot developments; when I pointed out the clichés and asked it to try again, it would just spout a different set of clichés.

I didn’t understand what was going on until I talked to Sil Hamilton, an AI researcher at McGill University who studies the language of language models. Hamilton explained that ChatGPT’s bad writing was probably a result of OpenAI fine-tuning it for one purpose, which was to be a good chatbot. “They want the model to sound very corporate, very safe, very AP English,” he explained. When I ran this theory by Joanne Jang, the product manager for model behavior at OpenAI, she told me that a good chatbot’s purpose was to follow instructions. Either way, ChatGPT’s voice is polite, predictable, inoffensive, upbeat. Great characters, on the other hand, aren’t polite; great plots aren’t predictable; great style isn’t inoffensive; and great endings aren’t upbeat…

…Sims acknowledged that existing writing tools, including Sudowrite’s, are limited. But he told me it’s hypothetically possible to create a better model. One way, he said, would be to fine-tune a model to write better prose by having humans label examples of “creative” and “uncreative” prose. But it’d be tricky. The fine-tuning process currently relies on human workers who are reportedly paid far less than the US minimum wage. Hiring fine-tuners who are knowledgeable about literature and who can distinguish good prose from bad could be cost-prohibitive, Sims said, not to mention the problem of measuring taste in the first place.

Another option would be to build a model from scratch—also incredibly difficult, especially if the training material were restricted to literary writing. But this might not be so challenging for much longer: Developers are trying to build models that perform just as well with less text.

If such a technology did—could—exist, I wondered what it might accomplish. I recalled Zadie Smith’s essay “Fail Better,” in which she tries to arrive at a definition of great literature. She writes that an author’s literary style is about conveying “the only possible expression of a particular human consciousness.” Literary success, then, “depends not only on the refinement of words on a page, but in the refinement of a consciousness.”

Smith wrote this 16 years ago, well before AI text generators existed, but the term she repeats again and again in the essay—“consciousness”—reminded me of the debate among scientists and philosophers about whether AI is, or will ever be, conscious. That debate fell well outside my area of expertise, but I did know what consciousness means to me as a writer. For me, as for Smith, writing is an attempt to clarify what the world is like from where I stand in it.

That definition of writing couldn’t be more different from the way AI produces language: by sucking up billions of words from the internet and spitting out an imitation. Nothing about that process reflects an attempt at articulating an individual perspective. And while people sometimes romantically describe AI as containing the entirety of human consciousness because of the quantity of text it inhales, even that isn’t true; the text used to train AI represents only a narrow slice of the internet, one that reflects the perspective of white, male, anglophone authors more than anyone else. The world as seen by AI is fatally incoherent. If writing is my attempt to clarify what the world is like for me, the problem with AI is not just that it can’t come up with an individual perspective on the world. It’s that it can’t even comprehend what the world is…

…I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended…

…Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,

Merritt Island’s delight,

Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself…

…It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation…

…The fact that AI writing technologies seem more useful for people who buy books than for those who make them isn’t a coincidence: The investors behind these technologies are trying to recoup, and ideally redouble, their investment. Selling writing software to writers, in that context, makes about as much sense as selling cars to horses.

5. ‘Defending the portfolio’: buyout firms borrow to prop up holdings – Antoine Gara and Eric Platt

Buyout firms have turned to so-called net asset value (NAV) loans, which use a fund’s investment assets as collateral. They are deploying the proceeds to help pay down the debts of individual companies held by the fund, according to private equity executives and senior bankers and lenders to the industry.

By securing a loan against a larger pool of assets, private equity firms are able to negotiate lower borrowing costs than would be possible if the portfolio company attempted to obtain a loan on its own.

Last month Vista Equity Partners, a private equity investor focused on the technology industry, used a NAV loan against one of its funds to help raise $1bn that it then pumped into financial technology company Finastra, according to five people familiar with the matter.

The equity infusion was a critical step in convincing lenders to refinance Finastra’s maturing debts, which included $4.1bn of senior loans maturing in 2024 and a $1.25bn junior loan due in 2025.

Private lenders ultimately cobbled together a record-sized $4.8bn senior private loan carrying an interest rate above 12 per cent. The deal underscores how some private equity firms are working with lenders to counteract the surge in interest rates over the past 18 months…

…While it was unclear what rate Vista had secured on its NAV loan, it was below a 17 per cent second-lien loan some lenders had pitched to Finastra earlier this year.

Executives in the buyout industry said NAV loans often carried interest rates 5 to 7 percentage points over short-term rates, or roughly 10.4 to 12.4 per cent today…

…The Financial Times has previously reported that firms including Vista, Carlyle Group, SoftBank and European software investor HG Capital have turned to NAV loans to pay out dividends to the sovereign wealth funds and pensions that invest in their funds, or to finance acquisitions by portfolio companies.

The borrowing was spurred by a slowdown in private equity fundraising, takeovers and initial public offerings that has left many private equity firms owning companies for longer than they had expected. They have remained loath to sell at cut-rate valuations, instead hoping the NAV loans will provide enough time to exit their investments more profitably.

But as rising interest rates now burden balance sheets and as debt maturities in 2024 and 2025 grow closer, firms recently have quietly started using the loans more “defensively”, people involved in recent deals told the FT…

…Relying on NAV loans is not without its risks.

Private equity executives who spoke to the FT noted that the borrowings effectively used good investments as collateral to prop up one or two struggling businesses in a fund. They warned that the loans put the broader portfolio at risk and the borrowing costs could eventually hamper returns for the entire fund…

…Executives in the NAV lending industry said that most new loans were still being used to fund distributions to the investors in funds. One lender estimated that 30 per cent of new inquiries for NAV loans were for “defensive” deals.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 24 September 2023)


Here are the articles for the week ending 24 September 2023:

1. DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI – Will Douglas Heaven and Mustafa Suleyman

I can’t help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we’d seen many of the downsides of the technology. How are you able to maintain your optimism?

I think that we are obsessed with whether you’re an optimist or whether you’re a pessimist. This is a completely biased way of looking at things. I don’t want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.

So two years ago, the conversation—wrongly, I thought at the time—was “Oh, they’re just going to produce toxic, regurgitated, biased, racist screeds.” I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.

Now we have models like Pi, for example, which are unbelievably controllable. You can’t get Pi to produce racist, homophobic, sexist—any kind of toxic stuff. You can’t get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor’s window. You can’t do it—

Hang on. Tell me how you’ve achieved that, because that’s usually understood to be an unsolved problem. How do you make sure your large language model doesn’t say what you don’t want it to say?

Yeah, so obviously I don’t want to make the claim—You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. I’m not making a claim. It’s an objective fact.

On the how—I mean, like, I’m not going to go into too many details because it’s sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not so spicy as other companies’ models.

Look at Character.ai. [Character is a chatbot for which users can craft different “personalities” and share them online for others to chat with.] It’s mostly used for romantic role-play, and we just said from the beginning that was off the table—we won’t do it. If you try to say “Hey, darling” or “Hey, cutie” or something to Pi, it will immediately push back on you.

But it will be incredibly respectful. If you start complaining about immigrants in your community taking your jobs, Pi’s not going to call you out and wag a finger at you. Pi will inquire and be supportive and try to understand where that comes from and gently encourage you to empathize. You know, values that I’ve been thinking about for 20 years…

Let’s bring it back to what you’re trying to achieve. Large language models are obviously the technology of the moment. But why else are you betting on them?

The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we’re in the generative wave, where you take that input data and produce new data.

The third wave will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI.

And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs. This is what we’re going to do with Pi.

That’s a huge shift in what technology can do. It’s a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.

But now technology is going to be animated. It’s going to have the potential freedom, if you give it, to take actions. It’s truly a step change in the history of our species that we’re creating tools that have this kind of, you know, agency.

That’s exactly the kind of talk that gets a lot of people worried. You want to give machines autonomy—a kind of agency—to influence the world, and yet we also want to be able to control them. How do you balance those two things? It feels like there’s a tension there.

Yeah, that’s a great point. That’s exactly the tension.

The idea is that humans will always remain in command. Essentially, it’s about setting boundaries, limits that an AI can’t cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs—or with humans—to the motivations and incentives of the companies creating the technology. And we should figure out how independent institutions or even governments get direct access to ensure that those boundaries aren’t crossed…

…In general, I think there are certain capabilities that we should be very cautious of, if not just rule out, for the foreseeable future.

Such as?

I guess things like recursive self-improvement. You wouldn’t want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.

Or, like, we have not allowed drones in any public spaces, right? It’s a licensed activity. You can’t fly them wherever you want, because they present a threat to people’s privacy.

I think everybody is having a complete panic that we’re not going to be able to regulate this. It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.

But you can see drones when they’re in the sky. It feels naïve to assume companies are just going to reveal what they’re making. Doesn’t that make regulation tricky to get going?

We’ve regulated many things online, right? The amount of fraud and criminal activity online is minimal. We’ve done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It’s pretty difficult to find radicalization content or terrorist material online. It’s pretty difficult to buy weapons and drugs online.

[Not all Suleyman’s claims here are backed up by the numbers. Cybercrime is still a massive global problem. The financial cost in the US alone has increased more than 100 times in the last decade, according to some estimates. Reports show that the economy in nonconsensual deepfake porn is booming. Drugs and guns are marketed on social media. And while some online platforms are being pushed to do a better job of filtering out harmful content, they could do a lot more.]

So it’s not like the internet is this unruly space that isn’t governed. It is governed. And AI is just going to be another component to that governance.

It takes a combination of cultural pressure, institutional pressure, and, obviously, government regulation. But it makes me optimistic that we’ve done it before, and we can do it again.

2. Who’s afraid of the Huawei Mate 60 Pro? – Noah Smith

A new phone made by Huawei, the company that was the #1 target of U.S. restrictions, contains a Chinese-made processor called the Kirin 9000S that’s more advanced than anything the country had previously produced. The phone, the Huawei Mate 60 Pro, has wireless speeds as fast as Apple’s iPhone, though its full capabilities aren’t yet known.

Many in China are hailing the phone, and especially the processor inside it, as a victory of indigenous innovation over U.S. export controls. Meanwhile, in the U.S. media, many are now questioning whether Biden’s policy has failed. Bloomberg’s Vlad Savov and Debby Wu write:

Huawei’s Mate 60 Pro is powered by a new Kirin 9000s chip that was fabricated in China by Semiconductor Manufacturing International Corp., according to a teardown of the handset that TechInsights conducted for Bloomberg News. The processor is the first to utilize SMIC’s most advanced 7nm technology and suggests the Chinese government is making some headway in attempts to build a domestic chip ecosystem…Much remains unknown about SMIC and Huawei’s progress, including whether they can make chips in volume or at reasonable cost. But the Mate 60 silicon raises questions about the efficacy of a US-led global campaign to prevent China’s access to cutting-edge technology, driven by fears it could be used to boost Chinese military capabilities…Now China has demonstrated it can produce at least limited quantities of chips five years behind the cutting-edge, inching closer to its objective of self-sufficiency in the critical area of semiconductors…

…Many long-time observers of the chip wars are urging caution, however. Ben Thompson of Stratechery argues that it was always likely that SMIC would be able to get to 7nm — the level of precision represented by the Kirin 9000S — using the chipmaking tools it already had, but that export controls will make it a lot harder to get down to 5nm. Basically, the U.S. has taken great care not to let China get the cutting-edge Extreme Ultraviolet Lithography (EUV) machines, but China already has plenty of older Deep Ultraviolet Lithography (DUV) machines (and ASML is still selling them some, because the export controls haven’t even fully kicked in yet!).

EUV lets you carve 7nm chips in one easy zap, but DUV machines can still make 7nm chips, it just takes several zaps. China analyst Liqian Ren calls this “a small breakthrough using software to solve the bottleneck of hardware”. Bloomberg’s Tim Culpan explains:

Instead of exposing a slice of silicon to light just once in order to mark out the circuit design, this step is done many times. SMIC, like TSMC before it, can achieve 7nm by running this lithography step four times or more…

[Trying to prevent China from making 7nm chips by denying them EUV machines is] like banning jet engines capable of reaching 100 knots, without recognizing that an aircraft manufacturer could just add four engines instead of one in order to provide greater thrust and higher speeds. Sure, four engines may be overkill, inefficient and expensive, but when the ends justify the means a sanctioned actor will get innovative.

In other words, even without the best machines, Chinese companies can make some pretty precise chips. It’s just more expensive to do so, because of higher defect rates and the need to use more machines to make the same number of chips. But when has cost ever deterred China from making whatever they wanted? China’s great economic strength is the massive mobilization of resources, and if they want to make 7nm chips, they’re not going to let a little inefficiency get in the way. Remember, Huawei’s big success in the telecom world came from Chinese government subsidies that allowed them to undersell Western competitors by enormous amounts. There’s no reason they can’t use that approach for 7nm chips, and eventually maybe even 5nm chips…

…As Chris Miller writes in his book Chip War, export controls on the USSR were highly effective in denying the Soviets a chip industry. But even then, the Soviets were able to copy all of the U.S.’ most advanced chips. They just couldn’t make them reliably in large batches, so their ability to get their hands on chips for precision weaponry was curtailed.

Similarly, no one should have expected U.S. export controls to make China’s chipmaking acumen suddenly vanish into thin air. China has a ton of smart engineers — far more than the USSR ever had, given its much larger population. What the Cold War export controls showed was that a foreign country’s technological capabilities can’t be halted, but they can be slowed down a bit. If Huawei and SMIC always take longer to get to the next generation of chips than TSMC, Samsung, Intel, etc., China’s products will be slightly inferior to those of their free-world rivals. That will cause them to lose market share, which will deprive their companies of revenue and force them to use more subsidies to keep their electronics industry competitive.

Jacky Wong of the Wall Street Journal points out that the Kirin 9000S is still generations behind cutting-edge TSMC chips. He also notes that export controls on Huawei tanked its share of the global smartphone market.

In other words, expensive-to-make chips with slightly trailing performance will slowly deprive Chinese companies of market share, and thus of the market feedback necessary to help push Chinese chip innovation in the right direction. The Chinese state can lob effectively infinite amounts of money at Huawei and SMIC and other national champions, but its track record is very poor in terms of getting bang for its buck — or even any bang at all — from semiconductor subsidies.

And the greatest irony is that China’s government itself may help speed along this process. Confident of its ability to produce high-quality indigenous phones, China is starting to ban iPhones in some of its government agencies. Those hard bans will likely be accompanied by softer encouragement throughout Chinese companies and society to switch from Apple to domestic brands. That will give a sales boost to companies like Huawei, but it will slowly silence the feedback that Chinese companies receive from competing in cutthroat global markets. Voluntary Chinese isolation from the global advanced tech ecosystem will encourage sluggish innovation and more wasteful use of resources — a problem sometimes called “Galapagos syndrome”.

3. On Mark Leonard’s IRR Thought Experiment – Nadav Manham

The disagreement arises from this thought experiment that Mr. Leonard posed in his 2015 letter to Constellation shareholders:

“Assume attractive return opportunities are scarce and that you are an excellent forecaster. For the same price you can purchase a high profit declining revenue business or a lower profit growing business, both of which you forecast to generate the same attractive after tax IRR. Which would you rather buy?”

Which he proceeded to answer as follows:

“It’s easy to go down the pro and con rabbit hole of the false dichotomy. The answer we’ve settled on (though the debate still rages), is that you make both kinds of investments. The scarcity of attractive return opportunities trumps all other criteria. We care about IRR, irrespective of whether it is associated with high or low organic growth.”…

…But let’s try to answer the question on its own terms: Given the assumptions, and forced to choose—which business do you buy? This brings me to the disagreement, because I believe there is a clear answer, with no rabbit holes or raging debates required: you should buy the growing business.

To explain why, let me first observe that the internal rate of return (IRR) is not the same thing as the compounded annual rate of return (CAGR). It’s CAGR that long-term investors care about most, because it is the means to answering the question “How much money will I end up with at the end?” which is the name of the game for most of us. There is one scenario in which an investment’s IRR and its CAGR are the same, and that is if the rate of return on the cash flows generated by the investment and reinvested is itself equal to the IRR, and then the cash flows generated by all of those investments are in turn reinvested at the IRR, and so on, Russian doll-style, until the end of the investment period…

…Second observation: IRR can be decomposed roughly as follows:

IRR (%) = initial yield (%) + growth rate of distributions (%)

This equation becomes precisely true as a company distributes cash out to infinity, but it’s roughly true enough for the practical purposes of those rare investors, Mr. Leonard included, who truly do buy for keeps. Note that the equation implies that an investment with a high initial yield and a low growth rate can generate the identical IRR as an investment with a low initial yield and a high growth rate…

…Suppose business A has a 20 percent initial yield and a negative 4 percent growth rate. Using Microsoft Excel’s XIRR function and running the movie for 50 years gives an IRR of 15.99 percent, which is roughly the (20 percent + -4 percent) we’d expect from the equation above.

Now suppose business B has a 6.45 percent initial yield and a 10 percent growth rate. Using the same 50-year time frame, we get the same 15.99 percent IRR, which is roughly what the equation predicts as well, with the difference likely due to some eccentricity in how Excel calculates annualized returns…
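The two Excel examples can be checked without XIRR. Below is a minimal Python sketch (our own illustration, not Mr. Manham’s code) that prices each 50-year distribution stream at 100 and solves for the IRR by bisection on net present value; it assumes plain annual compounding, so it will differ slightly from Excel’s day-count-based XIRR, but both streams come out at roughly 16 percent, consistent with the 15.99 percent figures above:

```python
def irr(cashflows, lo=0.0, hi=1.0, tol=1e-10):
    """Solve NPV(r) = 0 by bisection; cashflows[0] is the (negative) outlay."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:   # NPV still positive: the rate must be higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def stream(initial_yield, growth, years=50, price=100.0):
    """Buy at `price`; year-1 distribution is initial_yield * price, then grows."""
    flows, d = [-price], initial_yield * price
    for _ in range(years):
        flows.append(d)
        d *= 1 + growth
    return flows

irr_a = irr(stream(0.20, -0.04))    # business A: high yield, shrinking
irr_b = irr(stream(0.0645, 0.10))   # business B: low yield, growing
print(f"A: {irr_a:.4f}  B: {irr_b:.4f}")  # both ≈ 0.16, per the equation
```

As the equation predicts, 20% − 4% and 6.45% + 10% land on (nearly) the same IRR once the streams run long enough.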

…But let’s now go back to our first observation, the one about IRR not being the same thing as CAGR. Let’s assume that given a choice, we would prefer the investment that would somehow lead to “more money at the end”—in other words, that would produce the higher CAGR. The way to get from an investment’s IRR to its CAGR is to make some guess about the rate of return we will earn on the cash flows generated by the investment and reinvested. That is, to make a guess about the CAGR of each of the 50 “mini-investments” we’ll make with the dividends paid by each main investment, and then to sum the final values of each mini-investment.

The big question now is: What guess do we make?

We could assume the mini-investments will earn the same 15.99 percent CAGR as the IRR of the main investment, in which case we would be indifferent between business A and business B, according to the internal logic of the IRR calculation. Things could shake out exactly that way, but they almost certainly won’t.

We could assume the CAGR on reinvested cash flows will be higher than 15.99 percent, but that raises a question: if we’re so confident we can earn more than 15.99 percent on our money starting in one year’s time, why are we slumming among investments with a mere 15.99 percent IRR?

We’re left with the more conservative and logical assumption: that we’ll earn a lower-than-the-IRR rate of return on reinvested cash flows. It may well be a more likely assumption as well, because as you grow your capital base in a world of scarce opportunities, the opportunities tend to get scarcer. So let us assume we’ll earn, say, 12 percent on the reinvested dividends of each of business A and B. Are we still indifferent?

The answer is no. When you make that assumption and run the numbers, higher-growing business B ends up producing a higher CAGR, 13.5 percent vs. 12.5 percent…
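Those final numbers can be reproduced with the same assumptions (our own sketch again: 50 annual distributions, purchase price 100, every distribution compounded forward at 12 percent to year 50, terminal wealth converted to a CAGR):

```python
def final_wealth(initial_yield, growth, reinvest=0.12, years=50, price=100.0):
    """Compound each year's distribution forward at `reinvest` to the horizon."""
    wealth, d = 0.0, initial_yield * price
    for t in range(1, years + 1):
        wealth += d * (1 + reinvest) ** (years - t)
        d *= 1 + growth
    return wealth

def cagr(wealth, price=100.0, years=50):
    """Annualized growth rate implied by turning `price` into `wealth`."""
    return (wealth / price) ** (1 / years) - 1

cagr_a = cagr(final_wealth(0.20, -0.04))    # shrinking business A
cagr_b = cagr(final_wealth(0.0645, 0.10))   # growing business B
print(f"A: {cagr_a:.3%}  B: {cagr_b:.3%}")  # A ≈ 12.5%, B ≈ 13.5%
```

Identical IRRs, but once reinvestment happens at 12 percent rather than at the IRR itself, the growing business wins on the measure that matters: money at the end.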

…In a sense—and sometimes in literal fact—the high-growing investment does the reinvesting for you.

4. What OpenAI Really Wants – Steven Levy

For Altman and his company, ChatGPT and GPT-4 are merely stepping stones along the way to achieving a simple and seismic mission, one these technologists may as well have branded on their flesh. That mission is to build artificial general intelligence—a concept that’s so far been grounded more in science fiction than science—and to make it safe for humanity. The people who work at OpenAI are fanatical in their pursuit of that goal. (Though, as any number of conversations in the office café will confirm, the “build AGI” bit of the mission seems to offer up more raw excitement to its researchers than the “make it safe” bit.) These are people who do not shy from casually using the term “super-intelligence.” They assume that AI’s trajectory will surpass whatever peak biology can attain. The company’s financial documents even stipulate a kind of exit contingency for when AI wipes away our whole economic system.

It’s not fair to call OpenAI a cult, but when I asked several of the company’s top brass if someone could comfortably work there if they didn’t believe AGI was truly coming—and that its arrival would mark one of the greatest moments in human history—most executives didn’t think so. Why would a nonbeliever want to work here? they wondered. The assumption is that the workforce—now at approximately 500, though it might have grown since you began reading this paragraph—has self-selected to include only the faithful…

…At the same time, OpenAI is not the company it once was. It was founded as a purely nonprofit research operation, but today most of its employees technically work for a profit-making entity that is reportedly valued at almost $30 billion. Altman and his team now face the pressure to deliver a revolution in every product cycle, in a way that satisfies the commercial demands of investors and keeps ahead in a fiercely competitive landscape. All while hewing to a quasi-messianic mission to elevate humanity rather than exterminate it…

…But the leaders of OpenAI swear they’ll stay the course. All they want to do, they say, is build computers smart enough and safe enough to end history, thrusting humanity into an era of unimaginable bounty…

…“AGI was going to get built exactly once,” he told me in 2021. “And there were not that many people that could do a good job running OpenAI. I was lucky to have a set of experiences in my life that made me really positively set up for this.”

Altman began talking to people who might help him start a new kind of AI company, a nonprofit that would direct the field toward responsible AGI. One kindred spirit was Tesla and SpaceX CEO Elon Musk. As Musk would later tell CNBC, he had become concerned about AI’s impact after having some marathon discussions with Google cofounder Larry Page. Musk said he was dismayed that Page had little concern for safety and also seemed to regard the rights of robots as equal to humans. When Musk shared his concerns, Page accused him of being a “speciesist.” Musk also understood that, at the time, Google employed much of the world’s AI talent. He was willing to spend some money for an effort more amenable to Team Human.

Within a few months Altman had raised money from Musk (who pledged $100 million, and his time) and Reid Hoffman (who donated $10 million). Other funders included Peter Thiel, Jessica Livingston, Amazon Web Services, and YC Research. Altman began to stealthily recruit a team. He limited the search to AGI believers, a constraint that narrowed his options but one he considered critical. “Back in 2015, when we were recruiting, it was almost considered a career killer for an AI researcher to say that you took AGI seriously,” he says. “But I wanted people who took it seriously.”

Greg Brockman, the chief technology officer of Stripe, was one such person, and he agreed to be OpenAI’s CTO. Another key cofounder would be Andrej Karpathy, who had been at Google Brain, the search giant’s cutting-edge AI research operation. But perhaps Altman’s most sought-after target was a Russian-born engineer named Ilya Sutskever…

…Sutskever became an AI superstar, coauthoring a breakthrough paper that showed how AI could learn to recognize images simply by being exposed to huge volumes of data. He ended up, happily, as a key scientist on the Google Brain team.

In mid-2015 Altman cold-emailed Sutskever to invite him to dinner with Musk, Brockman, and others at the swank Rosewood Hotel on Palo Alto’s Sand Hill Road. Only later did Sutskever figure out that he was the guest of honor. “It was kind of a general conversation about AI and AGI in the future,” he says. More specifically, they discussed “whether Google and DeepMind were so far ahead that it would be impossible to catch up to them, or whether it was still possible to, as Elon put it, create a lab which would be a counterbalance.” While no one at the dinner explicitly tried to recruit Sutskever, the conversation hooked him…

…OpenAI officially launched in December 2015. At the time, when I interviewed Musk and Altman, they presented the project to me as an effort to make AI safe and accessible by sharing it with the world. In other words, open source. OpenAI, they told me, was not going to apply for patents. Everyone could make use of their breakthroughs. Wouldn’t that be empowering some future Dr. Evil? I wondered. Musk said that was a good question. But Altman had an answer: Humans are generally good, and because OpenAI would provide powerful tools for that vast majority, the bad actors would be overwhelmed…

…Had I gone in and asked around, I might have learned exactly how much OpenAI was floundering. Brockman now admits that “nothing was working.” Its researchers were tossing algorithmic spaghetti toward the ceiling to see what stuck. They delved into systems that solved video games and spent considerable effort on robotics. “We knew what we wanted to do,” says Altman. “We knew why we wanted to do it. But we had no idea how.”…

…OpenAI’s road to relevance really started with its hire of an as-yet-unheralded researcher named Alec Radford, who joined in 2016, leaving the small Boston AI company he’d cofounded in his dorm room. After accepting OpenAI’s offer, he told his high school alumni magazine that taking this new role was “kind of similar to joining a graduate program”—an open-ended, low-pressure perch to research AI.

The role he would actually play was more like Larry Page inventing PageRank.

Radford, who is press-shy and hasn’t given interviews on his work, responds to my questions about his early days at OpenAI via a long email exchange. His biggest interest was in getting neural nets to interact with humans in lucid conversation. This was a departure from the traditional scripted model of making a chatbot, an approach used in everything from the primitive ELIZA to the popular assistants Siri and Alexa—all of which kind of sucked. “The goal was to see if there was any task, any setting, any domain, any anything that language models could be useful for,” he writes. At the time, he explains, “language models were seen as novelty toys that could only generate a sentence that made sense once in a while, and only then if you really squinted.” His first experiment involved scanning 2 billion Reddit comments to train a language model. Like a lot of OpenAI’s early experiments, it flopped. No matter. The 23-year-old had permission to keep going, to fail again. “We were just like, Alec is great, let him do his thing,” says Brockman.

His next major experiment was shaped by OpenAI’s limitations of computer power, a constraint that led him to experiment on a smaller data set that focused on a single domain—Amazon product reviews. A researcher had gathered about 100 million of those. Radford trained a language model to simply predict the next character in generating a user review.

But then, on its own, the model figured out whether a review was positive or negative—and when you programmed the model to create something positive or negative, it delivered a review that was adulatory or scathing, as requested. (The prose was admittedly clunky: “I love this weapons look … A must watch for any man who love Chess!”) “It was a complete surprise,” Radford says. The sentiment of a review—its favorable or unfavorable gist—is a complex function of semantics, but somehow a part of Radford’s system had gotten a feel for it. Within OpenAI, this part of the neural net came to be known as the “unsupervised sentiment neuron.”
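Radford's model was a large neural network, but the training objective itself — predict the next character from the ones before it — can be illustrated with a toy bigram counter (a deliberately crude stand-in, not OpenAI's method):

```python
from collections import Counter, defaultdict

def train_char_model(texts):
    """Count which character most often follows each character — a toy
    stand-in for the next-character objective, not a neural net."""
    follows = defaultdict(Counter)
    for text in texts:
        for a, b in zip(text, text[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, ch):
    """Greedy prediction: the most frequent successor of `ch`."""
    return follows[ch].most_common(1)[0][0]

model = train_char_model(["banana bandana"])
predict_next(model, "a")   # → 'n' ('n' follows 'a' four times in the training text)
```

Scale the same idea up from counting pairs of characters to a neural net trained on 100 million reviews, and internal features like the sentiment neuron can emerge as a by-product of the prediction task.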

Sutskever and others encouraged Radford to expand his experiments beyond Amazon reviews, to use his insights to train neural nets to converse or answer questions on a broad range of subjects.

And then good fortune smiled on OpenAI. In early 2017, an unheralded preprint of a research paper appeared, coauthored by eight Google researchers. Its official title was “Attention Is All You Need,” but it came to be known as the “transformer paper,” named so both to reflect the game-changing nature of the idea and to honor the toys that transmogrified from trucks to giant robots. Transformers made it possible for a neural net to understand—and generate—language much more efficiently. They did this by analyzing chunks of prose in parallel and figuring out which elements merited “attention.” This hugely optimized the process of generating coherent text to respond to prompts. Eventually, people came to realize that the same technique could also generate images and even video. Though the transformer paper would become known as the catalyst for the current AI frenzy—think of it as the Elvis that made the Beatles possible—at the time Ilya Sutskever was one of only a handful of people who understood how powerful the breakthrough was…
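The mechanism the paper introduced — scaled dot-product attention — can be sketched in a few lines. This is a toy illustration with made-up token vectors, not the paper's full multi-head architecture:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query scores every key in
    parallel, softmax turns the scores into weights, and the output is a
    weighted blend of the value vectors."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)              # softmax over the keys
    return w @ V, w

# Three made-up 2-d token vectors attending to one another.
tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out, weights = attention(tokens, tokens, tokens)       # each row of `weights` sums to 1
```

Because every query scores every key in one matrix multiplication, the whole chunk of text is processed in parallel — the efficiency gain the excerpt describes.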

…Radford began experimenting with the transformer architecture. “I made more progress in two weeks than I did over the past two years,” he says. He came to understand that the key to getting the most out of the new model was to add scale—to train it on fantastically large data sets. The idea was dubbed “Big Transformer” by Radford’s collaborator Rewon Child.

This approach required a change of culture at OpenAI and a focus it had previously lacked. “In order to take advantage of the transformer, you needed to scale it up,” says Adam D’Angelo, the CEO of Quora, who sits on OpenAI’s board of directors…

…The name that Radford and his collaborators gave the model they created was an acronym for “generatively pretrained transformer”—GPT-1. Eventually, this model came to be generically known as “generative AI.” To build it, they drew on a collection of 7,000 unpublished books, many in the genres of romance, fantasy, and adventure, and refined it on Quora questions and answers, as well as thousands of passages taken from middle school and high school exams. All in all, the model included 117 million parameters, or variables. And it outperformed everything that had come before in understanding language and generating answers. But the most dramatic result was that processing such a massive amount of data allowed the model to offer up results beyond its training, providing expertise in brand-new domains. These unplanned robot capabilities are called zero-shots. They still baffle researchers—and account for the queasiness that many in the field have about these so-called large language models.

Radford remembers one late night at OpenAI’s office. “I just kept saying over and over, ‘Well, that’s cool, but I’m pretty sure it won’t be able to do x.’ And then I would quickly code up an evaluation and, sure enough, it could kind of do x.”

Each GPT iteration would do better, in part because each one gobbled an order of magnitude more data than the previous model. Only a year after creating the first iteration, OpenAI trained GPT-2 on the open internet with an astounding 1.5 billion parameters. Like a toddler mastering speech, its responses got better and more coherent…

…So in March 2019, OpenAI came up with a bizarre hack. It would remain a nonprofit, fully devoted to its mission. But it would also create a for-profit entity. The actual structure of the arrangement is hopelessly baroque, but basically the entire company is now engaged in a “capped” profitable business. If the cap is reached—the number isn’t public, but its own charter, if you read between the lines, suggests it might be in the trillions—everything beyond that reverts to the nonprofit research lab…

…Potential investors were warned about those boundaries, Lightcap explains. “We have a legal disclaimer that says you, as an investor, stand to lose all your money,” he says. “We are not here to make your return. We’re here to achieve a technical mission, foremost. And, oh, by the way, we don’t really know what role money will play in a post-AGI world.”

That last sentence is not a throwaway joke. OpenAI’s plan really does include a reset in case computers reach the final frontier. Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered. After all, it will be a new world from that point on. Humanity will have an alien partner that can do much of what we do, only better. So previous arrangements might effectively be kaput.

There is, however, a hitch: At the moment, OpenAI doesn’t claim to know what AGI really is. The determination would come from the board, but it’s not clear how the board would define it. When I ask Altman, who is on the board, for clarity, his response is anything but open. “It’s not a single Turing test, but a number of things we might use,” he says. “I would happily tell you, but I like to keep confidential conversations private. I realize that is unsatisfyingly vague. But we don’t know what it’s going to be like at that point.”…

…The shift also allowed OpenAI’s employees to claim some equity. But not Altman. He says that originally he intended to include himself but didn’t get around to it. Then he decided that he didn’t need any piece of the $30 billion company that he’d cofounded and leads. “Meaningful work is more important to me,” he says. “I don’t think about it. I honestly don’t get why people care so much.”

Because … not taking a stake in the company you cofounded is weird?

“If I didn’t already have a ton of money, it would be much weirder,” he says. “It does seem like people have a hard time imagining ever having enough money. But I feel like I have enough.” (Note: For Silicon Valley, this is extremely weird.) Altman joked that he’s considering taking one share of equity “so I never have to answer that question again.”…

…Obviously, only a few companies in existence had the kind of resources OpenAI required. “We pretty quickly zeroed in on Microsoft,” says Altman. To the credit of Microsoft CEO Satya Nadella and CTO Kevin Scott, the software giant was able to get over an uncomfortable reality: After more than 20 years and billions of dollars spent on a research division with supposedly cutting-edge AI, the Softies needed an innovation infusion from a tiny company that was only a few years old. Scott says that it wasn’t just Microsoft that fell short—“it was everyone.” OpenAI’s focus on pursuing AGI, he says, allowed it to accomplish a moonshot-ish achievement that the heavy hitters weren’t even aiming for. It also proved that not pursuing generative AI was a lapse that Microsoft needed to address. “One thing you just very clearly need is a frontier model,” says Scott.

Microsoft originally chipped in a billion dollars, paid off in computation time on its servers. But as both sides grew more confident, the deal expanded. Microsoft now has sunk $13 billion into OpenAI. (“Being on the frontier is a very expensive proposition,” Scott says.)

Of course, because OpenAI couldn’t exist without the backing of a huge cloud provider, Microsoft was able to cut a great deal for itself. The corporation bargained for what Nadella calls “non-controlling equity interest” in OpenAI’s for-profit side—reportedly 49 percent. Under the terms of the deal, some of OpenAI’s original ideals of granting equal access to all were seemingly dragged to the trash icon. (Altman objects to this characterization.) Now, Microsoft has an exclusive license to commercialize OpenAI’s tech. And OpenAI also has committed to use Microsoft’s cloud exclusively. In other words, without even taking its cut of OpenAI’s profits (reportedly Microsoft gets 75 percent until its investment is paid back), Microsoft gets to lock in one of the world’s most desirable new customers for its Azure web services. With those rewards in sight, Microsoft wasn’t even bothered by the clause that demands reconsideration if OpenAI achieves general artificial intelligence, whatever that is. “At that point,” says Nadella, “all bets are off.” It might be the last invention of humanity, he notes, so we might have bigger issues to consider once machines are smarter than we are…

…Altman explains why OpenAI released ChatGPT when GPT-4 was close to completion, undergoing safety work. “With ChatGPT, we could introduce chatting but with a much less powerful backend, and give people a more gradual adaptation,” he says. “GPT-4 was a lot to get used to at once.” By the time the ChatGPT excitement cooled down, the thinking went, people might be ready for GPT-4, which can pass the bar exam, plan a course syllabus, and write a book within seconds…

…But if OpenAI’s products were forcing people to confront the implications of artificial intelligence, Altman figured, so much the better. It was time for the bulk of humankind to come off the sidelines in discussions of how AI might affect the future of the species…

…As one prominent Silicon Valley founder notes, “It’s rare that an industry raises their hand and says, ‘We are going to be the end of humanity’—and then continues to work on the product with glee and alacrity.”

OpenAI rejects this criticism. Altman and his team say that working and releasing cutting-edge products is the way to address societal risks. Only by analyzing the responses to millions of prompts by users of ChatGPT and GPT-4 could they get the knowledge to ethically align their future products…

…It would also help if generative AI didn’t create so many new problems of its own. For instance, LLMs need to be trained on huge data sets; clearly the most powerful ones would gobble up the whole internet. This doesn’t sit well with some creators, and just plain people, who unwittingly provide content for those data sets and wind up somehow contributing to the output of ChatGPT. Tom Rubin, an elite intellectual property lawyer who officially joined OpenAI in March, is optimistic that the company will eventually find a balance that satisfies both its own needs and those of creators—including the ones, like comedian Sarah Silverman, who are suing OpenAI for using their content to train its models. One hint of OpenAI’s path: partnerships with news and photo agencies like the Associated Press and Shutterstock to provide content for its models without questions of who owns what.

5. Inside Intel’s Chip Factory, I Saw the Future. It’s Plain Old Glass – Stephen Shankland

But the next breakthrough to make our laptops more efficient and AI more powerful could come from plain old glass. I’ve just seen firsthand how it works…

…There, in a hulking white high-tech building in the Phoenix area’s scorching desert landscape, Intel transforms sheets of glass the size of a small tabletop into paperclip-sized rectangular sandwiches of circuitry built with some of the same techniques as the processor itself.

Intel has begun a years-long transition to new technology that rests processors on a bed of glass instead of today’s epoxy-like organic resin. The new glass foundation, called a substrate, offers the speed, power and real estate necessary for the chip industry’s shift to new technology packaging multiple “chiplets” into a single larger processor.

In short, that means a new way to sustain Moore’s Law, which charts progress in cramming more circuitry elements called transistors into a processor. The A17 Pro processor in Apple’s new iPhone 15 Pro has 19 billion transistors. Intel’s Ponte Vecchio supercomputing processor has more than 100 billion. By the end of the decade, Intel expects processors with — if you can imagine it — a trillion transistors.

Intel relied on this chiplet approach to catch up to competitors with superior processor manufacturing abilities. But now Intel can use it to outpace rivals in an era when exploding demand for new processing power has surpassed the industry’s ability to deliver it, said Creative Strategies analyst Ben Bajarin. And Intel’s glass substrate technology demonstrates the company’s packaging prowess…

…The whole chip industry will make the glass transition at least for high-end processors to cope with chipmaking challenges, and Intel has the lead, said FeibusTech analyst Mike Feibus…

…”Basically, the innovation is done,” said Ann Kelleher, the executive vice president leading technology development at Intel. The glass substrate technology “gives us an ability to ultimately get higher performance for our products.”…

…The glass technology underneath a processor won’t arrive until the second half of the decade, and when it does, it’ll appear first underneath the biggest, most power-hungry chips, the ones that perch in thousands of servers stacked up in data centers operated by huge “hyperscalers” like Google, Amazon, Microsoft and Meta.

That’s because glass brings several advantages to these hot and huge chips, said Rahul Manepalli, an Intel fellow who leads Intel’s module engineering work.

It can accommodate 10 times the power and data connections of today’s organic substrates, so more data can be pumped in and out of a chip. It doesn’t warp as much, which is critical to ensuring processors lie flat and connect properly to the outside world, and thus enables 50% larger chip packages. It transmits power with less waste, meaning chips can run either faster or more efficiently. And it can run at a higher temperature; when it heats up, it expands at the same rate as silicon, avoiding mechanical failures.

Glass will enable a new generation of server and data center processors, successors to mammoth beasts like the Intel Xeons that can run cloud computing services like email and online banking and Nvidia’s artificial intelligence processors that have exploded in popularity as the world embraces generative AI.

But as glass substrates mature and costs come down, it’ll spread beyond data centers to the computer sitting on your lap…

…Intel’s 8086 chip, the 1978 precursor to every PC and server processor that Intel has made since, was a flat square of silicon with 29,000 transistors. To protect it and plug it into a circuit board, it was housed in a package that looked like a flat caterpillar. Forty metal legs carried power and data to the chip.

Since then, processor packaging has advanced dramatically. It once was relatively crude, but now the boundary between chipmaking and packaging is blurring, Kelleher said. Packaging processes now use lithography machines to etch their own circuitry, although not nearly as finely as on processors…

…So today’s packages have flat metal contact patches on the bottom of the package. The chip is installed when hundreds of pounds of force mash it onto a circuit board.

A metal cap atop a processor draws away waste heat that otherwise would crash a computer. And beneath the processor is a substrate with an increasingly complex, three-dimensional network of power and data connections to link the chip to the outside world.

There are challenges moving from today’s organic substrates to glass. Glass is brittle, so it must be handled carefully, for example.

To ease the transition, Intel is adapting glass-handling equipment from experts who already know how to handle it without breaking: the display industry, which makes everything from tiny smartwatch screens to enormous flat-panel TVs. They also have to etch circuitry onto glass and have developed many of the needed ultrapure materials and careful handling processes.

But there are differences. Flat-panel displays have sensitive electronic elements only on one side, so glass can glide through factories on rollers. Intel builds a sandwich of materials and circuitry called redistribution layers onto both sides of the glass, so its machines must in effect hold the glass only by the edges…

…Signing on a packaging customer is a bit easier than signing on a chipmaking customer, with fewer technology complications and shorter lead times, he said.

But customer deals for packaging can lead to the deeper relationship that extends into chipmaking, in particular with the Intel 18A chipmaking process the company expects will surpass TSMC and Samsung in 2024.

“It’s a foot in the door,” Gardner said. “There’s one customer in particular [for which] that trajectory of packaging first and advanced packaging then 18A is working well.”…

…It’s unclear how much of the processor business will move from “monolithic,” single-die designs to chiplet designs. There are still cost and simplicity advantages to avoiding advanced packaging. But it’s clear the biggest processors — the server and AI brains in data centers — will become sprawling complexes of interlinked chiplets.

And that’s where glass substrates should come in handy, with enough area, communication links and power delivery abilities to give chip designers room for growth.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, ASML, Meta Platforms, Microsoft, Tesla, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 17 September 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.


Here are the articles for the week ending 17 September 2023:

1. How bonds ate the entire financial system – Robin Wigglesworth

“The bond market is the most important market in the world,” says Ray Dalio, the founder of the world’s largest hedge fund, Bridgewater. “It is the backbone of all other markets.”

While the bond market has become larger and more powerful, the importance of banks — historically the workhorses of the capitalist system — is subtly fading. The global bond market was worth about $141tn at the end of 2022. That is, for now, smaller than the $183tn that the Financial Stability Board estimates banks hold globally, but much of the latter is actually invested in bonds — a fact that some US banks have recently rued…

…The market is now facing one of its biggest tests in generations. Last year, resurgent inflation — the nemesis of financial securities that pay fixed interest rates — triggered the worst setback in at least a century. Overall losses were almost $10tn, shaking UK pension plans and regional banks in the US. And although bonds have regained their footing this year, they are still beset by rising interest rates.

Even if the bond market adapts, as it has in the past, its ballooning power, reach and complexity has some awkward implications for the global economy. “This transformation has been extraordinary, and positive,” says Larry Fink, head of BlackRock, the world’s biggest investment group. “But we have a regulatory system designed for a time when banks were the dominant players. They aren’t any more.”

“Shadow banking” is what some academics call the part of the financial system that resembles, but falls outside, traditional banking. Policymakers prefer the less malevolent-sounding — but almost comically obtuse — term “non-bank financial institutions”. At $240tn, this system is now far bigger than its conventional counterpart. The bond market is its main component, taking money from investors who can mostly yank it away at short notice and funnelling it into long-term investments.

The question of how to tame shadow banking is one of the thorniest topics in finance today. For the financial system as a whole, it is arguably better that the risks bonds inevitably entail are spread across a vast, decentralised web of international investors, rather than concentrated in a narrow clutch of banks. But in finance, risk is like energy. It cannot be destroyed, only shifted from one place to another. As it gets shunted around, its consequences can morph in little understood, even dangerous ways. We saw a perfect example of this in March 2020, when the Covid-19 pandemic acted as a gigantic stress test for the financial system that revealed fresh cracks in its foundation…

…Doge Vitale II Michiel of Venice was in a pickle. Under a dubious pretext, the Byzantine empire had in 1171 arrested all Venetian merchants in its capital Constantinople and seized their property. But the Italian city-state didn’t have the funds to send a navy to rescue its imprisoned citizens. So the Doge forced all citizens to lend the city some money, in return for 5 per cent interest a year until they were repaid.

The rescue mission did not go well. The Venetian fleet was devastated by plague while negotiating with Constantinople, and the Doge was forced to return humiliated. Back in Venice, irate subjects chased their ruler down the city’s streets and beat him to death. Ruined by the debacle, Venice was unable to repay, turning the emergency prestiti (loan) into a permanent fixture that paid 5 per cent annually.

Most people were eventually fine with this arrangement. The steady interest payments were quite attractive. Occasionally, Venice would raise more prestiti, and the one-time emergency facility gradually became a handy way of raising money…

…Another crucial difference is that bonds are designed to be traded, while loans are typically not. In 12th-century Venice, prestiti were bought and sold in the city’s Rialto market. Today, bond trading happens by phone, electronic messages and algorithms across the world’s financial centres. This tradability is central to the growth of bonds, as it allows creditors to shift the risk to someone else.
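Because a prestito paid a fixed 5 per cent forever, its price in the Rialto was effectively a perpetuity valuation — the standard formula is just coupon divided by the yield a buyer demands. A quick sketch (the ducat figures are illustrative):

```python
def perpetuity_price(annual_coupon, required_yield):
    """Present value of a fixed payment that never stops: coupon / yield."""
    return annual_coupon / required_yield

# A prestito paying 5 ducats a year on 100 of face value:
at_par = perpetuity_price(5, 0.05)       # worth face value when buyers also demand 5%
marked_down = perpetuity_price(5, 0.10)  # worth half of face if buyers demand 10%
```

This is why tradability matters: the creditor who wants out doesn't have to wait forever for repayment, but the price at which the risk changes hands moves inversely with the yield the next buyer requires.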

By the 19th century, bond markets had helped shape the world order. Countries that could best finance themselves tended to succeed. England’s victory over Napoleonic France was enabled by its bond market, which allowed it to finance wartime expenditures more effectively than did the local bankers that Paris depended on for short-term, high-interest loans…

…The aftermath of the second world war was unkind to the bond market. Although it had provided vital wartime funding for allied governments and remained one of the financial system’s most crucial cogs, accelerating inflation in the 1950s and 60s posed a challenge for securities with fixed interest rates. By the 1970s, buying bonds became a constant, brutal race to stay ahead of inflation’s return-eroding force. The aggressive central bank-rate increases that became necessary to tame runaway prices also lowered the value of bonds issued in a lower-rate environment. People dourly joked that bonds had become “certificates of confiscation”.

But the 1980s brought a new era of slowing inflation, falling rates, regulatory liberalism and financial innovation, which would transform the bond market…

…But what made Ranieri’s name was not his persona. Wall Street has had plenty of bombastic bond traders with a penchant for coarse practical jokes. It was what he did to make a dime: packaging up individual mortgages into bonds and then trading chunks of those bonds, a process known as securitisation.

Securitisation is an old concept. Back in 1774, the very first mutual fund bought bonds backed by loans from plantations in the Caribbean and toll roads in Denmark. US mortgage-backed bonds existed as early as the 19th century. But these bonds only used the underlying loans as collateral.

In 1970, the US Government National Mortgage Association (known as Ginnie Mae) engineered the first “passthrough” mortgage-backed securities, where the underlying individual loan payments flowed directly through to the bond investor. This was followed by similar deals by other US mortgage agencies such as Freddie Mac and Fannie Mae, to little fanfare. Ranieri did for securitisation what Milken had done for the junk bond market; he transformed it from the backwaters into a global and massively lucrative industry.

The first fillip was the crisis that struck the US “savings and loans” industry when the Federal Reserve ratcheted up rates in the early 1980s. Congress passed a jammy tax break to make it easier for the banks to shed entire portfolios of mortgages at fire-sale prices. Ranieri’s Salomon was there to scoop them up and flip them to other investors. Money began to course through Salomon’s mortgage trading desk.

Ranieri realised that he needed to turn a one-off vein into an entire gold mine that could be exploited year after year. Luckily, he found some in-house inspiration: an innovative deal his former boss Dall had done with Bank of America in 1977, which sought to tackle the difficulty of valuing the cash flows of mortgage-backed securities with a technique called “tranching”. It sliced them up into different portions each with their own interest rates, maturities and riskiness. That way, each investor could simply choose what kind of exposure they might like — a buffet rather than a set-course meal of variable quality.
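Tranching is easier to grasp with a toy cash-flow waterfall. The sketch below is our own illustration with hypothetical numbers (none of it comes from the article): cash collected from the mortgage pool is paid out in order of seniority, so the senior tranche is the safest and the junior tranche absorbs any shortfall first.

```python
# Illustrative sketch of tranching (hypothetical figures, not from
# the article): a pool's cash is paid to tranches in priority order,
# so each tranche carries a different risk profile.

def waterfall(cash_collected, tranches):
    """Pay tranches in priority order until the cash runs out.

    tranches: list of (name, amount_owed) pairs, from most senior
    to most junior. Returns {name: amount_paid}.
    """
    payments = {}
    remaining = cash_collected
    for name, owed in tranches:
        paid = min(owed, remaining)
        payments[name] = paid
        remaining -= paid
    return payments

tranches = [("senior", 70.0), ("mezzanine", 20.0), ("junior", 10.0)]

# In a good year the pool collects enough to pay everyone in full.
print(waterfall(100.0, tranches))

# If some mortgages default and only 95 comes in, the junior
# tranche alone absorbs the shortfall (it gets 5 instead of 10).
print(waterfall(95.0, tranches))
```

This is why each slice could be sold to a different kind of investor: the senior tranche behaves like a safe bond, while the junior tranche is a leveraged bet on the pool.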

Ranieri ran with the idea. Rather than just take the mortgage of one bank, he pooled together bunches of mortgages from lots of them. To handle the complexity, he hired a lot of bright young mathematicians to complement the mini-Ranieris on the trading desk. He then lobbied vociferously for government blessing of the tranching structure, knowing this would add to the products’ lustre with investors. He succeeded. By the mid-1980s, the market took off…

…The story had an unhappy ending: the new market proliferated until it nearly brought the global financial system down in 2008, something that later weighed on Ranieri. “I will never, ever, ever, ever live out that scar that I carry for what happened with something I created,” he told The Wall Street Journal in 2018.

But the fundamental idea — packaging up smaller loans into bigger bonds and thereby bringing together more people who needed money with those who had it — was sound. Done judiciously, it actually makes banks less risky, by shifting the inherent danger of extending loans out of banks and into markets. (This is why securitisation has bounced back since 2008, and is starting to gain ground outside the US as well, often with government encouragement.)…

…Given how disastrous bank crises can be, it could be a good thing that bonds nowadays are doing more of the heavy lifting. Unlike bank depositors, bond fund investors do not expect to get their money back (even if it can be a shock when things fall apart). And unlike banks, bond funds typically do not use much or even any leverage.

But bond crises can also be painful — as we saw in both 2008 and nearly in 2020. Modern capitalism has largely been ordered around banks as the main intermediaries of money. Central banks were mostly set up to backstop these commercial banks, and, eventually, they began trying to regulate the temperature of economies by tweaking the cost of their funding, moving overnight interest rates up and down. But with the rise of bond markets, entirely new challenges have emerged and experimental tools to deal with them have become necessary — most notably quantitative easing, negative interest rates and “yield curve control”.

If the ultimate goal is to regulate the temperature of an economy by changing the cost of credit, then the fact that credit is increasingly extended by the bond market rather than banks inevitably has consequences. The market’s decentralised nature means that dangers can be harder to monitor and address, requiring massive, untargeted “spray-and-pray” monetary responses by central banks when trouble erupts.

Unfortunately, the custodians of the financial system have yet to fully grapple with those consequences, even if everyone from the Federal Reserve to the IMF has repeatedly warned about the multi-faceted dangers the shift from banks to bonds entails. 

2. Searching for Resilience – Michael Weeks

For a business to survive 260 years in the same industry, with the same family owners, is a remarkable achievement. Starting in 1761 as a one-man shop making lead pencils, Faber-Castell has grown into the largest producer of colored and graphite pencils globally, producing over 2 billion pencils each year, as well as pens, markers, highlighters, and related products.

Already the leading pencil producer in the mid-1800s, Faber-Castell has stayed on top of their industry for close to two centuries, betraying incredible entrepreneurial ability and drive. When the English supply of graphite began to fail and pencils became unaffordable, they bought a Siberian graphite mine and relied in part on reindeer transport to bring new raw materials to their factories. They expanded their product catalog, built up operations across Europe and the Americas, and invested in new technologies and equipment to improve their production. They helped introduce trademark law in Germany to protect their reputation against competitors. They established a 10,000-hectare forest plantation in Brazil to ensure their wood supply. They took their business seriously.

Nine generations of history also come with hardship. When the Americans joined World War I, Faber-Castell was cut off from the US market despite having operated there since the 1850s. All of their US assets—land, equipment, inventory, patents, and trademarks—were seized and sold at auction after the war ended. During World War II their largest factory in Brazil was seized, not to be recovered for another twenty years, while their German factories were commandeered by the Nazi war machine. And in 1971, after 95 years of building a reputation as the finest producer of slide rules, they saw this entire side business vanish almost overnight when the pocket calculator was commercialized.

Resilience—or, the ability to survive hard times, as Faber-Castell has demonstrated time and time again—is something that we value instinctively. Yet, it’s not a popular subject. For all the years we’ve heard talk of sustainability, it seems that economic resilience, a once important dimension of economic prosperity, has become a relic of the past…

…What makes resilience so hard to spot is that it can only be proven during those rare times of crisis…

…It follows that resilience is not the same as looking good or having predictable financial results…

…Worse, a steady business model can actually become a source of fragility if placed in the wrong hands, as when companies with highly regular income streams leverage up their balance sheets, providing more immediate returns to their owners at the expense of their own resilience. Private equity has perhaps perfected this business model, but the growing dependence on debt in all walks of life reveals this as a defining feature of modern times…

…Resilience is not a destination. Seeking resilience means abandoning a narrow definition of success like sales growth or annual returns and instead becoming prepared for any eventuality that can cause serious harm. It is gained by pursuing new capabilities and flexibility, and by avoiding landmines. It means creating new options for the future instead of more plans for the present…

…Resilience comes at a cost. We are reminded of a metaphor used by Nassim Nicholas Taleb dealing with the nature of redundancy: “Layers of redundancy are the central risk management property of natural systems.”…

…A more effective way to add resilience to one’s savings is by ruthlessly avoiding its opposite: economic fragility. Thankfully, unlike resilience, fragility is often staring you in the face. This is where financial analysis really starts to shine. Is the company dependent on a few key customers and suppliers? Is the company overleveraged or buying back shares at indecent prices, just because it can? Are there elements of pricing power, or do their earnings evaporate at the first sign of trouble? Could a government ruling or decree suddenly break their business? Can their customers really afford to buy their products next year?…

…Perhaps the best way to find resilience is to look for its source: Resilience only comes from owners. Resilience is not a fluke that one stumbles into. It is a deliberate and purposeful objective which some aim for and others don’t. A good business plan, a profitable sector, a lot of cash, loyal management, or hardworking employees may all be wonderful, but only owners have the time horizon required to balance the present against an unknowable future, and only they have skin in the game—their own savings on the line. Unlike investors (renters), owners have no easy exits. They must build up reserves and competencies in the good years to give them options in the bad. They are motivated by a sense of responsibility—to themselves, their families, those they work with, and those who will come after them…

…Bakkafrost is a vertically integrated salmon farmer operating in the Faroe Islands and Scotland…

…A choice every salmon farmer has to make is what to do with the fish once it is ready to harvest. Do they sell their fish to wholesalers and other processors, or do they take it a step further, converting some into filets or smoked salmon that goes straight to the grocery store? This latter step is called Value-Added Processing or VAP, and Bakkafrost aims to sell about 30–40% of its fish through this channel each year.

…In the ten years from 2011 to 2020, Bakkafrost has reported cumulative revenues of about €4.5 billion and operating earnings of €1.1 billion, yet of those earnings only €14 million, or 1.2% of the total, have come from their VAP division. A financial observer might say, quite rightly, why bother? Salmon farming is hard enough, why dedicate additional capital and resources for a pittance? What a financial owner doesn’t see—and which the owners of Bakkafrost see plain as day—is the resilience this seemingly irrelevant processing step embeds in the organization…

…When hotels and restaurants shut their doors last year, all the food that was headed to this channel, including many millions of whole salmon, all needed to end up somewhere. Remember the 30-month lag between laying eggs and the salmon harvest? This means that while the demand for whole salmon evaporated, supply kept pouring in and the markets were soon stuffed full of whole fish with no one around to buy them…

…The owners of Bakkafrost could not have built their VAP business with Covid in mind, but they did understand and value the importance of adding resilience to their business.

3. Dangerous CFOs, Imperial CEOs, Chagrined Bankers, and Warren Buffett – Dan Noe

While I was a credit analyst and manager at Moody’s, I met with many CEOs and CFOs. Most meetings were routine and executives were good at explaining their company’s business and financial strategies. They almost always put their best foot forward. But the exceptions were notable…

…The most revealing comment I ever heard in a meeting came from a savings and loan CEO during that industry’s crisis in the late 1980s. S&L holding companies had issued a lot of junk bonds and God knows what they did with the money. A lot of them were going under. During one meeting, this particular S&L CEO said, apropos of nothing, “I hope the feds never figure out what I’m doing.” His bankers looked like they were going to throw up. That was an example of an in-person meeting affecting our credit assessment…

…The worst case of self-importance was a regional bank CEO who insisted we meet him in his big suite at a fancy hotel and have breakfast. This was weird, and a member of his entourage tried to explain: “He needs anonymity while he is in New York.” I said, “Well, he’s got it. He works at a Midwest bank. Nobody here knows who he is.”

I did these meetings for years and felt like I’d seen it all. Then, one day, I met the antithesis of imperial CEOs and executives who have problems answering questions about their business. Warren Buffett and Charlie Munger from Berkshire Hathaway came in for a visit. They arrived in a taxi, not a limo, with no hangers-on, not one person with them at all. When I addressed him as “Mr. Buffett” he said, “Please call me Warren.” They had come in to talk about a debt issuance, and Buffett made a self-deprecating joke: “My mother always told me to avoid liquor, ladies, and leverage. I’ve avoided the first two, but sometimes I like a little leverage.” Our analyst, Weston Hicks, was an excellent insurance industry analyst and asked detailed and probing questions. Buffett and Munger spent an hour, an hour-and-a-half, giving very specific answers. When the meeting ended and we were walking to the elevators, Munger said to me, out of earshot of Weston, “Make sure you keep that analyst. He’s really good.”

4. Product-Led AI – Seth Rosenberg

I believe there’s tremendous value to be captured by product builders who can successfully put the power of AI into products that people love. As my partner Jerry Chen recently put forth, if we’re living in an age where foundation models make it possible for anyone to build an AI company, “the most strategic advantage [of applications] is that you can coexist with several systems of record and collect all the data that passes through your product.”…

…Of course there are plenty of detractors who don’t believe startups have a chance at this layer – incumbents own the data and distribution, and access to LLMs is both commoditized and fraught with platform risk. There will likely be many casualties among companies where an API call to OpenAI isn’t sufficient to build lasting value…

…In the last wave of consumer software, social networks and marketplaces were the dominant business models that created trillions of dollars of market cap, with Meta alone valued at just under $800 billion. Greylock was lucky to back many of these, including Meta, LinkedIn, Roblox, Airbnb, Discord, Musical.ly (now TikTok), and Nextdoor.

As reflected by the valuations, these networks were assumed to be “unbreakable”.

But now, AI challenges many of our initial assumptions. This is creating a new arms race to build the next AI-first network.

We moved from networks that connect people to algorithms that connect people to content. Now, we’re moving to algorithms that replace people…

…You can imagine a freelance logo design marketplace, like parts of Fiverr, will be replaced with an algorithm. A user inputs a prompt, and after a few tries, gets their logo. In this case, the data the algorithm receives is fairly shallow (prompts and selection), and the supply side is entirely replaced by an algorithm.

Contrast this to an AI-first jobs marketplace. The optimal product would be an AI career coach for job seekers and an AI assistant for recruiters – two seemingly separate products, connected by the same algorithm. The coach could gather deep insight from a job seeker – far beyond what they would share on a resume or LinkedIn – and use this data to not just find the perfect match, but help them discover their most fulfilling career path. Combine this data with a strong understanding of a recruiter’s needs, and both the coach and assistant get better…

…The best opportunities for start-ups attacking large software categories come from finding angles where incumbents can’t compete. Here are four examples:

  1. UI/UX is re-imagined with AI – incumbent UI is irrelevant
  2. Product surface area is re-imagined with AI – incumbents compete at a different scope
  3. Business model is re-imagined with AI – incumbent business model can’t adapt
  4. No incumbent tech co before AI…

…Another great example is customer service, a $10 billion software category. The “obvious” starting point would be to automate customer service reps using AI. But what if the entire concept of customer service was re-imagined? Today, most companies actively reduce call volume by hiding the “contact us” button behind 5 menus and an ever-expanding phone tree. But, in a world of AI, every interaction can be cheap, delightful, and revenue-generating. In that world, companies might actively try to speak with their customers.

When I was at Meta in 2016, we tried to remedy this with an AI bot platform. Piloting with KLM airlines, we built an experience where Messenger handled every aspect of the passenger’s journey – boarding pass, customer service, travel recommendations at their destination, etc. – all in a single conversation. Despite amazing feedback, this pilot was shut down because of the cost to serve – but today, LLMs could make these types of interactions possible…

…One of the most interesting new opportunities with AI is going after the vastly larger market for services versus software with AI “co-pilots”. Most knowledge work involves analyzing and transforming data, a task that algorithms are better suited for.

I believe the best opportunities for co-pilots are “branded” sales people, like wealth managers, insurance brokers, and mortgage brokers. Their role involves a lot of text-based coordination, they work across multiple apps, and the ROI of increased efficiency is tangible. Take wealth managers as an example. According to Morgan Stanley, the biggest indicator of client retention for wealth managers is not portfolio performance, but consistency of personalized interactions with clients.

5. Society’s Technical Debt and Software’s Gutenberg Moment – Paul Kedrosky and Eric Norlin

Software has a cost, and it has markets of buyers and sellers. Some of those markets are internal to organizations. But the majority of those markets are external, where people buy software in the form of apps, or cloud services, or games, or even embedded in other objects that range from Ring doorbells to endoscopic cameras for cancer detection. All of these things are software in a few of its myriad forms. 

With these characteristics in mind, you can think of software in a basic price/quantity graph from introductory economics. There is a price and a quantity demanded at that price, and there is a price and quantity at which those two things are in rough equilibrium, as the following figure shows. Of course, the equilibrium point can shift about for many reasons, causing the P/Q intersection to be at higher or lower levels of aggregate demand. If the price is too high we underproduce software (leaving technical debt), and if too low, well … let’s come back to that…
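The price/quantity equilibrium described above can be made concrete with a toy pair of linear curves. This is our own illustration with hypothetical numbers, purely to show the mechanics:

```python
# A minimal sketch of the price/quantity equilibrium the authors
# describe, using hypothetical linear curves: quantity demanded
# falls with price, quantity supplied rises with it.

def demand(p):
    return 100 - 2 * p   # quantity demanded at price p

def supply(p):
    return 3 * p         # quantity supplied at price p

# Equilibrium where the curves cross: 100 - 2p = 3p  ->  p = 20
p_eq = 100 / (2 + 3)
q_eq = supply(p_eq)
print(p_eq, q_eq)  # 20.0 60.0

# The authors' point: if software's effective price sits above
# equilibrium, less software gets produced than is desirable.
print(demand(30))  # at an above-equilibrium price, only 40 units
```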

…But technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening.

Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.). In CPUs, this has long been called Moore’s Law, where CPUs become more powerful by some increment every 18 months or so. While these laws are more like heuristics than F=ma laws of physics, they do help as a guide toward how the future might be different from the past.

We have seen this over and over in technology, as various pieces of technology collapse in price, while they grow rapidly in power. It has become commonplace, but it really isn’t. The rest of the economy doesn’t work this way, nor have historical economies. Things don’t just tumble down walls of improved price while vastly improving performance. While many markets have economies of scale, there hasn’t been anything in economic history like the collapse in, say, CPU costs, while the performance increased by a factor of a million or more.

To make this more palpable, consider that if cars had improved at the pace computers have, a modern car would:

  • Have more than 600 million horsepower
  • Go from 0-60 in less than a hundredth of a second
  • Get around a million miles per gallon 
  • Cost less than $5,000 

And they don’t. Sure, the Tesla Plaid is a speedy car, but it is nowhere near the above specs—no car ever will be. This sort of performance inflection is not our future, but it fairly characterizes and even understates what has happened in technology over the last 40 years.
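The arithmetic behind the car analogy is just compounding. A rough sketch of our own, assuming price/performance doubles every two years over 40 years (a Moore’s-law-style heuristic) and a hypothetical 600-horsepower baseline car:

```python
# Rough arithmetic behind the car analogy (our own illustration,
# not from the article): compound a doubling every ~2 years over
# 40 years, roughly how CPU price/performance improved.

years = 40
doubling_period = 2  # years per doubling (a heuristic assumption)
improvement = 2 ** (years / doubling_period)
print(f"{improvement:,.0f}x")  # about a million-fold

# Apply that factor to a hypothetical high-end baseline car:
base_horsepower = 600
print(f"{base_horsepower * improvement:,.0f} hp")  # over 600 million hp
```

A million-fold improvement is why the resulting numbers sound absurd for cars yet understate what actually happened to computing.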

… And each of these collapses has had broader consequences. The collapse of CPU prices led us directly from mainframes to the personal computer era; the collapse of storage prices (of all kinds) led inevitably to more personal computers with useful local storage, which helped spark databases and spreadsheets, then led to web services, and then to cloud services. And, most recently, the collapse of network transit costs (as bandwidth exploded) led directly to the modern Internet, streaming video, and mobile apps…

…Each collapse, with its accompanying performance increases, sparks huge winners and massive change, from Intel, to Apple, to Akamai, to Google & Meta, to the current AI boomlet. Each beneficiary of a collapse requires one or more core technologies’ price to drop and performance to soar. This, in turn, opens up new opportunities to “waste” them in service of things that previously seemed impossible, prohibitively expensive, or both…

…Still, the suddenly emergent growth of LLMs has some people spending buckets of time thinking about what service occupations can be automated out of existence, what economists call “displacement” automation. It doesn’t add much to the aggregate store of societal value, and can even be subtractive and destabilizing, a kind of outsourcing-factory-work-to-China moment for white-collar workers. Perhaps we should be thinking less about opportunities for displacement automation and more about opportunities for augmenting automation, the kind of thing that unleashes creativity and leads to wealth and human flourishing.

So where will that come from? We think this augmenting automation boom will come from the same place as prior ones: from a price collapse in something while related productivity and performance soar. And that something is software itself.

By that, we don’t literally mean “software” will see price declines, as if there will be an AI-induced price war in word processors like Microsoft Word, or in AWS microservices. That is linear and extrapolative thinking. Having said that, we do think the current frenzy to inject AI into every app or service sold on earth will spark more competition, not less. It will do this by raising software costs (every AI API call is money in someone’s coffers), while providing no real differentiation, given most vendors will be relying on the same providers of those AI API calls…

… In a hypothetical two-sector economy, when one sector becomes differentially more productive, specialized, and wealth-producing, and the other doesn’t, there is huge pressure to raise wages in the latter sector, lest many employees leave. Over time that less productive sector starts becoming more and more expensive, even though it’s not productive enough to justify the higher wages, so it starts “eating” more and more of the economy.

Economist William Baumol is usually credited with this insight, which is why it is called Baumol’s cost disease. You can see the cost disease in the following figure, where various products and services (spoiler: mostly in high-touch, low-productivity sectors) have become much more expensive in the U.S., while others (non-spoiler: mostly technology-based) have become cheaper…
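Baumol’s dynamic can be reproduced in a tiny two-sector simulation. This is our own sketch with hypothetical growth rates: one sector’s productivity compounds, wages in both sectors track it, and the stagnant sector’s unit cost rises even though nothing about that sector changed.

```python
# A toy two-sector illustration of Baumol's cost disease (our own
# sketch, hypothetical numbers). Sector A's productivity grows 3%
# a year; sector B's is flat. Wages in both sectors follow A's
# productivity, so B's unit cost of output rises anyway.

wage = 1.0
productivity_a = 1.0
productivity_b = 1.0  # never improves

for year in range(30):
    productivity_a *= 1.03
    wage *= 1.03  # wages track the productive sector

unit_cost_a = wage / productivity_a  # unchanged: gains offset wages
unit_cost_b = wage / productivity_b  # rises with the wage level

print(round(unit_cost_a, 2))  # 1.0
print(round(unit_cost_b, 2))  # ~2.43: B "eats" more of each budget
```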

…But there is another sector being held back by a variant of Baumol’s cost disease, and that is software itself. This may sound contradictory, which is understandable. After all, how can the most productive, wealth-generating, deflationary sector also be the victim of the same malaise it is inflicting on other sectors?

It can, if you think back to the two-sector model we discussed earlier. One sector is semis and CPUs, storage and backbone networks. Those prices are collapsing, requiring fewer people while producing vastly more performance at lower prices. Meanwhile, software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago. Yes, there have been developments in the production and deployment of software, but it is still, at the end of the day, hands pounding out code on keyboards. This should seem familiar, and we shouldn’t be surprised that software salaries stay high and go higher, despite the relative lack of productivity. It is Baumol’s cost disease in a narrow, two-sector economy of tech itself.

These high salaries play directly into high software production costs, as well as limiting the amount of software produced, given factor production costs and those pesky supply curves. Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software. We end up with a kind of societal technical debt, where far less is produced than is socially desirable—we don’t know how much less, but it is likely a very large number and an explanation for why software hasn’t eaten much of the world yet…

…We think that’s all about to change. The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.

And why shouldn’t it be? As the following figure shows, Large Language Model (LLM) impacts in the job market can be thought of as a 2×2 matrix. Along one axis we have how grammatical the domain is, by which we mean how rules-based are the processes governing how symbols are manipulated. Essays, for example, have rules (ask any irritated English teacher), so chat AIs based on LLMs can be trained to produce surprisingly good essays. Tax providers, contracts, and many other fields are in this box too…

… Software is even more rule-based and grammatical than conversational English, or any other conversational language. Programming languages—from Python to C++—can be thought of as formal languages with a highly explicit set of rules governing how every language element can and cannot be used to produce a desired outcome…

…Again, programming is a good example of a predictable domain, one created to produce the same outputs given the same inputs. If it doesn’t do that, that’s 99.9999% likely to be on you, not the language. Other domains are much less predictable, like equity investing, or psychiatry, or maybe meteorology.

This framing—grammar vs predictability—leaves us convinced that for the first time in the history of the software industry, tools have emerged that will radically alter the way we produce software. This isn’t about making it easier to debug, or test, or build, or share—even if those will change too—but about the very idea of what it means to manipulate the symbols that constitute a programming language…

…Now, let’s be clear. Can you say MAKE ME MICROSOFT WORD BUT BETTER, or SOLVE THIS CLASSIC COMPSCI ALGORITHM IN A NOVEL WAY? No, you can’t, which will cause many to dismiss these technologies as toys. And they are toys in an important sense. They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.

How? Well, for one, the clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster…

…We have mentioned this technical debt a few times now, and it is worth emphasizing. We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.

The question people always have at this point is, “So what app gets made?” While an understandable question, it is somewhat silly and definitely premature. Was Netflix knowable when Internet transit costs were $500,000/Mbps? Was Apple’s iPhone imaginable when screens, CPUs, storage and batteries would have made such devices the size of small rooms? Of course not. The point is that the only thing we know is that the apps and services will come. Without question.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon (parent of AWS), Apple, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 10 September 2023)

Here are the articles for the week ending 10 September 2023:

1. Rediscovering Berkshire Hathaway’s 1985 Annual Meeting – Kingswell

In the mid-1980s, Berkshire Hathaway’s annual meeting was an entirely different beast than today’s weekend-long “Woodstock for Capitalists”. Attendees didn’t have to book their hotel rooms months in advance or wake up before dawn just to get in line outside of an arena. There was no mad rush for seats once the doors opened.

It was a quieter, simpler chapter in Berkshire’s history.

So quiet, in fact, that 1985’s annual meeting was held on a Tuesday. And, instead of a cavernous arena, Warren Buffett and Charlie Munger opted for the Red Lion Inn in downtown Omaha. Approximately 250 shareholders attended the meeting and the ensuing Q&A session lasted only — only? — two hours…

HOW TO VALUE A BUSINESS: “Do a lot of reading,” replied Buffett.

Generally speaking, he recommended the writings of Benjamin Graham and Philip Fisher for those trying to sharpen their investment mindset — and annual reports and trade magazines for evaluating particular businesses and industries.

Reading, he insisted, is more important than speaking with company executives or other investors. In fact, Buffett admitted that he had recently purchased a substantial amount of Exxon stock before talking to any of that company’s executives. “You’re not going to get any brilliant insights walking into the [Exxon] building,” he said.

And, at least in the money game, size matters.

It’s easier to determine the value of a large business than a small one, Buffett said. If someone buys a gas station, for example, another station opening across the street could have a major effect on the value of the first station…

A COUPLE OF LAUGHS & A ROUND OF APPLAUSE: No annual meeting is ever complete without some of that trademark Warren Buffett wit.

  • Will the federal deficit be substantially reduced? “I’ll believe it when I see it.”
  • What about so-called “junk” bonds? “I think they’ll live up to their name,” Buffett quipped.

2. Germany Is Losing Its Mojo. Finding It Again Won’t Be Easy – Bojan Pancevski, Paul Hannon, and William Boston

Two decades ago, Germany revived its moribund economy and became a manufacturing powerhouse of an era of globalization.

Times changed. Germany didn’t keep up. Now Europe’s biggest economy has to reinvent itself again. But its fractured political class is struggling to find answers to a dizzying conjunction of long-term headaches and short-term crises, leading to a growing sense of malaise.

Germany will be the world’s only major economy to contract in 2023, with even sanctioned Russia experiencing growth, according to the International Monetary Fund…

…At Germany’s biggest carmaker Volkswagen, top executives shared a dire assessment on an internal conference call in July, according to people familiar with the event. Exploding costs, falling demand and new rivals such as Tesla and Chinese electric-car makers are making for a “perfect storm,” a divisional chief told his colleagues, adding: “The roof is on fire.”

The problems aren’t new. Germany’s manufacturing output and its gross domestic product have stagnated since 2018, suggesting that its long-successful model has lost its mojo.

China was for years a major driver of Germany’s export boom. A rapidly industrializing China bought up all the capital goods that Germany could make. But China’s investment-heavy growth model has been approaching its limits for years. Growth and demand for imports have faltered…

…Germany’s long industrial boom led to complacency about its domestic weaknesses, from an aging labor force to sclerotic services sectors and mounting bureaucracy. The country was doing better at supporting old industries such as cars, machinery and chemicals than at fostering new ones, such as digital technology. Germany’s only major software company, SAP, was founded in 1972.

Years of skimping on public investment have led to fraying infrastructure, an increasingly mediocre education system and poor high-speed internet and mobile-phone connectivity compared with other advanced economies.

Germany’s once-efficient trains have become a byword for lateness. The public administration’s continued reliance on fax machines became a national joke. Even the national soccer teams are being routinely beaten…

…Germany today is in the midst of another cycle of success, stagnation and pressure for reforms, said Josef Joffe, a longtime newspaper publisher and a fellow at Stanford University.

“Germany will bounce back, but it suffers from two longer-term ailments: above all its failure to transform an old-industry system into a knowledge economy, and an irrational energy policy,” Joffe said…

…Germany still has many strengths. Its deep reservoir of technical and engineering know-how and its specialty in capital goods still put it in a position to profit from future growth in many emerging economies. Its labor-market reforms have greatly improved the share of the population that has a job. The national debt is lower than that of most of its peers and financial markets view its bonds as among the world’s safest assets.

The country’s challenges now are less severe than they were in the 1990s, after German reunification, said Holger Schmieding, economist at Berenberg Bank in Hamburg.

Back then, Germany was struggling with the massive costs of integrating the former Communist east. Rising global competition and rigid labor laws were contributing to high unemployment. Spending on social benefits ballooned. Too many people depended on welfare, while too few workers paid for it. German reliance on manufacturing was seen as old-fashioned at a time when other countries were betting on e-commerce and financial services.

After a period of national angst, then-Chancellor Gerhard Schröder pared back welfare entitlements, deregulated parts of the labor market and pressured the unemployed to take available jobs…

… Private-sector changes were as important as government measures. German companies cooperated with employees to make working practices more flexible. Unions agreed to forgo pay raises in return for keeping factories and jobs in Germany…

… Booming exports to developing countries helped Germany bounce back from the 2008 global financial crisis better than many other Western countries.

Complacency crept in. Service sectors, which made up the bulk of gross domestic product and jobs, were less dynamic than export-oriented manufacturers. Wage restraint sapped consumer demand. German companies saved rather than invested much of their profits.

Successful exporters became reluctant to change. German suppliers of automotive components were so confident of their strength that many dismissed warnings that electric vehicles would soon challenge the internal combustion engine. After failing to invest in batteries and other technology for new-generation cars, many now find themselves overtaken by Chinese upstarts…

…BioNTech, a lauded biotech firm that developed the Covid-19 vaccine produced in partnership with Pfizer, recently decided to move some research and clinical-trial activities to the U.K. because of Germany’s restrictive rules on data protection.

German privacy laws made it impossible to run key studies for cancer cures, BioNTech’s co-founder Ugur Sahin said recently. German approvals processes for new treatments, which were accelerated during the pandemic, have reverted to their sluggish pace, he said…

…One recent law required all German manufacturers to vouch for the environmental, legal, and ethical credentials of every component’s supplier, requiring even smaller companies to perform due diligence on many foreign firms, including those based in China…

…German politicians dismissed warnings that Russian President Vladimir Putin used gas for geopolitical leverage, saying Moscow had always been a reliable supplier. After Putin invaded Ukraine, he throttled gas deliveries to Germany in an attempt to deter European support for Kyiv…

…One problem Germany can’t fix quickly is demographics. A shrinking labor force has left an estimated two million jobs unfilled. Some 43% of German businesses are struggling to find workers, with the average time for hiring someone approaching six months.

Germany’s fragmented political landscape makes it harder to enact far-reaching changes like the country did 20 years ago. In common with much of Europe, established center-right and center-left parties have lost their electoral dominance. The number of parties in Germany’s parliament has risen steadily.

3. GLP-1 Drugs: Not a Miracle Cure for Weight Loss – Biocompounding

Weight loss drugs have been the talk of the town for the last couple of months. The weight loss drugs on the market are Wegovy and Ozempic from Novo Nordisk (NVO), and Mounjaro from Eli Lilly (LLY)…

…These drugs consist of a natural hormone called GLP-1…

…GLP-1 drugs mimic the action of a hormone called glucagon-like peptide 1, a natural hormone produced by the body in the gut. When blood sugar levels start to rise after a meal, the body produces this hormone to achieve multiple functions as seen in the image above. By producing and administering this hormone as a therapeutic, the drug will elicit similar effects seen with the natural hormone…

…Apart from increasing insulin production, GLP-1 can also help regulate body weight. GLP-1 improves glycaemic control and stimulates satiety, leading to reductions in food intake and thus body weight. Besides gastric distension and peripheral vagal nerve activation, GLP-1RA induces satiety by influencing brain regions involved in the regulation of feeding, and several routes of action have been proposed. GLP-1 also slows gastric emptying, so you don’t feel hungry as quickly.

However, apart from these positives, GLP-1 drugs also cause muscle loss, reduce bone density, and lower your resting metabolic rate.

A research paper published in 2019 reported the percentage of weight loss comprising fat mass versus the proportion comprising lean body mass in patients using the different GLP-1 drugs…

…This means that while GLP-1 drugs can help to reduce obesity, individuals using them need to be mindful to preserve their lean mass, which requires exercising regularly to limit the loss of lean mass and support their basal metabolic rate.

4. An Interview with Daniel Gross and Nat Friedman about the AI Hype Cycle – Ben Thompson, Daniel Gross, and Nat Friedman

NF: I think one of the interesting trends that we’ve seen in the last six months that we weren’t seeing a year ago is basically the application of large models to things that were previously some form of human intellectual labor or productivity labor. So in a way, what they’re doing in these cases is the models are automating or replacing or augmenting some part of a company. They’re competing not with existing software products but with parts of companies.

An example of one that Daniel and I were just talking to recently, we won’t name the company, but they automate filing bids on public tenders for businesses that do business with the government in different jurisdictions, and the time savings of this is totally enormous for these companies, and the upside for them is huge. It’s replacing a raft of internal and external consultants who were doing copywriting and bid preparation and just lots of fairly mechanical but still nothing-to-sneeze-at intellectual labor that produced bid documents. There’s material revenue upside for being able to bid on more things and win more bids, and this company’s growing like crazy, like a weed, so that would be one example.

Another example, there’s a whole sector now of these avatar platforms where people are basically able to produce personalized videos of someone saying, “Hey Ben, I saw that you were interested in our product and I wanted to tell you a little bit about us” and being able to basically generate text, feed that into an avatar platform that generates a realistic video that’s customized and using that in advertising, using it in personal outreach, using it in training materials. There’s some competing with non-consumption here where some of those videos would never have been produced because it would’ve just been too costly, and there’s some like, “Hey, God, I used to have to spend a ton of time doing this, now I can do it quite quickly”. Another example that’s like that, and by the way, all of the avatar, I mean I can name some of those Synthesia, D-ID, HeyGen, they’re all doing great, all of these companies are growing really well.

Another similar category is optimizing e-commerce. There used to be an entire — there still is — an entire industry of consultants and experts and companies who know how to do the SEO around product titles and descriptions and make sure that you have an Amazon landing page that converts, and some of that knowledge and know-how is getting crystallized into models and agent-like tool chains, and the testing can now be done automatically and you can use language models to run this kind of thing. I think this is interesting because these are all niches that really weren’t happening six or nine months ago, and in every category I just mentioned, there’s a company that’s making or soon will be making tens of millions of dollars doing this productively, creating real economic value for their customers and in some cases competing with teams of people or consultants…

...Does this just confirm the thesis though that the most compelling aspects for AI are number one mostly in the enterprise? Again, because enterprises are going to think about costs in a, I hesitate to use the word rational, but in a traditionally rational way, “It’s worth this huge upfront investment because it will pay off X amount over Y axis of time” as opposed to consumers, who are more about an experience and may not think about the lifetime cost or value of something, along with this data point where whoever has the data wins. Is that just the reality or are there still opportunities for new entrants in this space?

DG: I think the story of progress is one where things will often, I think, start off looking at the enterprise as a way to make the existing thing better, that idea that the first TV shows or cameras pointed at radio shows, the horseless carriage and all that sort of stuff. So I think there’s a lot of V1 AI, let’s just accelerate or automate a lot of the human interaction with text just because we can do text synthesis now with computers. But the native use cases that’ll come out I think slightly later are going to be consumer ones — those I think will be entirely different things that are not replacing a process that existed before, they’re doing something that was never possible before and so there are consumer experiences today that are not really like anything else on the Internet.

Well, the two that I had on here were that seemed to still have a lot of traction are still growing are Midjourney and Character.AI, which are completely novel experiences and about fantasy and imagination and something that couldn’t be done previously.

DG: Yeah, it’s sort of funny, they told us the robots are going to be really good at blue collar jobs and really terrible at human psychology — that it’ll be the final realm of the human-to-human connection. Of course, it turns out the robots are fantastic at psychology and have zero dexterity for doing actual labor. But Character.AI is a good example and there’s now a bunch of these new kinds of native behavior, and it’s always interesting to ask of these behaviors. So you’re talking to an agent all day on Character, I find the good question to be asking is, “What were you doing previously?” as a way to figure out what this actually is, and the share of time that’s usually being taken is from gaming or social media. It’s really hard, I think, to forecast, to look at the iPhone and to forecast Uber or to look at the Internet and forecast even something like Amazon bots. They’re usually going to be, I think, consumer experiences. Those are the ones that are going to be the really disruptive stuff and the enterprise I think will get a lot of the obvious. We had a person here and now maybe we have a person in a co-pilot model.

That’s kind of the trade-off of there being a top-down decision maker that thinks about things like lifetime value.

DG: They’ll do the rational thing.

They’re only going to do the obvious things.

DG: Yeah, and I think if businesses get disrupted by AI in any way, it will be something around a totally native, ideally a different user interface, an acceptance of a customer experience that’s a bit worse, which is usually your Clayton Christensen sort of downmarket disruption, but scales much more. I was actually thinking the companies that are trying to build, “We’re going to do your high-end legal work with AI”, I’m not exactly sure when that’ll work because the models still have this issue with hallucinating things and making things up. Whereas the low end, I was going to call a lawyer for $500 an hour to ask a particular question about my apartment lease, but instead I’m going to talk to legal GPT, that stuff I think will probably be much more impactful…

There’s an aspect here — one of the questions with the coding bit is Stack Overflow and sites like that have taken the biggest hit, but is this a sustainable future? I think this is a broader question about do we run out of data on the Internet. Is there going to be a data manufacturing industry?

NF: There is already. I think this is the secret story just beneath the surface of what’s happening. Everyone knows about the GPUs, you got to have the GPUs, they’re very expensive, we’re talking about the Nvidia supply chain. All of us know about CoWoS and wafer packaging and Ajinomoto Films and all these things.

But the other key input is data and the readily available tokens you can scrape off the Internet are quickly exhausted, and so there is currently happening beneath the surface, a shadow war for data where the largest AI labs are spending huge amounts of money, like huge amounts of money to acquire more valuable tokens, either paying experts to generate it, working through labeling companies like Scale AI or others. There’s a new crop of startups in that space as well and we think more is going to happen there and it’s going to be a really interesting space to watch.

So there’s a way in which you need these really high IQ, high-value tokens in order to train your models, and the average piece of data you scrape off a random website is kind of equal to all the other data that you have, but you’ll pay extra for really valuable training data, and so people are producing it. I don’t know the exact numbers, but I’ve heard rumors that Google is spending a billion dollars this year on generating new training data, and if you’re going to spend billions and billions on your CapEx to build out your GPU training clusters, spending some fraction of that or maybe an equal amount on generating data, which is a kind of CapEx as well, kind of makes sense. Someone told me the other day that experts are the new GPUs, and so there’s this wave of spending on experts who are going to generate tokens that can be valuable.

Then of course the secondary question there is what the legal regime will ultimately be for training. We’re operating in the US, UK, and in Europe under this fair use regime now where it’s fair use for you to scrape text off the Internet as long as it’s public and you’re not going through paywalls or user walls to get it and then you can in aggregate train machine learning models on it. That’s kind of the bright letter of the law, but people don’t always feel good about that and so will the law change, will there be a kind of DMCA for AI? And which way will it cut? I think we don’t know yet and so there may be a war for data in more ways than one over the next couple of years…

For the record, Nvidia’s results are going to come out in about 12 hours, so we don’t know what’s going to happen yet, but one of the most interesting questions broadly speaking is what is going to happen in the GPU space? Nvidia — do they have a moat, is it going to be a sustainable advantage? Obviously, they have a double advantage right now, in that they have the best hardware and they have CUDA, but there’s massive efforts on both sides to take that away. Can they build up a sustainable advantage that will persist?

NF: For the next couple of years, it’s Nvidia and it’s TPU and those are the only players that are really viable.

Google’s Tensor Processing Unit.

NF: Yeah, it’s a huge strategic asset for Google. I mean, they’re the only company basically that has an independent, not fully independent because obviously they overlap when it gets down to the fabs, and some other parts of the supply chain but they’re not subject to Jensen allocating them H100s. They can just kind of allocate their own and by all accounts, their TPU v5, they’re producing in absolute record numbers.

Easier to deal with TSMC than to deal with Jensen is what you’re saying.

NF: Yeah, I mean, at least they don’t have that one Jensen choke point. I mean, Jensen right now is dealing with overwhelming demand and limited supply, and so he’s having to very carefully allocate GPUs, and it’s sort of a very central resource distribution mechanism and allocation mechanism. It’s kind of wild. So even if you say, “Oh, AMD’s chips are going to be as good,” they’re just not going to produce them in numbers that matter next year and so I think my take is, there’s only two players for the next couple of years that matter, and my take is also that we will be supply-constrained, because there will be more AI applications that take off and need huge inference capacity, and there will be more people trying to train large models.

Is there a hype cycle aspect where we actually look back in a few years, and there were way too many GPUs bought and produced, and we actually end up with an overhang? Basically what happened with Nvidia last year, but at 100x or 1,000x the scale, and that actually ends up being a huge accelerant for AI, because you end up with super cheap inference because you have all these depreciated GPUs that were bought up in 2023 and 2024, and then it all crashed. Actually going back to the dot-com bubble and all the fiber that got laid by companies that immediately went out of business.

NF: You might have a dark fiber, “How many shortages are not followed by a glut?” is always the interesting question. They usually do get followed by a glut and I think one scenario in which that happens is I’m a very strong believer in scaling laws for these big general reasoning models. Essentially, the more training data and the more flops you put in, you’re just going to get a better and better model out, and we’ve seen this now over several orders of magnitude, it’s just incredibly consistent. We saw it with GPT-1 and GPT-2, and GPT-3, and now GPT-4, and we’ll see it I think with GPT-5. So, it’s possible that there’s some escape velocity that occurs where a few labs are the only ones who can afford to train the GPT-5 or GPT-6 equivalent models, and all of the startups and businesses that were getting essentially a sub-scale amount of GPU, unless they were doing something incredibly domain specific, those are no longer needed. So, you’ll have I don’t know, three or four companies that can afford to train the $10 billion model, and that’s actually a limited number of GPUs.

5. Respect and Admiration – Morgan Housel

This isn’t universal, but there are cases when people’s desire to show off fancy stuff is because it’s their only, desperate, way to gain some sense of respect and admiration. They don’t have any wisdom, intelligence, humor, empathy, or capacity for love to gain people’s respect. So they rely on the only remaining, and least effective, lever: Look at my car, beep beep, vroom vroom…

…My guess is that if your favorite comedian, or actor, or athlete turned out to be broke, you wouldn’t care. It wouldn’t impact how much you admire them, because you admire them for talents that money can’t buy.

Even when Amazon was huge and successful, Jeff Bezos used to drive a Honda Accord. Today he has a $500 million yacht. Is he respected and admired more for it? Not in the slightest. He could ride a Huffy bike and people would consider him the greatest entrepreneur of our era, because he is. Steve Jobs didn’t have any furniture. It didn’t matter. He’s a genius. He’s Steve Jobs. Material stuff makes no difference when you’re respected and admired for internal traits…

…Once you see people being respected and admired for reasons that have nothing to do with the stuff they own, you begin to wonder why you have such a strong desire for those possessions. I tend to view material desire as a loose proxy for the inverse of what else you have to offer the world. The higher my desire for fancy stuff, the less real value I have to offer.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 September 2023)


Here are the articles for the week ending 03 September 2023:

1. China Reaches Peak Gasoline in Milestone for Electric Vehicles – Colin McKerracher

Earlier this month, Chinese oil giant Sinopec made a surprise announcement that mostly flew under the radar. It’s now expecting gasoline demand in China to peak this year, two years earlier than its previous outlooks.

The main culprit? The surging number of electric vehicles on the road…

…China has been the largest driver of global growth for refined oil products like gasoline and diesel over the last two decades. But EV adoption rates in China are now soaring, with August figures likely to show plug-in vehicles hitting 38% of new passenger-vehicle sales. That’s up from just 6% in 2020 and is starting to materially dent fuel demand.

Fuel demand in two- and three-wheeled vehicles is already in structural decline, with BNEF estimating that 70% of total kilometers traveled by these vehicles have already switched over to electric. Fuel demand for cars will be the next to turn, since well over 5% of the passenger-vehicle fleet is now either battery-electric or plug-in hybrid. The internal combustion vehicle fleet is also becoming more efficient due to rising fuel-economy targets.

Diesel demand for heavier vehicles will keep growing for a bit longer, but even there a seismic shift is underway. Electric, fuel cell and battery-swapping options have quickly climbed to 12% of light commercial vehicle sales and 4% to 5% of medium and heavy commercial vehicle sales. That heavy-duty figure is likely to climb to over 10% by 2025.

Combine all those segments, and BNEF expects total oil demand for road transport in China to peak late next year. Demand won’t drop off a cliff anytime soon — fleet turnover in the trucking segment in particular will take time — but it still marks a major shift for global oil demand patterns. It also has big implications for refiners that need to quickly adjust the mix of products they produce.

Sinopec also called out the effects China’s ride-hailing fleet is having on urban gasoline demand.

Vehicles used for ride-hailing in China are far more likely to be electric — their share is nearing 40% of the fleet — than those that are privately owned. Electric ride-hailing vehicles are also more productive than their gasoline-powered counterparts, accounting for 50% of the kilometers traveled on market leader Didi’s ride-hailing platform in December…

…The Sinopec announcement highlights how looking just at the fleet of vehicles can lead one to miss the full story with respect to energy impact…

…The speed at which oil gets squeezed out of the transport mix depends on how fast countries like China switch kilometers traveled over to electric — not just the number of cars and trucks.

2. Peasant Logic and the Russian GKO Trade – Joe Pimbley

Later in 1998, after Russia blew up, I attended a public risk management conference in Paris. And one of the speakers was Allen Wheat, CEO of Credit Suisse at the time. I didn’t know Wheat, but he impressed me as a blunt, direct-speaking guy. He talked about Credit Suisse’s version of the GKO trade. He didn’t mention a short position in a Russian-issued dollar bond, so maybe Credit Suisse didn’t bother with the credit risk hedge. But he talked about the GKO and rubles and the cross-currency forwards Credit Suisse executed with Russian banks…

…Interesting to me, Wheat’s story was not that he got to the bottom of this controversy and figured out what part of the loss owed to market risk and what part owed to credit risk. Wheat’s conclusion to his board of directors was that Credit Suisse had a problem with its “risk management philosophy.” It had market risk and credit risk silos when really risk management must be integrated. It’s unproductive to distinguish market risk from credit risk if things are going to fall between the cracks and nobody’s going to take responsibility for understanding the complete risk picture.

Clearly, that’s a nice message, even if you wonder why Wheat didn’t work through the finger-pointing and hold people to account. Who can argue against an integrated approach to risk? But Wheat admitted he got chastised by his board when he presented that conclusion. The board said, “Allen, we think we understand what’s wrong here. It’s good to do all your analysis and get deep into the details, but at some point, you’re not seeing the big picture. You really need to use ‘peasant logic.’”

Wheat explained that “peasant logic” was the board’s term for what we might call “common sense,” but I like peasant logic better. The board said, “You people worry about how good your models are and you wonder about using two years of historical data or five years of historical data, and whether one is better than the other, and how much data you should have. We think you should have looked at the big picture and said, ‘messing around with 40% yields means there’s a lot of risk here. This is an unstable government and currency situation.’ We think you aren’t seeing the forest for the trees.”

So this was Wheat’s point: sometimes it’s good to forget the data and models and use peasant logic. In this case, if there are abnormal returns, there must be some abnormal risk…

…Then it came time for questions, and from the back of the room, someone had to shout out his question to be heard. And as soon as he started speaking, you could tell it’s a Russian accent and the guy is Russian. Being Russian lent authenticity to his remark, “You want historical data. I’ll give you 75 years of historical data. Russia has never honored any debt obligation.”

…Unfortunately, Wheat’s reaction was to be annoyed. Wheat didn’t say, “Wow, what a great way to look at this. Why are we trusting Russian debt?” And he also didn’t say, “That’s a great example of the peasant logic the board was trying to impress upon me.”

The Russian continued. “I work for Merrill Lynch and we did this trade also and lost a lot of money. Beforehand, I told them it was a terrible trade because of Russia’s history and they didn’t listen to me because I’m just a mathematician.” Wheat still hadn’t cottoned to the idea that the Russian was helping him make his point about peasant logic, so he said in a rather dismissive, sarcastic way, “Well, I wish we had you working for us, then we wouldn’t have lost money. Right?”

Now it’s easy in hindsight, when you know how something worked out, to say “Aha, I knew such and such.” But still, I thought the Russian’s remarks added to Wheat’s and really made Wheat’s point. This guy in the audience was demonstrating peasant logic. The traders put all these fancy, complex pieces together and think they’re really smart, but what the heck were they doing lending money to a government that this guy, who was closer to it than the rest of us, knew not to trust?

3. Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors – Dylan Patel and Daniel Nishball

The statement that may not be obvious is that the sleeping giant, Google, has woken up, and they are iterating at a pace that will smash GPT-4’s total pre-training FLOPS by 5x before the end of the year. The path is clear to 20x by the end of next year given their current infrastructure buildout. Whether Google has the stomach to put these models out publicly without neutering their creativity or their existing business model is a different discussion…

…Access to compute is a bimodal distribution. There are a handful of firms with 20k+ A/H100 GPUs, and individual researchers can access 100s or 1,000s of GPUs for pet projects. Chief among these are researchers at OpenAI, Google, Anthropic, Inflection, X, and Meta, who will have the highest ratios of compute resources to researchers. A few of the firms above, as well as multiple Chinese firms, will have 100k+ GPUs by the end of next year, although in China’s case we are sure only of the GPU volumes, not the ratio of researchers to compute.

One of the funniest trends we see in the Bay Area is top ML researchers bragging about how many GPUs they have or will soon have access to. In fact, this has become so pervasive over the last ~4 months that it’s become a measuring contest that is directly influencing where top researchers decide to go. Meta, which will have the second-largest number of H100 GPUs in the world, is actively using it as a recruiting tactic.

Then there are a whole host of startups and open-source researchers who are struggling with far fewer GPUs. They are spending significant time and effort attempting to do things that simply don’t help, or frankly, matter. For example, many researchers are spending countless hours agonizing on fine-tuning models with GPUs that don’t have enough VRAM. This is an extremely counter-productive use of their skills and time.

These startups and open-source researchers are using larger LLMs to fine-tune smaller models for leaderboard-style benchmarks with broken evaluation methods that give more emphasis to style than to accuracy or usefulness. They are generally ignorant that pretraining datasets and IFT data need to be significantly larger and higher quality for smaller open models to improve in real workloads.

Yes, being efficient with GPUs is very important, but in many ways, that’s being ignored by the GPU-poors. They aren’t concerned with efficiency at scale, and their time isn’t being spent productively. What can be done commercially in their GPU-poor environment is mostly irrelevant to a world that will be flooded by more than 3.5 million H100s by the end of next year. For learning and experimenting, smaller, weaker gaming GPUs are just fine…

…While the US and China will be able to keep racing ahead, European startups and government-backed supercomputers such as Jules Verne are completely uncompetitive. Europe will fall behind in this race due to its inability to make big investments and its choice to stay GPU-poor. Even multiple Middle Eastern countries are investing more in enabling large-scale infrastructure for AI.

Being GPU-poor isn’t limited to scrappy startups, though. Some of the most well-known AI firms, HuggingFace, Databricks (MosaicML), and Together, are also part of this GPU-poor group. In fact, they may be the most GPU-poor groups out there with regard to both the number of world-class researchers per GPU and the number of GPUs versus their ambition/potential customer demand. They have world-class researchers, but all of them are limited by working on systems with orders of magnitude less capability. These firms have tremendous inbound interest from enterprises on training real models, and on the order of thousands of H100s coming in, but that won’t be enough to grab much of the market.

Nvidia is eating their lunch with multiple times as many GPUs in their DGX Cloud service and various in-house supercomputers. Nvidia’s DGX Cloud offers pretrained models, frameworks for data processing, vector databases and personalization, optimized inference engines, APIs, and support from NVIDIA experts to help enterprises tune models for their custom use cases. That service has also already racked up multiple larger enterprises from verticals such as SaaS, insurance, manufacturing, pharmaceuticals, productivity software, and automotive. While not all customers are announced, even the public list of Amgen, Adobe, CCC, ServiceNow, Accenture, AstraZeneca, Getty Images, Shutterstock, Morningstar, Evozyne, Insilico Medicine, Quantiphi, InstaDeep, Oxford Nanopore, Peptone, Relation Therapeutics, ALCHEMAB Therapeutics, and Runway is quite impressive.

4. Making Sense Of The China Meltdown Story – Louis-Vincent Gave

It is impossible to turn to a newspaper, financial television station or podcast today without getting told all about the unfolding implosion of the Chinese economy. Years of over-building, white elephants and unproductive infrastructure spending are finally coming home to roost. Large property conglomerates like Evergrande and Country Garden are going bust. And with them, so are hopes for any Chinese economic rebound. Meanwhile, the Chinese government is either too incompetent, too ideologically blinkered, or simply too communist to do anything about this developing disaster.

Interestingly, however, financial markets are not confirming the doom and gloom running rampant across the financial media…

…At Gavekal, we look at bank shares as leading indicators of financial trouble. When we see bank shares break out to new lows, it is usually a signal that investors should head for the exit as quickly as possible. This was certainly the case in 2007-08 in the US. Between February 2007 and July 2008 (six weeks before the collapse of Lehman Brothers), bank shares lost 60% of their value…

…Now undeniably, Chinese bank shares have not been the place to be over the past few years. Nonetheless, Chinese bank shares are still up a significant amount over the last decade. And this year, they have not even taken out the low of 2022 made on October 31st following the Chinese Communist Party congress. To be sure, the chart below is hardly enticing, even if the slope of the 200-day moving average is positive. Still, Chinese bank shares do not seem to be heralding a near-term financial sector Armageddon…

…China is the number one or two importer of almost every major commodity you can think of. So, if the Chinese economy were experiencing a meltdown, you would expect commodity prices to be soft. Today, we are seeing the opposite. The CRB index has had a strong year so far in 2023, and is trading above its 200-day moving average. Moreover, the 200-day moving average now has a positive slope. Together, all this would seem to point towards an unfolding commodity bull market more than a Chinese meltdown…

…Jacques Rueff used to say that exchange rates are the “sewers in which unearned rights accumulate.” This is a fancy way of saying that exchange rates tend to be the first variable of adjustment for any economy that has accumulated imbalances. On this front, the renminbi has been weak in recent months, although, like Chinese equities, it has yet to take out October’s lows.

That is against the US dollar. Against the yen, the currency of China’s more direct competitor, Japan, the renminbi continues to grind higher and is not far off making new all-time highs. And interestingly, in recent weeks, the renminbi has been rebounding against the South Korean won.

This is somewhat counterintuitive. In recent weeks, oceans of ink have been spilled about how China is the center of a developing financial maelstrom. Typically, countries spiraling down the financial plughole do not see their currencies rise against those of their immediate neighbors and competitors…

…In other words, a range of data points seems to indicate that Chinese consumption is holding up well. This might help to explain why the share prices of LVMH, Hermès, Ferrari and most other producers of luxury goods are up on the year. If China really was facing an economic crash, wouldn’t you expect the share prices of luxury good manufacturers to at least reflect some degree of concern?…

…Staying on the US treasury market, it is also odd how Chinese government bonds have outperformed US treasuries so massively over the past few years. Having gone through a fair number of emerging market crises, I can say with my hand on my heart that I have never before seen the government bonds of an emerging market in crisis outperform US treasuries. Yet since the start of Covid, long-dated Chinese government bonds have outperformed long-dated US treasuries by 35.3%.

In fact, Chinese bonds have been a beacon of stability, with the five-year yield on Chinese government bonds spending most of the period since the 2008 global crisis hovering between 2.3% and 3.8%. Today, the five-year yield sits at the low end of this trading band. But for all the negativity out there, yields have yet to break out on the downside…

…While the Chinese government debt market has been stable, the pain has certainly been dished out in the Chinese high yield market. Yields have shot up and liquidity in the Chinese corporate bond market has all but evaporated. Perhaps this is because historically many of the end buyers have been foreign hedge funds, and the Chinese government feels no obligation to make foreign hedge funds whole. Or perhaps it is because most of the issuers were property developers, a category of economic actor that the CCP profoundly dislikes.

Whatever the reasons, the Chinese high yield debt market is where most of the pain of today’s slowdown has been—and continues to be—felt. Interestingly, however, it seems that the pain in the market was worse last year than this year. Even though yields are still punishingly high, they do seem to be down from where they were a year ago…

…Why the sudden drumbeat about collapsing Chinese real estate and impending financial crisis when the Chinese real estate problem has been a slow-moving car crash over the past five years, and when, as the charts above show, markets don’t seem to indicate a crisis point?

At least, markets outside the US treasury market don’t seem to indicate a crisis point. So could the developing meltdown in US treasuries help to explain the urgency of the “China in crisis” narrative?…

…Basically, US treasuries have delivered no positive absolute returns to any investor who bought bonds after 2015. Meanwhile, investors who bought Chinese government bonds in recent years are in the money, unless they bought at the height of the Covid panic in late 2021 and early 2022. This probably makes sense given the extraordinary divergence between US inflation and Chinese inflation.

None of this would matter if China was not in the process of trying to dedollarize the global trade in commodities and was not playing its diplomatic cards, for example at this week’s BRICS summit, in an attempt to undercut the US dollar (see Clash Of Empires). But with China actively trying to build a bigger role for the renminbi in global payments, is it really surprising to see the Western media, which long ago gave up any semblance of independence, highlighting China’s warts? Probably not. But the fact that the US treasury market now seems to be entering a full-on meltdown adds even more urgency to the need to highlight China’s weaknesses.

A Chinese meltdown, reminiscent of the 1997 Asian crisis, would be just what the doctor ordered for an ailing US treasury market: a global deflationary shock that would unleash a new surge of demand and a “safety bid” for US treasuries. For now, this is not materializing, hence the continued sell-off in US treasuries. But then, the Chinese meltdown isn’t materializing either.

5. Why China’s economy ran off the rails – Noah Smith

This is a pretty momentous happening, since a lot of people had started to believe — implicitly or explicitly — that China’s economy would never suffer the sort of crash that periodically derails all other economies. That was always wrong, of course, and now the bears are coming out for a well-deserved victory lap…

…Anyway, OK, here is my quick story of what happened to China. In the 1980s, 90s, and early 2000s, China reaped huge productivity gains from liberalizing pieces of its state-controlled economy. Industrial policy was mostly left to local governments, who wooed foreign investors and made it easy for them to open factories, while the central government mostly focused on big macro things like making capital and energy cheap and holding down the value of the currency. As a result, China became the world’s factory, and its exports and domestic investment soared. As did its GDP.

At this time there were also substantial tailwinds for the Chinese economy, including a large rural surplus population who could be moved to the cities for more productive work, a youth bulge that created temporarily favorable demographics, and so on. China was also both willing and able to either buy, copy, or steal large amounts of existing technology from the U.S. and other rich countries.

Meanwhile, during this time, real estate became an essential method by which China distributed the gains from this stupendous economic growth. It was the main financial asset for regular Chinese people, and land sales were how local governments paid for public services.

Then the 2008 financial crisis hit the U.S., and the Euro crisis hit Europe. The stricken economies of the developed nations were suddenly unable to keep buying ever-increasing amounts of Chinese goods (and this was on top of export markets becoming increasingly saturated). Exports, which had been getting steadily more important for the Chinese economy, suddenly started to take a back seat…

… The government told banks to lend a lot in order to avoid a recession, and most of the companies they knew how to shovel money at were in the real estate business in some way. That strategy was successful at avoiding a recession in 2008-10, and over the next decade China used it again whenever danger seemed to threaten — such as in 2015 after a stock market crash.

Maybe China’s leaders were afraid of what would happen to them if they ever let growth slip, or maybe they didn’t really think about what the costs of this policy might be. In any case, China basically pivoted from being an export-led economy to being a real-estate-led economy. Real-estate-related industries soared to almost 30% of total output.

That pivot saved China from recessions in the 2010s, but it also gave rise to a number of unintended negative consequences. First, construction and related industries tend to have lower productivity growth than other industries (for reasons that aren’t 100% clear). So continuing to shift the country’s resources of labor and capital toward those industries ended up lowering aggregate productivity growth. Total factor productivity, which had increased steadily in the 2000s, suddenly flatlined in the 2010s…

…This productivity slowdown probably wasn’t only due to real estate — copying foreign technology started to become more difficult as China appropriated all the easier stuff. Nor was productivity the only thing weighing on China’s growth — around this same time, surplus rural labor dried up. Anyway, put it all together, and you get a slowdown in GDP growth in the 2010s, from around 10% to around 6% or 7%:

But 6-7% is still pretty darn fast. In order to keep growth going at that pace, China had to invest a lot — around 43% of its GDP, more than in the glory days of the early 2000s, and much more than Japan and Korea at similar points in their own industrial development.
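One way to see how investment-hungry that growth was is the incremental capital-output ratio (ICOR): the investment share of GDP divided by the growth rate, roughly how many dollars of investment each dollar of extra output costs. The sketch below uses the article’s own inputs, but the resulting ICOR values are a back-of-the-envelope computation of ours, not figures from the article:

```python
# Incremental capital-output ratio: investment share of GDP / GDP growth rate.
# A higher ICOR means more investment is needed per unit of growth.
# Inputs from the article: investment ~43% of GDP, growth ~6-7%.
investment_share = 0.43

for growth in (0.06, 0.07):
    icor = investment_share / growth
    print(f"growth {growth:.0%}: ICOR ~ {icor:.1f}")
```

At roughly six to seven dollars of investment per dollar of added output, each point of growth was costing far more capital than in China’s high-productivity years.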

Only instead of deploying that capital efficiently, China was just putting it toward increasingly low-return real estate. The return on assets for private companies collapsed:

Much of this decline was due simply to the Chinese economy’s shift toward real estate; if you strip out real estate, the deterioration in the private sector looks much less severe…

…So even as the pivot to real estate was adding to a long-term slowdown in China’s growth, it was also generating a bubble that would eventually cause an acute short-term slowdown as well. If there’s a grand unified theory of China’s economic woes, it’s simply “too much real estate”.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Adobe, Alphabet (parent of Google), and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 27 August 2023)

Here are the articles for the week ending 27 August 2023:

1. Why Lehman Brothers Failed When It Did – Joe Pimbley

In 2008, security firms operated with high leverage and significant amounts of short-term debt. Lehman had $26 billion of equity supporting $639 billion of assets and its high leverage was not unusual among security firms. But at that ratio, a 4% decline in assets wipes out equity. Meanwhile, reliance on the continuous rolling of short-term debt requires the security firm to always maintain lender confidence. Lenders’ perception of solvency becomes more important than the actual fact of solvency.
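The leverage arithmetic in the paragraph above can be made explicit. A minimal sketch using the article’s balance-sheet figures:

```python
# Lehman's balance sheet per the article: $26bn of equity, $639bn of assets.
equity = 26.0   # $bn
assets = 639.0  # $bn

leverage = assets / equity         # dollars of assets per dollar of equity
wipeout_decline = equity / assets  # asset decline that erases all equity

print(f"Leverage: {leverage:.1f}x")                               # ~24.6x
print(f"Asset decline wiping out equity: {wipeout_decline:.1%}")  # ~4.1%
```

At roughly 25x leverage, an asset move that would be a bad quarter for an unlevered investor is fatal, which is why lenders’ perception of solvency mattered more than the fact of it.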

When the highly leveraged, short-term debt, security firm business model met the asset-value destruction of the Great Financial Crisis, Lehman was not the only security firm to fail. All major US firms failed to one degree or another. Besides Lehman’s outright bankruptcy, Bear Stearns and Merrill Lynch were merged into commercial banks. I believe Goldman Sachs and Morgan Stanley would have defaulted on their short-term borrowings had the Fed not permitted them to convert to bank holding companies and gain access to discount window liquidity…

…A place to begin chronicling factors specific to Lehman’s failure is the beginning of 2006. That was when the firm’s management decided to make more long-term investments.[2] Rather than remaining focused on security distribution and brokerage, Lehman increased its own holdings in commercial real estate, leveraged loans, and private equity. In our report to the bankruptcy court, we described this strategic change as a shift from the “moving business” to the “storage business.”

One year later in early 2007, Lehman management viewed the incipient financial crisis as an opportunity for the firm to gain market share and revenue from competitors that were retrenching and lowering their risk profiles. Lehman did not think the subprime mortgage crisis would spread to the general economy or even to its growing commercial real estate portfolio. Lehman had boldly taken on assets and assumed risk in the 2001-02 economic downturn. Its risk-taking back then had paid off and it hoped such contrarian boldness would again prove profitable.

Lehman’s pace of principal investments in commercial real estate, leveraged loans, and private equity increased in the first half of 2007 as other security firms reduced risk and hunkered down. It committed $11 billion to acquire Archstone REIT in May 2007 and ended up funding the riskiest $6 billion of that in October when it couldn’t find enough buyers to take it out of its commitment. Other bridge loans and bridge equity positions also became similarly stuck on its balance sheet. Its mortgage subsidiaries were slow to stop making residential mortgage loans and Lehman ended up holding mortgage-backed bonds and mortgage-bond-backed collateralized debt obligations it couldn’t sell.

To take on these risky assets, Lehman’s management raised all its internal risk limits: firm-wide, line-of-business, and even single-name risk limits. Or they ignored the limits they had set. Management was not forthcoming in its disclosures to its board of directors about the risks it assumed, and Lehman’s board did not press management for important information. In theory, Lehman’s compensation policy penalized excessive risk taking, but in practice it rewarded employees on revenue with minimal attention to associated risk.

Not only were these investments risky from the perspective of potential market value losses; they were risky from the point of view of financing. By their nature, real estate, leveraged loans, and private equity are hard to value and less liquid. It is difficult to determine how quickly and how severely they could lose value. These characteristics mean the ability to finance these assets cannot be assumed. If lenders worry about the realizable value of assets offered as loan security, they will lower the amount they will lend against those assets or cease lending against them altogether. Most of Lehman’s secured debt had overnight tenors, so lenders could stop rolling over their loans to Lehman on any business day!

Lehman’s management only began to cut back on leveraged loan acquisitions in August 2007, and it waited until later in 2007 to cut back on commercial real estate purchases. Yet deals in the pipeline caused Lehman’s assets to grow by $95 billion to $786 billion over the quarter ending February 2008. The firm did not begin to sell assets in earnest until March 2008, and only got assets down to $639 billion by May 2008.

Lehman’s management deliberately deceived the world about the firm’s financial condition. Management used an accounting trick to temporarily remove $50 billion of assets from the firm’s balance sheet at the end of the first and second quarters of 2008. In so-called “repo 105” transactions, Lehman pledged assets valued at 105% or more of the cash it received. Relying on a legal opinion from a UK law firm addressing English law, Lehman deducted the assets from its balance sheet. No other security firm used this stratagem in 2008 and Lehman did not disclose its use.
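To see why the trick flattered the reported numbers, consider a rough sketch of the balance-sheet effect. The $50 billion and the 105% threshold are from the article; combining them with the equity and asset figures quoted earlier in the piece is our illustration, not a reconstruction of Lehman’s actual quarter-end statements:

```python
# Illustrative effect of "repo 105" on reported leverage.
# Assets pledged at >= 105% of cash received were booked as sold, so ~$50bn
# of assets (and matching debt) vanished from the quarter-end balance sheet.
equity = 26.0           # $bn
assets = 639.0          # $bn
repo105_removed = 50.0  # $bn derecognized at quarter end

actual_leverage = assets / equity
reported_leverage = (assets - repo105_removed) / equity
cash_raised = repo105_removed / 1.05  # cash received against $50bn pledged

print(f"Actual leverage:   {actual_leverage:.1f}x")
print(f"Reported leverage: {reported_leverage:.1f}x")
print(f"Cash raised: ~${cash_raised:.1f}bn")
```

A couple of turns of leverage may sound small, but for a firm whose survival depended on lenders’ confidence, it was worth the accounting gymnastics.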

Lehman’s management touted the firm’s “liquidity pool,” the sum of cash and assets readily convertible into cash and as late as two days before bankruptcy claimed this pool equaled $41 billion. In fact, only $2 billion of those assets were readily monetizable.

From January to May 2008, while its competitors raised equity, Lehman did not. Lehman’s management rejected offers from interested investors because they did not want to issue equity at a discount to market price. Management thought doing so would make the firm seem vulnerable. Lehman did not issue common stock in 2008 until a $4 billion issuance in June.

2. China’s 40-Year Boom Is Over. What Comes Next? – Lingling Wei and Stella Yifan Xie

For decades, China powered its economy by investing in factories, skyscrapers and roads. The model sparked an extraordinary period of growth that lifted China out of poverty and turned it into a global giant whose export prowess washed across the globe.

Now the model is broken.

What worked when China was playing catch-up makes less sense now that the country is drowning in debt and running out of things to build. Parts of China are saddled with under-used bridges and airports. Millions of apartments are unoccupied. Returns on investment have sharply declined.

Signs of trouble extend beyond China’s dismal economic data to distant provinces, including Yunnan in the southwest, which recently said it would spend millions of dollars to build a new Covid-19 quarantine facility, nearly the size of three football fields, despite China having ended its “zero-Covid” policy months ago, and long after the world moved on from the pandemic…

…What will the future look like? The International Monetary Fund puts China’s GDP growth at below 4% in the coming years, less than half of its tally for most of the past four decades. Capital Economics, a London-based research firm, figures China’s trend growth has slowed to 3% from 5% in 2019, and will fall to around 2% in 2030.

At those rates, China would fail to meet the objective set by President Xi Jinping in 2020 of doubling the economy’s size by 2035. That would make it harder for China to graduate from the ranks of middle-income emerging markets and could mean that China never overtakes the U.S. as the world’s largest economy, its longstanding ambition.
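The arithmetic behind that claim is simple compound growth: doubling between 2020 and 2035 is a 15-year horizon, and the required annual rate (our computation, not a figure from the article) sits well above the sub-4% forecasts cited:

```python
# Compound annual growth rate needed to double GDP over 15 years (2020-2035).
years = 15
required_cagr = 2 ** (1 / years) - 1
print(f"Required growth: {required_cagr:.2%} per year")  # ~4.73%
```

So trend growth of 2-3%, as Capital Economics projects, leaves the 2035 doubling target far out of reach.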

Many previous predictions of China’s economic undoing have missed the mark. China’s burgeoning electric-vehicle and renewable energy industries are reminders of its capacity to dominate markets. Tensions with the U.S. could galvanize China to accelerate innovations in technologies such as artificial intelligence and semiconductors, unlocking new avenues of growth. And Beijing still has levers to pull to stimulate growth if it chooses, such as by expanding fiscal spending.

Even so, economists widely believe that China has entered a more challenging period, in which previous methods of boosting growth yield diminishing returns…

…The transition marks a stunning change. China consistently defied economic cycles in the four decades since Deng Xiaoping started an era of “reform and opening” in 1978, embracing market forces and opening China to the West, in particular through international trade and investment.

During that period, China increased per capita income 25-fold and lifted more than 800 million Chinese people out of poverty, according to the World Bank—more than 70% of the total poverty reduction in the world. China evolved from a nation racked by famine into the world’s second-largest economy, and America’s greatest competitor for leadership.

Academics were so enthralled by China’s rise that some referred to a “Chinese Century,” with China dominating the world economy and politics, similar to how the 20th century was known as the “American Century.”

China’s boom was underpinned by unusually high levels of domestic investment in infrastructure and other hard assets, which accounted for about 44% of GDP each year on average between 2008 and 2021. That compared with a global average of 25% and around 20% in the U.S., according to World Bank data.

Such heavy spending was made possible in part by a system of “financial repression” in which state banks set deposit rates low, which meant they could raise funds inexpensively and fund building projects. China added tens of thousands of miles of highways, hundreds of airports, and the world’s largest network of high-speed trains.

Over time, however, evidence of overbuilding became apparent.

About one-fifth of apartments in urban China, or at least 130 million units, were estimated to be unoccupied in 2018, the latest data available, according to a study by China’s Southwestern University of Finance and Economics…

…Guizhou, one of the poorest provinces in the country with GDP per capita of less than $7,200 last year, boasts more than 1,700 bridges and 11 airports, more than the total number of airports in China’s top four cities. The province had an estimated $388 billion in outstanding debt at the end of 2022, and in April had to ask for aid from the central government to shore up its finances.

Kenneth Rogoff, a professor of economics at Harvard University, said China’s economic ascent draws parallels to what many other Asian economies went through during their periods of rapid urbanization, as well as what European countries such as Germany experienced after World War II, when major investments in infrastructure boosted growth.

At the same time, decades of overbuilding in China resembles Japan’s infrastructure construction boom in the late 1980s and 1990s, which led to overinvestment.

The solution for many parts of the country has been to keep borrowing and building. Total debt, including that held by various levels of government and state-owned companies, climbed to nearly 300% of China’s GDP as of 2022, surpassing U.S. levels and up from less than 200% in 2012, according to Bank for International Settlements data.

Much of the debt was incurred by cities. Limited by Beijing in their ability to borrow directly to fund projects, they turned to off-balance sheet financing vehicles whose debts are expected to reach more than $9 trillion this year, according to the IMF.

Rhodium Group, a New York-based economic research firm, estimates that only about 20% of financing firms used by local governments to fund projects have enough cash reserves to meet their short-term debt obligations, including bonds owned by domestic and foreign investors…

…In Beijing’s corridors of power, senior officials have recognized that the growth model of past decades has reached its limits. In a blunt speech to a new generation of party leaders last year, Xi took aim at officials for relying on borrowing for construction to expand economic activities…

…The most obvious solution, economists say, would be for China to shift toward promoting consumer spending and service industries, which would help create a more balanced economy that more closely resembles those of the U.S. and Western Europe. Household consumption makes up only about 38% of GDP in China, relatively unchanged in recent years, compared with around 68% in the U.S., according to the World Bank.

Changing that would require China’s government to undertake measures aimed at encouraging people to spend more and save less. That could include expanding China’s relatively meager social safety net with greater health and unemployment benefits.

Xi and some of his lieutenants remain suspicious of U.S.-style consumption, which they see as wasteful at a time when China’s focus should be on bolstering its industrial capabilities and girding for potential conflict with the West, people with knowledge of Beijing’s decision-making say.

The leadership also worries that empowering individuals to make more decisions over how they spend their money could undermine state authority, without generating the kind of growth Beijing desires.

A plan announced in late July to promote consumption was criticized by economists both in and outside China for lacking details. It suggested promoting sports and cultural events, and pushed for building more convenience stores in rural areas.

Instead, guided by a desire to strengthen political control, Xi’s leadership has doubled down on state intervention to make China an even bigger industrial power, strong in government-favored industries such as semiconductors, EVs and AI.

While foreign experts don’t doubt China can make headway in these areas, they alone aren’t enough to lift up the entire economy or create enough jobs for the millions of college graduates entering the workforce, economists say. 

3. LTCM: 25 Years On – Marc Rubinstein

To understand, it helps to model LTCM not as a hedge fund but as a bank (although it’s also true that the best model for a bank is often a hedge fund). Roger Lowenstein, author of When Genius Failed, acknowledges as much in the subtitle of his book: “The Rise and Fall of Long-Term Capital Management: How One Small Bank Created a Trillion-Dollar Hole.” 

The model reflects LTCM’s heritage. John Meriwether ran the arbitrage desk at Salomon Brothers before becoming vice chair of the whole firm, in charge of its worldwide Fixed Income Trading, Fixed Income Arbitrage and Foreign Exchange businesses. In the years 1990 to 1992, proprietary trading accounted for more than 100% of the firm’s total pre-tax profit, generating an average of $1 billion a year. LTCM was in some ways a spin-off of this business.

Indeed, LTCM partners viewed their main competitors as the trading desks of large Wall Street firms rather than traditional hedge funds. Thus, although they structured their firm as a hedge fund (2% management fee, 25% performance fee, high watermark, etc.), they did everything they could to replicate the structure of a bank. So investors were required to lock up capital initially for three years to replicate the permanent equity financing of a bank (hence “Long-Term Capital Management”). They obtained $230 million of unsecured term loans and negotiated a $700 million unsecured revolving line of credit from a syndicate of banks. They chose to finance positions over 6-12 months rather than roll financing daily, even at the cost of less favourable rates. And they insisted that banks collateralise their obligations to the fund via a “two way mark-to-market”: As market prices moved in favour of LTCM, collateral such as government bonds would flow from their counterparty to them.
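The “two way mark-to-market” can be illustrated with a minimal sketch (the function name and the figures below are mine, not from the article): collateral simply tracks the unrealised gain on a position, flowing to whichever party is in the money.

```python
# Hedged illustration of two-way mark-to-market collateral (hypothetical
# numbers and function name, not from the article). As a position's market
# value moves in favour of the fund, the counterparty posts collateral to
# the fund; when it moves against the fund, collateral flows back.

def collateral_owed_to_fund(initial_value: float, current_value: float) -> float:
    """Net collateral owed to the fund; a negative result means the fund
    posts collateral to its counterparty instead."""
    return current_value - initial_value

# Position marked up from 100 to 105: counterparty posts 5 to the fund.
assert collateral_owed_to_fund(100.0, 105.0) == 5.0
# Position marked down to 97: the fund posts 3 to the counterparty.
assert collateral_owed_to_fund(100.0, 97.0) == -3.0
```

The flipside of this arrangement is that whoever controls the marks controls the collateral flow, which mattered greatly once markets turned illiquid.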

If there was one risk LTCM partners were cognisant of, it was that they might suffer a liquidity crisis and not be able to fund their trades. It was a risk they took every effort to mitigate.

But in modelling themselves as a bank, they forgot one key attribute: diversification.

“We set up Long-Term to look exactly like Salomon,” explains Eric Rosenfeld. “Same size, same scope, same types of trades… But what we missed was that there’s a big difference between the two: Long-Term is a monoline hedge fund and Salomon is a lot of different businesses – they got internal diversification from their other business lines during this crisis so therefore they could afford to have taken on more risk. We should have run this at a lower risk.”

It’s a risk monolines in financial services often miss. And LTCM wasn’t the only monoline to fall victim to market conditions in 1998. In the two years that followed, eight of the top 10 subprime monolines in the US declared bankruptcy, ceased operations or sold out to stronger firms. The experience prompted some financial institutions – such as Capital One – to embrace a more diversified model.

When the global financial crisis hit in 2007, monoline firms went down first. And in the recent banking crisis of 2023, those banks that failed were characterised by lower degrees of diversification.

There’s another factor that also explains the downfall of LTCM, one that similarly has echoes in the banking sector. At the end of August, LTCM was bruised but far from bankrupt. It had working capital of around $4 billion, including a largely unused credit facility of $900 million; only $2.1 billion of that capital was being used to finance positions.

But the fax Meriwether sent clients on September 2 triggered a run on the bank. “We had 100 investors at the time, and a couple of fax machines,” recalls Rosenfeld. “By the time we got to investor 50, I noticed that the top story on Bloomberg was us… All eyes were on us. We were like this big ship in a small harbour trying to turn; everyone was trying to get out of the way of us.”

While the August losses reflected a flight to quality as investors flocked to safe assets, the September losses reflected a flight away from LTCM. The price of a natural catastrophe bond the firm held, for example, fell by 20% on September 2, even though there had been no increase in the risk of natural disaster and the bond was due to mature six weeks later. As the firm was forced to divulge more information to counterparties over the course of September, the situation worsened. “The few things we had on that the market didn’t know about came back quickly,” Meriwether later told the New York Times. “It was the trades that the market knew we had on that caused us trouble.”

In addition, illiquid markets gave counterparties leeway in how to mark positions, and they used the opportunity to mark against LTCM to the widest extent possible so that they would be able to claim collateral to mitigate against a possible default (the flipside of the “two way mark-to-market”). The official inquiry into the failure noted that by mid-September, “LTCM’s repo and OTC [over-the-counter] derivatives counterparties were seeking as much collateral as possible through the daily margining process, in many cases by seeking to apply possible liquidation values to mark-to-market valuations.” And because different legs of convergence trades were held with different counterparties, there was very little netting. In index options, such collateral outflows led to around $1 billion of losses in September. 

Nicholas Dunbar, who wrote the other bestselling book about LTCM, Inventing Money, quotes a trader at one of LTCM’s counterparties (emphasis added):

“When it became apparent they [LTCM] were having difficulties, we thought that if they are going to default, we’re going to be short a hell of a lot of volatility. So we’d rather be short at 40 [at an implied volatility of 40% per annum] than 30, right? So it was clearly in our interest to mark at as high a volatility as possible. That’s why everybody pushed the volatility against them, which contributed to their demise in the end.”

The episode is a lesson in endogenous risk. It’s a risk that differentiates securities markets from other domains governed by probability. “The hurricane is not more or less likely to hit because more hurricane insurance has been written,” mused one of LTCM’s partners afterwards. “In the financial markets this is not true. The more people write financial insurance, the more likely it is that a disaster will happen, because the people who know you have sold the insurance can make it happen. So you have to monitor what other people are doing.”

4. Why the Era of Historically Low Interest Rates Could Be Over – Nick Timiraos

At issue is what is known as the neutral rate of interest. It is the rate at which the demand and supply of savings is in equilibrium, leading to stable economic growth and inflation.

First described by Swedish economist Knut Wicksell a century ago, neutral can’t be directly observed. Instead, economists and policy makers infer it from the behavior of the economy. If borrowing and spending are strong and inflation pressure rising, neutral must be above the current interest rate. If they are weak and inflation is receding, neutral must be lower.
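The inference Wicksell’s framework calls for can be sketched as a simple decision rule. This is a toy illustration of the article’s logic only, not how economists actually estimate neutral (real estimates use statistical filtering models):

```python
# Toy decision rule for inferring where the unobservable neutral rate sits
# relative to the current policy rate. An illustration of the reasoning
# described in the article, not an actual estimation model.

def infer_neutral_position(demand_strong: bool, inflation_rising: bool) -> str:
    if demand_strong and inflation_rising:
        return "neutral likely ABOVE the current rate (policy not restrictive)"
    if not demand_strong and not inflation_rising:
        return "neutral likely BELOW the current rate (policy restrictive)"
    return "mixed signals; no clear inference"
```

The key point the sketch captures is that neutral is never observed directly; it is read off from how the economy behaves at the prevailing rate.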

The debate over where neutral sits hasn’t been important until now. Since early 2022, soaring inflation sent the Federal Reserve racing to get interest rates well above neutral.

With inflation now falling but activity still firm, estimates of the neutral rate could take on greater importance in coming months. If neutral has gone up, that could call for higher short-term interest rates, or delay interest-rate cuts as inflation falls. It could also keep long-term bond yields, which determine rates on mortgages and corporate debt, higher for longer…

…Analysts see three broad reasons neutral might go higher than before 2020.

First, economic growth is now running well above Fed estimates of its long-run “potential” rate of around 2%, suggesting interest rates at their current range of 5.25% to 5.5% simply aren’t very restrictive.

“Conceptually, if the economy is running above potential at 5.25% interest rates, then that suggests to me that the neutral rate might be higher than we’ve thought,” said Richmond Fed President Tom Barkin. He said it is too soon to come to any firm conclusions.

That said, a model devised by the Richmond Fed, which before the pandemic closely tracked Williams’s model, put the real neutral rate at 2% in the first quarter.

Second, swelling government deficits and investment in clean energy could increase the demand for savings, pushing neutral higher. Joseph Davis, chief global economist at Vanguard, estimates the real neutral rate has risen to 1.5% because of higher public debt…

…Third, retirees in industrial economies who had been saving for retirement might now be spending those savings. Productivity-boosting investment opportunities such as artificial intelligence could push up the neutral rate.

And business investment depreciates faster nowadays and is thus less sensitive to borrowing costs, which would raise neutral. It is dominated by “computers and software, and much less office buildings, than it used to be,” Summers said during a lecture in May…

…Fed Chair Jerome Powell has in the past warned against setting policy based on unobservable estimates such as neutral, which he compared to navigating by the celestial stars.

Last December, he said the Fed would be careful about fine-tuning interest rates based on such estimates—for example, because falling inflation pushes real rates well above neutral. “I don’t see us as having a really clear and precise understanding of what the neutral rate is and what real rates are,” Powell said.

Some economists reconcile the debate by differentiating between short-run and longer-run neutral. Temporary factors such as higher savings buffers from the pandemic and reduced sensitivity to higher rates from households and businesses that locked in lower borrowing costs could demand higher rates today to slow the economy.

But as savings run out and debts have to be refinanced at higher rates in the coming years, activity could slow—consistent with a neutral rate lower than it is now.

5. Defining, Measuring, and Managing Technical Debt – Ciera Jaspan and Collin Green

We took an empirical approach to understand what engineers mean when they refer to technical debt. We started by interviewing subject matter experts at the company, focusing our discussions to generate options for two survey questions: one asked engineers about the underlying causes of the technical debt they encountered, and the other asked engineers what mitigations would be appropriate to fix this debt…

…This provided us with a collectively exhaustive and mutually exclusive list of 10 categories of technical debt:

  • Migration is needed or in progress: This may be motivated by the need to scale, due to mandates, to reduce dependencies, or to avoid deprecated technology.
  • Documentation on project and application programming interfaces (APIs): Information on how your project works is hard to find, missing or incomplete, or may include documentation on APIs or inherited code.
  • Testing: Poor test quality or coverage, such as missing tests or poor test data, results in fragility, flaky tests, or lots of rollbacks.
  • Code quality: Product architecture or code within a project was not well designed. It may have been rushed or a prototype/demo.
  • Dead and/or abandoned code: Code/features/projects were replaced or superseded but not removed.
  • Code degradation: The code base has degraded or not kept up with changing standards over time. The code may be in maintenance mode, in need of refactoring or updates.
  • Team lacks necessary expertise: This may be due to staffing gaps and turnover or inherited orphaned code/projects.
  • Dependencies: Dependencies are unstable, rapidly changing, or trigger rollbacks.
  • Migration was poorly executed or abandoned: This may have resulted in maintaining two versions.
  • Release process: The rollout and monitoring of production needs to be updated, migrated, or maintained.

We’ve continued to ask engineers (every quarter for the last four years) about which of these categories of technical debt have hindered their productivity in the previous quarter. Defying some expectations, engineers do not select all of them! (Fewer than 0.01% of engineers select all of the options.) In fact, about three quarters of engineers select three or fewer categories. It’s worth noting that our survey does not ask engineers “Which forms of technical debt did you encounter?” but only “Which forms of technical debt have hindered your productivity?” It’s well understood that all code has some technical debt; moreover, taking on technical debt prudently and deliberately can be a correct engineering choice.4 Engineers may run into more of these during the course of a quarter, but their productivity may not be substantially hindered in all cases.

The preceding categories of technical debt have been shown in the order of most to least frequently reported as a hindrance by Google engineers in our latest quarter. We don’t expect this ordering to generalize to other companies, as the ordering probably says as much about the type of company and the tools and infrastructure available to engineers as it does about the state of the code base. For example, Google engineers regularly cite migrations as a hindrance, but large-scale migrations are only attempted at all because of Google’s monolithic repository and dependency system;5 other companies may find that a large-scale migration is so impossible that it is not even attempted. A fresh start-up might have few problems with dead/abandoned code or code degradation but many hindrances due to immature testing and release processes. While we do expect there to be differences across companies in how much engineers are hindered by these categories, we believe the list itself is generalizable.

Our quarterly engineering survey enables us to measure the rate at which engineers encounter and are hindered by each type of technical debt, and this information has been particularly useful when we slice our data for particular product areas, code bases, or types of development. For example, we’ve found that engineers working on machine learning systems face different types of technical debt when compared to engineers who build and maintain back-end services. Slicing this data allows us to target technical debt interventions based on the toolchain that engineers are working in or to target specific areas of the company. Similarly, slicing the data along organizational lines allows directors to track their progress as they experiment with new initiatives to reduce technical debt.

However, we find quarterly surveys are limited in their statistical and persuasive power…

…Our goal was then to figure out if there are any metrics we can extract from the code or development process that would indicate technical debt was forming before it became a significant hindrance to developer productivity. We ran a small analysis to see if we could pull this off with some of the metrics we happened to have already…

…The results were disappointing, to say the least. No single metric predicted reports of technical debt from engineers; our linear regression models predicted less than 1% of the variance in survey responses. The random forest models fared better, but they had high precision (>80%) and low recall (10%–25%). That is, these models could identify parts of the code base where a focused intervention could reduce technical debt, but they were also going to miss many parts of the code base where engineers would identify significant issues.
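To make that precision/recall trade-off concrete, here is a toy calculation with invented counts (not Google’s data) showing how a model can be more than 80% precise yet catch only a small share of the real problem areas:

```python
# Toy precision/recall arithmetic with invented counts (not Google's data).
# precision = of the areas the model flags, the fraction that truly have
#             hindering technical debt
# recall    = of the areas that truly have hindering debt, the fraction
#             the model manages to flag

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Suppose 100 code areas truly have hindering technical debt. A model flags
# 20 areas, 17 of them correctly: tp=17, fp=3, and fn=83 areas are missed.
tp, fp, fn = 17, 3, 83
assert precision(tp, fp) > 0.80  # flagged areas are usually real problems...
assert recall(tp, fn) < 0.25     # ...but most problem areas go undetected
```

This is exactly the profile the authors describe: useful for targeting focused interventions, unreliable as a comprehensive detector.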

It is quite possible that better technical debt indicator metrics do exist for some forms of technical debt. We only explored objective metrics for three types of technical debt, and we only sought to use existing metrics, rather than attempting to create new metrics that might better capture the underlying concepts from the survey.

However, it’s also possible that such metrics don’t exist for other types of technical debt because they are not about the present state of a system, but a relation between the system’s present state and some unimplemented ideal state. An engineer’s judgments about technical debt concern both the present state and the possible state. The possible states of the world are something that mathematical models cannot incorporate without the modeler’s direct intervention. For example, the fact that a project’s code base consists entirely of code written in Python 2 is not technical debt in a world where there is no loss of functionality compared to another language or version or outside pressure to migrate. However, in a world where Python 3 is a preferred or required alternative, that same corpus of Python 2 constitutes a needed migration. The present state of the world—from the perspective of a model—is identical in these two instances, but the possible world has changed. Humans consider the possible world in their judgments of technical debt. If a model were to incorporate explicit rules that capture aspects of the possible world (for example, if a model were designed to count every file in Python 2 as technical debt because the human modeler knows Python 3 is an alternative), then the change would be detectable to the model. If we could capture this judgment as it evolves, it could form the basis for better measurements of technical debt…

…While we haven’t been able to find leading indicators of technical debt thus far, we can continue to measure technical debt with our survey and help to identify teams that struggle with managing technical debt of different types. To that end, we also added the following questions to our engineering survey:

  • To what extent has your team deliberately incurred technical debt in the past three months?
  • How often do you feel that incurring technical debt was the right decision?
  • How much did your team invest in reducing existing technical debt and maintaining your code?
  • How well does your team’s process for managing technical debt work?

Combined with the survey items about the types of technical debt that are causing productivity hindrances, these questions enable the identification of teams that are struggling, reveal the type(s) of technical debt they are struggling with, and indicate whether they are incurring too much debt initially or whether they are not adequately paying down their existing debt. These are useful data, especially when teams can leverage them under guidance from experts on how to manage their technical debt. Fortunately, we have such experts at Google. Motivated in part by our early findings on technical debt, an interested community within Google formed a coalition to help engineers, managers, and leaders systematically manage and address technical debt within their teams through education, case studies, processes, artifacts, incentives, and tools. The coalition’s efforts have included the following:

  • Creating a technical debt management framework to help teams establish good practices. The framework includes ways to inventory technical debt, assess the impact of technical debt management practices, define roles for individuals to advance practices, and adopt measurement strategies and tools.
  • Creating a technical debt management maturity model and accompanying technical debt maturity assessment that evaluates and characterizes an organization’s technical debt management process and helps grow its capabilities by guiding it to a relevant set of well-established practices for leads, managers, and individual contributors. The model characterizes a team’s maturity at one of four levels (listed here from least to most mature):
    • Teams with a reactive approach have no real processes for managing technical debt (even if they do occasionally make a focused effort to eliminate it, for example, through a “fixit”).
    • Teams with a proactive approach deliberately identify and track technical debt and make decisions about its urgency and importance relative to other work.
    • Teams with a strategic approach have a proactive approach to managing technical debt (as in the preceding level) but go further: designating specific champions to improve planning and decision making around technical debt and to identify and address root causes.
    • Teams with a structural approach are strategic (as in the preceding level) and also take steps to optimize technical debt management locally—embedding technical debt considerations into the developer workflow—and standardize how it is handled across a larger organization.
  • Organizing classroom instruction and self-guided courses to evangelize best practices and community forums to drive continual engagement and sharing of resources. This work also includes a technical talk series with live (and recorded) sessions from internal and external speakers.
  • Tooling that supports the identification and management of technical debt (for example, indicators of poor test coverage, stale documentation, and deprecated dependencies). While these metrics may not be perfect indicators, they can allow teams who already believe they have a problem to track their progress toward fixing it.

Overall, our emphasis on technical debt reduction has resulted in a substantial drop in the percentage of engineers who report that their productivity is being extremely to moderately hindered by technical debt or overly complicated code in their project. The majority of Google engineers now feel they are only “slightly hindered” or “not at all hindered” by technical debt, according to our survey. This is a substantial change and, in fact, is the largest trend shift we have seen in five years of running the survey.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 20 August 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 20 August 2023:

1. TIP569: An Investor’s Guide To Clear Thinking w/ Chris Mayer – Clay Finck and Chris Mayer

[00:18:10] Clay Finck: And I think about how a lot of times people will attach a label to something. When I relate this to investing, someone might think they’re a growth investor, they want higher growth, and when they see that a stock is a value stock, they’ll just not even look at it and not even understand what it is.

[00:18:28] Clay Finck: And I think about how some of your holdings are in what some people might call unattractive industries. I just think about how you dug underneath the surface, and just because something might be in what people call an unattractive industry, it can still be a very attractive long-term business.

[00:18:45] Chris Mayer: Absolutely, and this has happened to me multiple times. I have Old Dominion Freight Line in the portfolio. It’s this trucking company, and most people look at trucking and say it’s an unattractive industry, there’s lots of competition, why would you want to be involved in that? But then you get into Old Dominion and you see that its return on invested capital is huge and it’s got this deep competitive advantage over everyone else, and it’s been taking market share, doubling its market share over the last decade.

[00:19:11] Chris Mayer: And then you see it in terms of results. It would be silly to just say I don’t own trucking companies, because the economics of that business are not something you expect to see. It’s a real outlier, even within its own industry. And I’ve had that before too. I never had too much success with retail or retail stocks.

[00:19:28] Chris Mayer: But I own Dino Polska, which is a Polish grocery store chain. And again, that’s getting beyond just its category and looking at the underlying economics, which are phenomenal for that business. And it made me want to look further. And ultimately it’s been a very successful investment so far. So again, there are real-world consequences for taking these labels at face value, and a willingness to dig behind them can lead to some real insights.

[00:19:51] Chris Mayer: It seems really obvious. Sometimes when I talk about general semantics to people, they’ll be like, yeah it just seems so obvious, but it’s not the way people behave. They behave exactly like we’re talking about. They’re taking the label at face value and they’re allowing it to do their thinking for them.

[00:20:05] Chris Mayer: They’re not looking beyond it. Not looking behind it, and it’s lots of examples. We’ve talked already about a bunch.

[00:20:12] Clay Finck: You also caution against confusing correlation with causation. Don’t fight the Fed is a phrase that gets thrown around a lot. And you write that whenever you see an “if X, then Y” statement, you should distrust it.

[00:20:27] Clay Finck: And when I think about what drives stock market returns, I tend to think that sustainable growth and free cash flows will ultimately drive long-term shareholder returns. And this book really makes me question a lot of my assumptions. So I want to turn that question to you and have you talk about what you believe drives long-term stock returns.

[00:20:50] Chris Mayer: I’ll answer that, but first I’ll go back a little bit to the if-then problem. Finance people do this all the time: they want to just change one variable. So they’ll say, okay, if interest rates go up, then stocks are going to go down, because it raises everyone’s discount rate and the cash flows are discounted.

[00:21:09] Chris Mayer: Cash flows are now discounted at this higher rate, and asset values will fall. The problem is, of course, that in the real world you can never just change one variable. There are all these other things that change at the same time. The underlying cash flows change. Expectations change. All kinds of things change. And so you can have a result that is then surprising.

[00:21:26] Chris Mayer: So here we’ve had a period of time where the Fed has increased rates at a faster clip than it ever has, and the market is ripping. And there are lots of examples in the past where, even if you had known ahead of time what some outcome was going to be, you would still be wrong on the investment side. So one of my favorites in the book, ’cause I think I got this from Michael O’Higgins:

[00:21:42] Chris Mayer: he had an example where even if you knew the price of gold would more than double over some period of time, you might think to yourself, that’s pretty good, logically I’m going to buy the largest gold miner, Newmont. And then if you roll forward, Newmont’s stock actually fell 5% during that time, again, ’cause it wasn’t just one variable that changed.

[00:22:00] Chris Mayer: Newmont had costs that went up a lot. There are other factors in the business, and expectations are involved. So you had a dramatically different outcome than you would’ve thought based on the initial conclusion. So that’s why you have to distrust any if-then, any “if X happens, then Y”, when it comes to markets.

[00:22:16] Chris Mayer: Because there are so many other things going on. So when it comes to what drives long-term returns, I think it helps just to get down to really basic stuff. So with a business, you think of it as a pile of capital. At what rate can it increase that capital over the next 10 years? Those are the fundamentals that drive returns.

[00:22:36] Chris Mayer: So it’s some kind of return on invested capital plus a growth rate over time that really drives returns. What return you get is also a function of the price that you pay. So in those three things you have everything, and mathematically it can’t work out any other way. One of those three things has to lead to returns.

[00:22:54] Chris Mayer: Being able to forecast or figure out what return on invested capital is going to be over the next years, what the growth rate is going to be, and what kind of valuation there’s going to be, that’s probably impossible to know. We’re all making the best guesses that we can, based on our research and digging into why certain businesses are able to generate such returns, and that’s what we do.
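The three-driver arithmetic Mayer describes (return on capital driving growth, plus the price paid) can be sketched with invented numbers; the function and inputs below are illustrative only:

```python
# Hedged sketch of the "three things" arithmetic with invented inputs:
# long-run return compounds per-share value growth with the change in the
# multiple between purchase and sale (dividends ignored for simplicity).

def expected_cagr(growth: float, start_multiple: float,
                  end_multiple: float, years: int) -> float:
    """Annualised return implied by earnings growth plus multiple change."""
    total_return = (1 + growth) ** years * (end_multiple / start_multiple)
    return total_return ** (1 / years) - 1

# 12% annual earnings growth with the multiple contracting from 25x to 20x
# over 10 years still yields roughly 9.5% a year: the business's compounding
# does the heavy lifting, but the price paid is a real drag on returns.
r = expected_cagr(0.12, 25, 20, 10)
assert abs(r - 0.0953) < 0.001
```

With no multiple change, the return simply equals the growth rate, which is the sense in which “mathematically it can’t work out any other way.”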

[00:23:16] Clay Finck: You’re a big believer in Sosnoff’s law. Sosnoff wrote that the price of a stock varies inversely with the thickness of its research file, and the fattest files are found in stocks that are the most troublesome and will decline the furthest. The thinnest files are reserved for those that appreciate the most. In short, I sort of see this as the best ideas standing out.

[00:23:38] Clay Finck: They really stand out to you, and they don’t require extraordinary levels of research to build that conviction. And I think this points to what you mentioned there: you want to find the essential elements of what’s going to lead to the business’s success and then understand the factors that play into that.

[00:23:56] Clay Finck: And you filter out just about everything else. In a way, it’s drastically simplifying the extremely complex world around us, which is really liberating to do as an investor. So I’d love for you to talk more about Sosnoff’s law.

[00:24:12] Chris Mayer: That’s beautifully put there, Clay. That’s exactly it. You hit it right on the nose.

[00:24:17] Chris Mayer: I spend a lot of time trying to figure out the essential things to know about a business. That’s usually less than a handful of things, the really key, really important things. And the rest of it is details that are not that important in the long term, although they might be important in the short term.

[00:24:33] Chris Mayer: They might have big impacts in particular quarters or whatever, but long term they don’t matter much. So I spend a lot of time on that. When it comes to Sosnoff, he wrote a book called Humble on Wall Street, and I think it came out in the seventies. So the thickness of the research file is something that doesn’t hold up as well over time, but we get the metaphor.

[00:24:52] Chris Mayer: And he was big on a couple of things I learned from him. One was he really emphasized the skin-in-the-game aspect. But I also liked Sosnoff’s law because it jibes with my experience as well. When you’re really laboring over an idea and you have to rely on detailed spreadsheets and assumptions to justify it, it’s probably not a good idea.

[00:25:10] Chris Mayer: The ones that are really great are the ones that just jump out at you, and you’re just really excited and it seems obvious. Again, that jibes with my own experience. Some of the best investments I’ve made have had very short write-ups. I write little internal memos to myself, and some of them have been very short, and they’ve been great. And the ones that I have to spend a lot of time on, sometimes those don’t do as well…

…[00:30:27] Clay Finck: Related to this idea that everything changes, I think there’s this profound mental model you introduced to me: that this time is always different. People try to make comparisons today to previous times in the past, and they’re trying to make predictions about what’s going to happen.

[00:30:45] Clay Finck: Is the stock market going to crash? Are we going to enter a recession? This mental model of “this time is always different” is, again, very liberating, because even some of the great investors talk about how history tends to repeat itself, or maybe rhymes rather than repeats exactly. And I think about how companies are always changing, market dynamics are always changing, and everything is changing.

[00:31:06] Clay Finck: And you talk about indexes and how they’re changing. So people will look at the S&P 500, and they’re not really looking at the companies in that index. They’re looking at what the price was, say, in 2003, what the price is in 2023, and what the multiples are between the two. And the reality is that you’re comparing things that are entirely different because the index itself changes.

[00:31:29] Clay Finck: The top holdings in 2003 were much different than in 2023. 

[00:31:35] Chris Mayer: Yeah, that’s an important thing. That, again, mixes in with a lot of stuff we’ve talked about. The S&P index is a name, a label, and people treat comparisons of it over decades of time as if they’re valid comparisons.

[00:31:49] Chris Mayer: But you know, just look at the top 10 in the S&P now, look at it 20 years ago, look at it 20 years before that: substantially different. And the mix of companies is significantly different. I think the S&P only added financials in the seventies or something like that. So there have been a lot of big changes to the index over time, and that’s going to skew your numbers, price-to-earnings ratio or whatever.

[00:32:10] Chris Mayer: So that’s been very important, and I love that “this time it’s different” example too, because I think it was Templeton who made that famous, where he said “this time is different” are the most dangerous words, and so on. And I get the idea behind it. The idea is that investors want to try to defend bubbles or something, and we all know that they come to an end at some point.

[00:32:29] Chris Mayer: So there’s something to that. But then the other side is that this time is always different from every other time before. The details are always different, the companies are different, the people are different. It’s a different world now than it was 20 years ago, or 20 years before that. Keeping in mind that that is the case may prevent you from falling into some traps.

[00:32:47] Chris Mayer: People in finance do this all the time. And on Twitter (now they call it X), how many times will you see charts where someone will say, here’s some bear market going like this, and they’ll overlay the present, and it’ll be like, oh my God, it matches up perfectly. And it has no validity whatsoever.

[00:33:03] Chris Mayer: None at all. Nothing to do with anything, but people love to do that.

[00:33:08] Clay Finck: Just to use an example here, they might look at the S&P 500. I’m just throwing out numbers; these aren’t based on numbers I actually looked up. We’ll say the multiple on the S&P was 20 in 2003, whatever it was. And today we’ll say it’s higher than that.

[00:33:23] Clay Finck: We’ll say the multiple is much higher today, and people will assume that, oh, we’re way above the historical mean, so eventually things tend to revert to the mean. So is reversion to the mean itself a flawed concept?

[00:33:39] Chris Mayer: Yeah, I have another outlier opinion on this, which is that the reversion to the mean that people talk about is very problematic, because there is no real mean. It’s your imagination.

[00:33:50] Chris Mayer: It’s a concept we’ve created, but there’s no mean; no market says, I have to go to this mean. And that mean is always changing, as you pointed out. You could look at the multiple today, and the S&P is a lot higher than it was, say, in 2003. But in 2003, some of the biggest companies might include ExxonMobil, which might’ve been a very large company.

[00:34:08] Chris Mayer: In 2003 it might have been slower-growth, more capital-intensive businesses that were part of that index versus now. There are reasons why they might be very different, and it doesn’t make sense to say that today’s S&P has to go to some mean that’s constructed based on constituents that aren’t even in the index today.

[00:34:25] Chris Mayer: I think that’s an overlooked thing with mean reversion. You have to be careful again with what the components are that you’re saying have to mean-revert. It might be one thing if you’re looking at a company that does the exact same thing now that it did 20 years ago, and the margins don’t change very much, and suddenly you’ve got a little dip.

[00:34:42] Chris Mayer: There might be some way to defend a reversion to the mean there, but I’m very skeptical of those kinds of arguments.

[00:34:48] Clay Finck: Again, I think it’s another case where people are maybe simplifying too much. They’ll be like, this company’s trading at the lowest multiple it’s ever been at. I’m like, have you looked at the business and where things are actually trending, where the world is trending?

[00:35:03] Chris Mayer: Sure. Yeah. There’s a prominent example: I know a lot of people are getting excited about, say, Danaher, because it’s trading at the lowest P/E it’s traded at in however many years. But do you look at the return on invested capital at Danaher? It’s been in decline. It’s not the same business that people remember in their heads as this great,

[00:35:21] Chris Mayer: high-performing conglomerate for all those years. Maybe it will get back there; maybe there’s a thesis that it gets back there. But a lot of times when you see a company trading at the lowest level it’s ever traded at, there’s a reason. And be careful about just assuming that you can buy this today and it’ll mean-revert and you’ll make this great return…

…[00:42:39] Clay Finck: Another thing that really stands out to me as I read more and more of your work is your very relaxed nature and your ability to not take yourself too seriously. I want to read a bit here from your book. You write: “Laugh more. Life may not be a joke, but it is often funny.

[00:42:57] Clay Finck: If you keep in mind the abstractions, most of the serious business of the world seems portentous, trivial, silly, and ridiculous. You can’t help but laugh at it.” I read this and I think about Buffett and Munger, and I see some similar characteristics in that they don’t take themselves too seriously and they truly want to enjoy life.

[00:43:17] Clay Finck: So I’d love for you to talk about how this maybe ties into investing, because you’re managing a fund, you’re managing other people’s money, real money at risk, yet you’re able to detach yourself in a way and not become too overwhelmed by it, and not take yourself too seriously.

[00:43:34] Chris Mayer: Yeah, I would say this is learned too.

[00:43:36] Chris Mayer: This is something I’ve had to work at, but it helped to do the 100 Baggers book, looking at the long-term performance of companies. One lesson that’s inescapable from doing all that is you realize that things that seem momentous at certain points in time really just sort of bleed out and are almost imperceptible over a longer period of time.

[00:43:54] Chris Mayer: So there are certain quarters where stock prices can make violent moves, a 10, 15% move, and at the time it seems like, wow, you get stressed out. Something drops 15% or whatever. But you look back in time, even at severe bear markets, and it’s a little bump in a chart. So when you zoom out, you keep a bigger-picture perspective.

[00:44:13] Chris Mayer: That’s helped me a lot. It’s really helped me a lot to do that. And I do think it’s really important. I think I’ve enjoyed it a lot more the way I am now, just more relaxed about it. I’m a little more detached, taking a good long view rather than being so intense, where you’re so focused on the moment and the quarter or whatever is going to happen.

[00:44:32] Chris Mayer: And so those guys, Buffett, Munger, they’re wise in a lot of ways, and this is one too. When Buffett says he tap dances into work every day and enjoys it, some of it has to be this: he can’t take it that seriously.

2. China is no 1990s Japan – but it could have been – Robert Carnell

So let’s take a look at what is happening in China and pick apart the deflation argument. Firstly, let’s look for evidence of a bubble because if we are going to argue that it is about to burst, it needs to be there in the first place.

In 1984, land prices for commercial property in Tokyo grew at a respectable 7.2% annual pace. The following year, this accelerated to 12.5%, and the year after that, to 48.2%. By 1987, commercial property land prices were rising at a 61.1% YoY pace. It was once suggested that the 1.5 square kilometres of land surrounding the Imperial Palace in Tokyo were worth more than all the land in California. And whether or not that calculation stacks up (it sounds highly questionable), it shows just how extreme things had become.

Yes, Japan had a bubble. If we use similar land price data for Beijing for both residential and commercial property, then there are certainly periods when prices accelerate sharply. The most recent period where this happened was between 2014 and 2017 when residential property prices accelerated at about a 20% annual pace. But it has slowed since and is showing small declines now…

…Turning now to the equity markets. If we superimpose the recent price developments of the Shanghai composite index onto the Tokyo stock exchange in the period running up to the bubble, what we see is that China’s stock market has for some time been extremely average. There is no sense at all here of an excessive surge that requires a long period of dismal performance to compensate. That’s not to suggest a particularly bright future for Chinese stocks, but it beats a Japan-style collapse.

Ruling out a deflationary collapse is clearly a positive standpoint. But we also don’t see Chinese growth at much more than 5% over the coming few years. And we have a tough time explaining to people why this is actually a perfectly reasonable growth rate which doesn’t require a panicked response. But here goes…

In previous years, China’s GDP growth had taken a disproportionate boost from property development. Not only does construction provide a substantial direct boost to activity and labour demand, but it also requires a lot of inputs from industry: cement, steel, copper, aluminium, PVC etc. That also provides a big boost to things like energy demand. And new property sales also require furnishings, and that in turn pushes up this aspect of retail spending.

But the amount of growth that construction was delivering to the economy had grown to totally unsustainable levels. In some years, in nominal terms, construction contributed up to almost three percentage points of total GDP growth, often about a third of the total.

To try to highlight how anomalous this was, if you look at average Asian GDP growth rates pre-Covid relative to GDP per capita, China was a huge outlier, growing several percentage points faster than you would expect for an economy of its state of development. And that deviation can be largely put down to growth generated by excessive construction activity. This was essentially construction-driven GDP “bought” with debt and ultimately, unsustainable.

Maintaining this sector at pre-Covid growth rates could have ended up in disaster. Maybe a Japan-style disaster. What the Chinese authorities have done, quite sensibly, is to nip this in the bud before this happens, though this of course is going to mean reversion to slower (more sustainable) growth rates that are more in line with an economy of China’s stage of economic development.

3. Buffett’s 44% CAGR and Various Types of High Quality Investments – John Huber

Warren Buffett initially invested in 5 Japanese stocks in 2020 and I don’t think many people realize how successful this investment has been so far:

That initial basket investment is up over 200%: a 3x in 3 years, or 44% CAGR on that initial investment. Each stock is up over 2x, one is up 5x, and the basket in aggregate up 3x. He’s added to the basket since, and those add on purchases have also done well…
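The arithmetic behind that headline number is easy to verify: a 3x total return over 3 years compounds to roughly 44% a year. A quick sketch, using only the figures stated above:

```python
def cagr(total_multiple: float, years: float) -> float:
    """Compound annual growth rate implied by a total return multiple."""
    return total_multiple ** (1 / years) - 1

# The Japan basket: a 3x total return over 3 years
japan_cagr = cagr(3, 3)
print(f"{japan_cagr:.1%}")  # about 44%
```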

…Just like how we would rate an investment result, a good business is one that makes a lot of money relative to the money that you had to put into it (i.e. high return on capital).

But the most value gets created in companies that see increasing returns on capital (i.e. high incremental returns on capital; e.g. a company where returns rise from 12% to 18%, etc…). I’ve spent a lot of time thinking about Buffett’s investments in Japan (which is now a top 5 investment) and also in energy (which is his largest equity investment behind Apple). The common theme is something that might surprise most people and I think probably isn’t fully appreciated: both groups have rising returns on capital.

I see three things that Buffett probably saw (among other things) in Japan and also in energy:

  1. Cheap valuations
  2. Rising ROIC’s
  3. Significant change in capital allocation policies

(These traits also applied to Apple when he first invested in 2016). Buffett has always prioritized value. We know he has a preference for quality companies but he’s always been a value focused investor who wants a high FCF yield (more so than Munger). He has said “price is my due diligence” and we know from both his words and actions (especially in the earlier years) that he prefers quality, but he demands value.

But, he also wants quality businesses. And despite the stodgy historical returns, these groups are exhibiting current ROIC’s that exceed those of most of the FANG stocks and other high fliers. And not just better ROIC’s but also more rational capital allocation. There isn’t much growth in his Japanese trading companies, but if you pay 7x FCF for a stock that is returning all of that FCF via buybacks and dividends, you earn a 15% annual return even with no growth and no increase in the multiple.
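The 7x-FCF claim can be sanity-checked with a toy model (the numbers here are illustrative, not from the article): a no-growth company that trades at a constant 7x FCF and spends every dollar of FCF on buybacks shrinks its share count each year, so per-share value compounds at roughly the FCF yield of 1/7, close to the ~15% cited.

```python
FCF_MULTIPLE = 7   # constant valuation: price equals 7x free cash flow
YEARS = 10

shares = 1.0       # normalised share count
total_fcf = 1.0    # flat forever: no growth at all
start_price = FCF_MULTIPLE * total_fcf / shares

for _ in range(YEARS):
    # Spending the year's FCF on buybacks at a constant multiple solves
    # shares_new = shares - total_fcf / price_new, i.e. shares * 7/8 each year
    shares *= FCF_MULTIPLE / (FCF_MULTIPLE + 1)

price = FCF_MULTIPLE * total_fcf / shares
annual_return = (price / start_price) ** (1 / YEARS) - 1
print(f"{annual_return:.1%}")  # about 14.3% a year with zero growth
```

With dividends instead of buybacks the return is the same 1/7 yield, so the ~15% figure holds either way.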

I’ve written about 3 engines: a stock’s return is the product of three simple factors: growth, change in multiple, and capital returns (change in shares outstanding plus any dividends). Over the past decade, many investors focused on the first engine exclusively, ignoring the 2nd and 3rd. This worked over the last decade, but I would not expect it to work going forward. Growth is an important input into value, but it is just one of those three engines. If you pay too much, engine #2 becomes a drag (P/E contraction). If you own a stock that’s diluting through share issuance, engine #3 is a drag. It’s possible to earn high returns from one engine that overcomes the other two, but this is rare.

The best stocks often have all three engines working — sometimes only in surprisingly modest amounts individually, but collectively they can produce fabulous results. For example, a stock that grows earnings at 5%, has a P/E go from 8 to 12 over a 5 year period, and returns all its earnings via buybacks and/or dividends will provide you with approximately 23% total annual returns over that 5 year period. Growth engine gave you just 5%, but you received an 8.4% annual tailwind from the P/E multiple and approximately 10% additional returns from buybacks and dividends…
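The ~23% figure can be rebuilt from the three engines. The growth rate and the start/end multiples are the article's; the linear P/E path used to average the payout yield is my own assumption:

```python
YEARS = 5
growth = 0.05                            # engine 1: 5% earnings growth
multiple = (12 / 8) ** (1 / YEARS) - 1   # engine 2: P/E 8 -> 12, ~8.4%/yr

# Engine 3: all earnings returned via buybacks/dividends; the yield each year
# is 1/(P/E), averaged over an assumed linear path from 8 to 12
pe_path = [8 + 0.8 * t for t in range(1, YEARS + 1)]   # 8.8, 9.6, ..., 12.0
payout_yield = sum(1 / pe for pe in pe_path) / YEARS   # ~9.7%/yr

total = growth + multiple + payout_yield
print(f"{total:.1%}")  # approximately 23% per year
```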

…Remember: a good business isn’t one that has an interesting or exciting narrative, it’s one that makes a lot of money relative to the money invested into it. Buffett obviously doesn’t get influenced by narratives or growth stories. He’s only interested in finding great investments. And great investments tend to come from good businesses that are undervalued. And good businesses tend to have two common themes: strong returns on capital and good management that are rationally allocating free cash flow. Japanese stocks and energy stocks lack exciting narratives, but they have these key ingredients that are found in most quality investments: good returns on capital, smart capital allocation, and low valuations. All three engines are working in these two investment areas for Buffett. I think this is what interested him in Apple, it is what interested him in Japan and energy, and it is what has led these investments to become so successful.

Rising returns on capital simply means more earnings per unit of capital invested. These rising ROIC’s can happen in three ways:

1 — increasing the denominator (reinvesting all capital into the business at high incremental rates of return, so earnings rise even faster than the capital base)

2 — increasing the numerator while keeping the denominator flat (i.e. higher earnings on same levels of capital), or

3 — and, most surprising to most people, a similar value creation can also come from a shrinking denominator while keeping earnings flat: reducing excess cash levels through buybacks (which reduces the denominator). This means no growth but increasing quality of earning power, which frees up more and more cash to be used for buybacks. This can be especially effective when the rising FCF occurs on stocks with low multiples, as the company gets a better return (a higher FCF yield) on its own shares.
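Toy numbers (entirely hypothetical) make the three paths concrete. Starting from $100 of capital earning $12, a 12% ROIC, each path lifts earnings per unit of capital:

```python
def roic(earnings: float, capital: float) -> float:
    """Return on invested capital: earnings per unit of capital."""
    return earnings / capital

base = roic(12, 100)                    # 12% starting point

# Path 1: grow the capital base, but reinvest at a 20% incremental return,
# so blended earnings rise faster than capital
path1 = roic(12 + 0.20 * 50, 100 + 50)  # 22/150, ~14.7%

# Path 2: grow earnings on a flat capital base (pricing, efficiency)
path2 = roic(18, 100)                   # 18%

# Path 3: flat earnings, shrinking denominator: buybacks run down excess capital
path3 = roic(12, 100 - 33)              # 12/67, ~17.9%

print(f"{base:.0%} -> {path1:.1%} / {path2:.1%} / {path3:.1%}")
```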

4. Fundamentals simply do not matter in China’s stock markets – Michael Pettis

It is tempting to try to find meaning in the so-called “A-share premium”. This is the persistent valuation gap between the shares of Chinese companies that trade in Shanghai or Shenzhen (known as A-shares) and the shares of the same companies that trade in Hong Kong (H shares)…

…Normally, when onshore and offshore markets are separated by capital controls — and arbitrage is restricted, as is the case in China — onshore markets trade at a discount to the major offshore markets. This makes the Chinese A-share premium all the more anomalous. So why is the same share worth so much more on the mainland than it is offshore?

One theory is that it reflects differing views on political risk, with mainlanders less worried than foreigners about the risk of a political “event” disrupting business prospects. Another theory is that it shows that mainland investors are more optimistic about Chinese growth prospects than offshore investors. A third theory is that it reflects an information asymmetry in which onshore investors have access to a higher quality of information than offshore investors, and so are able to discount future growth prospects at a lower rate.

But none of these explanations makes any sense. They all assume, incorrectly, that prices in the Chinese stock market reflect a fundamental “view” about growth prospects, measured as the present value of future expected cash flows.

They do not, and never have. It has been almost impossible during the past few decades to find a credible correlation between the performance of the Chinese stock market and any measure of growth prospects or profitability. Monthly surges or drops of 10-20 per cent or more occur far too often to suggest any relation with normal economic volatility…

…The problem is that in a market in which macroeconomic data is questionable, financial statements are not credible, corporate governance is unclear, government intervention is unpredictable, and interest rates are repressed, it is impossible to be a fundamental investor except at very low prices, driven down by the high discount rates all this uncertainty requires. Investors whose effect is to drive capital according to its most productive use, in other words, are pretty much priced out of the mainland markets. That is why, for all the promises by local fund managers of their sophisticated fundamental selection process, mainland markets are wholly speculative.

In fact the Chinese stock market is really a Keynesian beauty contest: “winners” are rewarded not for choosing the best-looking contestants, but rather for their ability to figure out the consensus. Successful investors are not those who understand the economy, in other words, but rather those who are good at interpreting government signalling, recognising shifts in liquidity and, above all, quickly discerning or even setting off changes in market consensus…

…It takes many years for a stock market to develop the qualities that allow and encourage fundamental investing. Mainland Chinese markets are slowly moving in that direction, but for now share prices provide no meaningful information at all about China’s economy. The A-share premium probably reflects nothing more than excess domestic liquidity.

5. Robotaxis Are Coming to Los Angeles. Everywhere Could Be Next – Alex Kantrowitz

Cruise is expanding its self-driving taxi operation to Los Angeles amid a year of huge growth for autonomous driving.

The GM subsidiary’s entry into the second-largest city in the U.S.—which I reported first today at Big Technology—comes as it’s increasing its autonomous rides by 49 percent per month and already doing more than 10,000 rides per week. In L.A., Cruise will begin testing soon and then expand to self-driving ride-hailing. It will be the company’s eighth city of operation, up from one at the start of this year. And it won’t be the last…

…As Cruise spreads across the U.S. and Alphabet’s Waymo robotaxi service grows along with it, autonomous driving is finally delivering after years of false hype. The technology went from a perpetual “six months away” to chauffeuring masses of riders this year as both companies gathered experience in pilot cities and used that knowledge to expand to others.

The hardest part of autonomous driving, in reality, was getting to this point. As soon as cars could navigate one or two major cities on their own, the CEO said, expanding to more cities became less of a technology problem and more of a vehicle supply issue. With that supply steadily coming online, rapid scaling should be next.

“Last year, we were operating tens of autonomous vehicles. We’re currently operating hundreds—almost 400 concurrently at peak. Next year, there’ll be thousands. And then it’ll continue at least 10 times growth every year for the foreseeable future,” Vogt said.

Both Cruise and Waymo have found that their technology adapts well across cities, without having to retrain it from the ground up. After adjusting for some city-specific features—like the shape of traffic lights or the nature of traffic circles—their robotaxis can start driving through new cities fairly quickly…

…Waymo is also testing on freeways in the San Francisco area, taking on autonomous driving’s next frontier. Currently, neither Waymo nor Cruise offers ride-hailing customers the option to take freeways. But it shouldn’t be that far away. “On 101, 280, 380, you’ll see our cars at all times of day driving with other cars, at speed, making lane changes, etc.,” Nalavade said. “Hopefully, in the coming months, there’ll be some announcements about our freeways.”

Riding in self-driving cars has become commonplace in some cities already, something I experienced in San Francisco over the past two weeks. In approximately a dozen rides with Waymo and Cruise, I hailed autonomous rides via their apps (similar to Uber and Lyft) and got into their cars alone, in a totally empty vehicle, with no human behind the wheel. It was at first a bit nerve-racking. Then it felt normal. I soon ignored the experience completely. Now I don’t want to ride any other way.

There’s a lot to like about the autonomous vehicles—even if their rollout in San Francisco has been far from perfect. In my experience, they ride smoother than any human driver. Their apps accept ride requests immediately (if the services have enough supply). Their cabins feel private (though there are cameras). And there’s no awkwardness around tip, conversation, climate, or music. Everything is at the rider’s discretion.

From a safety standpoint, both companies claim that data shows that the cars are better than human drivers, although some of the disruption they’ve caused in the Bay Area has inspired a whimsical protest movement intended to stop the tech’s expansion. But once you’re in the vehicle, the stats only confirm what you’re seeing. The cars are cautious, not distracted, not drunk, and they navigate turns and stops with ease.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet and Apple. Holdings are subject to change at any time.

What We’re Reading (Week Ending 13 August 2023)

Here are the articles for the week ending 13 August 2023:

1. Why Dying Industries Can Make Great Investments – Brandon Beylo

There were four leading players in the gasoline additives industry during the early 1970s:

  • Ethyl
  • Dupont
  • PPG
  • Nalco

These companies produced billions of pounds of chemical products (additives) and made decent profits. That all changed in 1975.

In 1975, the Environmental Protection Agency (EPA) started enforcing its 1970 “Clean Air Act.” The regulation’s goal was to slowly eliminate the need for gasoline additives in cars. In 1975, car manufacturers were required to install catalytic converters to reduce toxic emissions.

But there was a problem. The converters couldn’t operate properly with the current additive-filled gasoline. It was a death sentence for the entire industry…

…Billions of pounds of production were reduced to nothing within two decades. Here’s the most crucial part of this entire saga and why I draw a comparison to today’s oil and gas space (emphasis added):

“Barriers to entry are another story. An insurmountable barrier protected the four firms in the business. The EPA’s regulatory announcement in 1973 posted an unmistakable ‘Do Not Trespass’ sign for any firms contemplating entering the lead-based additive industry.” …

…External forces like the EPA set the death date for the industry. But instead of killing it, it gave the existing competitors a chance to milk their industry for every profit dollar possible…

…Ethyl also generated supernormal returns from its dying “no growth” additives business (emphasis added):

“In 1998, after its additive revenues had declined to $117M, Ethyl still made $51M in operating profits, a 44% margin. The rest of the company had operating margins of 11%.” 

2. Can Robots Evolve Into Machines of Loving Grace? – Meghan O’Gieblyn

My talk was about emergent intelligence in AI, the notion that higher-level capacities can spontaneously appear in machines without having been designed. I’d focused primarily on the work of Rodney Brooks, who headed up the MIT Artificial Intelligence Lab in the late 1990s, and his “embodied intelligence” approach to robotics. Before Brooks came along, most forms of AI were designed like enormous disembodied brains, as scientists believed that the body played no part in human cognition. As a result, these machines excelled at the most abstract forms of intelligence—calculus, chess—but failed miserably when it came to the kinds of activities that children found easy: speech and vision, distinguishing a cup from a pencil. When the machines were given bodies and taught to interact with their environment, they did so at a painfully slow and clumsy pace, as they had to constantly refer each new encounter back to their internal model of the world.

Brooks’ revelation was that it was precisely this central processing—the computer’s “brain,” so to speak—that was holding it back. While watching one of these robots clumsily navigate a room, he realized that a cockroach could accomplish the same task with more speed and agility despite requiring less computing power. Brooks began building machines that were modeled after insects. He used an entirely new system of computing he called subsumption architecture, a form of distributed intelligence much like the kind found in beehives and forests. In place of central processing, his machines were equipped with several different modules that each had its own sensors, cameras, and actuators and communicated minimally with the others. Rather than being programmed in advance with a coherent picture of the world, they learned on the fly by directly interacting with their environment. One of them, Herbert, learned to wander around the lab and steal empty soda cans from people’s offices. Another, Genghis, managed to navigate rough terrain without any kind of memory or internal mapping. Brooks took these successes to mean that intelligence did not require a unified, knowing subject. He was convinced that these simple robot competencies would build on one another until they evolved something that looked very much like human intelligence.

Brooks and his team at MIT were essentially trying to re-create the conditions of human evolution. If it’s true that human intelligence emerges from the more primitive mechanisms we inherited from our ancestors, then robots should similarly evolve complex behaviors from a series of simple rules. With AI, engineers had typically used a top-down approach to programming, as though they were gods making creatures in their image. But evolution depends on bottom-up strategies—single-cell organisms develop into complex, multicellular creatures—which Brooks came to see as more effective. Abstract thought was a late development in human evolution, and not as important as we liked to believe; long before we could solve differential equations, our ancestors had learned to walk, to eat, to move about in an environment. Once Brooks realized that his insect robots could achieve these tasks without central processing, he moved on to creating a humanoid robot. The machine was just a torso without legs, but it convincingly resembled a human upper body, complete with a head, a neck, shoulders, and arms. He named it Cog. It was equipped with over 20 actuated joints, plus microphones and sensors that allowed it to distinguish between sound, color, and movement. Each eye contained two cameras that mimicked the way human vision works and enabled it to saccade from one place to another. Like the insect robots, Cog lacked central control and was instead programmed with a series of basic drives. The idea was that through social interaction, and with the help of learning algorithms, the machine would develop more complex behaviors and perhaps even the ability to speak.

Over the years that Brooks and his team worked on Cog, the machine achieved some remarkable behaviors. It learned to recognize faces and make eye contact with humans. It could throw and catch a ball, point at things, and play with a Slinky.

When the team played rock music, Cog managed to beat out a passable rhythm on a snare drum. Occasionally the robot did display emergent behaviors—new actions that seemed to have evolved organically from the machine’s spontaneous actions in the world. One day, one of Brooks’ grad students, Cynthia Breazeal, was shaking a whiteboard eraser and Cog reached out and touched it. Amused, Breazeal repeated the act, which prompted Cog to touch the eraser again, as though it were a game. Brooks was stunned. It appeared as though the robot recognized the idea of turn-taking, something it had not been programmed to understand. Breazeal knew that Cog couldn’t understand this—she had helped design the machine. But for a moment she seemed to have forgotten and, as Brooks put it, “behaved as though there was more to Cog than there really was.” According to Brooks, his student’s willingness to treat the robot as “more than” it actually was had elicited something new. “Cog had been able to perform at a higher level than its design so far called for,” he said.

Brooks knew that we are more likely to treat objects as persons when we are made to socially engage with them. In fact, he believed that intelligence exists only in the relationships we, as observers, perceive when watching an entity interact with its environment. “Intelligence,” he wrote, “is in the eye of the observer.” He predicted that, over time, as the systems grew more complex, they would evolve not only intelligence but consciousness as well. Consciousness was not some substance in the brain but rather emerged from the complex relationships between the subject and the world. It was part alchemy, part illusion, a collaborative effort that obliterated our standard delineations between self and other. As Brooks put it, “Thought and consciousness will not need to be programmed in. They will emerge.”

The AI philosopher Mark A. Bedau has argued that emergentism, as a theory of mind, “is uncomfortably like magic.” Rather than looking for distinct processes in the brain that are responsible for consciousness, emergentists believe that the way we experience the world—our internal theater of thoughts and feelings and beliefs—is a dynamic process that cannot be explained in terms of individual neurons, just as the behavior of a flock of starlings cannot be accounted for by the movements of any single bird. Although there is plenty of evidence of emergent phenomena in nature, the idea becomes more elusive when applied to consciousness, something that cannot be objectively observed in the brain. According to its critics, emergentism is an attempt to get “something from nothing,” by imagining some additional, invisible power that exists within the mechanism, like a ghost in the machine.

Some have argued that emergentism is just an updated version of vitalism, a popular theory throughout the 18th and 19th centuries that proposed that the world was animated by an elusive life force that permeates all things. Contrary to the mechanistic view of nature that was popular at that time, vitalists insisted that an organism was more than the sum of its parts—that there must exist, in addition to its physical body, some “living principle,” or élan vital. Some believed that this life force was ether or electricity, and scientific efforts to discover this substance often veered into the ambition to re-create it artificially. The Italian scientist Luigi Galvani performed well-publicized experiments in which he tried to bring dismembered frog legs to life by zapping them with an electrical current. Reports of these experiments inspired Mary Shelley’s novel Frankenstein, whose hero, the mad scientist, is steeped in the vitalist philosophies of his time.

When reading about Brooks and his team at MIT, I often got the feeling they were engaged in a kind of alchemy, carrying on the legacy of those vitalist magicians who inspired Victor Frankenstein to animate his creature out of dead matter—and flirting with the same dangers. The most mystical aspect of emergentism, after all, is the implication that we can make things that we don’t completely understand. For decades, critics have argued that artificial general intelligence—AI that is equivalent to human intelligence—is impossible, because we don’t yet know how the human brain works. But emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through ingenuity or accident—consciousness will emerge as a side effect of complexity. At some point nature will step in and finish the job.

It seems impossible. But then again, aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself. The philosopher Gillian Rose once described the act of writing as “a mix of discipline and miracle, which leaves you in control, even when what appears on the page has emerged from regions beyond your control.”

3. Only a cheaper rupee can spur Indian growth – Ashoka Mody

While other Asian policymakers, such as those in South Korea and China, have strategically used sizeable depreciations of their currencies to bolster export competitiveness, Indian elites bemoan every infinitesimal decline in the rupee’s value as a national humiliation. A unique economic and political confluence first entrenched this bogus pride in the country’s psyche in the mid-1960s. And since the 1990s, the country’s corporate leaders and new rich have wanted to maintain a strong rupee. As a result, the country’s export-based growth has suffered, as have jobs for low-skilled workers…

…In a rare sane moment in 1949, a newly independent India devalued the rupee from Rs3.3 to Rs4.8 per dollar, bringing relief to its uncompetitive economy. Indian manufacturers could earn profits even when they lowered dollar sale prices, which helped increase exports. Costlier imports slowed import growth, helping reduce the current-account deficit. But the task was never completed. With low productivity and high inflation, India could not match countries such as Japan in labour-intensive manufactured exports. The World Bank and the IMF financed India’s large current account deficit, creating the illusion that it did not need currency devaluation.

When those two institutions finally threatened to stop financing that deficit, the country’s officials foolishly negotiated the rate to Rs7.5 per dollar in June 1966. This too-little-too-late devaluation did not compensate for the rise in domestic production costs. Taiwan and South Korea raced ahead, helped by currency devaluations; Indian exports languished.

The perceived failure of the 1966 devaluation to spur exports forever tarnished Indian belief in an activist exchange rate policy. Rather than encouraging more aggressive nominal devaluation to offset the rise in production costs and thus achieve real depreciation, devaluation “by stealth” was always too little, too late. In the 1980s, China used aggressive exchange rate depreciation as key to its monumental export push…

…India’s accumulated cost-of-production disadvantage requires the rupee to drop to about Rs90 per dollar; Rs100 per dollar would provide an ideal cushion. But Indian authorities continue to avoid an activist exchange rate policy, and rely on dodgy policy tools:

4. The Infamous Coin Toss – Ole Peters

Imagine I offer you the following gamble. I toss a fair coin, and if it comes up heads I’ll add 50% to your current wealth; if it comes up tails I will take away 40% of your current wealth. A fun thing to do in a lecture on the topic is to pause at this point and ask the audience if they’d like to take the gamble. Some will say yes, others no, and usually an interesting discussion of people’s motivations emerges. Often, the question comes up whether we’re allowed to repeat the gamble, and we will see that this leads naturally to the ergodicity problem.

The ergodicity problem, at least the part of it that is important to us, boils down to asking whether we get the same number when we average a fluctuating quantity over many different systems and when we average it over time. If we try this for the fluctuating wealth in the Peters coin toss the answer is no, and this has far-reaching consequences for economic theory.

Let’s start with averaging wealth, x_i(t), over an ensemble of many different systems. In our case this corresponds to N different players, each starting with x_i(0) = $100, say, and each tossing a coin independently. After the coins have been tossed, about half of the people will have thrown heads, and the other half tails. As the number of players goes to infinity, N→∞, the proportions of heads and tails will approach 1/2 exactly, and half the players will have $150, the other half $60. In this limit, we know what the ensemble average will be, namely ⟨x(1)⟩ = 1/2($150 + $60) = $105. For historical reasons, this average is also called the expected value, and for the Peters coin toss, it grows by 5% in every round of the gamble so that

⟨x(t)⟩=$100×1.05^t…
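
The 5% figure follows from the per-round expected growth factor, the probability-weighted average of the two outcomes: 0.5 × 1.5 + 0.5 × 0.6 = 1.05. As a minimal sketch (our own illustrative code, not Peters’; the function name is made up):

```python
# Exact ensemble average (expected value) of the Peters coin toss.
# Heads multiplies wealth by 1.5, tails by 0.6, each with probability 1/2.
def expected_wealth(x0, t):
    per_round = 0.5 * 1.5 + 0.5 * 0.6  # = 1.05, i.e. +5% per round
    return x0 * per_round ** t

print(expected_wealth(100, 1))   # ≈ 105
print(expected_wealth(100, 10))  # ≈ 162.9, i.e. 100 × 1.05^10
```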

…To see that the gamble is not ergodic, let’s now find the average value of wealth in a single trajectory in the long-time limit (not in the large-ensemble limit). Here, as T grows, again the proportions of heads and tails converge to 1/2. But, crucially, a head and a tail experienced sequentially is different from two different agents experiencing them. Starting at x_1(0) = $100, heads takes us to x_1(1) = $150, and following this sequentially with tails, a 40% loss, takes us down to x_1(2) = $90 — we have lost 10% over two rounds, or approximately 5% per round. Since we lose 5% per round, averaged over time, individual wealth is guaranteed to approach zero (or negative infinity on logarithmic scales) in the long-time limit T→∞…

…We have thus arrived at the intriguing result that wealth averaged over many systems grows at 5% per round, but wealth averaged in one system over a long time shrinks at about 5% per round…
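
This divergence is easy to reproduce numerically. A hedged sketch (again our own code with made-up function names, not anything from Peters’ writing): averaging across many players should recover roughly the +5% per-round expected growth, while a single player’s long-run per-round growth factor should approach √(1.5 × 0.6) ≈ 0.949, a loss of roughly 5% per round.

```python
import random

def simulate(n_players, n_rounds, x0=100.0, seed=42):
    """Play the coin toss: each player independently multiplies wealth
    by 1.5 (heads) or 0.6 (tails) in every round."""
    rng = random.Random(seed)
    wealth = [x0] * n_players
    for _ in range(n_rounds):
        wealth = [x * (1.5 if rng.random() < 0.5 else 0.6) for x in wealth]
    return wealth

# Ensemble average: many players, few rounds.
# Should land near 100 * 1.05**10, about 162.9.
w = simulate(n_players=100_000, n_rounds=10)
print(sum(w) / len(w))

# Time average: one player, many rounds. The per-round growth factor
# should land near sqrt(1.5 * 0.6), about 0.9487, i.e. roughly -5% per round.
w1 = simulate(n_players=1, n_rounds=10_000)[0]
print((w1 / 100.0) ** (1 / 10_000))
```

Almost every individual trajectory ends up near zero even though the ensemble average grows; the growth of ⟨x(t)⟩ is carried by ever-rarer, ever-luckier players.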

…The significance of this ergodicity breaking cannot be overstated… Third, one core problem of economics and politics is to address conflicts between an individual, for example a citizen, and a collective, for example a state. This is the question of societal organization, institutions, Rousseau’s social contract and so on. This problem can seem puzzling, and it often attracts naive answers, because the collective consists of individuals. How, then, can the interests of the individual be misaligned with those of the collective? One important answer is ergodicity breaking.

5. Samo Burja – The Great Founder Theory of History – Patrick O’Shaughnessy and Samo Burja

Patrick: [00:01:32] Samo, your writing has been amongst the most interesting that I’ve encountered in the last couple of years, just a tremendous variety of ideas and ways of looking at the world and history. One of the overarching things that you’re best known for is this lens on history called Great Founder Theory. I’d love you to just begin by laying out the core idea here, how you came upon it, and maybe what it opposes, the alternative view of history from the one that you’ve developed. I’d love to start there and then we’ll dive into lots of nooks and crannies together.

Samo: [00:02:03] To me, it seems that most of social science for the last 100 years has been focused on trying to find these macro deep patterns of human behavior and human history, sometimes being so hubristic as to try to find immutable laws of history, as was the case in the early and middle of the 20th century.

And while it certainly is the case that there are deep patterns worth studying in the nature of all civilization from the advent of agriculture to today, and while it certainly is true again that there is a deep current changing our society that started with the Industrial Revolution 200 years ago, none of these patterns is set in stone. None of these patterns is fixed. So I think none of them really rises to the level of sociological laws.

And the reason why we can’t just predict future history for the next few hundred years is that people observe society and then come to alter it. Exceptional individuals, whatever the great man theory of history says, perhaps don’t determine everything about how things transpire. But I think almost all of the exceptional institutions that have shaped human civilization, anything you can think of, be it organized religion, technological companies, political systems, usually have an individual or a small group of people who deviate from the previous social norm and create a new type of organization, a new type of institution or, honestly, just a new way of doing things as the old society fails.

And we see this over and over again. To give historical examples, they might take a very religious and legible form, such as founding a new organized religion. Say, for example, Muhammad, reorganizing the tribal Arab societies into a cohesive, unified whole, ends up expanding and conquering most of the Middle East. Or it might take the form of, say, Confucius, who has this relatively modest social reform program, but ends up teaching something like 100 bureaucrats who travel the country and try to spread this philosophy of reforming the dysfunctional Chinese states during the Warring States Period. And eventually, that comes to dominate the Chinese education system for the next 2,000 years.

Or it might be Charlemagne, refounding what is basically a tribal structure into something that ingests Roman law, creating the Frankish Empire as we think of it and laying the groundwork for medieval European feudalism. It’s not the case that Charlemagne or Muhammad or Confucius thought out the full effect of their reforms on society for 1,000 or 2,000 years; it’s just that they did shape human civilization for the next 1,000 or 2,000 years. And if you removed any of them, history would have gone quite differently. Not necessarily because of their personal impact on winning this or that battle, but from the perspective of reshaping the institutions that then set the probabilities for these events…

Patrick: [00:09:33] If you think about the power of that, predicting the future is basically today plus progress in the same direction, what are different directions that stand out most to you as possible futures that might surprise people? If you take that 50-year hence example, what are things, trends, great founders, people that you’re watching that might affect the way the world looks in 50 years that are very different than how it looks today?

Samo: [00:09:58] I think there are some interesting surprises. Most of the Middle East will probably fail to properly industrialize, to develop any sort of high-tech energy, any sort of transition away from oil. However, an interesting exception to this might be the United Arab Emirates. People a few years ago were surprised that there was an Emirati mission to Mars. Now of course, this was mostly done by the Japanese space agency, yet a significant partnership existed with the UAE.

People also might be surprised to learn that they are building nuclear reactors for civilian use. They are also starting to manufacture all sorts of other equipment within the country. So the UAE might be a very successful, highly developed country 50 years from now, if the current monarchs and their successors continue to be relatively directed and well-governing, if they continue to agentically adapt to economic changes.

It’s the same kind of transformation that perhaps we saw with Singapore over 50 years after its initial independence under Lee Kuan Yew, where he sort of broke the mold in a whole variety of ways, and the usual advice for how a country should develop was ignored. And most of the countries that followed that advice didn’t develop, meanwhile Singapore did.

The other important one is that I think the European institutions will decay much more than people are even now assuming. I think that significant chunks of Europe might become somewhat impoverished. And the key reason for this is that there are very few live players, that is, exceptional people who can adjust to their circumstances, in any position of power in the European system. In the economic domain, for instance, there are few exceptional new companies.

There’s a reason that European tech stagnates so profoundly. Russia has more unicorns than Germany. And Russia is not a well-functioning economy, but for whatever reason, it’s easier to create a tech start-up in Russia than it is in Germany, to acquire a large market, a large user base, and so on. Unless someone actively refounds European governments, the EU supranational bureaucracy, or even something like a key industrial sector in Europe, Europe will continue to decay, a percentage point or two a year, where at first it’s imperceptible, and then 20, 30, 40 years on, it just seems a vastly different place. I think thinking of Europe as the formerly developed world will become common.

Patrick: [00:12:49] What about the United States?

Samo: [00:12:51] The United States has some similar problems to Europe, it just has them to a much lesser degree. There’s been some discussion recently of American dynamism, of reindustrialization, of things like the CHIPS Act, which is supposed to reshore certain kinds of manufacturing in the United States. Obviously, the U.S. has a relatively healthy start-up scene. Obviously, artificial intelligence is advancing most rapidly in the United States, here in the Bay Area.

But I think ultimately, core problems of the U.S. government have not been resolved. The U.S. government is less functional, is less competent, is less cost-effective than it was 40 or 50 years ago. Whatever we think of other social changes, it is hard to deny that a government-run project will just be run worse than it would have been in 1960 or 1940.

In addition to this, outside of artificial intelligence, software companies and tech companies have experienced a real slowdown. The reason that so much capital flowed into AI wasn’t just because AI was wonderful and exciting, it’s because there was nothing else around. There’s a real, genuine breakthrough with ChatGPT, but what else is happening? What happened to cryptocurrencies? What happened to software is eating the world? All of that? Those mantras created many, many new companies, but the economic value-add of those companies was smaller. U.S. economic growth, I think, is somewhat overstated but real. Meanwhile, in Europe, I think we are already seeing the beginnings of a contracting economy.

In some ways, Japan is a good example of where our future is going. The United States, Europe, a lot of the developed East Asian economies, and some semi-developed ones like China are all experiencing a massive demographic transition. And some of these things are very much exponential. For example, when your population starts aging, at first you might even have an increase in total population, since while fewer people are born, previous generations are still alive and working. Eventually, you start to see a decrease in population, and in one year the population shrinks by 100,000; a few years on, it might be 2 million, 3 million, 4 million, just because it’s already baked in so deeply.

These are all compounding effects. So ultimately, that demographic headwind is something that only the United States is outrunning a little bit, partly with the help of immigration, but mostly through rising productivity in the tech sector. And the tech sector itself is making a big bet on AI. If AI, the big economy-transforming bet, doesn’t work out, I think the U.S. will also slip into this kind of decaying state.

Patrick: [00:15:51] If you think about some of the key terms that you’ve mentioned, I want to pick on two: the concept of a great founder and the concept of a live player, which is a term I love. I’ve sort of adopted my own version of it in lots of conversations. Maybe define what both of those terms mean and help us understand their relative frequency. How many great founders have there ever been in history? How many live players are alive at any given time, in your estimation? Give us your definition of those two terms and how common they are.

Samo: [00:16:20] Okay. I think every time a founder creates a new organization, this is a singular act of social creation. Even if it’s something relatively boring, like a technology company or a nonprofit organization, there are peculiarities in who they pick as staff and in the decisions that they make.

Similarly, what would a great founder be? Well, they’re the creator of a key new social institution. One way to think of civilizations is that a civilization isn’t a single organism. It’s less a tree and more a forest, where many individual institutions can be replaced, and it is still recognizably the same civilization, the same ecological pattern.

Say, if you were to look at Western civilization and observe that, I don’t know, say, the Catholic Church is much less important now than it was 100 years ago. Just because society secularized doesn’t mean it’s a different civilization yet. When exactly does the forest transform into savanna, or something like that? We can have those discussions. But it is true that some institutions are vastly more influential than others.

So having said that, how many unique pieces of, let’s think of it as social technology, or unique civilization-defining institutions, are there per civilization? Well, I think that most civilizations have something like five to maybe eight unique things that they’re doing. As for the total number of distinct civilizations, in this macro-historical sense, for the human history that we know of, I would say it’s about 30 or so. Most of them, of course, are long gone. No one is very much interested in Sumerian civilization today except as a historical case study; it doesn’t impact us in a new, profound way, except perhaps, in some ways, influencing biblical myths of the great flood and so on.

So then let’s say that there are about 30 civilizations, some of them still relevant, some of them ancient history, each of them having probably something like 10 or so great founders. So I think for all of human history, if you were to chart the impact of these individuals — and again, I want to emphasize, sometimes it’s a small group of people. It might be an individual plus a few very close allies, or it might be a partnership of two lawgivers or anything like this. Counting such small human clusters, I think we’re talking about 500 people at most. 500 people at most for all of human history.

And then for the term live player: all great founders are live players, but not all live players are great founders. A live player is someone who is not operating off of an inherited script. An inherited script might be something like professionalism or political tradition. It can be anything. It can be a very successful script.

If I am a surgeon and I work exactly as I was trained, I’ll be doing a fairly good job. I can repeat this exact program, this recipe that I’ve been taught, and of course apply it in my domain with some creativity. And society basically functions on top of such roles: surgeons, engineers, plumbers, but also lawyers, politicians, priests.

A live player, though, is someone who can improvise on the spot, developing and creating new social roles. And one of the surest indicators that someone is a live player is the ability to jump multiple industries. So if you see someone who succeeded in one industry, then went to a totally unrelated field of activity and succeeded there as well, and then went again and succeeded again, it’s very unlikely that’s luck, and it’s very unlikely that they’re using the same recipe or the same insights for all three domains. And I think that’s the strongest evidence that some individuals can recreate patterns of behavior and improvise, essentially, in a way that’s very deep, very groundbreaking.

And I think the total number of live players in the world right now is probably closer to about 50,000 or so. So it’s actually still extremely rare. A fun historical example might be Arnold Schwarzenegger, who rebrands from being a weightlifting champion to being an actor to being a politician. While he was an actor starring in blockbuster movies, people said, “Oh, he’s not really an actor.” And as soon as he became a politician, people said, “He’s not really a politician. He’s just an actor.” So it’s that kind of jump to a different activity that demonstrates an aliveness and adaptability.

Patrick: [00:20:58] I have so many questions about both. Maybe starting with great founders, since there are so many fewer of them. What’s an example of somebody that we all might be inclined to call a great founder, I’m going to make one up, like Napoleon or something, who is, in fact, not one, and why? I just want to use an example or two to really drive home the point that potentially so few people have effectively driven what happens in the lives of the many billions of people that have lived through human history.

Samo: [00:21:27] For Napoleon, he won many battles. He was an exceptional general. But if he were to qualify as a great founder, and I’m not yet convinced that he does, it would mostly be through the military and legal reforms that he instituted. Now when it comes to military reforms, I think of the method of directing battles with the general staff and everything. That was somewhat influential, but I think it was overdetermined to have developed in that direction even without Napoleon.

The interesting thing, though, is that his code of law spread across Europe, and that could be argued to be profoundly influential. That was actually the moment when Europe stepped away from feudalism and adopted a very different legal framework; guilds, say, were outlawed in Latin Europe. It would be an exaggeration to say that markets were opened, but it would not be an exaggeration to say that people from all walks of life could suddenly enter positions in not just the French state, but all of these puppet republics and kingdoms that were set up.

Even in countries that were strictly opposed to Napoleon, which were only coerced into alliances, such as Austria and Prussia, some of these reforms were imitated because they were so administratively and politically successful. Now having said all of this, it’s still not clear Napoleon is a great founder. It might turn out that, in fact, the civil code and these reforms are much more important than we think.

However, it seems to be a fairly short chapter in European history, not directly related to the Industrial Revolution as such. It doesn’t seem like Napoleon’s reforms were particularly conducive to France becoming a great industrial power 50 or 80 or 90 years later…

Patrick: [00:36:36] Another thing you’ve written about that I find fascinating: a lot of people, if they think about history, might think of it as the march of hard technology, as in the example I used earlier, ideas building on ideas in a very hard math-and-science way. Your idea is that these social technologies, which are installed by these great founders, are actually upstream of hard technology and innovation. Can you describe that mechanism as you see it? And why you think the world works that way?

Samo: [00:37:05] I think that material technology, that is, the hard technology you described, and social technology are intensely mutually symbiotic. You can’t have one without the other. Take, say, a Bronze Age empire relying on the infrastructure of bringing copper from the Eastern Mediterranean and tin from the British Isles or Afghanistan, melting them down into weaponry. There’s a real technological and infrastructure base there. So Bronze Age empires rely on that.

Everything from that to modern chip fabs, where we need a planet’s worth of economies of scale so that, on a small island off the coast of China, we invest hundreds of billions of dollars into what, four factories that make the chips in every device we all have and carry with us daily. That’s crazy. That’s a crazy technological dependency for our society. But it goes the other way, too. The technology depends on the social.

I described global trade. Well, does global trade rest on the chip fabs themselves? No, not really. Maybe you could say that, oh, it rests on the hard technology of the U.S. Navy. But wait, what is the U.S. Navy? If the U.S. Navy is keeping the world’s oceans safe and navigable for trade, and the U.S. has supported a system of free international trade, et cetera, et cetera, it becomes very murky. It becomes very hard to arrive at this phenomenon of the technology itself.

Most importantly, if the technology itself, the material technology, was all that was driving forward human history, it would look much more like a ratchet. It wouldn’t look like this thing with fits and starts, this thing with a rise and fall of very advanced civilizations all the time. It wouldn’t have civilizations going down blind alleys. Consider 16th-century Japan, very adept at gunpowder warfare, very adept at using the gun. The gun is outlawed after Japan is unified.

Japanese guns then stagnate for the next 200 or 300 years. Look, if it were simply that you introduce the gun to a society and then modern warfare starts to develop, Japan wouldn’t have fallen behind the Western world. We often talk, again in the American mythological context, as if introducing personal firearms is a force for liberty. Yet in much of Asia, in the 17th and 18th centuries, the introduction of firearms actually empowered large, centralized militaries. The rifle in the hands of a Napoleonic soldier can be either a tool of despotism or a tool of liberation. It’s a mass exercise, not an individual exercise.

We’re discussing guns. What about the printing press? I mentioned Martin Luther earlier. Honestly, the first thing that was printed on the printing press wasn’t Martin Luther’s Bible. It was indulgences. So it’s first used as a financial mechanism to fund and strengthen the papacy. It only later comes to be used for Bibles, in German that is. And also, there were variants of the printing press introduced in Chinese society and in Korean society long before the printing press was invented in Europe. So a simplistic story where you say, “Oh, guns lead to personal liberation,” or “the printing press leads to information liberation,” doesn’t hold; these are possible routes you can go down with that technology, not guarantees.

And then finally, there are clear examples of technology advancing and then regressing. If it were purely the growth and development of a technical base with no social factor whatsoever, the Roman Empire would never have fallen, or if it did fall, we wouldn’t have lost technologies such as Roman concrete or Heron’s steam engine, a primitive steam engine used in Alexandria. We wouldn’t have lost significant chunks of mathematics that were forgotten for 1,000 years.

People understood quite well in 200 BC that the earth is round. This was known. Eratosthenes calculated the size of the earth. So there are all sorts of interesting examples where we can show that scientific knowledge advances and regresses and, more importantly, where we can show technology advances and then regresses. I feel like a lot of the advocates of a more hard-technology view want to have it both ways.

They want technology to be all-important, but they will acknowledge, if pressed, that technology is fragile. So it's like, wait, which is it? If technology is all-important, except that it's very fragile, then maybe we should study the societal causes of that fragility. And they do acknowledge these are societal causes, not material causes. That’s the way to think about it: social organization is a prerequisite for material technologies, and material technologies are a prerequisite for many kinds of social organization…

Patrick: [00:58:51] If you were to imagine someone extremely smart and thoughtful, whom you respect very deeply, who most disagrees with your worldview: curious if any actual person comes to mind, but also, just generically, what you think the most different version of a worldview from your own is that you find interesting for some reason?

Samo: [00:59:13] It’s an interesting and difficult question. I think that Peter Turchin has an interesting approach. He is an American, not quite a historian, more like a complex systems theorist. He has founded his own field, called cliodynamics. You could think of it as almost Hari Seldon-like.

Patrick: [00:59:31] It’s like psychohistory.

Samo: [00:59:33] Yes, trying to produce a macro mathematical model of history. He ends up finding what I think are patterns in the distribution of elites, applying an old sociological theory that's actually from the early 1900s; Vilfredo Pareto already spoke of it. It's the idea of elite overproduction, where elites in societies direct surplus to achieve certain goals. However, as people aspire to join the elites, eventually there are too many elites for the society to sustain, and the elites start fighting each other over who gets to stay elite. This is the cycle of violence and peace, and you can plot these cycles over a long period of time.

It’s not so much that I think he is wrong in some of the patterns that he observes; it's just that I think patterns hold until they don't. These are long patterns that break, and there is no clear statistical way to predict when they break. And the breaking of the patterns, I think, is sometimes due to the work of great founders. They might take a civilization that was in terminal decline, about to destroy and immolate itself in interminable civil wars, and reorient it toward a totally different political system that directs those energies outward. Or you might have the periphery of a declining empire that manages to break away from that empire, gain its independence, develop a whole new set of social norms, and become the core of a totally new civilization.

These are things that are not really well captured by these statistical models, and these are things that are not, I think, overdetermined. Now, ultimately, we can get into debates about free will, and you can say, “Oh, but every individual is a deterministic product of their society.” And sure, that's true. But if the fate of a society actually depends on a few individuals, then it is not possible to study those individuals sociologically.

For what’s happening inside my brain, it doesn’t make sense to use theories of elite overproduction. That becomes maybe a psychological or biological question. And you know what, the brains of the great founders that shaped our world are long decomposed, so we can’t actually study them. So we have to acknowledge the limits of our sociological knowledge. It's almost an event-horizon consideration, even if, theoretically, everything is fully deterministic.

So I would say that this leads me to my second disagreement. I think we lose far too much information about past societies to be able to develop such highly accurate quantitative models. We have decent quantitative information on, say, the economy of the last 100 years. But if you’ve ever tried to study even, say, the economics or politics of 18th-century Europe, you realize there's all kinds of data that is really hard to access. And if you go even further back, to the 14th century, it gets even more difficult.

So really, we are operating with a very sparse data set. So yes, Peter Turchin is very interesting. Or maybe Steven Pinker, who is rather well known: I completely disagree with his idea of a linear, ratcheting development of human progress over time. I think that progress is always bounded by the civilization you find yourself in, and these civilizations tend to be mortal. So there will be progress for a civilization until there's not, and then the civilization fails.

And perhaps there’s a new civilization that picks up at a more advanced level; perhaps the new civilization is actually more primitive than what came before. My view of history is not that it's exactly cyclical, not at all. There is further evolution. But the idea that we’ve been on a smoothly compounding curve of material and moral progress for the last 10,000 years, I think that can be easily disproven.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.