What We’re Reading (Week Ending 09 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 09 June 2024:

1. Google CEO Sundar Pichai on AI-powered search and the future of the web – Nilay Patel and Sundar Pichai

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, which is now rolling out to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we are seeing throughout the years, I think human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more…

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid], who runs Search, had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing…

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search.

Because in Search we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those is because of the inherent nature. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

2. Five Moat Myths (transcript here) – Robert Vinall

So we’re now on to Moat Myth number three, which is execution doesn’t matter. So there’s this idea, like the quote I mentioned earlier: “when a management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact.” So this is a bit of a callback to my presentation on management and it implies that as long as the moat is there, nothing can go wrong and vice versa – if the moat isn’t there, then nothing is basically going to go right. I really strongly disagree with that. Some of the best businesses, some of the best investments I’ve seen, are in companies which have really great execution, and that execution tends over time to lead to a moat. So I think people get it backwards a little bit. It’s not that the moat trumps execution, it’s that the moat is the output of execution…

…So this one won’t be a surprise to you. I kind of talked about it in the summary on the management presentation but there’s this idea that management doesn’t matter. And I have two examples. So one is a crook and this is the easiest argument to make. Anyone who says management doesn’t matter, all that counts is the business and the financials, well clearly a crook can destroy a business. There’s thousands of examples of that. One that springs to mind is the Indian brewer Kingfisher, where the guy effectively sells the business and buys an airline with it, which goes bust. His family went from being very wealthy to zero. So clearly management can destroy a business. I don’t think that’s a hard argument to make.

But on the positive side, clearly management can also be the difference between a great business and a failing business. And of course the most famous example of that ever is Berkshire Hathaway, the company we’re all here to see tomorrow. As many of you will know, Berkshire Hathaway was a failing textile mill and would have almost certainly gone bankrupt and is today I think one of the top 10 largest companies in the US, if not in the world. And that’s thanks to the investment decisions and the investing acumen of Warren Buffett. So clearly management does matter.

3. Getting materials out of the lab – Benjamin Reinhardt

Inventing a new material is the beginning of a long process.

Take carbon fiber composites. You’re almost certainly familiar with these, particularly if you’ve ridden a surprisingly light bike or seen its distinctive crosshatched weave pattern on a car dashboard or phone case.

Looking at carbon fiber composites through an electron microscope, you observe strands of carbon atoms arranged in a hexagonal pattern, woven into mats and layered with a resin such as epoxy. Carbon fiber’s tensile strength (the amount of load it can bear under tension before it breaks) is similar to steel, but the material is much less dense. So if you care about both weight and strength – as you do when you’re designing vehicles from a supercar to a Boeing 787 – carbon fiber is the material for you.

Modern materials like these carbon fiber composites are born in laboratories. Researchers at universities or industrial research labs do test tube–scale experiments, which can produce mind-blowing results. Carbon fiber first showed great promise in 1960 when Richard Millington patented a process to create fibers made of 99 percent carbon.

However, at lab scale, materials don’t do anything. Most people wouldn’t want a rope that is a centimeter long, or a battery that lasts three minutes. Leaving the lab requires bridging many orders of magnitude: from producing less than 0.001 kilograms (one gram) per day in a lab to more than 1,000 kilograms (one tonne) per day in a factory.

You can think of lab-scale materials as the most artisanal products in the world, painstakingly handcrafted by people with advanced degrees. Like any artisanal product, lab-scale materials are expensive. Trying to mass-produce these materials by simply increasing the number of fume hoods, test tubes, and pipette wielders would make them cost billions of dollars per kilogram. After a material is invented, we need to discover cheaper ways to produce it, since price per quantity has a dramatic effect on how much it can be used.

We call this process ‘scaling’, but to me that word is frustratingly vague. It bundles together many different problems that need to be solved to decrease cost and increase yield. The three key ones are:

Consistency. A lab can declare success if a small fraction of their material has an impressive property, but a factory needs that fraction to be much higher. A more consistent yield means less waste, and a lower price.

Standardization. Figuring out how to produce a material using conventional, industry-standard equipment avoids the cost of custom tools and enables you to make more material in an easily replicable way.

Streamlining. Moving a product through a continuous manufacturing process, as opposed to applying each of the manufacturing steps to a small, static batch, drastically reduces costs. Henry Ford did this with his moving assembly line, passing cars from worker to worker rather than moving workers from car to car…

…Building an industrial-scale factory requires money – a lot of it. To justify the expense to investors, you need to answer the questions, ‘What is your material good for?’, and more importantly, ‘Who will buy it?’

The answer is far from obvious, even for great materials: carbon fiber went through a decades-long journey before it became the star it is today. At first, manufacturers sold it as low-margin home insulation material because of its low thermal conductivity. It was key to several failed products, from turbine blades to a replacement for fiberglass. It eventually found its first iconic use case when Gay Brewer won the first annual Taiheiyo Club Masters using a golf club with a carbon fiber shaft.

The search for a cost-effective use case leaves many new materials in a chicken-and-egg situation: entrepreneurs and companies can’t justify the expense of scaling because there isn’t an obviously valuable application – but that application can’t emerge without a cost-effective material that can be experimented with.

Even applications that do seem obvious can take a long time to realize. In 1968, Rolls-Royce attempted to use carbon fiber in airplane engine fan blades, which failed spectacularly. The blades were extremely vulnerable to impacts – the whole project became such a boondoggle that it was a significant factor in the company’s collapse into receivership in 1971. Another 40 years would pass before the first majority–carbon fiber airplane, the Boeing 787, took flight…

…Scientists, mostly working in universities, have strong incentives to focus on novelty and one-off demonstrations because these can lead to publications and positive media attention. That work can be valuable, but the search for novelty alone creates mismatches with efforts to produce useful materials at scale. Essentially, the system of discovery sets up scaling for failure by creating materials without any consideration of their ability to scale.

The drive to focus on new discoveries over improving old ones’ capacity to scale, combined with the difficulty of mimicking real-world conditions in a lab, creates initial experiments that bear little resemblance to how people use a material in the real world.

Take the development of lithium-ion battery anodes. Researchers can demonstrate exciting leaps in power density from a new anode material using a half-cell reaction that provides functionally infinite lithium. But in a real battery with finite lithium, these anodes would reduce battery lifetimes to the point of unusability.

Similarly, carbon nanotubes have incredible tensile strength for their weight, but it’s hard to make them longer than a few centimeters. This length limit comes from carbon nanotubes’ tendency to tangle and become more susceptible to impurities as they get longer. Cable makers in the real world don’t just care about strength-to-weight ratios, but also the length over which the material maintains that strength. Yet scientists can take their headline of ‘superstrong carbon nanotubes’ and move on to the next project…

…Materials start-ups often struggle to raise venture capital financing. Venture isn’t a good fit for the capital costs and timescales of the material industry: the size, scale, and expectations of venture capital funds are well-suited to invest in software and pharmaceuticals whose revenues can skyrocket once they hit the market. Venture capital also prefers high-margin businesses that can get to market quickly, but materials often face a trade-off between margins and speed: while it’s faster and cheaper to innovate on one component of a larger production line or one material in an existing product, most of the margins come from new products…

…The long road from the lab to the material world might make the future of new materials seem bleak.

One reason for optimism is that new materials might already be on the horizon. There is a shockingly consistent timescale for materials to become useful beyond their initial niches. It took roughly 50 years between Roger Bacon’s discovery in 1958 and the flight of the first majority–carbon fiber airplane in 2009. The first lithium-ion battery was created by NASA in 1965, but most people didn’t start interacting with them until the mid 2000s. The properties of pure carbon nanotubes weren’t isolated until 1991. If there is indeed a 40- to 50-year timescale for lab-based materials to be useful in high-impact applications, we don’t need to despair about a carbon nanotube space elevator being overdue until somewhere around 2040.
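The 40- to 50-year arithmetic above is easy to tabulate. Here is a minimal sketch in Python, using only the dates cited in the passage (the mid-2000s adoption of lithium-ion batteries is rounded to 2005):

```python
# Rough check of the ~40-50 year lab-to-impact timescale described above.
# Dates are the ones cited in the passage; 2005 approximates "mid 2000s".
milestones = {
    "carbon fiber": (1958, 2009),         # Roger Bacon's discovery -> Boeing 787 flight
    "lithium-ion battery": (1965, 2005),  # NASA cell -> widespread consumer use
}

for material, (discovered, widespread) in milestones.items():
    print(f"{material}: {widespread - discovered} years from lab to high-impact use")

# Carbon nanotubes were isolated in 1991; on the same 40-50 year clock,
# high-impact applications would be "due" between 2031 and 2041.
print("carbon nanotube window:", 1991 + 40, "to", 1991 + 50)
```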

4. High-Yield Was Oxy. Private Credit Is Fentanyl – Greg Obenshain and Daniel Rasmussen

Private equity assets have increased sevenfold since 2002, with annual deal activity now averaging well over $500 billion per year. The average leveraged buyout is 65 percent debt-financed, creating a massive increase in demand for corporate debt financing.

Yet just as private equity fueled a massive increase in demand for corporate debt, banks sharply limited their exposure to the riskier parts of the corporate credit market. Not only had the banks found this type of lending to be unprofitable, but government regulators were warning that it posed a systemic risk to the economy.

The rise of private equity and limits to bank lending created a gaping hole in the market. Private credit funds have stepped in to fill the gap. This hot asset class grew from $37 billion in dry powder in 2004 to $109 billion in 2010, then to a whopping $261 billion in 2019, according to data from Preqin. There are currently 436 private credit funds raising money, up from 261 only five years ago. The majority of this capital is allocated to private credit funds specializing in direct lending and mezzanine debt, which focus almost exclusively on lending to private equity buyouts.

Institutional investors love this new asset class. In an era when investment-grade corporate bonds yield just over 3 percent — well below most institutions’ target rate of return — private credit funds are offering targeted high-single-digit to low-double-digit net returns. And not only are the current yields much higher, but the loans are going to fund private equity deals, which are the apple of investors’ eyes…

…Banks and government regulators have expressed concerns that this type of lending is a bad idea. Banks found the delinquency rates and deterioration in credit quality, especially of sub-investment-grade corporate debt, to have been unexpectedly high in both the 2000 and 2008 recessions and have reduced their share of corporate lending from about 40 percent in the 1990s to about 20 percent today. Regulators, too, learned from this experience, and have warned lenders that a leverage level in excess of 6x debt/EBITDA “raises concerns for most industries” and should be avoided. According to Pitchbook data, the majority of private equity deals exceed this dangerous threshold…
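To see how easily a typical buyout crosses that 6x guideline, here is a back-of-the-envelope sketch. The 65 percent debt share is the article’s figure; the 10x purchase multiple is a hypothetical chosen for illustration:

```python
# Hypothetical LBO arithmetic: does a typical deal breach the 6x debt/EBITDA guideline?
ebitda = 100.0            # target company EBITDA ($m), illustrative
purchase_multiple = 10.0  # assumed purchase price of 10x EBITDA (hypothetical)
debt_share = 0.65         # average buyout is ~65% debt-financed (per the article)

purchase_price = ebitda * purchase_multiple
debt = purchase_price * debt_share
leverage = debt / ebitda

print(f"debt/EBITDA = {leverage:.1f}x")          # 6.5x
print("breaches 6x guideline:", leverage > 6.0)  # True
```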

…Empirical research into lending markets has typically found that, beyond a certain point, higher-yielding loans tend not to lead to higher returns — in fact, the further lenders step out on the risk spectrum, the less they make as losses increase more than yields…

…The historical experience does not make a compelling case for private credit. Public business development companies are the original direct lenders, specializing in mezzanine and middle-market lending. BDCs are Securities and Exchange Commission–regulated and publicly traded companies that provide retail investors access to private market platforms. Many of the largest private credit firms have public BDCs that directly fund their lending. BDCs have offered 8 to 11 percent yield, or more, on their vehicles since 2004 — yet returned an average of 6.2 percent, according to the S&P BDC index. BDCs underperformed high-yield over the same 15 years, with significant drawdowns that came at the worst possible times…

…Central to every private credit marketing pitch is the idea that these high-yield loans have historically experienced about 30 percent fewer defaults than high-yield bonds, specifically highlighting the seemingly strong performance during the financial crisis…

…But Cambridge Associates has raised some pointed questions about whether default rates are really lower for private credit funds. The firm points out that comparing default rates on private credit to those on high-yield bonds isn’t an apples-to-apples comparison. A large percentage of private credit loans are renegotiated before maturity, meaning that private credit firms that advertise lower default rates are obfuscating the true risks of the asset class — material renegotiations that essentially “extend and pretend” loans that would otherwise default. Including these material renegotiations, private credit default rates look virtually identical to publicly rated single-B issuers…

… If this analysis is correct and private credit deals perform roughly in line with single-B-rated debt, then historical experience would suggest significant loss ratios in the next recession. According to Moody’s Investors Service, about 30 percent of B-rated issuers default in a typical recession (versus fewer than 5 percent of investment-grade issuers and only 12 percent of BB-rated issuers)…

…Private equity firms discovered that private credit funds represented an understanding, permissive set of lenders willing to offer debt packages so large and on such terrible terms that no bank would keep them on its balance sheet. If high-yield bonds were the OxyContin of private equity’s debt binge, private credit is its fentanyl. Rising deal prices, dividend recaps, and roll-up strategies are all bad behaviors fueled by private credit…

…Lender protections have been getting progressively weaker. After analyzing just how weak these covenants have become since the financial crisis, Moody’s recently adjusted its estimate of average recovery in the event of default from the historical average of 77 cents on the dollar to 61 cents…
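Combining the revised recovery estimate with the recession default rates quoted earlier gives a sense of what is at stake. A minimal sketch of the arithmetic (expected loss = default rate × (1 − recovery)), using only figures from the passages above:

```python
# Expected recession loss on B-rated debt under the old and revised recovery assumptions.
default_rate = 0.30  # ~30% of B-rated issuers default in a typical recession (Moody's)
recoveries = {
    "historical average (77c)": 0.77,
    "revised estimate (61c)": 0.61,
}

for label, recovery in recoveries.items():
    expected_loss = default_rate * (1 - recovery)
    print(f"{label}: expected loss ~{expected_loss:.1%}")
# historical average (77c): ~6.9%; revised estimate (61c): ~11.7%
```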

…Today private equity deals represent the riskiest and worst-quality loans in the market. Banks and regulators are growing increasingly worried. Yet massive investor interest in private credit has sent yields on this type of loan lower, rather than higher, as the deteriorating quality might predict. As yields have fallen, direct lenders have cooked up leveraged structures to bring their funds back to the magical return targets that investors demand. Currently, we suspect that a significant number of private equity deals are so leveraged that they can’t pay interest out of cash flow without increasing borrowing. Yet defaults have been limited because private credit funds are so desperate to deploy capital (and not acknowledge defaults). Massive inflows of capital have enabled private lenders to paper over problems with more debt and easier terms.

But that game can’t go on forever.

5. How Does the Stock Market Perform in an Election Year? – Nick Maggiulli

With the U.S. Presidential election set for a rematch in November, many investors are wondering how the U.S. stock market might perform in the months that follow. While predicting the future is never easy, using history as a guide can be useful for understanding how markets might react to a Biden or Trump victory…

…In the seven or so weeks following an election there can be lots of uncertainty around how the future might unfold. But, if we look at how markets actually perform after an election, they are typically pretty average. To start, let’s consider how U.S. stocks (i.e. the S&P 500) have performed from “election day” until the end of the year for each year since 1950. Note that when I say “election day” I mean from the Tuesday after the first Monday in November to year end, regardless of whether there was an actual election…
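That ‘election day’ convention (the Tuesday after the first Monday in November, election or not) is simple to pin down in code. A minimal sketch, not taken from the original article:

```python
import datetime

def pseudo_election_day(year: int) -> datetime.date:
    """Tuesday after the first Monday in November, per U.S. election law."""
    d = datetime.date(year, 11, 1)
    while d.weekday() != 0:  # advance to the first Monday (Monday == 0)
        d += datetime.timedelta(days=1)
    return d + datetime.timedelta(days=1)  # the following Tuesday

print(pseudo_election_day(2008))  # 2008-11-04 (an election year)
print(pseudo_election_day(2021))  # 2021-11-02 (no presidential election)
```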

…while stock performance has varied quite a bit since 1950, U.S. stocks tend to rise slightly following an election (or in the same time period during a non-election year). The biggest exceptions to this were in 2008, when markets declined by nearly 11% from election day to year end, and in 1998, when they increased by almost 10% as the DotCom bubble continued to inflate.

However, if we look at the average performance in election years versus non-election years, all these differences wash out. Plotting the average performance of the 18 election years and 56 non-election years in the data, we see basically no long-term difference in performance:

While the S&P 500 tends to perform worse (on average) in the first few days following the election, there seems to be no lasting impact on stocks through year end. In fact, the average return following election day through December 31 is 2.3% in an Election Year compared to 2.4% in a Non-election Year. In other words, their returns on average are basically the same. The median (50th percentile) return is similar as well with a 2.9% return in an Election Year compared to 2.4% during a Non-election year…

…When Trump won the 2016 election to almost everyone’s surprise, many believed that U.S. stocks would crash as a result. Jane Street, a prominent quantitative trading firm, was one of them. After finding a way to get the 2016 election results minutes before the rest of the mainstream media, Jane Street still ended up losing money because they got the market’s reaction wrong. As Michael Lewis recalls in Going Infinite:

What had been a three-hundred-million-dollar profit for Jane Street was now a three-hundred-million-dollar loss. It went from single most profitable to single worst trade in Jane Street history.

This illustrates how difficult it can be to predict the reaction of markets, even for the smartest people in the room…

…Overall, U.S. stocks performed better than average after both Trump and Biden’s election victories. However, with the market increasing by 4% in 2016 and 7% in 2020, Biden is the clear post-election winner.

However, if we look at how U.S. stocks performed throughout the rest of their presidency, it seems like Trump will be the clear winner when all is said and done…

…One of the reasons I love this chart is because it illustrates that U.S. stocks tend to rise regardless of which political party is in office. This suggests that the factors that impact stock prices have less to do with who’s in office than we might initially believe.

Some of you will see the chart above and point out how the only two negative periods occurred when Republican presidents were in office. That is technically correct. However, it is also true that these negative periods occurred immediately after Democratic presidencies. So who’s to blame? The Republicans? The Democrats? Neither? No one knows…

…While the outcome of the 2024 U.S. Presidential election remains uncertain, history suggests that the stock market is likely to perform similarly regardless of who wins. In the short term, markets may react positively or negatively to the election results, but those effects tend to even out over time…

…Ultimately, the key to navigating the uncertainty of an election year is to stay informed and avoid making emotional decisions based on short-term political events. The U.S. economy and stock market have made it through countless political cycles before and will make it through this one as well. So no matter who wins in November, history suggests that staying the course is often the best course of action. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 02 June 2024:

1. Pierre Andurand on a Shortage of Cocoa, Surging Copper and the Outlook for Oil – Tracy Alloway, Joe Weisenthal, and Pierre Andurand

Tracy (03:29):

So maybe to begin with, talk to us about how you got interested in cocoa because my understanding is you put a big trade on a long position earlier this year, it paid off massively. But this isn’t a sort of normal type of trade for you. This is something that was a little bit different.

Pierre (03:48):

Yes. Well, generally my background is more in energy trading, but I’ve traded quite a bit of metals as well, a little bit of agricultural products.

But I have one analyst who was very good and told me in January, ‘Pierre, you should look at cocoa.’ So I’m like ‘Okay, I don’t know anything about it, tell me.’

And he gave me a really good presentation that was really interesting. So then we really dug in really deep together to really understand the fundamental market. And basically we have a massive supply shortage this year.

I mean, we see production down 17% relative to last year. Most analysts out there have it down 11%, but that’s because they tend to be very conservative. They have lots of clients and they don’t want to worry the world. So they come with relatively conservative estimates.

But really tracking the exports from the main exporters, mainly Ivory Coast and Ghana, that together represent about 60% of [the] world’s production, we see basically Ivory Coast exports down 30% year to date, I mean season to date, and Ghana down 41%.

So just those two countries together since the start of the season, which is the 1st of October, are down 800,000 tons. And now we have what we call the mid-crop that is starting, but that represents only 20% of the balance of the season for West Africa.

And that’s not going to be enough to really change the deficit that we have this year. So we have a deficit of 800,000 tons from those two countries. And then looking at all the other countries we have, I think some are slightly positive, some are slightly negative, but basically we get to a deficit of 800,000 tons this year. And so that’s the first time we have such, you know, a decline in supply and that’s very hard to make it fit.

So at first you eat into current inventories until you run out of inventories and then the price can go anywhere.

So when we look at, okay, what makes the price of cocoa, right? It’s always about supply versus demand. But what has been capping the price between $2,500 a ton and $3,000 a ton, it was not demand because demand is extremely inelastic. I mean you can study that historically when you have a recession or not, when prices go up a lot or not. I mean demand generally goes up.

And that’s because the amount, in dollar terms, that people consume in cocoa is very small. I mean, I did a back of the envelope calculation the other day. I mean at basically $10,000 a ton, even though it’s four times the more recent historical prices, out of a market of 5 million tons of demand per year, you have like 8 billion people on the planet, so on average it means that people consume 1.7 grams of cocoa per day, which at $10,000 a ton represents 1.70 cents per day. Okay, that’s the average person. Many people eat nothing and a few eat 10 times that amount…
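Pierre’s per-capita figures are easy to verify. A minimal sketch of the same back-of-the-envelope arithmetic, using the numbers he gives:

```python
# Per-capita cocoa consumption and spend at $10,000/ton, per Pierre's numbers.
price_per_ton = 10_000          # USD
world_demand_tons = 5_000_000   # tons of cocoa demand per year
population = 8_000_000_000

grams_per_person_per_day = world_demand_tons * 1_000_000 / population / 365
cost_per_person_per_day = grams_per_person_per_day * price_per_ton / 1_000_000

print(f"{grams_per_person_per_day:.1f} g of cocoa per person per day")  # ~1.7 g
print(f"{cost_per_person_per_day * 100:.1f} cents per person per day")  # ~1.7 cents
```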

…Pierre (06:56):

But let’s say if you eat even one full tablet, so 125 grams a day every single day for the whole year, which is quite a lot, of a high content real cocoa because your milk and milk chocolate, you have less than 10% cocoa in it.

So the price can go up 10 times, your tablet is only going to double in price. It’s not going to react very much to the cocoa price. But if you take a high content, high chocolate content bar, like a tablet, 125 grams, that means that you probably have [a] maximum of 50 grams of cocoa beans equivalent in it. I mean it’s probably a lot less.

Then you get to an expense of $14 per month at current prices, which is an increase of $10 per month relative to when we had a more normal price. So it means that demand, like for more reasonable chocolate lovers, that increase in [the] price in cocoa just corresponds to $2 to $5 per month.
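The monthly numbers for the heavy chocolate eater check out as well. A minimal sketch, where the 50 grams of beans per daily 125-gram bar comes from the passage, the ‘current’ price of roughly $9,300 a ton is an assumption backed out of the $14-a-month figure, and the ‘normal’ price is the midpoint of the old $2,500-to-$3,000 range:

```python
# Monthly cocoa cost for someone eating a 125 g high-cocoa bar every day,
# assuming ~50 g of cocoa beans per bar (the upper bound given in the interview).
beans_per_day_g = 50
tons_per_month = beans_per_day_g * 30 / 1_000_000  # 0.0015 tons

current_price = 9_300  # USD/ton; assumption backed out of the $14/month figure
normal_price = 2_750   # USD/ton; midpoint of the old $2,500-3,000 trading range

print(f"at current prices: ${tons_per_month * current_price:.0f}/month")           # ~$14
print(f"at normal prices: ${tons_per_month * normal_price:.0f}/month")             # ~$4
print(f"increase: ~${tons_per_month * (current_price - normal_price):.0f}/month")  # ~$10
```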

So people are not going to eat less chocolate because of that. So it means that prices really are capped by the amount of supply you get. So if you can’t get enough supply, the price can go up a lot until we get more supply.

And when do we get more supply? Well, that in part [is] due to the weather, if you have much better weather then you might get more supply of cocoa beans the next year. But we have some issues that are also structural.

So when we look at the reasons for this large decline in production this year, I mean a lot of the reasons are actually structural. I mean we can look at four reasons why cocoa bean production has gone down a lot this year.

First I should give a little bit of background of why cocoa is so concentrated in West Africa. I mean it’s mainly because it requires very specific temperature, rainfall and humidity conditions. And that’s why most of the production is concentrated around a certain latitude — so 70% in West Africa and then you have 21% in mainly Latin America and 5% in Asia and Oceania.

So the main reasons why we lost a lot of production this year is number one weather. So some of it [is] due to El Nino, we had basically a period of time when it was too hot and a period of time when we had way too much rain.

Second is climate change. So climate change is every year shifting the weather patterns generally unfavorably for cocoa productions. Then you have two diseases, you have one called the Black Spot disease that comes from the fungus and it occurs mainly during the rainy season. It’s spread by rain splash, so basically it can’t grow when it’s dry.

And then you have a virus called the Swollen Shoot disease. It’s not a new disease. It was discovered in 1936. It’s transmitted by mealybugs, but it decreases cocoa yields a lot. So basically a tree that has that Swollen Shoot disease loses 25% yield within the first year and 50% within two years, and the tree dies within three to four years. And we’ve had actually a spread of that disease over the last year.

And then also we had less usage of fertilizers, mainly in the Ivory Coast, due to high fertilizer prices and also shortages due to the Russian invasion of Ukraine. So everything is linked. So some of it might be solved if we get better weather. I mean for next year we should have La Nina and not El Nino, so that should help at the margin.

But we still have issues with climate change. We still have issues with Black Spot disease and Swollen Shoot disease and there’s no indication that we get more usage of fertilizers in [the] Ivory Coast because actually the farmers are still getting relatively low prices and they’re still struggling to make ends meet. So a lot of those supply issues are actually structural…

…Joe (12:09):

So what did your analyst see? Or how was your analyst able to see something in the supply and demand situation that he felt, and you felt, was not being identified by the analysts who cover this closely?

Pierre (12:23):

I think it’s mainly an understanding of how much prices have to move to balance the market. You know, sometimes people can trade that market for like 20 years. They’ve been used to a range of prices and they believe, okay, the top of the range is the high price for example.

But they don’t really ask themselves what makes that price, right? And sometimes taking a step back can help. I mean what makes the price is mainly the fact that in the past you would have the supply response if prices were going up. But if now you don’t get the supply response, or the supply response takes four or five years, then you need to have a demand response.

And a lot of people look at prices in nominal terms. So you hear people saying ‘Oh, we are at all-time high prices in cocoa,’ but that’s because they look at prices in nominal terms. [The] previous high in 1977 was $5,500-something a ton in 1977 dollars, which is equivalent to $28,000 a ton in today’s dollars.

So we are still very far from previous highs. And so you have to look at a bit more history and understand in the past how prices reacted to a shortage, how long it took for the shortage to actually resolve itself, and what’s different today.

So there’s a ratio that we look at that most people look at, it’s actually the inventory to grindings ratio. So it’s a measure of inventory to demand, what we call grinding is basically industrial companies that take the cocoa beans and they want to make chocolate with it. So it’s a process and some of them make the end product chocolate directly. Some of them sell back the product to other chocolate makers.

And so basically a typical grinder would take cocoa beans and make cocoa butter and powder with it. And the prices of both those elements also went up even more than cocoa beans, which means that actually we probably had some destocking everywhere in the chain.

So it looks like demand, when we look at the chocolate makers, the end demand for chocolate didn’t go down at all, it looks to be flat on the year. Grindings look to be down three, three and a half percent this year, despite the fact that the end demand is the same in volume, which means that they’ve been destocking cocoa beans actually.

And so we had destocking everywhere — at the end chocolate level, at the cocoa beans, at the cocoa butter and cocoa powder level. So we had this destocking everywhere on the chain and now we have the largest deficit ever on top of two previous years of deficit. And it looks like next year we will have a deficit.

So we’re in a situation where we might actually run out of inventories completely. I mean this year we think we will end up with an inventory to grinding ratio — so inventory at the end of the season — of 21%. For the last 10 years we’ve been between 35% and 40% roughly. At the previous peak in 1977 we were at 19% and that’s what drove us to $28,000 a ton in today’s dollars.

If we have another deficit next year, then we might go down to 13%. So I don’t think it’s actually possible. That’s when you really have a real shortage of cocoa beans, you can’t get it, and that’s when the price can really explode. And so understanding that you have to slow down demand and we know that demand can’t really be slowed.
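A minimal sketch of that inventory-to-grindings arithmetic: the 5-million-ton grindings figure is borrowed from the demand number quoted earlier, and the next-year deficit is a hypothetical sized to land on Pierre’s 13%:

```python
# Inventory-to-grindings ratio: end-of-season cocoa inventory / annual grindings.
grindings_tons = 5_000_000  # approx. annual demand quoted earlier in the interview
ratio_now = 0.21            # expected end-of-season ratio this year

inventory_now = ratio_now * grindings_tons  # ~1.05m tons
next_year_deficit = 400_000                 # hypothetical deficit implying Pierre's 13%

ratio_next = (inventory_now - next_year_deficit) / grindings_tons
print(f"this season: {ratio_now:.0%} -> after another deficit: {ratio_next:.0%}")  # 21% -> 13%
```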

So that’s when you can have an explosion [in price]. And remember that these commodity futures are actually physically settled. So if somebody wants to take delivery, they have to converge with the price of the physical. If you have no physical and somebody wants to take delivery, the price can go anywhere.

So it’s a dangerous commodity to short, right? If you have no physical against it. And actually sometimes we read news that the funds have been pushing cocoa prices. It’s actually completely untrue because the funds have been selling since February. They actually went from a length of 175,000 lots, so that’s 1.75 million tons of cocoa length, I think it was around September last year on average, or a bit earlier, to 28,000 lots, or 280,000 tons, at the moment.

So they sold more than 80% of their length actually. And the people who’ve been buying the futures from the funds, it’s producers because they’re producing a lot less than they expected.

So what has been happening in the cocoa market is that you had a reduction of what we call the open interest, where both the longs would reduce their length and the shorts would reduce their shorts. And then we get into a market where you have less liquidity because you have less exposure, you have fewer longs and fewer shorts, and then the volatility increases.

So in the past, when people were comfortable, let’s say, having a 100-lot position, now, because it moves more than 10 times more than in the past, they’re going to have like a 10-lot position, right? So due to the fact that we had a massive move and we have a massive deficit, everybody’s reducing their positions, and because of the increased volatility, we have less activity. And that’s what makes the market more volatile as well.

2. The World’s Most Secret Book About Making a Fortune in Mining – Swen Lorenz

For years, I have been trying to find a copy of an ultra-rare book published in 2008.

It told the inside story of a few mining entrepreneurs who built a company from zero and sold it for a staggering USD 2,500,000,000 (2.5 BILLION) in cash just 674 days later. That’s a quick fortune earned, if ever there was one!

The company was listed on the stock market, and public investors were taken along for some of the ride. In 2006/07, this was the single best British company to own stock of.

Somehow, though, the company’s insiders seem to have regretted publishing their detailed account. The book strangely disappeared from the market shortly after it was published. Curiously, there is ample evidence that an active effort was made to erase the book from history…

…The book in question is “UraMin – A team enriched. How to build a junior uranium mining company“.

Junior mines are companies that are still looking for resources, rather than producing the resource. As most of my readers will know, they are among the most speculative ventures one can invest in. About 99% of them end up losing their investors’ entire investment. The remaining 1% regularly end up making investors 10, 20, or even 100 times their money.

UraMin was primarily the brainchild of Stephen Dattels, a Canadian mining entrepreneur with a decade-long track record. The book describes the genesis of UraMin from his own perspective and that of his two partners, Ian Stalker and Neil Herbert. It was, for all intents and purposes, a real insiders’ account of the incredible success story.

UraMin produced oodles of capital gains. It was a lucrative investment not just for its pre-IPO investors, but also for those who bought into it through shares acquired on the open market post-IPO…

…It’s no surprise that the book starts by describing just how “Down and Out” the market for uranium-related investments was at the time.

At the turn of 2004/05, you would have been hard-pressed to find any investors interested in uranium. The price for the metal had been in a 26-year (!) bear market. From its 1977 peak, it had been downhill ever since. There were barely any publicly listed companies operating in the uranium industry.

You would have struggled to find anyone who even understood what the metal was used for, and how it was used.

Or as Ian Stalker, the CEO of UraMin, is quoted in the book:

“A meeting with potential investors could literally take hours. … First, it required a full explanation of what uranium is used for (it isn’t used for ‘bombs’), a run-through of the fuel cycle (enrichment and so on), the safety record of nuclear reactors, long-term disposal issues and the balance of supply and demand. We were lucky if we managed to talk for 10 minutes about the company.”

It was not an opportunity that the mass of investors would have jumped at when it was first presented.

However, all the clues were there. At the end of 2004/05, three crucial developments had already taken place, all pointing towards an imminent reversal of fortunes:
  • The price of uranium had started to creep up. It went from USD 10/lb in early 2003 to USD 20/lb by the end of the following year (which was still far below the late-1970s high of USD 115/lb).
  • Existing stockpiles of the metal, which had soared during the 1990s because of a decommissioning of Soviet nuclear missiles, had dwindled to virtually zero. The oversupply that had depressed the price for so long was gone.
  • A soaring oil price, which at the time was up more than 10 times compared to its early-2000s low, provided increasing demand for cheap nuclear energy. It was only a matter of time before investment would flow towards the much cheaper source of energy.

Subsequently, the uranium price went through the roof…

…Put more bluntly, there are occasions when a management team has to concede that everyone is better off if it puts the company up for sale – which is difficult because it usually leads to the entire management team and board losing their jobs!

Also, who wants to leave a party when things are the most fun? Making the decision to call it quits and focus on maximising a buyout price for a company is an extraordinarily hard decision to take. However, it is quite regularly the one decision that a board really should have the guts and the sense of realism to take.

I wasn’t surprised to read that Dattels and his colleagues had that rare quality of knowing when to quit:

“The trend towards a smaller group of larger uranium companies had significant repercussions for UraMin, something that its management realised early on. “The sector was not a large one – it had already seen several significant mergers and more were rumoured,” notes Neil Herbert. “Despite the rapid progress we had made, we were in danger of becoming a relatively small operator.

On 19 February 2007, Reuters reported that UraMin was planning a strategic review of its assets in light of the recent consolidation of the sector.

In effect, analysts believed, the company had just put itself up for sale.”

Companies can put themselves up for sale by hiring an investment bank and making a public announcement, or they can de facto put themselves up for sale by feeding information into their industry’s rumour mill.

Steve Dattels decided that “we should take the initiative and evaluate the merger possibilities rather than wait for the telephone call.”

UraMin hired advisors and went through an official process of allowing prospective acquirers access to its internal information.

Following the process of inviting bids, the company came to an agreement with the French nuclear power company AREVA. In June 2007, UraMin’s management team agreed to a takeover offer that valued the company at USD 2.5bn. The entire purchase price was payable in cash.

Investors who had bought in at the bottom of GBp 50 per share made 8 times their money within just 12 months.

One of the earliest institutional backers of the venture reportedly made 22 times their money in just 24 months.

3. Why Utilities Are Lighting Up the Stock Market – Jason Zweig

As Bespoke Investment Group, a research firm, pointed out this week, three of this year’s five best-performing stocks in the S&P 500 are utilities: Vistra, Constellation Energy and NRG Energy. Vistra, up 143%, has even outperformed the king of AI itself, Nvidia; Constellation, up 85%, is barely behind it…

…The business of providing electricity hasn’t grown in the past couple of decades as conservation and more-efficient technology have reduced consumption. The U.S. generated slightly less electricity in 2021 than it had in 2007, according to the federal Energy Information Administration—even though the economy grew more than 3% annually over that period.

Now, however, the need for energy is finally expanding. On their April 23 earnings-announcement call, executives at NextEra estimated that electricity demand from data centers alone would grow 15% a year through the end of the decade.

AI isn’t the only reason utilities have heated up so fast. The rapid increase in demand for electricity nationwide comes from three main sources, says Maria Pope, CEO of Portland General Electric, Oregon’s biggest utility.

One is the revival of domestic manufacturing after decades of moving offshore. Another is the boom in semiconductor production, boosted by government support. But the expansion of data centers, “driven by the insatiable appetite of AI,” is the fastest-growing source of industrial demand, says Pope.

Jay Rhame, chief executive of Reaves Asset Management, which manages about $3 billion in utility stocks, thinks the only historical parallel is the boom in electricity generation that followed the widespread adoption of air conditioning in the 1960s and 1970s.

4. Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI – Nilay Patel and Shantanu Narayen

If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?

I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same amount of profound implication for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI in terms of what we do at the interface layer, which is what people use to accomplish something; what we’re doing with foundation models; and what models are we creating for ourselves that are the underlying brain of the things that we are attempting to do, and what’s the data? I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.

The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.

For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.

Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story,” and so, on the creative side, whether you’re ideating, whether you’re trying to take some picture and fix it but you don’t quite know how to do it.

When people have looked at things like Generative Fill, their jaws drop. What’s amazing to us is when, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive…

I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.

You’re absolutely right in that, as the internet has evolved, there’s what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, I think there are a couple of key hypotheses that when they do, they become lasting platforms. The first is transparency of optics of what they are doing with that data and how they’re using that data. What’s the monetization model, and how are they sharing whatever content is being distributed through their sites with the people who are making those platforms incredibly successful?

I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.

How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?

Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe counsel] and others about what we are doing with content credentials and what we are doing with the FAIR Act. If you look at Photoshop, we’re also taking a very thoughtful approach: when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of confirming that you have access to that IP and a license to that IP in order to do that.

So I can interpret your question in one of two ways. One is: how do we look at all of the different image generators that have emerged? In that case, we are both creating our own image generator and, as we showed at the NAB Show, supporting third-party models. It was really critical for us to sequence this by first creating our own image model, both because we wanted one that was designed to be commercially safe and because it respected the rights of the creative community, which we have to champion. But if others have decided that they are going to use a different model but want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.

And so I interpret your question in those two ways. We’re taking responsibility in terms of how, when we provide something ourselves, we make sure that we recognize IP, because it is important, and it’s people’s IP. I think at some point the courts will opine on this, but we’ve taken a designed-to-be-commercially-safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of your products? A lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s why we are supporting third-party models.

It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. The thing that is burbling along underneath this entire revolution is that, yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that, and I don’t want to get into the weeds of it. I’m just wondering, for you as the CEO of Adobe, what is your level of risk? How risky do you think this is right now for your company?

I think the approach that we’ve taken has shown just tremendous leadership by saying: look at our own content. We have a stock business, and we have rights to train models on that stock content. We have Behance, the social site where creative professionals share their images. While that’s owned by Adobe, we did not train our Firefly image models on it, because that was not the agreement we had with the people who post there.

I think we’ve taken a very responsible approach, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models, where we allow a customer in the media business or the CPG business to say, “We will upload our content to you, Adobe, and you will create a custom model for us that only we can use, built on what we have rights to.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.

It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?

To your point, I think something like 50 percent of the world’s population is going to the polls over a 12-month period, including the US and other major democracies in the world. And so we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put a digital signature on it recording the provenance of that content? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the content distribution space, in the PC space, all saying we need to do this. We’ve also now, I think, made the shift to asking: how do you visually identify that there is a watermark or a digital signature showing where the content came from?

I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—
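
The mechanism Adobe’s CEO describes can be made concrete with a toy sketch: bind a claim about an asset’s origin to the asset bytes with a digital signature, so any later tampering is detectable. This is purely illustrative; the real Content Credentials (C2PA) specification defines its own manifest format, hashing, and certificate chains, and every name below is invented:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()  # stand-in for a creator's signing key

image_bytes = b"...pixels..."       # placeholder asset
claim = json.dumps({                # hypothetical provenance claim
    "creator": "Jane Doe",
    "tool": "Photoshop",
    "edits": ["generative_fill"],
}).encode()

signature = key.sign(image_bytes + claim)  # bind the claim to the asset bytes

# Anyone holding the creator's public key can check the provenance:
try:
    key.public_key().verify(signature, image_bytes + claim)
    print("provenance intact")
except InvalidSignature:
    print("asset or claim was altered after signing")
```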

The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you, what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle are not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?

I actually acknowledge that the last step in this process is getting the consumer to care, and to care [about] pieces of information that are important. To your point, a couple of your examples are in fun and in jest, everybody knows it, and it doesn’t matter; others are real pieces of information. But there is precedent for this. When we all started transacting business on the internet, we said we want to see that HTTPS. We want to know that my credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears…

Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere, all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI hallucinates. The future of the PDF seems to be as training data for an AI. And the thing that makes that really work is that the AIs have to be rock-solid reliable. Do you think we’re there yet?

It’s getting better, but no. Consider the fact that we even use the word “hallucinate.” The incredible thing about technology right now is that we use these really creative words and they become part of the lexicon. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different, because when you’re doing a summary and you can point back to the links in that document from which that information was gleaned, there are ways in which you provide the right checks and balances. So this is not about creation; you’re summarizing, you’re trying to provide insight, and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the hallucination problem that we have in images. And so in PDF, while we’re doing fundamental research in all of that, the problems that we’re trying to solve immediately are summarization: being able to use that content and then create a presentation, or use it in an email, or use it in a campaign. For those use cases, the technology is fairly advanced.
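
As a minimal sketch of that “point back to the source” idea (not Adobe’s implementation), each summary sentence can be linked to the extracted passage that best supports it, so a reader can verify the claim:

```python
def best_support(sentence: str, passages: list[str]) -> int:
    """Index of the passage sharing the most words with `sentence`."""
    words = set(sentence.lower().split())
    overlaps = [len(words & set(p.lower().split())) for p in passages]
    return overlaps.index(max(overlaps))

passages = [  # stand-ins for chunks extracted from a PDF
    "revenue grew 12% year over year, driven by subscriptions.",
    "operating margin declined two points due to data-center costs.",
]
summary = ["Subscription revenue grew 12% year over year."]

for s in summary:
    print(f"{s} [source: passage {best_support(s, passages)}]")
```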

There’s a thing I think about all the time. An AI researcher told me this a few years ago: if you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. Statistically, that is the vast majority of all the documents on the internet. When you think about how much machine-generated documentation any business makes, AI just amps it up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a point where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?

Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit. So you’re right that there are a lot of documents that are transient and perhaps don’t have that fundamental value, but societies and cultures are also represented in PDF documents, and that part is important. As for your other question, about where you eliminate people from a process entirely and let your computer talk to my computer to figure out a deal: you are going to see that for things that don’t matter, and the judgment will always be about which ones matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation, I think, in that particular respect. I think you’re right.

The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted away. Maybe kids don’t even know what file systems are, but they still know what PDFs are. Then you make the next turn, and this is just to bring things back to where we started: you say AI is a paradigm shift, and now you’re just going to talk to a chatbot, and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you an example: most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbot represent, “Okay, we need yet another new application model”?

I think you are going to see some fundamental innovation. The way I would answer that question is: first, abstracting the entire world’s information. It doesn’t matter whether it was in a file on your machine or somewhere on the internet; you have access to it and, through search, can find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role, though, for permanence in that. And the role of PDF in that permanence aspect, whatever you’re trying to share or store or act on or conduct business with, will remain important. So I think we’re going to innovate in both those spaces: how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, what does that permanence look like, and what’s the application of that permanence, whether it’s for me alone or for a conversation that you and I had, which records it for posterity?

I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.

Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?

Both, and on mobile devices. Look at a product like Lightroom. You talked about Denoise in Lightroom earlier. When Lightroom works exactly the same across all these surfaces, there’s a power in people saying, “Oh my God, it’s exactly the same.” So I think the boundaries between what’s on your personal computer, what’s on a mobile device, and what’s in the cloud will certainly blur, because you don’t want to be tethered to a device or a computer to get access to whatever you want. We’ve already started to see that power, and I think it’ll increase because you can just describe what you want. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.

Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?

It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talk to Nvidia and AMD and Qualcomm. A lot of the training, that’s the focus that’s happening in the cloud; that’s the infrastructure. But the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, even today, with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or your PC. Because otherwise, all that compute power that we have in our hands or on our desktops is really not being used. I think the models are more nascent in terms of how you can download them and offload that processing, but that’s definitely going to happen, without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how the power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that, because we’re still learning and the models are still on the server.
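
A rough sanity check on the on-device claim (our arithmetic, not the CEO’s): a model’s download and memory footprint is roughly parameter count times bytes per parameter, and at the quantizations commonly used on phones, a billion-parameter model fits easily:

```python
# Back-of-envelope memory footprint for a 1B-parameter on-device model.
params = 1_000_000_000

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{label}: ~{gb:.2f} GB")
# fp32: ~3.73 GB, fp16: ~1.86 GB, int4: ~0.47 GB -- the fp16 and int4
# variants fit comfortably in a modern phone's memory.
```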

5. The S&P 500 vs. the U.S. Economy – Ben Carlson

The S&P 500 is a big part of the U.S. economy but there are plenty of differences between the stock market and the economy.

For instance, the technology sector has an outsized impact on S&P 500 earnings growth over time:…

…Depending on the time frame, the tech sector can make up the majority of both earnings gains and losses. The same is true of sales:…

…The BEA estimates tech’s contribution to GDP to be 10%. That’s still close to $3 trillion, but the economy is far more diversified and spread out than the stock market.

A decent chunk of sales for S&P 500 companies also comes from outside our borders:…

…The S&P 500 is a U.S. index, but it is made up of global corporations…

…S&P 500 companies are enormous but the majority of firms with $100 million or more in sales are private companies:…

…S&P 500 companies account for roughly 1 in 5 jobs in the United States:…

…But these corporations are insanely efficient and profitable, accounting for half of the profits in America:


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  We currently have a vested interest in Adobe, Apple, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 May 2024:

1. How I Think About Debt – Morgan Housel

Japan has 140 businesses that are at least 500 years old. A few claim to have been operating continuously for more than 1,000 years…

…These ultra-durable businesses are called “shinise,” and studies of them show they tend to share a common characteristic: they hold tons of cash, and no debt. That’s part of how they endure centuries of constant calamities…

…I think this is the most practical way to think about debt: As debt increases, you narrow the range of outcomes you can endure in life…

…I hope to be around for another 50 years. What are the odds that during those 50 years I will experience one or more of the following: Wars, recessions, terrorist attacks, pandemics, bad political decisions, family emergencies, unforeseen health crises, career transitions, wayward children, and other mishaps?

One-hundred percent. The odds are 100%.

When you think of it like that, you take debt’s narrowing of survivable outcomes seriously…

…I’m not an anti-debt zealot. There’s a time and place, and used responsibly it’s a wonderful tool.

But once you view debt as narrowing what you can endure in a volatile world, you start to see it as a constraint on the asset that matters most: having options and flexibility.

2. Economists Aren’t the Best at Predicting the Economy – Tyler Cowen

Out of curiosity, I recently cracked open The American Economy in Transition, published in 1980, edited by Martin Feldstein and including contributions from other Nobel-winning economists, successful business leaders and notable public servants. Though most of the essays get it wrong, I found the book oddly reassuring…

…For instance, many authors in the book are focused on capital outflow as a potential problem for the US economy. Today, of course, the more common concern is a possible excess inflow of foreign capital, combined with a trade deficit in goods and services. Another concern cited in the book is European economies catching up to the US. Again, that did not happen: The US has opened up its economic lead. Energy is also a major concern in the book, not surprisingly, given the price shocks of the 1970s. No one anticipated that the US would end up the major energy exporter that it is today.

Then there is the rise of China as a major economic rival, which is not foreseen — in fact, China is not even in the book’s index. Neither climate change nor global warming is mentioned. Financial crises are also given short shrift, as the US had not had a major one since the Great Depression. In 1980 the US financial sector simply was not that large, and the general consensus was that income inequality was holding constant. Nor do the economics of pandemics receive any attention.

So you may see why the book stoked my fears that today’s economists and analysts do not have a good handle on America’s imminent problems.

As for opportunities, as opposed to risks: The book contains no speculation about the pending collapse of the Soviet Union. Nor are the internet, crypto or artificial intelligence topics of discussion…

…Then there are the things that haven’t changed much over the decades. Peter G. Peterson, who helped to found the fiscally conservative Peterson Institute, has an essay in the book worrying about the federal deficit.

The piece that most resonated with me, contrary to expectation, is by Paul Samuelson. Samuelson is the one contributor who realizes he doesn’t understand what is going on in the world. He starts by mentioning how forecasts in 1929 and 1945 failed to see the future very clearly. He hopes that the 1980 contributions will be luckier. “The facts tell their own story,” he writes, “but it is not the simple story that so many want to hear.”

Perhaps true reassurance comes from knowing that, all things considered, the US economy has done quite well since 1980.

3. The Cazique of Poyais: a Real Estate illusion in the new world – Javier Pérez Álvarez

After fighting in the South American wars of independence, Gregor MacGregor returned home declaring himself Cazique (a kind of tribal prince) of an imaginary Central American country called “Poyais.” His utopian paradise promised unparalleled wealth and opportunities, attracting hundreds of investors who, unfortunately, ended up losing not only their fortunes but also their lives…

…Gregor MacGregor, known as the Prince of Poyais, Cazique, and His Serene Highness, was a Scottish soldier who became one of the most notorious conmen of his time. He was born on December 24, 1786, into the MacGregor Clan, a family with a strong military tradition…

…At sixteen, Gregor joined the British Army just as the Napoleonic Wars were breaking out. Serving in the 57th Foot Regiment, he quickly rose to the rank of lieutenant within a year.

In June 1805, at the age of nineteen, he married Maria Bowater, a wealthy and well-connected woman, the daughter of a Royal Navy admiral. This marriage secured his social position, and he bought the rank of captain, avoiding the traditional path of promotion that would have required seven years of hard work…

…After his wife’s death, he faced financial difficulties, and his social aspirations crumbled. It was then that his interests turned to Latin America, inspired by the Venezuelan revolutionary general Francisco de Miranda.

Selling his property in Scotland, MacGregor sailed to Venezuela in 1812, presenting himself as “Sir Gregor” and offering his services to Miranda, who appointed him colonel and commander of a cavalry battalion. Despite some initial successes, his ambition drove him to rapidly ascend the ranks, achieving the position of General of Division in the armies of Venezuela and New Granada by the age of thirty…

…Then in 1820, MacGregor came across the swampy, inhospitable coast of Nicaragua, known as the Mosquito Coast. Here he persuaded the leader of the indigenous people to give him land to create a colony. A dream of empire began to take shape.

The self-appointed Prince of Poyais reappeared in London in 1821. He was seeking investors and colonists looking for a new opportunity across the Atlantic in a new world full of possibilities…

…He commissioned a book, illustrated with engravings, describing the country with “total” accuracy…

…Taking advantage of his past as a British Army officer, he managed to gain the sympathy of high society. Nothing has ever been more important than good marketing and PR. The Crown recognized him as a foreign dignitary and, to foster relations between the two countries, honored him with the title of Sir (finally). At that time, just as it happens now, brokers didn’t care what kind of securities they sold as long as they made money from them. Thus, in 1822, Sir Gregor managed to place “Poyais State bonds for stabilization” worth £200,000. These bonds were traded alongside securities from other already recognized states, such as Colombia, which had gained its independence in 1810.

After this, MacGregor took it a step further. He opened offices throughout Great Britain that sold land to colonists who wanted to start a new life in Poyais…

…Many were convinced. Hundreds of enthusiastic colonists spent their savings buying land in Poyais and the corresponding passage overseas…

…In 1822, the first emigrants arrived on the country’s shores in two ships. At the location where the capital, described in detail in the book as sitting by the “Black River,” should have been, there was nothing. The place the colonists had arrived at was known as the “Mosquito Coast.” The natives themselves avoided it due to its terrible climate…

…Nevertheless, in a way typical of human psychology, the colonists’ discontent turned against the ship’s captain who had brought them, for it was he who was there. Surely he had somehow made a mistake, disembarking them in that godforsaken place and immediately setting sail. No one thought to doubt Sir Gregor. The few natives there could not care for the colonists. Many fell ill and died.

The survivors returned to Great Britain in the autumn of 1823. Surprisingly, no scandal occurred. The emigrants continued to believe in the word of the Prince of Poyais…

…Naturally, all those who invested their money in Poyais bonds lost it. However, it must be said that the returns on these bonds were in line with other investments made in Latin America during those years. On many occasions, the solvency of real states was no different from that of fictional countries like Poyais.

4. 4 Economic Charts That Might Surprise You – Ben Carlson

Large corporations aren’t feeling inflation’s impact. Consumers hate inflation. Small businesses aren’t a fan. Politicians don’t like it much either.

But large corporations?

They seem just fine when it comes to profit margins…

…And the explanation:

Corporations are paying higher wages and input costs but they simply raised prices to combat those higher costs.

Corporate America puts profit first, second, and third, which is one of the reasons the stock market is so resilient.

If it seems like corporations always win, it’s basically true. They know how to adapt regardless of the macro environment…

…When Russia invaded Ukraine in early 2022, the price of oil quickly shot up from around $90/barrel to $120/barrel.

Energy experts and macro tourists alike came out with $200/barrel predictions. It made sense at the time!

That war still rages on, along with an additional conflict in the Middle East. In the past, this would have sent oil prices skyrocketing. The oil crisis was a big reason we had stagflation in the 1970s.

Not this time around. Oil prices are back down to $80/barrel. On an inflation-adjusted basis, oil prices are essentially flat since 2019 just before the pandemic…

…The U.S. becoming the biggest oil producer in the world is one of the most important macro developments of the past 20-30 years, yet you rarely hear about it.

This is a huge deal!

5. What It’s Like to Be a Regional Fed President On the Road – Tracy Alloway, Joe Weisenthal, Tom Barkin, and many others

Tracy (11:02):

What’s the biggest constraint on your growth right now? Is it getting the materials? Is it availability of contractors? What’s stopping you from selling even more?

Albert (11:14):

I guess for us it’s going to be financial institutions understanding our business more. The supply chain issue, for us, is okay, as we have access to different suppliers; it’s more about having the backing of a financial institution.

Tracy (11:37):

So credit?

Carport Central employee (11:38):

So credit. Luckily, the turnaround time in our industry is pretty quick, but because of the fabrication time and the time schedules for commercial projects, they are not able to pay us within, let’s say, maybe 90 days.

And our credit terms are, say, net 30, net 45. So basically we have to have a reserve of cash. You know, it’ll come in, but it’s just a delayed situation. So the growth that we’re seeing, we’re actually being restrained because of not having access to the capital that we need to actually move forward.

Tom (12:14):

And what are the banks telling you when you go talk to them and say ‘I got a business and I got a lot of demand and I just need a little more capital?’

Carport Central employee (12:19):

Well, I think right now it’s mostly because of the way the economy’s going. They’re really, they’re not as free telling you ‘Hey, come on in, let’s help you.’ It’s more like ‘Eh, let me see if I can, I don’t know if I can,’ that kind of situation, not like it was before.

Tom (12:34):

But it’s access rather than rate because you could say ‘Oh, they’ll give it to me. It’s just costing me too much.’

Tom Williams (12:39):

Yeah, I think it’s more access. I think people are more reserved with that…

…Joe (16:20)

So, when we talked about the anecdotal learnings, the examples you gave were either confirmatory or informed something at the margins, like, okay, maybe there’s still more juice on the public sector [labor] side. How often does it come up that people start consistently saying something that’s really not showing up in the data yet, and it’s an early signal of something where later on you say, ‘Yep, there it is, playing out in the numbers’?

Tom (16:48):

I’d say every quarter there’s something like that. So in the fourth quarter last year, in October, you may remember the numbers were really, really frothy. And I wasn’t hearing any of that in the market, and I actually came out and said, ‘It’s just not consistent with what I’m hearing.’

Joe (17:02):

The inflation numbers?

Tom (17:03):

No, the demand numbers. The consumer spending numbers, the retail sales numbers were very frothy. That wasn’t consistent with what I was hearing. Today, we just got a retail sales report that was quite strong, and I’m hearing decent consumer spending, but nothing that strong. Maybe I’ll be proven wrong by the time this airs, but that’s what I’m hearing.

So I do hear things that are different, and then I hear some number of things that are in advance. In May of 2020, Bristol, Tennessee was open while Virginia wasn’t. It was right at the end of the first part of Covid, and I talked to a developer who said, ‘Oh my God, the malls are packed.’

And that was before any of us knew that the opening of the economy would lead to that kind of spending. That’s a good example. I’ll also get a reasonable amount of what I’ll call segment-specific information. How are higher-income consumers thinking versus lower-income consumers? What’s the job market for professionals versus skilled trades? The overall number may be the same, but you’ll get some insight into what’s really driving it…

…Winston-Salem Rotary Club member (20:37):

To the extent you can, can you give us any flavor of what you all discussed in your interest rate meetings? And secondly, do you have favorite economic benchmarks you find very useful?

Tom (20:49):

You know, what I’m mostly interested in is real-time information. You’re trying to figure out what’s actually happening in the marketplace. So I get credit card spending every week, year-over-year, and during Covid, I got pretty calibrated on what that means in terms of retail sales.

But that’s something I look at closely to try to get a sense of demand. Consumer spending’s 70% of the economy. On the labor market, the jobs report that comes out every month is clearly the best, most secure thing. But I take some comfort from the weekly jobless claims, because it’s at least a real-time measure of whether layoffs are accelerating, which is what you’d see if the economy turned south.

And I think you kind of get the point. I’m trying to figure out is there any risk of the economy turning? That’s really what I focus on.

In terms of the meeting, maybe I’ll give you a 10-day look at it rather than just the meeting itself. The weekend 10 days before the meeting, we’ll get the staff’s 200-page vertical text, the greatest analysis of the economy you’ve ever seen. It’ll include domestic and international and financial markets and lending markets and different scenarios for where the economy might go and different monetary policy operations. It’s a brilliantly done piece of work.

Tom (22:15):

At the same time, Jay Powell sends around his first draft of what the statement might be. And so we work all weekend and into the week, debating how we want to talk about the economy and whether we like that statement.

We’ll offer Jay — I’m giving you this background so you understand me — we’ll offer Jay our perspective on the statement. He always likes mine best. That’s not actually true. I’m making the point that the statement we issue the Wednesday of the meeting has largely, not always, but largely, been pretty well-vetted by the time you get to the meeting.

So we don’t go to the meeting and try to line edit a statement. For the most part, every time the chair has a bad press conference, it’s because we’ve line edited the statement in the meeting and sent him out there two hours later to go defend it, which is, in my judgment, a little bit of malpractice. But we do sometimes edit it in the meeting itself.

There’s often a special topic and so the staff will present some papers on the special topic and we’ll have a debate about it. Then we all go around and talk about economic conditions. So I’ll say ‘I’ve been in the district for the last seven weeks and here’s what I think I’ve learned, and here’s what I take solace from in the recent data and here’s what I think are some interesting conclusions you might not have otherwise thought about.’

Then we all talk about the statement. It’s a pretty productive meeting, and a reasonably formal one. It’s not really flippant; there’s not tons of humor in there. It’s a pretty serious meeting, and every word is transcribed. So if you’re having trouble sleeping, you can go get the transcripts from five years ago…

…Tracy (45:55):

This is another theme that comes up regularly in Tom’s meetings. Big and small companies seem to have experienced a lot of the economy of recent years in very different ways. We asked Tom about this.

Tracy (46:08)

Do you notice a big difference between what larger companies are saying versus smaller companies?

Tom (46:12):

I do. Smaller companies are still struggling to fill jobs. And that’s in part because there was more capacity to raise wages in the larger companies than there was in the smaller companies.

And we were with one earlier today, but when you go to a smaller company, you do hear that kind of constraint being much bigger. During the supply chain shortage era, you absolutely heard that the big companies had a lot more benefit than the smaller companies. And I think when it came to the margin recapture cycle, the big companies have led the way on that. And a lot of small companies are still saying that they’re working to recapture margins.

Joe (46:54):

Being able to compete on wages isn’t the only edge that larger companies have in the current environment. Many of them have also been able to refinance their debt. Contrast that with the smaller company, Carport Central, which told Tom that bank lending is becoming a constraint on its business.

Tracy (47:10):

That might be one reason, according to Tom, that economic growth has so far defied the gravity of higher interest rates: they just haven’t flowed through to some parts of the economy yet.

Tom (47:20):

Well, so the data that I keep coming back to is interest payments as a percent of either personal disposable income or corporate revenue. And those numbers have only now finally gotten back to 2019 levels. And that’s because a lot of individuals paid down their credit cards and refinanced their mortgages, and a lot of companies paid down their debt and refinanced their debt.

And so the aggregate impact of having the Fed funds rate at five and a third versus where it was, basically at zero, hasn’t really flowed through to the aggregate economy. Now, it has certainly flowed through to individual parts of the economy.

And the most surprising thing to me, obviously, is the residential market, where you’ve got the 3% mortgage holders who don’t want to trade into a 7% mortgage and are unwilling to sell their house. But behind that is the fact that 92% of mortgages are fixed rate, okay? So that’s different from what the economy was 15 years ago.

In commercial real estate, multifamily, you hear about a set of people who really can’t develop anymore, want to turn in the keys, whatever version of it. And another set of people who are owners who are feeling actually just fine.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 May 2024:

1. Why Xi Jinping is afraid to unleash China’s consumers – Joe Leahy

Both inside and outside China, there is a strongly held view among many economists that the country could secure a further period of robust growth if it were able to boost consumption by its own citizens. Indeed, faced with a property crisis, President Xi Jinping has taken some one-off measures to stimulate consumption to offset a fall in domestic demand.

But Xi has eschewed more radical medicine, such as cash transfers to consumers or deeper economic reforms. His latest campaign is instead to unleash “new quality productive forces” — more investment in high-end manufacturing, such as EVs, green energy industries and AI.

According to analysts, the reasons for the lack of more radical action on consumption range from a need to generate growth quickly by pumping in state funds — this time into manufacturing — to the more deep-seated difficulties of reforming an economy that has become addicted to state-led investment.

Ideology and geopolitics also play roles. For Xi, China’s most powerful leader since Mao Zedong, the greater the control his country exerts over global supply chains, the more secure he feels, particularly as tensions rise with the US, analysts argue. This leads to an emphasis on investment, particularly in technology, rather than consumption.

Under Xi, security has also increasingly taken precedence over growth. Self-reliance in manufacturing under extreme circumstances, even armed conflict, is an important part of this, academics in Beijing say…

…The pressure on Beijing to find a new growth model is becoming acute, analysts say. China has become too big to rely on its trading partners to absorb its excess production.

“The exit strategy has to be, at the end of the day, consumption — there’s no point producing all this stuff if no one’s going to buy it,” says Michael Pettis, a senior fellow at the Carnegie Endowment in Beijing.

Few projects capture Xi’s vision for 21st-century Chinese development as well as Xiongan, a new city being built on marshlands about 100km from Beijing…

…Xiongan unites many of Xi’s favourite development themes. Through vast investment in mega-infrastructure projects such as a high-speed rail hub, Xiongan aims to bring state-owned enterprises, universities and entrepreneurs together to concentrate on high-technology innovation, from autonomous vehicles and life sciences to biomanufacturing and new materials. As of last year, about 1mn people were living there, $74bn had been invested and 140 companies had set up there, Beijing says.

Conspicuously absent from the city plans are strategies to encourage the thing China’s economy lacks most — domestic consumption. In guidelines released in 2019 for Xiongan by Xi’s cabinet, the State Council, there was no mention of the term “consumption”, except for “water consumption”…

…China’s investment to gross domestic product ratio, at more than 40 per cent last year, is one of the highest in the world, according to the IMF, while private consumption to GDP was about 39 per cent in 2023 compared to about 68 per cent in the US. With the property slowdown, more of this investment is pouring into manufacturing rather than household consumption, stimulating oversupply, western critics say…

…Economists suspect that behind the rhetoric, the investment in manufacturing is partly pragmatic. With the property market still falling three years after the crisis began, and many indebted provinces ordered to suspend large infrastructure projects, Xi needs to find growth somewhere to meet his 5 per cent target for this year.

“The bottom line is they want growth in output and they want the jobs associated with that growth,” says Stephen Roach, a faculty member at Yale and former chair of Morgan Stanley Asia. He says when “they’re clamping down on property, it doesn’t leave them with much choice but to go for a production-oriented growth stimulus”…

…In areas vital to China’s national security, the country needed supply chains that “are self-sufficient at critical moments”, he said. “This will ensure the economy functions normally in extreme circumstances.”

HKU’s Chen says China no longer measures its “national power” in purely economic terms “but more importantly, in terms of military . . . capacity. And this is why manufacturing is very important”.

He says in this vision of the world, consumption is a lower priority…

…The Rhodium Group argues that some of the loans that flowed into the industrial sector last year went to local government finance vehicles, the heavily indebted off-balance sheet investment holding companies of provinces and municipalities.

While large sums still went to manufacturers, they “do not have a strong appetite to expand capacity given falling prices”, Rhodium said in a report.

Economists say that for consumers to feel comfortable to spend more, particularly after the property slump, China needs to step up its development of social welfare programmes and healthcare. While China has made strides in building out its public pension and healthcare systems, they are still lacking.

But such solutions would take a long time to boost consumer confidence and would require massive new funding from government coffers that are running dry.

Greater consumption would also necessarily mean reducing the role of manufacturing or investment in the economy. This could be done by unwinding China’s intricate system of subsidies to producers, which includes government infrastructure investment and access to cheap labour, land and credit, says Pettis.

But if that were done in a big-bang fashion, the share of household consumption to GDP would increase while overall GDP contracted as manufacturers suffered. That is obviously not a politically preferable option for Xi.

2. Strategy Reviews – John H. Cochrane

After an extended and collective deliberation, the Fed adopted a new strategy framework known as Flexible Average Inflation Targeting. This framework was explicitly designed around a worldview that “the federal funds rate is likely to be constrained by its effective lower bound more frequently than in the past,” and a consequent judgement that “downward risks to employment and inflation have increased.” A shift to “inclusive” employment, a return to the old idea that economic “shortfalls” can be filled, and a promise not to preempt future inflation but rather to let inflation run hot above 2% to make up past shortfalls followed. It was hoped that these promises of future dovishness would stimulate demand in the short run.

In short, the Fed adopted an elaborately-constructed new-Keynesian forward-guidance defense against the perceived danger of deflation and stagnation at the zero bound.

No sooner was the ink dry on this grand effort, however, than inflation shot up to 8%, and the zero bound seemed like a quaint worry. Something clearly went drastically wrong. Naturally, the first question for a strategy review is, how can we avoid having that happen again?

Inflation eased without interest rates rising substantially above inflation and without a large recession. I think I have a (and the only) clear and simple explanation for that, but I promised not to digress into fiscal theory today. Still, inflation is persistently high, raising the obvious worry that it’s 1978 again. Obviously, central banks have a range of worries on which to focus a new strategy, not just a return to a long-lasting zero bound. (Though that could happen too.)…

…React or guide? It seems clear to me that policy will have to be described more in terms of how the Fed will react to events, rather than in standard forward-guidance terms of unconditional promises about how the funds rate will evolve. It will involve more “data-dependent” rather than “time-dependent” policy.

In part, that must come, I think, as a result of the stunning failure of all inflation forecasts, including the Fed’s. Forecasts did not see inflation coming, did not see that it would surge up once it started, and basically always saw a swift AR(1) response from whatever it was at any moment back to 2%. Either the strategy review needs to dramatically improve forecasts, or the strategy needs to abandon dependence on forecasts to prescribe a future policy path, and thus just state how policy will react to events and very short-term forecasts. I state that as a question for debate, however…
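
For readers who don’t know the jargon: an AR(1) forecast simply decays geometrically from wherever inflation is now back toward the target, which is why the forecasts looked the same at every point in the episode. A minimal sketch (the persistence parameter below is an arbitrary assumption):

```python
target, rho = 2.0, 0.6  # 2% target; rho is an assumed quarterly persistence
pi = 8.0                # current inflation, percent

path = [pi]
for _ in range(8):      # eight quarters ahead
    pi = target + rho * (pi - target)  # geometric decay toward the target
    path.append(round(pi, 2))

print(path)  # a smooth glide from 8% back to ~2%, whatever the data say
```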

…Fiscal limitations loom. Debt to GDP was 25% in 1980, and still constrained monetary policy. It’s 100% now, and only not 115% because we inflated away a bunch of it.  Each percentage point of real interest rate rise is now quickly (thanks to the Treasury’s decision to issue short, and the Fed’s QE which shortened even that maturity structure)  a percentage point extra interest cost on the debt, requiring a percent of GDP more primary surplus (taxes less spending). If that fiscal response is not forthcoming, higher interest rates just raise debt even more, and will have a hard time lowering inflation. In Europe, the problem is more acute, as higher interest costs could cause sovereign defaults. Many central banks have been told to hold down interest rates to make debt more sustainable. Those days can return…
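
The debt arithmetic in that paragraph is worth spelling out: with short-maturity debt that reprices quickly, the extra interest bill from a rate rise is roughly the debt-to-GDP ratio times the rate change. A back-of-envelope sketch:

```python
def extra_interest_pct_gdp(debt_to_gdp: float, rate_rise_pp: float) -> float:
    """Extra interest cost, as % of GDP, assuming the debt stock reprices quickly."""
    return debt_to_gdp * rate_rise_pp

print(extra_interest_pct_gdp(0.25, 1.0))  # 1980: ~0.25% of GDP per point of rate rise
print(extra_interest_pct_gdp(1.00, 1.0))  # today: ~1.0% of GDP per point of rate rise
```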

…Ignorance. Finally, we should admit that neither we, nor central banks, really understand how the economy works and how monetary policy affects the economy. There is a complex verbal doctrine that bounces around central banks, policy institutions, and private analysts, asserting that interest rates have a relatively mechanical, reliable, and understood effect on “spending” through a “transmission mechanism” that though operating through “long and variable lags” gives the Fed essentially complete control over inflation in a few years. The one thing I know from 40 years of study, and all of you know as well, is that there is no respectable well-tested economic model that produces anything like that verbal doctrine. (More here.)  Knowing what you don’t know, and that nobody else does either, is knowledge. Our empirical knowledge is also skimpy, and the historical episodes underlying that experience come with quite different fiscal and financial-structure preconditions. 1980 was a different world in many ways, and also combined fiscal and microeconomic reform with high interest rates.

3. Big Tech Capex and Earnings Quality – John Huber

Capex is not only growing larger, but the rate of growth is set to accelerate this year as they invest in the AI boom. Combined capex at MSFT, GOOG and META is set to grow around 70% in 2024. As a percentage of sales, capex will grow from 13% of sales in 2023 to around 20% in 2024…

…Bottom line: the other Big Techs are getting far more capital intensive than they have in the past. Their FCF is currently lagging net income because of the large capex, and this will eventually flow through to much higher depreciation charges in the coming years.

This is not necessarily worrying — if the returns on these investments are good, then sales growth will be able to absorb these much higher expenses. But this is not a sure thing, so I like to use P/FCF metrics, as I think a large majority of the assets they’re investing in will need to be replaced. This means the capex levels we see currently could be recurring. So, while the P/E ratios range from 25 to 35, the P/FCF ratios range from 40 to 50.

Again, if the investments are able to earn good returns, then profit margins will remain intact, but one thing to notice is FCF margins (while very strong) have not kept up with GAAP profit margins: e.g. at MSFT, FCF margins have declined slightly from 28% to 26% over the last decade while net margins have expanded from 25% to 36%, leaving GAAP profit margins far in excess of FCF margins. Eventually, as growth slows these margins will tend to converge as depreciation “catches up” to cash capex spend. Whether net margins come down or FCF margins move up simply depends on the returns on capital earned and the growth it produces.
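
To see the mechanics behind that margin gap, here is a toy calculation with invented figures chosen to echo the MSFT margins above; while cash capex runs ahead of the depreciation flowing through the income statement, GAAP earnings exceed free cash flow:

```python
# Toy illustration (invented figures): why net margins can exceed FCF
# margins while capex outpaces depreciation.
sales = 100.0
net_income = 36.0   # 36% net margin, as in the MSFT example
depreciation = 8.0  # non-cash charge already deducted from net income
capex = 18.0        # cash out the door today for the AI buildout

fcf = net_income + depreciation - capex  # simplified FCF definition
print(f"Net margin: {net_income / sales:.0%}")  # 36%
print(f"FCF margin: {fcf / sales:.0%}")         # 26%
# As growth slows and depreciation "catches up" to capex, the two converge.
```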

I’m not predicting a poor result, but I’m mindful of how difficult it will be given how different the companies are today. They used to grow with very little capital invested, but now they have a mountain of capital to deploy, which is obviously much harder at 7 times the size:…

…I don’t think anyone (including management) yet knows what the returns will be on the $150 billion of investments that these three companies will make in 2024. They are optimistic, but it’s not clear cut to me.

Think about how much profit needs to be generated annually to earn acceptable returns on this capex: a 10% return would require $15 billion of additional after tax profits in year 1. As Buffett points out, if you require a 10% return on a $150 billion investment but get nothing in year 1, then you’d need $32 billion in year 2, and just one more year of deferred returns would require a massive $50 billion profit in year 3.

What’s staggering is that the above is the return needed to earn 10% on just one year’s worth of capex. Even if we assume that capex growth slows from 70% this year down to 0% in 2025 and stays there, MSFT, GOOG and META will invest an additional $750 billion of capital over the next 5 years!
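
Huber’s (and Buffett’s) arithmetic is easy to reproduce: if a $150 billion outlay must earn 10% a year, the catch-up profit needed after a deferral grows geometrically. A quick sketch in Python:

```python
principal = 150.0  # $ billions: combined 2024 capex at MSFT, GOOG, META (per the article)
hurdle = 0.10      # required annual return

# If returns are deferred to year n, the one-off profit needed that year
# to keep the whole investment earning 10% a year compounds up:
for year in (1, 2, 3):
    needed = principal * (1 + hurdle) ** year - principal
    print(f"Year {year}: ${needed:.1f}B of after-tax profit required")

# Prints $15.0B, $31.5B, $49.7B -- the article's rounded $15B / $32B / $50B.
```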


4. A Few Short Stories – Morgan Housel

Thirty-seven thousand Americans died in car accidents in 1955, six times today’s rate adjusted for miles driven.

Ford began offering seat belts in every model that year. It was a $27 upgrade, equivalent to about $190 today. Research showed they reduced traffic fatalities by nearly 70%.

But only 2% of customers opted for the upgrade. Ninety-eight percent of buyers preferred to remain at the mercy of inertia.

Things eventually changed, but it took decades. Seatbelt usage was still under 15% in the early 1980s. It didn’t exceed 80% until the early 2000s – almost half a century after Ford offered them in all cars.

It’s easy to underestimate how social norms stall change, even when the change is an obvious improvement. One of the strongest forces in the world is the urge to keep doing things as you’ve always done them, because people don’t like to be told they’ve been doing things wrong. Change eventually comes, but agonizingly slower than you might assume…

…When Barack Obama discussed running for president in 2005, his friend George Haywood – an accomplished investor – gave him a warning: the housing market was about to collapse, and would take the economy down with it.

George told Obama how mortgage-backed securities worked, how they were being rated all wrong, how much risk was piling up, and how inevitable its collapse was. And it wasn’t just talk: George was short the mortgage market.

Home prices kept rising for two years. By 2007, when cracks began showing, Obama checked in with George. Surely his bet was now paying off?

Obama wrote in his memoir:

George told me that he had been forced to abandon his short position after taking heavy losses.

“I just don’t have enough cash to stay with the bet,” he said calmly enough, adding, “Apparently I’ve underestimated how willing people are to maintain a charade.”

Irrational trends rarely follow rational timelines. Unsustainable things can last longer than you think…

…John Nash is one of the smartest mathematicians to ever live, winning the Nobel Prize. He was also schizophrenic, and spent most of his life convinced that aliens were sending him coded messages.

In her book A Beautiful Mind, Sylvia Nasar recounts a conversation between Nash and Harvard professor George Mackey:

“How could you, a mathematician, a man devoted to reason and logical proof, how could you believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world?” Mackey asked.

“Because,” Nash said slowly in his soft, reasonable southern drawl, “the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

This is a good example of a theory I have about very talented people: No one should be shocked when people who think about the world in unique ways you like also think about the world in unique ways you don’t like. Unique minds have to be accepted as a full package.

5. An Interview with Databricks CEO Ali Ghodsi About Building Enterprise AI – Ben Thompson and Ali Ghodsi

So you said you came over to the U.S. in 2009. Did you go straight to UC Berkeley? There’s some great videos of you giving lectures on YouTube. You’re still an adjunct professor there. Do you ever teach anymore or is this a, “Homeboy made good, we’ll give him the title forever”, sort of situation?

AG: No, I teach about a class a year and I still really enjoy doing that. I imagine if I had nothing to do, that’s a job I would actually enjoy doing.

So yeah, I came to the United States just to stay here one year and do research at UC Berkeley and just ended up staying another year, another year, another year. And the timing was — we didn’t know it at the time, but Dave Patterson, who was a professor at UC Berkeley, and is now a Turing Award winner, which is essentially the Nobel Prize in computer science, said at the time, “We’ve had Moore’s Law, but we no longer know how to make computers faster by cramming in more transistors. That era is over, so computers are not going to get any faster”, and we know he was right, they’ve all been between two and four gigahertz since then.

So we need the new computer, and the new computer is the cloud, and it also needs new software, so we built all this software stack — the era of data and AI. So it was the perfect time. I always regretted, “Why was I not born in the ’50s or ’60s when computers happened?” — well, actually it kind of happened again in ’08, ’09, ’10, and Berkeley was at the forefront of that. So we were super lucky to see that kind of revolution and be part of it…

…The general idea is you mentioned you started out with Mesos where you needed to compute in parallel instead of serially so you have to have a cluster of computers, not just one. Spark lets you basically do the same thing with data, spread it out over a huge number of computers. You can end up with massive amounts of data, structured, unstructured, people will call it like a “data lake”. There’s a data lake, there’s a data warehouse, there’s a Data Lakehouse. Walk me through the distinction and where that applies to Databricks and its offering.

AG: At the time, the world was kind of split. Those that had structured data, which is data you can represent in tables with rows and columns, kept it in data warehouses, and you could connect your BI tools, business intelligence tools, which let you ask questions about the past from those rows and columns. “What was my revenue last week in different regions, by different products, by different SKUs?”, but you couldn’t ask questions about the future.

Then at the same time, we had these future-looking workloads, which were, “Okay, we have all kinds of text, images, and unstructured data that’s coming into the enterprise,” and that you couldn’t store in these structured tables, they cannot be represented as tables of rows and columns, so those you stored in what’s called data lakes. But then the good news was, if you knew what you were doing, you could ask questions about the future, “What’s my revenue going to be next week? Which customer is going to churn next?”. But these worlds were living completely separately, securing them was very hard, and there were a lot of redundant stacks being built up at the same time.

Our idea was how do we, 1) unify this and 2) how do we disrupt the existing ecosystem? How do we create the company that’s disruptive? And our idea was what if we have open source technology, everybody stores all their data, both the structured and unstructured data, in the lake, which is basically almost-free storage from the cloud vendors, but we standardize on an open source format, so it almost becomes like USB — you can plug anything in there. Then we build an engine that can do both the BI stuff, backwards-looking questions, and the futuristic AI stuff, and that’s what we call the Lakehouse, which is a portmanteau of data lake and data warehouse. The marketing firms we talked to and anyone we’d ask said, “This is a terrible idea”…
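
To make the Lakehouse idea concrete, here is a minimal PySpark sketch (our illustration, not Databricks’ code): one copy of the data, written to cheap storage in an open format, serves both a backwards-looking BI query and, through the same files, a forward-looking ML pipeline. The session config and table path are assumptions that follow the open-source Delta Lake docs.

```python
from pyspark.sql import SparkSession

# Assumes pyspark plus the open-source delta-spark package are installed;
# the config below follows the Delta Lake docs (an assumption, not from the interview).
spark = (SparkSession.builder
         .appName("lakehouse-sketch")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Structured data lands in cheap object storage in an open format ("the lake") ...
orders = spark.createDataFrame(
    [("SKU-1", "EU", 120.0), ("SKU-2", "US", 80.0)],
    ["sku", "region", "revenue"])
orders.write.format("delta").mode("overwrite").save("/tmp/lake/orders")

# ... and one engine answers the backwards-looking BI question ...
df = spark.read.format("delta").load("/tmp/lake/orders")
df.groupBy("region").sum("revenue").show()

# ... while the exact same files can feed a forward-looking ML job
# (e.g. df.toPandas() into a churn model), with no second copy in a warehouse.
```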

So you’ve been using the word AI a lot. Did you use the word AI a lot five years ago?

AG: I think we used the word ML quite a bit.

Yeah, machine learning. That’s right, there’s a big word branding moment. I mean, there was the ChatGPT moment, so I guess there’s two questions. Number one, did that shift how you were thinking about this space, or was this already super clear to you? But then number two, I have to imagine it fundamentally changed the way your customers were thinking about things and asking about things.

AG: Yeah, from day one we were doing machine learning. We actually built MLlib as part of Spark already before we actually started Databricks. Actually the first use case of Spark in 2009 was to participate in the Netflix competition of recommending the best movie, and we got the second prize actually, we didn’t get the first prize.

The whole point about being able to distribute broadly and do things in a highly parallel manner, I mean we’re basically in that world.

AG: Exactly. Well, a lot of people also used that parallelism to just do backwards-looking processing, like a data warehouse that tells you, “Tell me everything about the past”, and it’s great to see trend lines about the past, but to use this kind of more advanced statistical approach, that’s when you venture into machine learning. We were doing it already in 2012, ’13. I tried to push the company to use the phrase AI instead of ML; the most hardcore academics in the company were against it. They said that AI was a buzzword but I said, “No, I think that’s actually what resonates with people”. But at the same time we were seeing more and more deep neural networks, and as these neural networks got stacked deeper they did better and better.

Around 2018 is when we started seeing especially language processing, natural language processing, getting more and more applications on the platform. We saw insurance companies using them to analyze huge amounts of text to assess risks, we saw translation starting to happen, we saw pharma companies analyzing large amounts of electronic medical records that were written, unstructured text. So it was pretty clear that something was going on with NLP [Natural Language Processing] and that just accelerated during the pandemic. So we saw it; we already had over a thousand customers using these kinds of transformer models. So when ChatGPT came out, we kind of thought it was a nothingburger, but of course we were wrong in that it was an absolute awareness revolution.

Yes, exactly.

AG: What we took for granted was not what the rest of the world was taking for granted. So we feel like the world woke up to AI in November 2022 with ChatGPT, though the truth is it had been going on for 20 years.

That’s what strikes me. That’s the biggest impact is number one, you had the total rebranding of everything to AI, my washing machine now has AI, what a miracle. But just the fact that you went through this when you started with Spark, you thought this is a great idea, no one knows what it is. Now suddenly people are asking you, knocking on your door, “We have data on your thing, can we run ChatGPT on it?” — is that how those conversations went?

AG: Yeah, I mean literally before ChatGPT, I would tell the marketing department to tone down the AI language because customers would say, “Hey, this AI stuff is futuristic, we have concrete problems right now with data that we need to solve”, so I actually shot down a marketing campaign and marketing was really upset about it, which said, “Customer X is a data and AI company, Customer Y is a data and AI company”. They had it ready to go and I shot it down and I said, “We don’t want to push so hard on AI because people don’t really want AI”, and then literally after ChatGPT happened, I told them, “Hey, that campaign from a couple of years ago, maybe we should run it now” — which we did actually and people loved it. So yeah, it’s just the market was just not ready…

All right, number three, Databricks solves Mosaic ML’s need to build a sales force and Mosaic ML solves Databricks’ need to build a sustainable differentiated business around an open source project.

AG: Yes, I think you are 99% right. I would modify that last sentence to say —

I didn’t give you enough credit for how much you had differentiated to date?

AG: No, I actually think that you kind of were spot on, but I would say with open source, I would say that it was Mosaic ML having a research team that really was deep in LLM research and AI (that was hard to come by at the time, and it was very, very hard actually to hire those researchers) that really gave us that. And then the know-how to customize LLMs on your data in a secure way.

How does that work? How do you do that?

AG: So this is what their specialty was. When everybody else was building one giant model or a few giant models that are supposed to be very smart, these guys, their business model was, “We build it again and again and again, custom either from scratch or from an existing checkpoint, you tell us or we can fine tune it, but we can help you build an LLM for you and we will give you the intellectual property of that LLM and its weights”. That way you as a customer can compete with your competitors and in the long run you become a data and AI leader just like our billboards that I had banned a few years earlier say. You’re going to be a data and AI company. It doesn’t matter if you’re a pharma company or a finance company or a retail company, you’re actually going to be a data and AI company, but for that you need intellectual property. Elon Musk is not just going to call OpenAI for his self-driving capabilities, he needs to have his own. Same thing is going to be true for you in finance, retail, media. So that was their specialty, but we had the data.

Is that actually true though? Do they actually need to have their own intellectual property or is there a sense — my perception, and I picked up on this, I was at some sort of conference with a bunch of CEOs, it struck me how they had this perception of, “We’ve had this data for years, we were right to hold onto it, this is so valuable!”, and I’m almost wondering, are you now so excited about your own data that you’re going to be overprotective of it? You’re not going to want to do anything, you’re actually going to be sort of paralyzed by, “We have so much value here, we have to do it ourselves”, and miss out on leveraging it sooner rather than later because you’re like, “It has to be just us”.

AG: No, I do think that people have now realized how valuable their data is, there’s no doubt about that, and it is also true, I believe in it. The way I think of it is that you can think of the world as two kinds of parallel universes that coexist these days with LLMs. We’re super focused on one, which is the kind of open Internet and the whole crawl of everything that’s in it and all of the history of mankind that has been stored there. Then you’re building LLMs that are trained on that, and they become intelligent and they can reason and understand language; that’s what we’re focused on.

But we’re ignoring this other parallel universe, which is: every company on the planet that you join has you sign an NDA, an employee agreement, and that then gives you access to all this proprietary data that they have on the customers and everything else, and they have always been protective of that. The LLMs today that we are training and talking about don’t understand that data; they do not understand the three-letter acronyms in any organization on the planet.

So we do the boring LLMs and the boring AI for those enterprises. We didn’t have quite the muscle to do it without Mosaic, they really understood how to build those LLMs, we had the data already. So we had the data and we had the sales force, Mosaic did not have the data, they did not have the sales force, they did have the know-how of how to build those custom models.

I don’t think that the companies are hamstrung and they’re not going to do anything with it, they want to do things with it. I mean, people are ready to spend money to do this. It’s just that I feel like it’s a little bit of a 2007 iPhone moment. iPhone comes out, every company on the planet says, “We have to build lots of iPhone apps, we have to”. Then later it turns out, “Well, okay, every company building a flashlight app is maybe not the best use of resources, in fact, maybe your iPhone will just have a flashlight in it”. So then it comes back to what special data do you have that no one else has, and how can we actually monetize that?

How does it actually work to help companies leverage that? So you released a state-of-the-art open LLM, DBRX, pretty well regarded. Do you do a core set of training on open data on whatever might exist and then you’d retrain it with a few extra layers of the company’s proprietary data and you have to do that every time? How modular is that? How does that actually work in practice?

AG: Yeah, there’s a whole slew of different techniques, ranging from very, very lightweight fine-tuning techniques, the most popular one being LoRA, low-rank adaptation, to actually training a chunk of the model. So you take an existing model that’s already trained and works, and you customize a bunch of the layers. Then there’s what’s called CPT, continuous pre-training, in which case you take an existing model that’s already baked and ready but you train all of its layers, which costs more. And all the way at the other end, if you’re doing something really different, if the domain your data set comes from is significantly different, then you actually want what’s called pre-training, which is training the model from scratch. If you’re a SaaS application and LLMs are the core of the offering, you probably want a model pre-trained from scratch. So we can do all of those.

I would say the industry, the research, is not a hundred percent clear today on when you should use which technique. We have a loose idea that if you don’t have huge amounts of data and it’s kind of similar in domain to what the LLM already can do, then you can probably use the more lightweight ones, and if your data is very different and significant in size, then the lightweight mechanisms are probably not good for you, and so on. So we have a research team that can do this really, really well for enterprises. But I think a lot of progress is going to happen in the next few years to determine how we can do this automatically. How do we know when to use which? And there might be new techniques also that are developed.
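
To make “lightweight fine-tuning” concrete, here is a minimal LoRA sketch in PyTorch. This is our illustration of the published low-rank adaptation idea, not Databricks’ or Mosaic ML’s code: the pretrained weights are frozen and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pretrained linear layer; train only a low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)     # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection, zero init
        self.scale = alpha / r

    def forward(self, x):
        # y = frozen_layer(x) + scale * x A^T B^T; only A and B get gradients
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable: {trainable:,}")   # 65,536 trainable vs ~16.8M frozen parameters
```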

What’s the trade-off? I imagine you talk to a company and they say, “We absolutely want the most accurate model for sure, we want it totally customized to us”. And then you’re like, “Okay, that’s going to cost XYZ, but then also to serve it is going to cost ABC”. The larger a model is, the more expensive it is to serve, and so your inference costs are just going to overwhelm even the upfront costs. What’s that discussion and trade-off like that you’re having with your customers?

AG: Well, the vast majority have lots of specific tasks that they want to do. So again, a lot of people are thinking of things like ChatGPT, which is sort of completely general-purpose, open-ended, ask-me-anything. But enterprises typically have, “Okay, I want to extract labels from all this core piece of data, and I want to do it every day, like ten times a day”, or, “I want to tag all of these articles with the right tags and I want to do that very accurately”. So then actually for those specific tasks, it turns out you can have small models. The smaller size of the model makes it much cheaper to serve, and that matters at scale, and then they are really, really concerned about the quality and accuracy of that, but for that specific task, it doesn’t need to nail a super balanced answer to the question of whether there was election fraud or not in 2020.

(laughing) Right.

AG: It just needs to really extract those tags really, really well, so then there are techniques you can use to do that. There is a way where you can actually have your cake and eat it too, assuming that the task you want to do is somewhat narrow.

But we also have customers that are, “No, I’m building a complete interactive general-purpose application in, say, many of the Indian dialects, and I want to do that, and existing models are not very good at that, help me do that”. Then you have to go for a bigger model, but bigger is usually more expensive. Of course, we are using the mixture-of-experts architecture, which we think is where the world is headed and which is also what people think GPT-4 was based on, but we’ve also seen with Llama 3 from Meta that dense models, which are not mixture of experts, are also excellent and doing really, really well…
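
For reference, here is a toy mixture-of-experts layer, a sketch of the general architecture under our own simplifications rather than DBRX’s or GPT-4’s actual implementation: a learned router picks the top-k experts per token, so only a fraction of the parameters run on any one input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy top-k mixture-of-experts feed-forward layer."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts)   # learned gating network
        self.k = k

    def forward(self, x):                             # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        weights, idx = gates.topk(self.k, dim=-1)     # top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e              # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out   # only k of n_experts run per token, so serving cost stays low

x = torch.randn(10, 64)
print(MoELayer()(x).shape)   # torch.Size([10, 64])
```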

Is there a difference between domestic and international in terms of the aggressiveness with which they’re approaching AI?

AG: Yeah, I would say that China is moving very, very fast on AI. In some Asian countries, there’s less regulation. Europe, I feel, has always been lagging a few years behind the United States, and they’re concerned about — there are also competitive concerns with so many American companies, cloud companies and so on, from Europe’s point of view. So Europe is a little bit more regulated and usually lagging the United States by a few years.

That’s what we’re seeing, but there are regional differences. India is very interesting because it’s moving so fast; there are no signs of anything recession-like over there. There are markets like Brazil and so on that are doing really well. So really, you have to go case-by-case, country-by-country. We now have a significant portion of our business in Europe as well, and also a growing business in Asia and Latin America.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 12 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 12 May 2024:

1. From Blueprint to Therapy: The Evolution and Challenges of Nucleic Acid Interventions – Biocompounding

Nucleic Acid Therapies (NATs) offer a targeted approach to rectify these underlying genetic issues. By employing strategies like antisense oligonucleotides, mRNA therapy, RNA interference, or CRISPR-based gene editing, NATs can directly modify or regulate the expression of genes responsible for the disease. These therapies can repair or silence defective genes, replace missing genes, or modulate gene expression, thereby addressing the root cause of the disease at the molecular level. This precision in targeting genetic defects makes NATs a promising and revolutionary approach in modern medicine, potentially offering cures or significant treatment improvements for numerous genetic and acquired diseases.

The modalities of NATs vary based on their mechanism of action, type of nucleic acid used, and therapeutic goals. Here’s an introduction to the different modalities of NATs:

  1. Antisense Oligonucleotides (ASOs): These are short, synthetic strands of DNA or RNA that are designed to bind to specific RNA molecules within a cell. By binding to their target RNA, ASOs can interfere with the process of protein production. They can inhibit the expression of a gene, modify RNA splicing, or promote the degradation of the RNA molecule. ASOs are used in conditions like Duchenne Muscular Dystrophy and Spinal Muscular Atrophy. Example: Sarepta Therapeutics
  2. RNA Interference (RNAi): This modality uses small interfering RNA (siRNA) or microRNA (miRNA) to silence specific genes. RNAi works by degrading the mRNA of a target gene, preventing it from being translated into a protein. This approach is particularly useful in diseases where inhibiting the expression of a certain gene can be therapeutic. RNAi has been explored for various applications including cancer therapy and viral infections. Currently FDA-approved siRNAs are used in conditions such as Hereditary transthyretin-mediated amyloidosis. Example: Alnylam Pharmaceuticals
  3. AAV Gene Therapy: Adeno-associated virus (AAV) vectors are commonly used in gene therapy. AAVs are small viruses that can deliver genetic material into cells without causing disease. In AAV gene therapy, the therapeutic gene is packaged into an AAV vector, which then delivers the gene into a patient’s cells. This modality is useful for treating genetic disorders, such as Hemophilia A, by providing a functional copy of a defective or missing gene. Example: Spark Therapeutics
  4. mRNA Therapy: mRNA therapies involve the use of messenger RNA to produce therapeutic proteins inside the body. Unlike traditional gene therapy that alters the DNA within cells, mRNA therapy delivers mRNA that is translated into the desired protein, offering a temporary but effective treatment. This approach has gained significant attention, especially in the development of COVID-19 vaccines. Currently there are several attempts to develop cancer vaccines by the key players in this space. Example: Moderna, BioNTech, Pfizer
  5. CRISPR/Cas9 and Genome Editing: This revolutionary technology enables precise editing of the genome. CRISPR/Cas9 can be used to add, delete, or alter specific DNA sequences in the genome, offering the potential to correct genetic defects at their source. While still in the experimental stages for many applications, it holds promise for treating a range of genetic disorders. In Dec 2023, the first ever FDA-approved CRISPR-based gene therapy was used to treat sickle cell disease. Example: CRISPR Therapeutics, Vertex

2. China Is Still Rising – Nicholas Lardy

Those who doubt that China’s rise will continue point to the country’s weak household spending, its declining private investment, and its entrenched deflation. Rather than overtake the United States, they argue, China is more likely to enter a long recession, perhaps even a lost decade.

But this dismissive view of the country underestimates the resilience of its economy. Yes, China faces several well documented headwinds, including a housing market slump, restrictions imposed by the United States on access to some advanced technologies, and a shrinking working-age population. But China overcame even greater challenges when it started on the path of economic reform in the late 1970s. While its growth has slowed in recent years, China is likely to expand at twice the rate of the United States in the years ahead.

Several misconceptions undergird the pessimism about China’s economic potential…

…A second misconception is that household income, spending, and consumer confidence in China are weak. The data do not support this view. Last year, real per capita income rose by 6 percent, more than double the growth rate in 2022, when the country was in lockdown, and per capita consumption climbed by 9 percent. If consumer confidence were weak, households would curtail consumption and build up their savings instead. But Chinese households did just the opposite last year: consumption grew more than income, which is possible only if households reduced the share of their income going to savings…

…Another misconception concerns the potential for a collapse in property investment. These fears are not entirely misplaced; they are supported by data on housing starts, the number of new buildings on which construction has begun, which in 2023 was half what it was in 2021. But one has to look at the context. In that same two-year period, real estate investment fell by only 20 percent, as developers allocated a greater share of such outlays to completing housing projects they had started in earlier years. Completions expanded to 7.8 billion square feet in 2023, eclipsing housing starts for the first time. It helped that government policy encouraged banks to lend specifically to housing projects that were almost finished; a general easing of such constraints on bank loans to property developers would have compounded the property glut…

…By 2014, private investment composed almost 60 percent of all investment—up from virtually zero percent in 1978. As private investment is generally more productive than that of state companies, its expanding share of total investment was critical to China’s rapid growth over this period. This trend went into reverse after 2014 when Xi Jinping, having just assumed the top leadership position, aggressively redirected resources to the state sector. The slowdown was modest at first, but by 2023, private investment accounted for only 50 percent of total investment…

…But here again, the pessimism is not supported by the data. First, almost all the decline in the private share of total investment after 2014 resulted from a correction in the property market, which is dominated by private companies. When real estate is excluded, private investment rose by almost ten percent in 2023. Although some prominent Chinese entrepreneurs have left the country, more than 30 million private companies remain and continue to invest.

3. The Cloud Under The Sea – Josh Dzieza

In the family tree of professions, submarine cable work occupies a lonely branch somewhere between heavy construction and neurosurgery. It’s precision engineering on a shifting sea using heavy metal hooks and high-tension lines that, if they snap, can cut a person in half. In Hirai’s three decades with Kokusai Cable Ship Company (KCS), he had learned that every step must be followed, no matter how chaotic the situation. Above all else, he often said, “you must always be cool.”…

…The world’s emails, TikToks, classified memos, bank transfers, satellite surveillance, and FaceTime calls travel on cables that are about as thin as a garden hose. There are about 800,000 miles of these skinny tubes crisscrossing the Earth’s oceans, representing nearly 600 different systems, according to the industry tracking organization TeleGeography. The cables are buried near shore, but for the vast majority of their length, they just sit amid the gray ooze and alien creatures of the ocean floor, the hair-thin strands of glass at their center glowing with lasers encoding the world’s data.

If, hypothetically, all these cables were to simultaneously break, modern civilization would cease to function. The financial system would immediately freeze. Currency trading would stop; stock exchanges would close. Banks and governments would be unable to move funds between countries because the Swift and US interbank systems both rely on submarine cables to settle over $10 trillion in transactions each day. In large swaths of the world, people would discover their credit cards no longer worked and ATMs would dispense no cash. As US Federal Reserve staff director Steve Malphrus said at a 2009 cable security conference, “When communications networks go down, the financial services sector does not grind to a halt. It snaps to a halt.”…

…Fortunately, there is enough redundancy in the world’s cables to make it nearly impossible for a well-connected country to be cut off, but cable breaks do happen. On average, they happen every other day, about 200 times a year. The reason websites continue to load, bank transfers go through, and civilization persists is because of the thousand or so people living aboard 20-some ships stationed around the world, who race to fix each cable as soon as it breaks.

The industry responsible for this crucial work traces its origins back far beyond the internet, past even the telephone, to the early days of telegraphy. It’s invisible, underappreciated, analog. Few people set out to join the profession, mostly because few people know it exists…

…Once people are in, they tend to stay. For some, it’s the adventure — repairing cables in the churning currents of the Congo Canyon, enduring hull-denting North Atlantic storms. Others find a sense of purpose in maintaining the infrastructure on which society depends, even if most people’s response when they hear about their job is, But isn’t the internet all satellites by now? The sheer scale of the work can be thrilling, too. People will sometimes note that these are the largest construction projects humanity has ever built or sum up a decades-long resume by saying they’ve laid enough cable to circle the planet six times…

…The world is in the midst of a cable boom, with multiple new transoceanic lines announced every year. But there is growing concern that the industry responsible for maintaining these cables is running perilously lean. There are 77 cable ships in the world, according to data supplied by SubTel Forum, but most are focused on the more profitable work of laying new systems. Only 22 are designated for repair, and it’s an aging and eclectic fleet. Often, maintenance is their second act. Some, like Alcatel’s Ile de Molene, are converted tugs. Others, like Global Marine’s Wave Sentinel, were once ferries. Global Marine recently told Data Centre Dynamics that it’s trying to extend the life of its ships to 40 years, citing a lack of money. One out of four repair ships has already passed that milestone. The design life for bulk carriers and oil tankers, by contrast, is 20 years.

“We’re all happy to spend billions to build new cables, but we’re not really thinking about how we’re going to look after them,” said Mike Constable, the former CEO of Huawei Marine Networks, who gave a presentation on the state of the maintenance fleet at an industry event in Singapore last year. “If you talk to the ship operators, they say it’s not sustainable anymore.”

He pointed to a case last year when four of Vietnam’s five subsea cables went down, slowing the internet to a crawl. The cables hadn’t fallen victim to some catastrophic event. It was just the usual entropy of fishing, shipping, and technical failure. But with nearby ships already busy on other repairs, the cables didn’t get fixed for six months. (One promptly broke again.)

But perhaps a greater threat to the industry’s long-term survival is that the people, like the ships, are getting old. In a profession learned almost entirely on the job, people take longer to train than ships to build.

“One of the biggest problems we have in this industry is attracting new people to it,” said Constable. He recalled another panel he was on in Singapore meant to introduce university students to the industry. “The audience was probably about 10 university kids and 60 old gray people from the industry just filling out their day,” he said. When he speaks with students looking to get into tech, he tries to convince them that subsea cables are also part — a foundational part — of the tech industry…

…To the extent he is remembered, Cyrus Field is known to history as the person responsible for running a telegraph cable across the Atlantic Ocean, but he also conducted what at the time was considered an equally great technical feat: the first deep-sea cable repair.

Field, a 35-year-old self-made paper tycoon, had no experience in telegraphy — which helps explain why, in 1854, he embarked on such a quixotic mission…

…“When it was first proposed to drag the bottom of the Atlantic for a cable lost in waters two and a half miles deep, the project was so daring that it seemed to be almost a war of the Titans upon the gods,” wrote Cyrus’ brother Henry. “Yet never was anything undertaken less in the spirit of reckless desperation. The cable was recovered as a city is taken by siege — by slow approaches, and the sure and inevitable result of mathematical calculation.”

Field’s crew caught the cable on the first try and nearly had it aboard when the rope snapped and slipped back into the sea. After 28 more failed attempts, they caught it again. When they brought it aboard and found it still worked, the crew fired rockets in celebration. Field withdrew to his cabin, locked the door, and wept.

Cable repair today works more or less the same as in Field’s day. There have been some refinements: ships now hold steady using automated dynamic positioning systems rather than churning paddle wheels in opposite directions, and Field’s pronged anchor has spawned a medieval-looking arsenal of grapnels — long chains called “rennies,” diamond-shaped “flat fish,” spring-loaded six-blade “son of sammys,” three-ton detrenchers with seven-foot blades for digging through marine muck — but at its core, cable repair is still a matter of a ship dragging a big hook along the ocean floor. Newfangled technologies like remotely operated submersibles can be useful in shallow water, but beyond 8,000 feet or so, conditions are so punishing that simple is best…

…Debates about the future of cable repair have become a staple of industry events. They typically begin with a few key facts: the ships are aging; the people are aging; and it’s unclear where the money will come from to turn things around.

For much of the 20th century, cable maintenance wasn’t a distinct business; it was just something giant, vertically integrated telecom monopolies had to do in order to function. As they started laying coaxial cables in the 1950s, they decided to pool resources. Rather than each company having its own repair vessel mostly sitting idle, they divided the oceans into zones, each with a few designated repair ships.

When the telcos were split up at the turn of the century, their marine divisions were sold off. Cable & Wireless Marine became Global Marine. AT&T’s division is now the New Jersey-based SubCom. (Both are now owned by private equity companies; KCS remains a subsidiary of KDDI.) The zone system continued, now governed by contracts between cable owners and ship operators. Cable owners can sign up with a nonprofit cooperative, like the Atlantic Cable Maintenance & Repair Agreement, and pay an annual fee plus a day rate for repairs. In exchange, the zone’s three ships — a Global Marine vessel in Portland, UK, another in Curaçao, and an Orange Marine vessel in Brest, France — will stand ready to sail out within 24 hours of being notified of a fault.

This system has been able to cope with the day-to-day cadence of cable breaks, but margins are thin and contracts are short-term, making it difficult to convince investors to spend $100 million on a new vessel.

“The main issue for me in the industry has to do with hyperscalers coming in and saying we need to reduce costs every year,” said Wilkie, the chair of the ACMA, using the industry term for tech giants like Google and Meta. “We’d all like to have maintenance cheaper, but the cost of running a ship doesn’t actually change much from year to year. It goes up, actually. So there has been a severe lack of investment in new ships.”

At the same time, there are more cables to repair than ever, also partly a result of the tech giants entering the industry. Starting around 2016, tech companies that previously purchased bandwidth from telcos began pouring billions of dollars into cable systems of their own, seeking to ensure their cloud services were always available and content libraries synced. The result has been not just a boom in new cables but a change in the topology of the internet. “In the old days we connected population centers,” said Constable, the former Huawei Marine executive. “Now we connect data centers. Eighty percent of traffic crossing the Atlantic is probably machines talking to machines.”…

…In 2022, the industry organization SubOptic gathered six cable employees in their 20s and 30s for a panel on the future of the industry. Most of them had stumbled into their jobs inadvertently after college, and the consensus was that the industry needed to be much better about raising public awareness, especially among the young.

“I don’t know if anyone saw, but during the pandemic, submarine cables actually went viral on TikTok,” said one panelist, a young cable engineer from Vodafone. “People didn’t know they existed, and then suddenly, out of nowhere, they were viral. I think it’s engaging with youth and children through their own avenues — yes, you can have science museums and things like that, but they are online, they are on their iPads, they’re on their phones.”

“We’ve got some pretty senior decision-makers and influencers in the subsea cable industry here,” said one audience member. “Did any of us know that we went viral on TikTok?” he asked, to laughter.

“As this panel rightfully said upfront, it’s not that we have a brand problem,” said another audience member, “we just don’t have a brand at all.”

4. Looking for AI use-cases – Benedict Evans

I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.

The one really big use-case that took off in 2023 was writing code, but I don’t write code. People use it for brainstorming, and making lists and sorting ideas, but again, I don’t do that. I don’t have homework anymore. I see people using it to get a generic first draft, and designers making concept roughs with MidJourney, but, again, these are not my use-cases. I have not, yet, found anything that matches with a use-case that I have. I don’t think I’m the only one, either, as is suggested by some of the survey data – a lot of people have tried this, especially since you don’t need to spend $12,000 on a new Apple II, and it’s very cool, but how much do we use it, and what for?…

…Suppose you want to analyse this month’s customer cancellations, or dispute a parking ticket, or file your taxes – you can ask an LLM, and it will work out what data you need, find the right websites, ask you the right questions, parse a photo of your mortgage statement, fill in the forms and give you the answers. We could move orders of magnitude more manual tasks into software, because you don’t need to write software to do each of those tasks one at a time. This, I think, is why Bill Gates said that this is the biggest thing since the GUI. That’s a lot more than a writing assistant.

It seems to me, though, that there are two kinds of problem with this thesis.

The narrow problem, and perhaps the ‘weak’ problem, is that these models aren’t quite good enough, yet. They will get stuck, quite a lot, in the scenarios I suggested above. Meanwhile, these are probabilistic rather than deterministic systems, so they’re much better for some kinds of task than others. They’re now very good at making things that look right, and for some use-cases this is what you want, but for others, ‘looks right’ is different to ‘right’…

…The deeper problem, I think, is that no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this…

…The cognitive dissonance of generative AI is that OpenAI or Anthropic say that we are very close to general-purpose autonomous agents that could handle many different complex multi-stage tasks, while at the same time there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL. Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.

I often compared the last wave of machine learning to automated interns. You want to listen to every call coming into the call centre and recognise which customers sound angry or suspicious: doing that didn’t need an expert, just a human (or indeed maybe even a dog), and now you could automate that entire class of problem. Spotting those problems and building that software takes time: machine learning’s breakthrough was over a decade ago now, and yet we are still inventing new use-cases for it – people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that solution.
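
That call-centre example is now almost boilerplate pattern recognition. Here is a minimal sketch with scikit-learn; the transcripts and labels are made-up, hypothetical data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: call transcripts labelled angry (1) or not (0).
calls = ["this is outrageous, cancel everything now",
         "thanks, that fixed my problem",
         "I've been on hold for an hour, this is unacceptable",
         "great service, have a nice day"]
labels = [1, 0, 1, 0]

# Classic pre-LLM pattern recognition: bag-of-words features plus a linear model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(calls, labels)

# With real data you would score every incoming call; here, a likely "angry" hit.
print(model.predict(["this is unacceptable and outrageous"]))
```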

You could propose the current wave of generative AI as giving us another set of interns, that can make things as well as recognise them, and, again, we need to work out what. Meanwhile, the AGI argument comes down to whether this could be far, far more than interns, and if we had that, then it wouldn’t be a tool anymore.

5. TIP622: Finding Certainty In An Uncertain World w/ Joseph Shaposhnik – Clay Finck and Joseph Shaposhnik

[00:29:29] Joseph Shaposhnik: I think of the credit bureaus and I think of a partner of theirs, which we’ll spend a minute talking about in a second. But as you may know, with the credit bureaus, there are three of them in the United States. And they run an incredible oligopoly. If you want to secure a mortgage, get a car loan, or rent a home, they’re involved in all of those decision-making situations by the owners of those assets.

[00:29:55] Joseph Shaposhnik: As an example, if you go for a mortgage, all three credit bureaus will be pinged to get a score on you. All of them will be paid a couple of dollars for that score, and all of that information that they’re pulling is contributory data. So there’s a relatively insignificant amount of incremental cost to generate that score and deliver it to the customer.

[00:30:21] Joseph Shaposhnik: You know, it’s a 95% incremental margin business. I mean, this is an incredible business. It’s basically an override on all economic activity in the United States and outside the United States where they play. And they’re just incredible businesses. But surprisingly not incredible stocks. You know, how could that be?
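
To unpack “95% incremental margin” with hypothetical numbers (ours, not Shaposhnik’s): once the contributory data exists, each extra score costs almost nothing to produce.

```python
# Hypothetical figures for illustration only: say each extra credit score
# sells for $2.00 and costs $0.10 to compute and deliver.
price, incremental_cost = 2.00, 0.10
incremental_margin = (price - incremental_cost) / price
print(f"incremental margin: {incremental_margin:.0%}")   # -> 95%
```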

[00:30:40] Joseph Shaposhnik: It’s shocking. To give you a sense, organic growth, if you look back over the last 5 years, has been approximately 7% a year for these businesses. So, 3 or 4 times global GDP or U.S. GDP. They’ve outgrown the average S&P business over that period of time. They started with 30% EBITDA margins at the beginning of the 5 year period, so very profitable businesses.

[00:31:09] Joseph Shaposhnik: Yet over the last five years, two out of the three credit bureaus have underperformed the S&P, and over a 10 year period, they’ve been just in line performers with the S&P and so, I mean, they run an oligopoly. How could that possibly be? I used to be the credit bureau analyst at TCW, so I’m very familiar with these businesses, and they’re just incredible companies.

[00:31:33] Joseph Shaposhnik: And what happened is all three of these businesses spent more money on M&A than they generated in free cash flow over that five year period of time. They spent more money on M&A than all of the free cash flow they generated over the last five years. And they generate a lot of free cash flow. And let me say, let me just tell you, this was not synergistic M&A.

[00:31:57] Joseph Shaposhnik: This was, I mean, they would call it synergistic, but it’s very difficult to synergize a near utility that they operate. And instead of just sticking to their knitting, they decided to acquire a lot of different data assets that were incredibly expensive, generally from private equity, which doesn’t give assets away.

[00:32:18] Joseph Shaposhnik: And those returns are always, the returns on those businesses are always going to be lower than the returns on this incredible oligopoly that they run. And so, interestingly, of course, margins have been under pressure and returns have gone way down for these businesses because of all the acquisitions, these poor acquisitions at high multiples.

[00:32:42] Joseph Shaposhnik: And one of the most surprising things is we looked at the data on this, two out of the three businesses engaged in near-zero share repurchases over that five year period of time. So you have this incredible business, you know, these three businesses that run an oligopoly, basically just an override on all economic activity.

[00:33:03] Joseph Shaposhnik: And they find all of these other businesses more attractive to allocate capital to than their own business, which is a 95% incremental margin business. Incredible. No wonder the stocks have not performed well, even though those businesses and those stocks should be like shooting fish in a barrel.

[00:33:20] Joseph Shaposhnik: So it’s incredible: two out of the three businesses bought back no meaningful amount of stock. And not surprisingly, those businesses underperformed. In contrast to that, they have a partner, which is Fair Isaac. And Fair Isaac, whose ticker is FICO, provides the formula to the credit bureaus, which generates the score.

[00:33:44] Joseph Shaposhnik: The credit bureaus contribute the data, and the data with the formula creates a score that they can then sell to their end customers. So the bureaus pay FICO a fee for the formula, and they take the formula, and they generate a score, and they sell it to their customer. So you would think that FICO, sitting in this same ecosystem, has similar growth dynamics, similar returns going into that 5 year period of time, similar EBITDA margins, is tied to the same end markets, a relatively similar company.

[00:34:17] Joseph Shaposhnik: Yet, over that five year period of time, FICO took all of its free cash flow, all of it, and used it to repurchase its shares. And so over the last five years, FICO has reduced share count by 20%, has engaged in no meaningful acquisitions to dilute its incredible franchise, and has generated a five bagger over the last five years.

[00:34:44] Joseph Shaposhnik: compared to the bureaus that have generated 15 to 100% return, total return over that 5 year period of time. So, a 5 bagger, which has outperformed the market by a ton, compared to an underperforming or an inline performance for the bureaus, I think just tells the tale of how important great capital allocation decision making is, how important it is to be aligned with a management team that understands how to generate value for shareholders.

[00:35:13] Joseph Shaposhnik: And I think for us and for everybody, it serves as a warning when we think about investing with teams that are acquiring businesses in general, and certainly acquiring businesses that are not as attractive as the core business. So capital allocation makes or breaks stories all the time, and incentives generally drive these decisions, but oftentimes it just takes an investor-oriented CEO to see the big opportunity, which is usually in its core and not far afield.
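
The buyback arithmetic behind that contrast is worth spelling out. In the sketch below, the earnings figure is hypothetical; only the 20% share-count reduction comes from the interview.

```python
# With flat earnings, retiring 20% of the shares lifts earnings per share by 25%.
earnings = 100.0                             # hypothetical net income, $m
shares_before, shares_after = 100.0, 80.0    # 20% reduction over five years
eps_before = earnings / shares_before
eps_after = earnings / shares_after
print(f"EPS uplift from buybacks alone: {eps_after / eps_before - 1:.0%}")   # -> 25%
```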


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 05 May 2024:

1. Karen Karniol-Tambour on investment anomalies at Sohn 2024 (transcript here) – Karen Karniol-Tambour and Jawad Mian

Jawad Mian (00:30): So 6 months ago equities were rallying in anticipation of lower interest rates, but now we’ve seen year-to-date equities are rallying despite higher bond yields. So with a strong economy and inflation less of an issue, are you reverting to the typical inverse relationship between equities and bonds?

Karen Karniol-Tambour (00:49): The relationship between equities and bonds – it’s not an immutable fact of life. It’s not just a thing that occurs. It’s a function of the fundamental building blocks in stocks and bonds. When you look at stocks and bonds, they have a lot of things in common. They’re all future cash flows you’re discounting to today. So if you raise that [discount rate], it’s bad for both, and they both don’t do great when inflation is strong. The real inverse comes from their reaction to growth, for the reason you’re saying. If growth is strong, then you can get equities rising and at the same time you can actually get the central bank tightening in response to that growth, which is bad for the bonds. And actually, the anomaly has been the years leading up to 2022, where inflation was just a non-factor and the only dominant macro issue was growth. And so we’ve gotten really used to the idea that stocks and bonds have this inverse relationship. But that’s actually the anomaly. It’s not that normal to have a world where inflation just doesn’t matter. And finally, we lived through this period where it’s like, “Wait a minute, inflation, its gravitational pull was at such a low level it was irrelevant – it’s becoming relevant again.” And we got this positive correlation where they both did badly, because you need to tighten in response to that inflation rearing its head.
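
Her discounting point can be made concrete with a toy present-value calculation on hypothetical cash flows: raise the discount rate and the value of both the bond and the stock falls, which is the mechanism behind the positive stock-bond correlation she describes.

```python
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

bond = [4.0] * 9 + [104.0]                    # 4% coupon bond, $100 face value
stock = [5.0 * 1.05 ** t for t in range(10)]  # dividends growing 5% a year

for r in (0.04, 0.06):                        # the discount rate rises two points
    print(f"rate={r:.0%}  bond={present_value(bond, r):6.1f}  "
          f"stock={present_value(stock, r):6.1f}")
# Both present values drop as the rate rises; inflation that forces tightening
# therefore hits stocks and bonds at the same time.
```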

Today – knock on wood – we look like we’re back to a world where inflation is not a non-issue, but it’s not a dominant issue, where we can have the kind of market action we’ve enjoyed so far in 2024, where we find out growth’s pretty damn resilient, growth’s doing great, companies can do well, earnings can do well, and at the same time the Fed can ease less than expected or tighten relative to expectations. If they were tightening to stop very bad inflation, that would be a very different outcome. So the fundamental question as an investor is sort of: where is the gravitational pull of inflation going to be? Is this going to be a major topic that then leads stocks and bonds sometimes to act the same way? Or is it going to go back to being kind of a non-issue?…

…Mian (02:53): A second anomaly. For the last 50 years, we’ve seen the US budget deficit average around 3% and it’s projected to be 6% over the next decade. So far we have seen markets being willing to finance these record deficits, in contrast to the UK for example. How come?

Karniol-Tambour (03:11): I think the best answer to this starts with the current account deficit, because obviously that’s part of who’s buying all the bonds we’re issuing. And it is a really weird anomaly, because the United States is buying way more foreign goods than foreigners are buying ours. And typically if countries do that, their currency is weak, because they have to convince someone to hold all the currency on the other side of that, so they have to attract all this financing. But the United States is running a massive current account deficit and yet the dollar is strong, because what’s happening on the other end is that people are just so enthusiastic about buying dollar financial assets. It’s so extreme that I think the United States has kind of a version of Dutch disease.

So the classic Dutch disease is, you’re Saudi Arabia, you have oil. No one’s buying oil because you’re Saudi Arabia. No one’s thinking, “I really want Saudi oil.” They just need to fill up their car. So whatever the gas is, the gas is. But as Saudi Arabia, you get uncompetitive outside of it because money’s flooding in just for your oil, for nothing else. The United States has kind of become that on financial assets, which is people aren’t really thinking “I just want US financial assets.” It’s just that United States financial assets have done so well, they’re the dominant part of the index in stocks and in bonds. So anyone that needs to save any money around the world just ends up in US assets. As long as you care at all about market cap – which anyone reasonable would – and you’re going to the big market around the world, if you’re saving, you’re giving the United States money. And so we’re ending up with this flood of money that is a huge anomaly where we actually have a rising currency making everything else kind of uncompetitive, because people just want to buy stocks and bonds and no one else enjoys that. So we can run these huge deficits and sort of not worry about it.

2. Remembering Daniel Kahneman: A Mosaic of Memories and Lessons – Evan Nesterak and many others

To be continued …

By Richard Thaler, Professor of Behavioral Science and Economics, University of Chicago

My fondest memories of working with Danny come from 1984 to ’85 when I spent a year visiting him in Vancouver at The University of British Columbia. Danny had just begun a new project with Jack Knetsch on what people think is fair in market transactions and they invited me to join them. We had the then-rare ability to ask survey questions of a few hundred randomly selected Canadians each week. We would draft three versions of five questions, fax them to Ottawa Monday morning, and get the results faxed back to us Thursday afternoon. Who needs MTurk! We then spent the weekend digesting the results and writing new questions.

We learned that raising the price of snow shovels the morning after a blizzard might make sense to an economist, but would make customers angry. Danny displayed two of his most prominent traits. He was always a skeptic, even (especially?) about his own ideas, so we stress-tested everything. And he was infinitely patient in that pursuit. Was our finding just true for snow shovels? What about water after a hurricane? Flu medicine? How about late-season discounts (which of course are fine). It was total immersion; meeting in person several times a week and talking constantly. We were in the zone.

Although we spent another year together in New York seven years later, we were unable to recreate that intensity. We had too many other balls in the air. But we continued our conversations and friendship until the end. Every conversation ended the same way: “To be continued.”…

...I’m more like a spiral than a circle

By Dan Lovallo, Professor of Strategy, Innovation and Decision Sciences, University of Sydney

Many people have heard that Danny changes his mind—a lot. This is certainly true. I have never written even a 5,000-word essay with him that didn’t take a year. Let me add another dimension to the discussion. During our last working dinner at a bistro in New York, and possibly out of mild frustration, I said, “Danny, you know you change your mind a lot.” It wasn’t a question. He continued chewing. I continued my line of non-question questioning: “And often you change it back to what it was at the beginning.”

Danny, having finished his bite and without missing a beat, looked up and in his characteristic lilt said, “Dan, that’s when I learn the most.” Then using his finger he drew a circle in space. “I don’t go around and around a problem. It might seem like it, but I am getting deeper and deeper.” The circle morphed into a three-dimensional spiral. “So, you’re missing all the learning,” he explained, as he displayed the invisible sculpture. “I’m more like a spiral than a circle.” Happy with this new idea, Danny grinned as only Danny could…

A case in character

By Angela Duckworth, Professor of Psychology, University of Pennsylvania

One evening, more than twenty years ago, I was the last one in the lab when the phone rang. “Hello?” I said, I hope not brusquely. I was a Ph.D. student at the time and eager to get back to my work. “Hello?” came the reply of an uncommonly polite older gentleman, whose accent I couldn’t quite place. “I’m so sorry to trouble you,” he continued. “I believe I’ve just now left my suitcase there.” Ah, this made sense. We’d hosted an academic conference that day. “It’s a terrible inconvenience, I know, but might you keep it somewhere until I can return to pick it up?” “Sure,” I said, cradling the receiver and grabbing a notepad. “How do you spell your name?” “Thank you so very much. It’s K-A-H-N-E-M-A-N.” I just about fainted. “Yes, Dr. Kahneman,” I said, coming to my senses, likely more deferentially than when I’d first picked up.

When I hung up, I thought to myself, Oh, it’s possible to be a world-famous genius—the most recently anointed Nobel laureate in economics, among other honors—and interact with anybody and everybody with utmost respect and dignity, no matter who they are. In the years that followed, I got to know Danny Kahneman much better, and when I did, that view was only confirmed. Confirmation bias? Halo effect? No and no. What then? Character. The world is mourning the loss of Danny Kahneman the genius, as we should, but I am missing Danny Kahneman the person…

Anxious and unsure

By Eric Johnson, Professor of Business, Columbia University

A few months before the publication of Thinking, Fast and Slow in 2011, the Center for Decision Sciences had scheduled Danny to present in our seminar series. We were excited because he had decided to present his first “book talk” with us. Expecting a healthy crowd, we scheduled the talk in Uris 301, the biggest classroom in Columbia Business School.

I arrived in the room a half hour early to find Danny, sitting alone in the large room, obsessing over his laptop. He confided that he had just changed two-thirds of the slides for the talk and was quite anxious and unsure about how to present the material. Of course, after the introduction, Danny presented in his usual charming, erudite style, communicating the distinction between System 1 and System 2 with clarity to an engaged audience. Afterwards, I asked him how he thought it went, and he said, “It was awful, but at least now I know how to make it better.” Needless to say, the book went on to become an international bestseller.

This was not false modesty. Having studied overconfidence throughout his career, Danny seemed immune to its effects. While surely maddening to some coauthors, this resulted in work that was more insightful and, most importantly to Danny and to us, correct. He was not always right, but always responsive to evidence, supportive or contradictory. For example, when some of the evidence cited in the book was questioned as a result of the replication crisis in psychology, Danny revised his opinion, writing in the comments of a critical blog: “I placed too much faith in underpowered studies.”

The best tribute to Danny, I believe, is adopting this idea: that science, and particularly the social sciences, is not about seeming right but about being truthful…

Practical problem solving

By Todd Rogers, Professor of Public Policy, Harvard University

I was part of a group helping some political candidates think about how to respond to untrue attacks by their political rivals. We focused on what cognitive and social psychology said about persuasive messaging. Danny suggested a different emphasis I hadn’t considered.

He directed us to a literature in cognitive psychology on cognitive associations. Once established, associations cannot simply be severed; attempting to directly refute them often reinforces them, and logical arguments alone can’t undo them. But these associations can be weakened when other competing associations are created.

For instance, if falsely accused of enjoying watching baseball, I’d be better off highlighting genuine interests—like my enjoyment of watching American football or reality TV—to dilute the false association with baseball. This anecdote is one small example of the many ways Danny’s profound intellect has influenced practical problem-solving. He’ll be missed and remembered.

Premortems

By Michael Mauboussin, Head of Consilient Research, Morgan Stanley

The opportunity to spend time with Danny and the chance to interview him were professional delights. One of my favorite lessons was about premortems, a technique developed by Gary Klein that Danny called one of his favorite debiasing techniques. In a premortem, a group assumes that they have made a decision (which they have yet to do), places themselves in the future (generally a year from now), and pretends that it worked out poorly. Each member independently writes down the reasons for the failure.

Klein suggested that one of the keys to premortems was the idea of prospective hindsight, that putting yourself into the future and thinking about the present opens up the mind to unconsidered yet relevant potential outcomes. I then learned that the findings of the research on prospective hindsight had failed to replicate—which made me question the value of the technique.

Danny explained that my concern was misplaced and that prospective hindsight was not central to the premortem. Rather, it was that the technique legitimizes dissent and allows organizations the opportunities to consider and close potential loopholes in their plans. That I had missed the real power of the premortem was a revelation and a relief, providing me with a cherished lesson…

Eradicating unhappiness

By George Loewenstein, Professor of Economics and Psychology, Carnegie Mellon University

For Danny, research was intensely personal. He got into intellectual disputes with a wide range of people, and these would hurt him viscerally, in part because it pained him that people he respected could come to different conclusions from those he held so strongly. He came up with, or at least embraced, the concept of “adversarial collaboration” in which researchers who disagreed on key issues would, however, agree upon a definitive test to determine where reality lay. A few of these were successful, but others (I would say most) ended with both parties unmoved, perhaps reflecting Robert Abelson’s insight that “beliefs are like possessions,” and, hence subject to the endowment effect.

I was spending time with Danny when he first got interested in hedonics—happiness—and that was a personal matter as well. His mother was declining mentally in France, and he agonized about whether to visit her; the issue was that she had anterograde amnesia, so he knew that she would forget his visit as soon as it ended. The criterion for quality of life, he had decided, should be the integral of happiness over time; so that—although she would miss out on the pleasure of remembering it—his visit would have value if she enjoyed it while it was happening.

Showing the flexibility of his thinking, and his all-too-rare willingness to learn from the data, his perspective changed as he studied happiness. He became more concerned about the story a life tells, including, notably, its peak and end; he concluded that eradicating unhappiness was a more important goal than fostering happiness, and began to draw a sharp distinction between happiness and life satisfaction, perhaps drawing, again, on his own experience. He always seemed to me to be extremely high in life satisfaction, but considerably less so in happiness.

3. Paradox of China’s stock market and economic growth – Glenn Luk

Joe Weisenthal of Bloomberg and the Odd Lots posed this question on Twitter/X:

“Given that the stock market hasn’t been especially rewarding to the volume-over-profits strategy undertaken by big Chinese manufacturers, what policy levers does Beijing have to sustain and encourage the existing approach?”

Many people may have noticed that despite the impressive growth of Chinese manufacturers in sectors like electric vehicles, the market capitalizations of these companies are dwarfed by Tesla’s. This seeming paradox lies at the heart of the question posed by Joe.

In 2020, I shared an observation that China cares a lot more about GDP than market capitalization. I was making this observation in the context of Alibaba but would soon broaden it to encapsulate many more situations. In sharp contrast to Americans, Beijing just does not seem to care that much about equity market valuations but does seem to care very much about domestic growth and economic development…

…With respect to private sector market forces, Chinese policymakers tend to see their role as coordinators of an elaborate “game” that is meant to create an industry dynamic that drives desired market behaviors. The metaphor I sometimes use is the Dungeon Master role in Dungeons & Dragons.

These “desired market behaviors” tend to overwhelmingly revolve around this multi-decade effort to maximize economic development and growth. Beijing has been very consistent about the goal to become “fully developed” by the middle of the 21st century.

To date, I would say that Chinese policymakers have been relatively successful using the approaches and principles described above to drive economic growth:

  • Priority on labor over capital / wage growth over capital income growth. Prioritizing labor is a key pillar of China’s demand-side support strategy. Growth in household income drives growth in domestic demand (whether in the form of household gross capital formation or expenditures).
  • Setting up rules to foster competitive industry dynamics and motivate economic actors to reinvest earnings back into growth.
  • Periodic crackdowns to disrupt what is perceived to be rent-seeking behavior, particularly from private sector players that have accumulated large amounts of equity capital (vs. small family businesses):
    • Anti-competitive behavior (e.g. Alibaba e-commerce dominance in the late 2010s)
    • Regulatory arbitrage (moral hazards inherent in Ant Financial’s risk-sharing arrangement with SOE banks)
    • Societal effects (for-profit education driving “standing on tiptoes” approach to childhood education)
  • Supply-side support to encourage dynamic, entrepreneurial participation from private sector players, as in the clean energy transition, to drive rapid industry growth through scale and scale-related production efficiencies. China has relied on supply-side strategies to support economic growth for decades despite repeated exhortations by outsiders to implement OECD-style income transfers.
  • Encouraging industry consolidation (vs. long drawn-out bankruptcies) once sectors have reached maturity although there are often conflicting motivations between Beijing and local governments.

A consistent theme is Beijing’s paranoia about rent-seeking behavior by capitalists (especially those who have accumulated large amounts of capital). It is sensitive to the potential stakeholder misalignment that arises because capitalists are primarily aligned with one stakeholder class, through their fiduciary duty to equity owners.

It would prefer that rent-seeking behavior be handled by the party instead, whose objective (at least in theory) is to distribute these rents back to “The People” — although naturally in practice it never turns out this way; Yuen Yuen Ang has written multiple volumes about the prevalence of Chinese-style corruption and its corrosive economic effects.

So to bring it back to Joe’s question, the answer on whether Chinese policymakers can continue these policies going forward very much revolves around this question of rent-seeking: is it better to be done by the government or by private sector capitalists? What should be abundantly clear is that Beijing is definitive on this question: the party will maintain a monopoly on rent-seeking.

4. What Surging AI Demand Means for Electricity Markets – Tracy Alloway, Joe Weisenthal, and Brian Janous

Brian (09:58):

Yeah, and you’re right, I mean it’s not like we didn’t know that Microsoft had a partnership with OpenAI and that AI was going to consume energy. I think everyone though was a bit surprised at just how quickly what ChatGPT could do just captured the collective consciousness.

You probably remember when that was released. I mean, it really sort of surprised everyone, and it became this thing where suddenly, even though we sort of knew what we were working on, it wasn’t until you put it out into the world that you realized what you’d created. That’s where we realized we were running up this curve of capability a lot faster than we thought. Think of the number of applications getting built on this, the number of different ways it’s being used, and how it’s just become sort of common parlance. I mean, everyone knows what ChatGPT is, and no one knew what it was the month before that.

So there was, I think, a bit of a surprise in terms of just how quickly it was going to capture the collective consciousness and then obviously lead to everything that’s being created as a result. And so we just moved up that curve so quickly, and I think that’s where the industry, certainly the utilities, got behind, because as you may have seen, a lot of them are starting to restate their load-growth expectations.

And that was something that was not happening right before that. And so we’ve had massive changes just in the last two years in how utilities forecast load growth. So if you take a look at a utility like Dominion in Virginia, which serves the largest concentration of data centers in the United States, they’re a pretty good representative of what’s happening. If you go back to 2021, they were forecasting load growth over a period of 15 years of just a few percent.

I mean, it was single-digit growth over that entire period. So not yearly growth, but single-digit growth over 15 years. By 2023, they were forecasting to grow 2X over 15 years. Now keep in mind this is an electric utility. They do 10-year planning cycles. Because they have very long lead times for equipment and for getting rights of way for transmission lines, they aren’t companies that easily respond to a 2X growth change over a period of 15 years.

I mean, that is a massive change for an electric utility, particularly given the fact that the growth rate over the last 15 to 20 years has been close to zero. So there’s been essentially no load growth in 15 to 20 years. Now suddenly you have utilities having to pivot to doubling the size of their system in that same horizon.
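
To put those two forecasts on a common footing, a quick sketch converts cumulative growth into implied annual rates; the 5% cumulative figure below is an assumed stand-in for the “single digits over 15 years” in the transcript.

```python
# Back-of-envelope check on the forecast revision Brian describes.

def implied_annual_growth(total_multiple: float, years: int) -> float:
    """Convert a cumulative growth multiple into an equivalent annual rate."""
    return total_multiple ** (1 / years) - 1

old_plan = implied_annual_growth(1.05, 15)  # assumed ~5% total over 15 years
new_plan = implied_annual_growth(2.00, 15)  # doubling over 15 years

print(f"2021-style forecast: {old_plan:.2%} per year")  # ~0.33% per year
print(f"2023-style forecast: {new_plan:.2%} per year")  # ~4.73% per year
```

On these assumptions, the annual growth rate the system must absorb rises by more than an order of magnitude, which is why 10-year planning cycles struggle with it.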

Tracy (13:10):

I want to ask a very basic question, but I think it will probably inform the rest of this conversation: when we say that AI consumes a lot of energy, where is that consumption actually coming from? And Joe touched on this in the intro, but is it the sheer scale of users on these platforms? Is it, I imagine, the training that you need in order to develop these models? And then does that energy usage differ in any way from more traditional technologies?

Brian (13:43):

Yeah, so whenever I think about the consumption of electricity for AI or really any other application, I think you have to start at sort of the core of what we’re talking about, which is really the human capacity for data. Whether it’s AI or cloud, humans have a massive capacity to consume data.

And if you think about where we are in this curve, I mean we’re on some form of S-curve of human data consumption, which then directly ties to data centers, devices, energy consumption ultimately, because what we’re doing is we’re turning energy into data. We take electrons, we convert them to light, we move them around to your TV screens and your phones and your laptops, etc. So that’s the uber trend that we’re riding up right now. And so we’re climbing this S-curve. I don’t know that anyone has a good sense of how steep or how long this curve will go.

If you go back to look at something like electricity, it was roughly a hundred-year S-curve that started at the beginning of the last century. And it really started to flatline, as I mentioned before, towards the beginning of this century. Now we have this new trajectory that we’re entering, this new S-curve that’s going to change that narrative. But that S-curve for electricity took about a hundred years.

No one knows where we are on that data curve today. So when you inject something like AI, you create a whole new opportunity for humans to consume data, to do new things with data that we couldn’t do before. And so you accelerate us up this curve. So we were sitting somewhere along this curve, AI comes along and now we’re just moving up even further. And of course that means more energy consumption because the energy intensity of running an AI query versus a traditional search is much higher.

Now, what you can do with AI obviously is also much greater than what you can do with a traditional search. So there is a positive return on that invested energy. Oftentimes when this conversation comes up, there’s a lot of consternation and panic over ‘Well, what are we going to do? We’re going to run out of energy.’

The nice thing about electricity is we can always make more. We’re never going to run out of electricity. That’s not to say that there aren’t times when the grid is under constraint and you have risks of brownouts and blackouts. That’s the reality. But we can invest more in transmission lines, we can invest more in power plants, and we can create enough electricity to match that demand.
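
For a rough sense of scale on the query-energy gap discussed here, the sketch below uses commonly cited ballpark estimates of roughly 0.3 Wh for a conventional web search and about ten times that for an LLM query; both figures and the query volume are assumptions for illustration, not numbers from this episode.

```python
# Sizing the per-query energy gap under assumed, commonly quoted estimates.

SEARCH_WH = 0.3   # rough public estimate for a traditional web search
LLM_WH = 3.0      # rough public estimate for a single LLM query

QUERIES_PER_DAY = 1_000_000_000  # hypothetical daily volume

def daily_mwh(wh_per_query: float) -> float:
    """Total daily energy in megawatt-hours at the assumed query volume."""
    return wh_per_query * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh

print(f"Search-style load: {daily_mwh(SEARCH_WH):,.0f} MWh/day")  # ~300
print(f"LLM-style load:    {daily_mwh(LLM_WH):,.0f} MWh/day")     # ~3,000
```

The absolute numbers are invented, but the tenfold multiplier is the point: shifting the same query volume from search-style to LLM-style serving adds grid-scale load.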

Joe (16:26):

Just to sort of clarify a point and adding on to Tracy’s question, you mentioned that doing an AI query is more energy intensive than, say, if I had just done a Google search or if I had done a Bing search or something like that. What is it about the process of delivering these capabilities that makes it more computationally intensive or energy intensive than the previous generation of data usage or data querying online?

Brian (16:57):

There’s two aspects to it, and I think we sort of alluded to it earlier, but the first is the training. So the first is the building of the large language model. That itself is very energy intensive. These are extraordinarily large machines, collections of machines that use very dense chips to create these language models that ultimately then get queried when you do an inference.

So then you go to ChatGPT and you ask it to give you a menu for a dinner party you want to have this weekend; it’s then referencing that large language model and creating this response. And of course that process is more computationally intensive because it’s doing a lot more things than a traditional search does. A traditional search just matched the words you put in against a database of knowledge it had put together, but these large language models are much more complex, and therefore the things you’re asking them to do are more complex.

So it will almost by definition be a more energy intensive process. Now, that’s not to say that it can’t get more efficient and it will, and Nvidia just last week was releasing some data on some of its next generation chips that are going to be significantly more efficient than the prior generation.

But one of the things that we need to be careful of is to think that because something becomes more efficient, we’re therefore going to use less of the input resource, in this case electricity. That’s not how it works, because, going back to the concept of human capacity for consuming data, all we do is we find more things to compute. And you’ve probably heard of Jevons paradox. Jevons was an economist in the 1800s, and the thinking at the time was, ‘Well, if we make more efficient steam engines, then we’ll use less coal.’

And he said, ‘No, that’s not what’s going to happen. We’re going to use more coal, because we’re going to mechanize more things.’ And that’s exactly what we do with data. We’ve had Moore’s Law for years, and chips have become incredibly more efficient than they were decades ago, but we didn’t use less energy. We used much more energy, because we could put chips in everything.

So that’s the trend line that we’re on. We’re still climbing that curve of consumption. And no amount of efficiency is going to take us off of continuing to consume more electricity, at least in the near term, because I don’t believe we’re anywhere close to the bend in that S-curve…
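
Jevons’ logic can be made concrete with a one-parameter demand curve. The sketch below is a minimal illustration under assumed numbers: when demand for compute is elastic enough (elasticity above 1), a doubling of efficiency increases total energy use.

```python
# A minimal sketch of Jevons' paradox with a constant-elasticity demand
# curve; all parameters are assumptions for illustration.

def energy_use(efficiency: float, elasticity: float) -> float:
    cost_per_unit_compute = 1.0 / efficiency       # efficiency cuts unit cost
    compute_demanded = cost_per_unit_compute ** -elasticity
    return compute_demanded / efficiency           # energy actually consumed

for elasticity in (0.5, 1.0, 1.5):
    before = energy_use(1.0, elasticity)
    after = energy_use(2.0, elasticity)
    print(f"elasticity {elasticity}: energy {before:.2f} -> {after:.2f}")
# elasticity 0.5: 1.00 -> 0.71  (efficiency saves energy)
# elasticity 1.0: 1.00 -> 1.00  (a wash)
# elasticity 1.5: 1.00 -> 1.41  (Jevons: more efficient, yet more energy)
```

Brian’s claim, in these terms, is that our appetite for compute is still in the elastic regime, so efficiency gains feed consumption rather than replacing it.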

…Brian (22:35):

Well, this is where it gets a little concerning is that you have these tech companies that have these really ambitious commitments to being carbon neutral, carbon negative, having a hundred percent zero carbon energy a hundred percent of the time, and you have to give them credit for the work they’ve done.

I mean, that industry has done amazing work over the last decade to build absolutely just gigawatts upon gigawatts of new renewable energy projects in the United States all over the world. They’ve been some of the biggest drivers in the corporate focus on decarbonization. And so you really have to give that industry credit for all it’s done and all the big tech companies have done some amazing work there.

The challenge, though, is that the environment they did that in was the no-growth environment we were talking about. They were all growing, but they were starting from a relatively small denominator 10 or 15 years ago. And so there was a lot of overhang in the utility system at that time, because the utilities had overbuilt ahead of that flatlining. So there was excess capacity on the system.

They were growing inside of a system that wasn’t itself growing on a net basis. So everything they did, every new wind project they brought on, every new solar project they brought on, those were all incrementally reducing the amount of carbon in the system. It was all net positive.

Now we get into this new world where their growth rates are exceeding what the utilities had ever imagined in terms of the absolute impact on the system. The utilities’ response is ‘The only thing we can do in the time horizon that we have is basically build more gas plants or keep online gas plants or coal plants that we were planning on shuttering.’

And so now the commitments that they have, to zero-carbon energy, to being carbon negative, etc., are coming into conflict with the response that the utilities are laying out in what are called integrated resource plans, or IRPs.

And we’ve seen this recently just last week in Georgia. We’ve seen it with Duke in North Carolina, and Dominion in Virginia. Every single one of those utilities is saying, ‘With all the demand that we’re seeing coming into our system, we have to put more fossil fuel resources on the grid. It’s the only way that we can manage it in the time horizon we have.’ Now, there’s a lot of debate about whether that is true, but it is what’s happening…

…Brian (30:29):

That’s right. And that’s the big challenge that good planners have today is what loads do you say yes to and what are the long-term implications of that? And we’ve seen this play out over the rest of the globe where you’ve had these concentrations of data centers. This is a story that we saw in Dublin, we’ve seen it in Singapore, we’ve seen it in Amsterdam.

And these governments start to get really worried: ‘Wait a minute, we have too many data centers as a percentage of overall energy consumption.’ And what inevitably happens is a move towards putting either moratoriums on data center build-out or putting very tight restrictions on what they can do and the scale at which they can do it. And so we haven’t yet seen that to any material degree in the United States, but I do think that’s a real risk, and it’s a risk that the data center industry faces.

I think somewhat uniquely so, in that if you’re the governor of a state and you have a choice between giving power to, say, a new EV factory that’s going to produce 1,500 to 2,000 jobs, versus a data center that’s going to produce significantly fewer jobs than that, you’re going to give it to the factory. The data centers are actually the ones that are going to face likely the most constraints as governments, utilities, and regulators start wrestling with this trade-off of ‘Ooh, we’re going to have to say no to somebody.’…

…Tracy (36:36):

What are the levers specifically on the tech company or the data center side? Because again, so much of the focus of this conversation is on what can the utilities do, what can we do in terms of enhancing the grid managing supply more efficiently? But are there novel or interesting things that the data centers themselves can do here in terms of managing their own energy usage?

Brian (37:02):

Yes. There’s a few things. I mean, one is data centers have substantial ability to be more flexible in terms of the power that they’re taking from the grid at any given time. As I mentioned before, every data center or nearly every data center has some form of backup generation. They have some form of energy storage built into this.

So the way a data center is designed, it’s designed like a power plant with an energy storage plant that just happens to be sitting next to a room full of servers. And so when you break it down into those components, you say, okay, well how can we better optimize this power plant to be more of a grid resource? How can we optimize the storage plant to be more of a grid resource? And then in terms of even the servers themselves, how can we optimize the way the software actually operates and is architected to be more of a grid resource?

And that sort of thinking is what is being forced on the industry. Frankly, we’ve always had this capability. We did a project around 2016 with a utility where we put in flexible gas generators behind our meter, because the utility was going to have to build a new power plant if we didn’t have a way to be more flexible.

So we’ve always known that we can do this, but the industry has never been pressured to really think innovatively about how we can utilize all these assets that we have inside of the data center plant itself to be more a part of the grid. So I think the most important thing is really thinking about how data centers become more flexible. There’s a whole ‘nother line of thinking, which is this idea of, well, utilities aren’t going to move fast enough, so data centers just need to build all their own power plants.

And this is where you start hearing about nuclear and SMRs and fusion, which is interesting, except it doesn’t solve the problem this decade. It doesn’t solve the problem that we’re facing right now, because none of that stuff is actually ready for prime time. We don’t have an SMR that we can build today predictably on time and on budget.

So we are dependent on the tools that we have today, which are things like batteries, grid-enhancing technologies, flexible load, and reconductoring transmission lines to get more power over existing rights of way. So there’s a number of things we can do with technologies we have today that are going to be very meaningful this decade, and we should keep investing in things that are going to be really meaningful next decade. I’m very bullish on what we can do with new forms of nuclear technology. They’re just not relevant in the time horizon of the problem we’re talking about [now].

Joe (39:52):

At some point, we’re going to do an Odd Lots episode specifically on the promise of small modular reactors and why we still don’t have them despite the seeming benefits. But do you have a sort of succinct answer for why this sort of seeming solution of manufacturing them faster, etc., has not translated into anything in production?

Brian (40:14)

Well, quite simply, we just forgot how to do it. We used to be able to build nuclear in this country. We did it in the seventies, we did it in the eighties, but every person that was involved in any one of those projects is either not alive or certainly not still a project manager at a company that would be building nuclear plants, right?

I think we underestimate human capacity to forget things. Just because we’ve done something in the past doesn’t mean that we necessarily can do it. Again, we have to relearn these things, and as a country, we do not have a supply chain. We don’t have a labor force. We don’t have people that manage construction projects that know how to do any of these things.

And so when you look at what South Korea is doing, you look at what China’s doing, they’re building nuclear plants with regularity. They’re doing it at a very attractive cost. They’re doing it on a predictable time horizon, but they have actually built all of those resources that we just simply don’t have in this country that we need and we need to rebuild that capability. It just doesn’t exist today…

…Brian (41:50):

Absolutely. And so if you go back to the era that we’ve been in of relative no load growth, if you’re a utility regulator and utility comes and asks you for a billion dollars for new investment and you’re used to saying ‘no,’ you’re used to saying ‘Well, wait a minute. Why do you need this? What is this for? How is this going to help manage again, reliability, cost, predictability, etc.?’

Now you’re in this whole new world and going back to this concept of we easily forget things — no one who’s a regulator today or the head of utility today has ever lived through an environment where we’ve had this massive expansion of the demand for electricity. So everyone now, including the regulators are having to relearn, okay, how do we enable utility investment in a growth environment? It’s not something they’ve ever done before. And so they’re having to figure out, okay, how do we create the bandwidth for utilities to make these investments?

Because one of the fundamental challenges that utilities have is that they struggle to invest if there’s no customer sitting there making the request; they can’t invest ahead of demand. I mean, if I’m Nvidia and I’m thinking about the world five years from now and think, ‘Wow, how many chips do I want to sell in 2030?’, I can go out and build a new factory. I can go out and invest capital. I don’t need to have an order from a Microsoft or an Amazon or a Meta to go do that. I can build speculatively.

Utilities can’t really do that. They’re basically waiting for the customer to come ask for it. But when you have all this demand show up at the same time, well, what happens? The lead times start to extend. And so instead of saying, ‘Yeah, I’ll give you that power in a year or two years,’ it’s now, ‘Well, I’ll give it to you in five to seven years.’ And that’s an unsustainable way to run the electric utility grid. So we do need regulators to adapt and evolve to this new era of growth.

5. Reflections from the heart of Japan’s ancient cedar forest – Thomas Chua

Yakushima was particularly memorable, an island near Kagoshima famous for its wildlife and ancient cedar forests. These majestic cedars, some of the oldest trees in the world, grow steadily through centuries, unaffected by the transient storms and seasonal fluctuations.

This is Sennensugi, whose name means “thousand-year-old cedar”, even though the tree is still young. Yakushima’s oldest tree (and the oldest tree in Japan) is Jōmon Sugi, which is estimated to be between 2,170 and 7,200 years old.

This resonates deeply with my investment strategy. Just as these enduring cedars are not swayed by the fleeting changes in their environment, I focus on “Steady Compounders”—companies with significant economic moats and consistent intrinsic value growth.

When friends learn about my extensive travels, they often ask, “What about your investments? Don’t you need to monitor them constantly?” What they usually mean by “monitoring” isn’t analyzing quarterly business results, but rather obsessively tracking stock prices and consuming every tidbit of news to stay perpetually informed.

However, I liken such constant vigilance to setting up a camera in a forest to watch the trees grow. This approach isn’t just tedious—it’s unnecessary and potentially harmful, often prompting rash decisions.

Everyone invests to grow wealth, but understanding why you invest is crucial. For me, it serves to enrich my curiosity and intellect, rewards my eagerness to learn, and more importantly, grants me the freedom to live life on my terms and cherish moments with my loved ones.

Therefore, I don’t pursue obscure, unproven companies which require intensive monitoring. Instead, I look for Steady Compounders — firms with a significant economic moat that are growing their intrinsic value steadily.

Like the steady growth of Yakushima’s cedars, these firms don’t need constant oversight; they thrive over long periods through economic cycles, much as the cedars endure through seasonal changes. Investing in such companies gives me the freedom to explore the world, knowing my investments are growing steadily, mirroring the quiet, powerful ascent of those ancient trees.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 April 2024)

Here are the articles for the week ending 28 April 2024:

1. 10 Questions with Chris Beselin – Michael Fritzell and Chris Beselin

Today, I’ll be interviewing Chris Beselin, who runs a Vietnam-focused activist fund and two tech businesses from his base in Ho Chi Minh City…

3. Why do you think Vietnam has been so successful as an economy – why has it developed faster than almost any other nation on earth?

There are a range of factors, of course, but just to outline a few:

It’s a balanced economy and growth model – it’s not your typical emerging market, where the economy is overly dependent on one or a handful of commodities.

Rather, the Vietnamese growth model has multiple core engines: it’s one of the most trade-focused economies in the world (measured as (export+import)/GDP), with free trade agreements signed with countries representing 60% of global GDP; it has a young and well-educated population whose English proficiency is on par with e.g. India and South Korea; it has a sizeable and confident middle class that is rapidly growing; and it has a stable government that has been focused on pro-market deregulation for the past 35 years.

And in contrast to what many people think from the outset, Vietnamese exports are primarily engineering-driven (as opposed to lower value-add textiles and similar). Around 45% of the exports are electronics, smartphones, laptops and machinery components. In this sense, my conviction is that Vietnam is much more the next South Korea or Japan than the next China.

To me, this all boils down to the fact that the number one asset of the country is its young, savvy and hungry engineering population (ca. 100,000 engineers are educated per year in Vietnam, of which around 50% are within software). The attractiveness of the Vietnamese engineering talent pulls foreign capital to invest in onshore engineering-centered manufacturing, which in turn has vast ripple effects on the employment of thousands of additional factory workers around the engineers…

5. What misconceptions do you think foreigners typically have about the country?

I think there are many. Just to name a few:

The first one is perhaps “Vietnam is almost like China, but smaller and less developed”. I went through a bit of the difference in the fabric of the economies and demographics previously, but then there is also the very important difference in politics. Geopolitically, Vietnam is not and will never be or perceive itself to be a global superpower like China – it doesn’t have any geopolitical ambitions outside its own borders like China has.

Vietnam is primarily interested in developing its economy through trade and FDI. This in turn means that Vietnam in practice benefits from being geopolitically neutral between East and West and from trading with, and being friends with, “everyone”. So far the country has managed this balance very astutely for decades.

Another common misconception (particularly for Westerners growing up during the Vietnam War) is that “Vietnam is just getting back on its feet after the recent war”. Obviously, this perspective is wildly outdated, but it’s still surprisingly common among foreign visitors. To put it into perspective, perhaps a suitable analogy is if you would have been saying/thinking similar things about France or the UK in the mid 90s… (also then ca. 50 years from the end of the Second World War, just like Vietnam is today 50 years away from its war ending in 1975).

2. Private Credit Offers No Extra Gains After Fees, New Study Finds – Justina Lee

A trio of academics has a bold take on the booming $1.7 trillion private credit market: after accounting for additional risks and fees, the asset class delivers virtually no extra return to investors…

…“It’s not a panacea for investors where they can earn 15% risk-free,” said Michael Weisbach, a finance professor at Ohio State University who co-wrote the research with Isil Erel and Thomas Flanagan. “Once you adjust for the risk, they basically are getting the amount they deserve, but no more.”

Behind the research is complex math to try to untangle the alpha part of a return that’s down to skill, and the beta part that might just come from stumbling into a bull market. While comparing stock pickers to a market benchmark like the S&P 500 is standard by now, it’s not obvious what the right yardstick is for private-credit funds, which make idiosyncratic and opaque loans to a wide array of companies…

…The three economists dissected MSCI data on 532 funds’ cash flows, covering their incoming capital and distributions to investors. They compared the industry’s performance to stock and credit portfolios with similar characteristics, whose fluctuations end up explaining the majority of private-credit returns. The study makes the case that these private credit funds also carry some equity risk, since around 20% of their investments contain equity-like features such as warrants.

After accounting for those risks, they find that there’s still alpha left on the table — which only vanishes once fees paid to these managers are deducted…
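
As a flavor of what that decomposition involves, here is a minimal Python sketch on synthetic data; the fund construction, betas, and noise levels are invented for illustration, and this is a simplification, not the paper’s actual methodology (real private-credit cash flows are irregular, which is what makes the study’s math complex).

```python
# A minimal sketch of alpha/beta decomposition: regress fund returns on
# benchmark portfolios, and call only the residual intercept "alpha".
import numpy as np

rng = np.random.default_rng(0)
n = 120  # months of synthetic returns

credit_bench = rng.normal(0.005, 0.02, n)  # proxy credit portfolio
equity_bench = rng.normal(0.007, 0.04, n)  # proxy equity portfolio
# Hypothetical fund: mostly credit beta plus some equity-like exposure
# (e.g. warrants), and no true skill baked in.
fund = 0.9 * credit_bench + 0.2 * equity_bench + rng.normal(0.0, 0.01, n)

# OLS regression: fund = alpha + b_c * credit + b_e * equity + noise
X = np.column_stack([np.ones(n), credit_bench, equity_bench])
alpha, b_c, b_e = np.linalg.lstsq(X, fund, rcond=None)[0]
print(f"alpha = {alpha:.4%}/mo, credit beta = {b_c:.2f}, equity beta = {b_e:.2f}")
```

A fund like this can report attractive headline returns in a bull market even though, by construction, its alpha is roughly zero, which is the study’s point about gross returns before risk adjustment and fees.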

…As private markets boom, some quants — most notably Cliff Asness of AQR Capital Management — have suggested that investors are being misguided by returns that mask volatility and may be less impressive than they appear.

True at Adams Street Partners, who co-wrote one of the first papers on private-credit performance, cautions that until the industry faces its first downturn it may be hard to determine real alpha. But he says the NBER study is a good step toward digging beneath the surface of private-credit returns.

3. Americans are still not worried enough about the risk of world war – Noah Smith

When Americans think of World War 2, we usually think of the roughly four years of the war that we participated in, from late 1941 through 1945. Those years were indeed the most climactic and destructive of the war, by far, but the war actually began earlier. In fact, although the official start date is September 1, 1939, it’s easy to make an argument that the war began long before that…

…But throughout the 1930s, there were a number of conflicts that led into World War 2, and eventually merged with that overall conflict, like tributaries emptying into a great river. Let’s do a quick timeline of these.

In 1931-32, Japan seized Manchuria from China, an act that led inexorably to a wider war between the two powers. The Manchurian war and occupation also set Japan on a path toward militarism, bringing to power the regime that would ultimately prosecute WW2 itself.

In 1935-36, fascist Italy invaded and conquered Ethiopia. The League of Nations halfheartedly tried to stop the war and failed, leading to the League being discredited and the post-WW1 order being greatly weakened. That emboldened the fascist powers. Ethiopian resistance to Italian rule would eventually become a minor theater of WW2.

From 1935 through 1939, Japan and the Soviet Union fought an undeclared border war, ultimately culminating in major battles in 1939, which the USSR won. That led to Japan seeking an alliance with Nazi Germany, and eventually led to the Soviets’ entry into the war against Japan at the very end of WW2. (The realization that Japan couldn’t defeat the Soviets and conquer Siberian oil fields also prompted Japan to try to take Southeast Asian oil instead, when it needed oil to prosecute its war against China; this led to Pearl Harbor and the Pacific War.)

From 1936 through 1939, Nazi Germany, fascist Italy, and the Soviet Union fought each other in a proxy war: the Spanish Civil War. Units from all three powers officially or unofficially engaged in the fighting. When the Nationalists won, it emboldened the fascist powers even further.

In 1937, Japan invaded the remainder of China, in what’s called the Second Sino-Japanese War. This became a major theater of World War 2, accounting for almost as many deaths as the Nazi invasion of the USSR. It also prompted Japan to go to war with Britain and the U.S., in order to seize the oil fields of Indonesia to support the invasion of China. (The fact that we don’t count this as the start of WW2 seems like pure eurocentrism to me.)

In 1939, before the Soviet Union joined World War 2, it invaded Finland in what’s known as the Winter War, seizing some territory at great cost. This war would continue all the way through WW2 itself.

So there were no fewer than six wars in the 1930s that ended up feeding into World War 2 itself!..

…So if you were living at any point in 1931 through 1940, you would already be witnessing conflicts that would eventually turn into the bloodiest, most cataclysmic war that humanity has yet known — but you might not realize it. You would be standing in the foothills of the Second World War, but unless you were able to make far-sighted predictions, you wouldn’t know what horrors lurked in the near future.

In case the parallel isn’t blindingly obvious, we might be standing in the foothills of World War 3 right now. If WW3 happens, future bloggers might list the wars in Ukraine and Gaza in a timeline like the one I just gave.

Or we might not be in the foothills of WW3. I think there’s still a good chance that we can avert a wider, more cataclysmic war, and instead have a protracted standoff — Cold War 2 — instead. But I’m not going to lie — the outlook seems to be deteriorating. One big reason is that China appears to be ramping up its support for Russia…

…So it makes sense to view the Ukraine War as a European proxy conflict against Russia. But what’s more ominous is that it also makes an increasing amount of sense to view it as a Chinese proxy conflict against Europe.

A little over a year ago, there were reports that China was sending lethal aid to Russia. Believing these reports, I concluded — perhaps prematurely — that China had gone “all in” on Russia’s military effort. Some of the reports were later walked back, but the fact is, it’s very hard to know how much China is doing to provide Russia with drones and such under the table. But now, a year later, there are multiple credible reports that China is ramping up aid in various forms.

For example, the U.S. is now claiming that China is providing Russia with both material aid and geospatial intelligence (i.e. telling Russia where Ukrainian units are so Russia can hit them):

The US is warning allies that China has stepped up its support for Russia, including by providing geospatial intelligence, to help Moscow in its war against Ukraine.

Amid signs of continued military integration between the two nations, China has provided Russia with satellite imagery for military purposes, as well as microelectronics and machine tools for tanks, according to people familiar with the matter.

China’s support also includes optics, propellants to be used in missiles and increased space cooperation, one of the people said.

President Joe Biden raised concerns with Xi Jinping during their call this week about China’s support for the Russian defense industrial base, including machine tools, optics, nitrocellulose, microelectronics, and turbojet engines, White House National Security Council spokesperson Adrienne Watson said.

This is very similar to the aid that Europe and the U.S. are providing Ukraine. We also provide geospatial intelligence, as well as materiel and production assistance. If reports are correct — and this time, they come from the U.S. government as well as from major news organizations — then China is now playing the same role for Russia that Europe and the U.S. have been playing for Ukraine.

In other words, the Ukraine War now looks like a proxy war between China and Europe…

…Of course, World War 3 will actually begin if and when the U.S. and China go to war. Almost everyone thinks this would happen if and when China attacks Taiwan, but in fact there are several other flashpoints that are just as scary and which many people seem to be overlooking.

First, there’s the South China Sea, where China has been pressing the Philippines to surrender its maritime territory with various “gray zone” bullying tactics…

…The U.S. is a formal treaty ally of the Philippines, and has vowed to honor its commitments and defend its ally against potential threats.

And then there’s the ever-present background threat of North Korea, which some experts believe is seriously considering re-starting the Korean War. That would trigger a U.S. defense of South Korea, which in turn might bring in China, as it did in the 1950s.

It’s also worth mentioning the China-India border. China has recently reiterated its claim to the Indian state of Arunachal Pradesh, calling it “South Tibet” and declaring that the area was part of China since ancient times. India has vigorously rejected this notion, of course. An India-China border war might not start World War 3, but the U.S. would definitely try to help India out against China, as we did in 2022…

…America hasn’t mustered the urgency necessary to resuscitate its desiccated defense-industrial base. China is engaging in a massive military buildup while the U.S. is lagging behind. This is from a report from the Center for Strategic and International Studies:

The U.S. defense industrial base…lacks the capacity, responsiveness, flexibility, and surge capability to meet the U.S. military’s production needs as China ramps up defense industrial production. Unless there are urgent changes, the United States risks weakening deterrence and undermining its warfighting capabilities against China and other competitors. A significant part of the problem is that the U.S. defense ecosystem remains on a peacetime footing, despite a protracted war in Ukraine, an active war in the Middle East, and growing tensions in the Indo-Pacific in such areas as the Taiwan Strait and Korean Peninsula.

The United States faces several acute challenges.

First, the Chinese defense industrial base is increasingly on a wartime footing and, in some areas, outpacing the U.S. defense industrial base…Chinese defense companies…are producing a growing quantity and quality of land, maritime, air, space, and other capabilities. China…is heavily investing in munitions and acquiring high-end weapons systems and equipment five to six times faster than the United States…China is now the world’s largest shipbuilder and has a shipbuilding capacity that is roughly 230 times larger than the United States. One of China’s large shipyards, such as Jiangnan Shipyard, has more capacity than all U.S. shipyards combined, according to U.S. Navy estimates.

Second, the U.S. defense industrial base continues to face a range of production challenges, including a lack of urgency in revitalizing the defense industrial ecosystem…[T]here is still a shortfall of munitions and other weapons systems for a protracted war in such areas as the Indo-Pacific. Supply chain challenges also remain serious, and today’s workforce is inadequate to meet the demands of the defense industrial base.

“The Chinese defense industrial base is increasingly on a wartime footing.” If that isn’t a clear enough warning, I don’t know what would be. You have now been warned!

4. Memory Shortage and ASML – Nomad Semi

Although we are down to 3 major DRAM manufacturers, memory has been a commodity since the 1970s. The wild swing in SK Hynix’s gross margin within a single year proves the point. Supply is the underlying driver of the memory cycle. Memory prices should always trend down over time with cost, as memory manufacturers migrate to the next process node that allows for higher bits per wafer. There is always a duration mismatch between demand and supply due to the inelasticity of supply. When there is supernormal profit, capex will go up and supply will follow in 2 to 3 years. Supply will then exceed demand, and DRAM prices should theoretically fall to cost. This post will instead focus on the current upcycle and how it could actually be stronger than the 2016 cycle.
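
To see why lagged supply produces a cycle at all, here is a toy cobweb-style simulation; every parameter below is invented for illustration and nothing is calibrated to actual DRAM data.

```python
# A toy model of the cycle sketched above, with made-up parameters: capex
# responds to today's margin, but the capacity it buys only arrives two
# years later, so supply chronically lags demand and margins keep swinging
# instead of settling at cost.

cost = 1.0                 # flat unit cost, for simplicity
price = 1.6                # start the simulation mid-upcycle
demand, capacity = 100.0, 90.0
pipeline = [0.0, 0.0]      # capacity additions landing in 1 and 2 years

for year in range(1, 11):
    margin = (price - cost) / price
    pipeline.append(max(margin, 0.0) * 50.0)  # profit today -> capex today
    capacity += pipeline.pop(0)               # ...but wafers arrive late
    demand *= 1.10                            # assume ~10%/yr bit growth
    price = cost * (demand / capacity) ** 2   # crude price vs. tightness
    print(f"year {year:2d}: price {price:.2f}, margin {margin:5.1%}")
```

Because capacity decisions are made on current margins but delivered years later, the model overshoots in both directions, which is the duration mismatch the author describes.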

How we got here today is a result of the proliferation of AI and the worst downcycle since the GFC. To be fair, the 2022 downcycle was driven by a broad-based inventory correction across all the verticals rather than a very aggressive capex hike from the memory producers. The downcycle led to negative gross margins for SK Hynix, Micron, and Kioxia. Negative gross margins led to going-concern risk, which led to Hynix and Micron cutting their capex to the lowest level in the last 7 years. This is despite the fact that we have moved from 1x nm to 1b nm, which requires higher capex per wafer.

HBM (High Bandwidth Memory) has become very important in AI training, and you can’t run away from talking about HBM if you are looking at DRAM…

HBM affects current and future DRAM supply in 2 different ways. The 1st is cannibalization of capex from DRAM and NAND, for which Fabricated Knowledge gave a very good analogy. The 2nd is, as Micron mentioned on its last call, that “HBM3E consumes approximately three times the wafer supply as DDR5 to produce a given number of bits in the same technology node”. In fact, this ratio will only get worse in 2026, as HBM4 can consume up to 5x the wafer supply of DDR5. The way it works is that an HBM die is double the size of a DDR5 die (which already suffers a single-digit die-size penalty vs DDR4). HBM die sizes will only get bigger as more TSVs are needed. The yield rate of HBM3E is below 70% and will only get harder as more dies are stacked beyond 8-hi. The logic base die of the HBM module is currently produced in-house by Micron and Hynix, although this could be outsourced to TSMC for HBM4. In summary, not only is HBM consuming more of the current wafer supply, but it is also cannibalizing the capex for future DRAM and NAND capacity expansion.
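
As a sanity check on Micron’s ~3x figure, here is a minimal sketch using the numbers in the passage (a 2x die-size penalty and sub-70% yield). The 40% yield on the HBM4-style line is a hypothetical showing how the ratio could stretch toward 5x, and the calculation deliberately ignores the logic base die and stacking losses.

```python
# Rough reconstruction of the wafer-consumption multiple from the passage.

def wafer_multiple(die_size_ratio: float, hbm_yield: float) -> float:
    """Wafers needed per good bit, relative to DDR5 on the same node.

    Doubling die size halves gross dies (and bits) per wafer; yield
    losses shrink the number of good bits further.
    """
    return die_size_ratio / hbm_yield

print(f"HBM3E-style: {wafer_multiple(2.0, 0.70):.1f}x")  # ~2.9x, near the ~3x quoted
print(f"HBM4-style:  {wafer_multiple(2.0, 0.40):.1f}x")  # harsher stacks push toward ~5x
```

The point of the arithmetic is that the two effects multiply: a bigger die and a worse yield each cut good bits per wafer, so HBM soaks up wafer supply far faster than its bit share suggests.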

In past upcycles, capex would often go up as memory producers gained confidence. Nobody wants to lose market share, as current capex = future capacity → future market share. However, SK Hynix and Micron will be unable to expand their DRAM wafer capacity meaningfully in 2024 and 2025.

SK Hynix has limited cleanroom space available for DRAM expansion (~45k wpm) at M16 and this will be fully utilized by 2025. The company will have to wait till 2027 before Yong-In fab can be completed. Even when the balance sheet situation for SK Hynix improves in 2025, it will be limited by its cleanroom space.

For Micron, the situation is slightly better. Taichung fab also has limited space available for capacity expansion, but this will likely be earmarked for HBM production. Micron will have to wait until the new Boise fab is ready in 2026. Both Micron and Hynix will be limited in capacity expansion in 2025 against their will.

5. Artificial intelligence is taking over drug development – The Economist

The most striking evidence that artificial intelligence can provide profound scientific breakthroughs came with the unveiling of a program called AlphaFold by Google DeepMind. In 2016 researchers at the company had scored a big success with AlphaGo, an AI system which, having essentially taught itself the rules of Go, went on to beat the most highly rated human players of the game, sometimes by using tactics no one had ever foreseen. This emboldened the company to build a system that would work out a far more complex set of rules: those through which the sequence of amino acids which defines a particular protein leads to the shape that sequence folds into when that protein is actually made. AlphaFold found those rules and applied them with astonishing success.

The achievement was both remarkable and useful. Remarkable because a lot of clever humans had been trying hard to create computer models of the processes which fold chains of amino acids into proteins for decades. AlphaFold bested their best efforts almost as thoroughly as the system that inspired it trounces human Go players. Useful because the shape of a protein is of immense practical importance: it determines what the protein does and what other molecules can do to it. All the basic processes of life depend on what specific proteins do. Finding molecules that do desirable things to proteins (sometimes blocking their action, sometimes encouraging it) is the aim of the vast majority of the world’s drug development programmes.

Because of the importance of proteins’ three-dimensional structure there is an entire sub-discipline largely devoted to it: structural biology. It makes use of all sorts of technology to look at proteins through nuclear-magnetic-resonance techniques or by getting them to crystallise (which can be very hard) and blasting them with x-rays. Before AlphaFold over half a century of structural biology had produced a couple of hundred thousand reliable protein structures through these means. AlphaFold and its rivals (most notably a program made by Meta) have now provided detailed predictions of the shapes of more than 600m.

As a way of leaving scientists gobsmacked it is a hard act to follow. But if AlphaFold’s products have wowed the world, the basics of how it made them are fairly typical of the sort of things deep learning and generative AI can offer biology. Trained on two different types of data (amino-acid sequences and three-dimensional descriptions of the shapes they fold into) AlphaFold found patterns that allowed it to use the first sort of data to predict the second. The predictions are not all perfect. Chris Gibson, the boss of Recursion Pharmaceuticals, an AI-intensive drug-discovery startup based in Utah, says that his company treats AlphaFold’s outputs as hypotheses to be tested and validated experimentally. Not all of them pan out. But Dr Gibson also says the model is quickly getting better…

A lot of pharma firms have made significant investments in the development of foundation models in recent years. Alongside this has been a rise in AI-centred startups such as Recursion, Genesis Therapeutics, based in Silicon Valley, Insilico, based in Hong Kong and New York, and Relay Therapeutics, in Cambridge, Massachusetts. Daphne Koller, the boss of Insitro, an AI-heavy biotech in South San Francisco, says one sign of the times is that she no longer needs to explain large language models and self-supervised learning. And Nvidia—which makes the graphics-processing units that are essential for powering foundation models—has shown a keen interest. In the past year, it has invested in or made partnership deals with at least six different AI-focused biotech firms including Schrodinger, another New York-based firm, Genesis, Recursion and Genentech, an independent subsidiary of Roche, a big Swiss pharmaceutical company.

The drug-discovery models many of the companies are working with can learn from a wide variety of biological data, including gene sequences, pictures of cells and tissues, the structures of relevant proteins, biomarkers in the blood, the proteins being made in specific cells, and clinical data on the course of disease and the effect of treatments in patients. Once trained, the AIs can be fine-tuned with labelled data to enhance their capabilities.

The use of patient data is particularly interesting. For fairly obvious reasons it is often not possible to discover the exact workings of a disease in humans through experiment. So drug development typically relies a lot on animal models, even though they can be misleading. AIs that are trained on, and better attuned to, human biology may help avoid some of the blind alleys that stymie drug development.

Insitro, for example, trains its models on pathology slides, gene sequences, MRI data and blood proteins. One of its models is able to connect changes in what cells look like under the microscope with underlying mutations in the genome and with clinical outcomes across various different diseases. The company hopes to use these and similar techniques to find ways to identify sub-groups of cancer patients that will do particularly well on specific courses of treatment.

Sometimes finding out what aspect of the data an AI is responding to is useful in and of itself. In 2019 Owkin, a Paris-based “AI biotech”, published details of a deep neural network trained to predict survival in patients with malignant mesothelioma, a cancer of the tissue surrounding the lung, on the basis of tissue samples mounted on slides. It found that the cells most germane to the AI’s predictions were not the cancer cells themselves but non-cancerous cells nearby. The Owkin team brought extra cellular and molecular data into the picture and discovered a new drug target. In August last year a team of scientists from Indiana University Bloomington trained a model on data about how cancer cells respond to drugs (including genetic information) and the chemical structures of drugs, allowing it to predict how effective a drug would be in treating a specific cancer.

Many of the companies using AI need such great volumes of high-quality data that they are generating it themselves as part of their drug development programmes rather than waiting for it to be published elsewhere. One variation on this theme comes from a new computational sciences unit at Genentech which uses a “lab in the loop” approach to train its AI. The system’s predictions are tested at a large scale by means of experiments run with automated lab systems. The results of those experiments are then used to retrain the AI and enhance its accuracy. Recursion, which is using a similar strategy, says it can use automated laboratory robotics to conduct 2.2m experiments each week…

…The world has seen a number of groundbreaking new drugs and treatments in the past decade: the drugs targeting GLP-1 that are transforming the treatment of diabetes and obesity; the CAR-T therapies enlisting the immune system against cancer; the first clinical applications of genome editing. But the long haul of drug development, from discerning the biological processes that matter to identifying druggable targets to developing candidate molecules to putting them through preclinical tests and then clinical trials, remains generally slow and frustrating work. Approximately 86% of all drug candidates developed between 2000 and 2015 failed to meet their primary endpoints in clinical trials. Some argue that drug development has picked off most of biology’s low-hanging fruit, leaving diseases which are intractable and drug targets that are “undruggable”.

The next few years will show conclusively whether AI is able to materially shift that picture. If it offers merely incremental improvements, that could still be a real boon. If it allows biology to be deciphered in a whole new way, as the most boosterish suggest, it could make the whole process far more successful and efficient—and drug the undruggable very rapidly indeed. The analysts at BCG see signs of a fast-approaching AI-enabled wave of new drugs. Dr Pande warns that drug regulators will need to up their game to meet the challenge. It would be a good problem for the world to have.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google DeepMind), ASML, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 April 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 21 April 2024:

1. The Anguish of Central Banking – Arthur F. Burns

Why, in particular, have central bankers, whose main business one might suppose is to fight inflation, been so ineffective in dealing with this worldwide problem?

To me, as a former central banker, the last of these questions is especially intriguing. One of the time-honored functions of a central bank is to protect the integrity of its nation’s currency, both domestically and internationally. In monetary policy central bankers have a potent means for fostering stability of the general price level. By training, if not also by temperament, they are inclined to lay great stress on price stability, and their abhorrence of inflation is continually reinforced by contacts with one another and with like-minded members of the private financial community. And yet, despite their antipathy to inflation and the powerful weapons they could wield against it, central bankers have failed so utterly in this mission in recent years. In this paradox lies the anguish of central banking…

…Analyses of the inflation that the United States has experienced over the past fifteen years frequently proceed in three stages. First are considered the factors that launched inflation in the mid-1960s, particularly the governmental fine tuning inspired by the New Economics and the loose financing of the war in Vietnam. Next are considered the factors that led to subsequent strengthening of inflationary forces, including further policy errors, the devaluations of the dollar in 1971 and 1973, the worldwide economic boom of 1972-73, the crop failures and resulting surge in world food prices in 1973-74, the extraordinary increases in oil prices that became effective in 1974, and the sharp deceleration of productivity growth from the late 1960s onward. Finally, attention is turned to the process whereby protracted experience with inflation has led to widespread expectations that it will continue in the future, so that inflation has acquired a momentum of its own.

I have no quarrel with analyses of this type. They are distinctly helpful in explaining the American inflation and, with changes here and there, that in other nations also. At the same time, I believe that such analyses overlook a more fundamental factor: the persistent inflationary bias that has emerged from the philosophic and political currents that have been transforming economic life in the United States and elsewhere since the 1930s. The essence of the unique inflation of our times and the reason central bankers have been ineffective in dealing with it can be understood only in terms of those currents of thought and the political environment they have created…

…the period from World War II to the mid-1960s was marked not only by a dampening of the business cycle but also by persistent increases in the prosperity of American families…

…This experience of economic progress strengthened the public’s expectations of progress. What had once been a quiet personal feeling that the long future would be better than the past, particularly for one’s children, was transformed during the postwar years into an articulate and widespread expectation of steady improvement in living standards—indeed, into a feeling of entitlement to annual increases in real income.

But the rapid rise in national affluence did not create a mood of contentment. On the contrary, the 1960s were years of social turmoil in the United States, as they were in other industrial democracies…

…In the innocence of the day, many Americans came to believe that all of the new or newly discovered ills of society should be addressed promptly by the federal government. And in the innocence of the day, the administration in office attempted to respond to the growing demands for social and economic reform while waging war in Vietnam on a rising scale. Under the rubric of the New Economics, a more activist policy was adopted for the purpose of increasing the rate of economic growth and reducing the level of unemployment…

…The interplay of governmental action and private demands had an internal dynamic that led to their concurrent escalation. When the government undertook in the mid-1960s to address such “unfinished tasks” as reducing frictional unemployment, eliminating poverty, widening the benefits of prosperity, and improving the quality of life, it awakened new ranges of expectation and demand. Once it was established that the key function of government was to solve problems and relieve hardships—not only for society at large but also for troubled industries, regions, occupations, or social groups—a great and growing body of problems and hardships became candidates for governmental solution…

…Many results of this interaction of government and citizen activism proved wholesome. Their cumulative effect, however, was to impart a strong inflationary bias to the American economy. The proliferation of government programs led to progressively higher tax burdens on both individuals and corporations. Even so, the willingness of government to levy taxes fell distinctly short of its propensity to spend. Since 1950, the federal budget has been in balance in only five years. Since 1970, a deficit has occurred in every year. Not only that, but the deficits have been mounting in size. Budget deficits have thus become a chronic condition of federal finance; they have been incurred when business conditions were poor and also when business was booming. But when the government runs a budget deficit, it pumps more money into the pocketbooks of people than it withdraws from their pocketbooks; the demand for goods and services therefore tends to increase all around. That is the way the inflation that has been raging since the mid-1960s first got started and later kept being nourished.

The pursuit of costly social reforms often went hand in hand with the pursuit of full employment. In fact, much of the expanding range of government spending was prompted by the commitment to full employment. Inflation came to be widely viewed as a temporary phenomenon—or, provided it remained mild, as an acceptable condition. “Maximum” or “full” employment, after all, had become the nation’s major economic goal—not stability of the price level. That inflation ultimately brings on recession and otherwise nullifies many of the benefits sought through social legislation was largely ignored…

…And so I finally come to the role of central bankers in the inflationary process. The worldwide philosophic and political trends on which I have been dwelling inevitably affected their attitudes and actions. In most countries, the central bank is an instrumentality of the executive branch of government—carrying out monetary policy according to the wishes of the head of government or the ministry of finance. Some industrial democracies, to be sure, have substantially independent central banks, and that is certainly the case in the United States. Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted the money supply and created sufficient strains in financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture…

…Facing these political realities, the Federal Reserve was still willing to step hard on the monetary brake at times—as in 1966, 1969, and 1974—but its restrictive stance was not maintained long enough to end inflation. By and large, monetary policy came to be governed by the principle of undernourishing the inflationary process while still accommodating a good part of the pressures in the marketplace. The central banks of other industrial countries, functioning as they did in a basically similar political environment, appear to have behaved in much the same fashion.

In describing as I just have the anguish of central banking in a modern democracy, I do not mean to suggest that central bankers are free from responsibility for the inflation that is our common inheritance. After all, every central bank has some room for discretion, and the range is considerable in the more independent central banks. As the Federal Reserve, for example, kept testing and probing the limits of its freedom to undernourish the inflation, it repeatedly evoked violent criticism from both the Executive Branch and the Congress and therefore had to devote much of its energy to warding off legislation that could destroy any hope of ending inflation. This testing process necessarily involved political judgments, and the Federal Reserve may at times have overestimated the risks attaching to additional monetary restraint…

…Monetary theory is a controversial area. It does not provide central bankers with decision rules that are at once firm and dependable. To be sure, every central banker has learned from the world’s experience that an expanding economy requires expanding supplies of money and credit, that excessive creation of money will over the longer run cause or validate inflation, and that declining interest rates will tend to stimulate economic expansion while rising interest rates will tend to restrict it; but this knowledge stops short of mathematical precision…

…It is clear, therefore, that central bankers can make errors—or encounter surprises—at practically every stage of the process of making monetary policy. In some respects, their capacity to err has become larger in our age of inflation. They are accustomed, as are students of finance generally, to think of high and rising market interest rates as a restraining force on economic expansion. That rule of experience, however, tends to break down once expectations of inflation become widespread in a country. At such a time, lenders expect to be paid back in cheaper currency, and they are therefore apt to demand higher interest rates. Since borrowers have similar expectations, they are willing to comply. An “inflation premium” thus gets built into nominal interest rates. In principle, no matter how high the nominal interest rate may be, as long as it stays below or only slightly above the inflation rate, it very likely will have perverse effects on the economy; that is, it will run up costs of doing business but do little or nothing to restrain overall spending. In practice, since inflationary expectations, and therefore the real interest rates implied by any given nominal rate, vary among individuals, central bankers cannot be sure of the magnitude of the inflation premium that is built into nominal rates. In many countries, however, these rates have at times in recent years been so clearly below the ongoing inflation rate that one can hardly escape the impression that, however high or outrageous the nominal rates may appear to observers accustomed to judging them by a historical yardstick, they have utterly failed to accomplish the restraint that central bankers sought to achieve. In other words, inflation has often taken the sting out of interest rates— especially, as in the United States, where interest payments can be deducted for income tax purposes…
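To put rough numbers on Burns's point about inflation premiums and the tax deductibility of interest, here is a small sketch; the 10% nominal rate, 12% inflation, and 40% tax rate are assumed figures for illustration, not ones from the speech.

```python
# Back-of-the-envelope version of Burns's argument: with high expected
# inflation, even a historically "outrageous" nominal rate can imply a
# negative real cost of borrowing -- especially after tax deductibility.
def real_rate(nominal: float, inflation: float) -> float:
    # Exact Fisher relation: (1 + real) = (1 + nominal) / (1 + inflation)
    return (1 + nominal) / (1 + inflation) - 1

def after_tax_real_rate(nominal: float, inflation: float, tax: float) -> float:
    # Interest is deductible, so the borrower's effective nominal cost is
    # nominal * (1 - tax); inflation then erodes what remains.
    return (1 + nominal * (1 - tax)) / (1 + inflation) - 1

nominal, inflation, tax = 0.10, 0.12, 0.40   # assumed 1970s-style numbers
print(f"real rate:           {real_rate(nominal, inflation):+.2%}")
print(f"after-tax real rate: {after_tax_real_rate(nominal, inflation, tax):+.2%}")
# A 10% nominal rate with 12% inflation is already about -1.8% in real
# terms; after a 40% tax deduction it is roughly -5.4%.
```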

…There is a profound difference between the effects of mistaken judgments by a central bank in our age of inflation and the effects of such judgments a generation or two ago. In earlier times, when a central bank permitted excessive creation of money and credit in times of prosperity, the price level would indeed tend to rise. But the resulting inflation was confined to the expansion phase of the business cycle; it did not persist or gather force beyond that phase. Therefore, people generally took it for granted that the advance of prices would be followed by a decline once a business recession got under way. That is no longer the case.

Nowadays, businessmen, farmers, bankers, trade union leaders, factory workers, and housewives generally proceed on the expectation that inflation will continue in the future, whether economic activity is booming or receding. Once such a psychology has become dominant in a country, the influence of a central bank error that intensified inflation may stretch out over years, even after a business recession has set in. For in our modern environment, any rise in the general price level tends to develop a momentum of its own. It stimulates higher wage demands, which are accommodated by employers who feel they can recover the additional costs through higher prices; it results in labor agreements in key industries that call for substantial wage increases in later years without regard to the state of business then; and through the use of indexing formulas, it leads to automatic increases in other wages as well as in social security payments, various other pensions, and welfare benefits, in rents on many properties, and in the prices of many commodities acquired under long-term contracts…

…If my analysis of central banking in the modern environment is anywhere near the mark, two conclusions immediately follow. First, central banks have indeed been participants in the inflationary process in which the industrial countries have been enmeshed, but their role has been subsidiary. Second, while the making of monetary policy requires continuing scrutiny and can stand considerable improvement, we would look in vain to technical reforms as a way of eliminating the inflationary bias of industrial countries. What is unique about our inflation is its stubborn persistence, not the behavior of central bankers. This persistence reflects the fundamental forces on which I dwelt earlier in this address—namely, the philosophic and political currents of thought that have impinged on economic life since the Great Depression and particularly since the mid-1960s…

…The precise therapy that can serve a nation best is not easy to identify, and what will work well in one country may work poorly in another. In the case of the American inflation, which has become a major threat to the well-being of much of the world as well as of the American people, it would seem wise to me at this juncture of history for the government to adopt a basic program consisting of four parts. The first of these would be a legislative revision of the federal budgetary process that would make it more difficult to run budget deficits and that would serve as the initial step toward a constitutional amendment directed to the same end. The second part would be a commitment to a comprehensive plan for dismantling regulations that have been impeding the competitive process and for modifying others that have been running up costs and prices unnecessarily. The third part would be a binding endorsement of restrictive monetary policies until the rate of inflation has become substantially lower. And the fourth part would consist of legislation scheduling reductions of business taxes in each of the next five years—the reduction to be quite small in the first two years but to become substantial in later years. This sort of tax legislation would release powerful forces to improve the nation’s productivity and thereby exert downward pressure on prices; and it would also help in the more immediate future to ease the difficult adjustments forced on many businesses and their employees by the adoption of the first three parts of the suggested program.

2. Two Things I’m Not Worried About – Ben Carlson

Here are two things a lot of other people are worried about but not me:

Stock market concentration. Here’s a chart from Goldman Sachs that shows, by one measure, the U.S. stock market is as concentrated as it has ever been:

To which my reply is: So what?

Yes, the top 10 stocks make up more than one-third of the S&P 500. All this tells me is that the biggest and best companies are doing really well. Is that a bad thing?

Stock markets around the globe are far more concentrated than the U.S. stock market. Emerging markets rose to their highest level since June 2022 yesterday. Out of an index that covers 20+ countries, a single stock (Taiwan Semiconductor) accounted for 70% of the move.

Stock market returns over the long run have always been dominated by a small minority of the biggest, best-performing companies…

… Bloomberg is out with a new report that sounds the alarm on U.S. government debt levels:

With uncertainty about so many of the variables, Bloomberg Economics has run a million simulations to assess the fragility of the debt outlook. In 88% of the simulations, the results show the debt-to-GDP ratio is on an unsustainable path — defined as an increase over the next decade.

In the end, it may take a crisis — perhaps a disorderly rout in the Treasuries market triggered by sovereign US credit-rating downgrades, or a panic over the depletion of the Medicare or Social Security trust funds — to force action. That’s playing with fire.

I’ll believe it when I see it.

People have been sounding the alarm on government debt in this country for decades. There has been no panic. No financial crisis. No debt default…

… Interest expense relative to the size of the economy has shot higher in recent years from the combination of more debt and higher rates:

But we’re still well below the highs from the 1980s and 1990s. And when you look at the absolute numbers here, going from 1.5% of GDP to 3% of GDP isn’t exactly the end of the world…

…Debt-to-GDP is now as high as it was in World War II:

That seems scary until you realize that in Japan, debt-to-GDP is closer to 300%. I’m not saying we should test our limits, but there is no pre-set line in the sand on these things.
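For a sense of the arithmetic behind simulations like Bloomberg's, here is a minimal sketch built on the standard debt-dynamics identity; the interest-rate, growth, and deficit assumptions below are ours, not Bloomberg's, so the 88% figure should not be expected to pop out.

```python
# Minimal Monte Carlo sketch of debt-to-GDP paths -- NOT Bloomberg's model.
# Uses the standard identity  d[t+1] = d[t] * (1 + r) / (1 + g) - pb,
# where d is debt/GDP, r the nominal interest rate, g nominal GDP growth,
# and pb the primary balance as a share of GDP.
import numpy as np

rng = np.random.default_rng(42)
n_sims, horizon = 1_000_000, 10
d0 = 1.0          # assumed starting debt/GDP of 100%
pb = -0.03        # assumed primary deficit of 3% of GDP per year

# Draw uncertain rate and growth paths around assumed means.
r = rng.normal(0.04, 0.01, size=(n_sims, horizon))  # nominal interest rate
g = rng.normal(0.04, 0.02, size=(n_sims, horizon))  # nominal GDP growth

d = np.full(n_sims, d0)
for t in range(horizon):
    d = d * (1 + r[:, t]) / (1 + g[:, t]) - pb

# "Unsustainable" here mirrors Bloomberg's definition: the ratio rose
# over the decade.
print(f"share of paths with rising debt/GDP: {(d > d0).mean():.0%}")
```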

3. The inequity method of accounting – Sujeet Indap

The fundamental bargain of M&A seems pretty simple. At the closing of a deal, the buyer pays the seller, and gets a business in return.

It hasn’t been so straightforward for the family who agreed in 2022 to sell its California supermarket chain Save Mart to the private equity firm Kingswood Capital Management, which valued the grocery chain at $245mn.

Three months after the papers were signed, Kingswood demanded that Save Mart’s prior owners, the Piccinini family, fork back over $109mn after already surrendering the company. In effect, Kingswood wanted to receive a net $77mn payment to take over Save Mart.

And thanks to some ballsy lawyering and nebulous bookkeeping, it seems the PE firm might actually succeed, its gambit upheld by a controversial arbitration ruling in September 2023…

…When Kingswood signed the deal for Save Mart, it was really acquiring two separate businesses. One was the Save Mart grocery chain, comprising 200 stores and more than $4bn in annual revenue. Save Mart separately held a majority stake in Superstore Industries (SSI), a successful food wholesaler/distributor that had two other owners…

…The two sides agreed that Save Mart’s equity stake in SSI, the joint venture, would be valued at $90mn, a significant step up from the ~$22.5mn value that Save Mart had assigned the investment on its books.

The increase reflected SSI’s valuable land portfolio, according to one person familiar with the transaction. And it raises SSI’s tax basis, which could lower Kingswood’s tax bill should it ever want to sell SSI, according to a person involved in the transaction.

Those seem reasonable enough. Still, the accounting of SSI’s value is what laid the foundation for this dispute.

For context, a company’s investments can be recorded on its balance sheet in three ways: cost method, equity method, and full consolidation.

Save Mart selected the equity method for its SSI stake.

To explain a bit further: Let’s imagine a company with $100 of asset value and $60 in liabilities, which leaves it with an equity value of $40. Say this company has a 50-per-cent owner, meaning it owns $20 in equity. The owner’s balance sheet would list that $20 as a single line item, called “equity in unconsolidated affiliates”. That account would grow with the subsidiary’s proportional net income, and decrease with any net losses or dividends.
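To make the mechanics concrete, here is a minimal sketch of the equity method using the numbers from the example above; the class and its fields are purely illustrative, not anyone's actual books.

```python
# Minimal sketch of the equity method described above (illustrative only).
class EquityMethodStake:
    """Carry an investment as a single 'equity in unconsolidated
    affiliates' line, updated for the owner's share of results."""

    def __init__(self, ownership: float, affiliate_equity: float):
        self.ownership = ownership
        self.carrying_value = ownership * affiliate_equity

    def record_period(self, affiliate_net_income: float,
                      dividends_received: float = 0.0) -> None:
        # Grows with proportional net income, shrinks with dividends.
        self.carrying_value += self.ownership * affiliate_net_income
        self.carrying_value -= dividends_received

# The worked example: $100 assets - $60 liabilities = $40 equity; a 50%
# owner books a single $20 line item.
stake = EquityMethodStake(ownership=0.50, affiliate_equity=40.0)
print(stake.carrying_value)   # 20.0

# If the affiliate later earns $10 and pays the owner a $2 dividend:
stake.record_period(affiliate_net_income=10.0, dividends_received=2.0)
print(stake.carrying_value)   # 23.0
```

Note what this implies for the dispute that follows: under the equity method the affiliate's liabilities are already netted into that single line, so the affiliate's own debt never appears on the owner's balance sheet.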

Save Mart’s stake in SSI was listed as a single line on its balance sheet — worth $22.5mn…

…In March 2022, Kingswood and Save Mart closed their deal with the PE firm sending payments based on the family’s proposed accounting. That then set off a final round of post-closing negotiations, where Kingswood got 90 days to argue with the Piccininis’ maths…

…But Kingswood dropped in one massive adjustment with the boilerplate.

It added back $109mn of gross SSI debt, and asserted that the figure counted as official “Indebtedness”. And it argued it should be paid back for all that additional debt.

The PE firm pointed to the language in the deal contract, and said the definition of “Indebtedness” included any Save Mart “group” debt…

…Arbitrator Joseph Slights III, a lawyer in private practice who was formerly a Delaware Vice Chancellor, did not ultimately buy any of what the Piccininis were selling.

He wrote in the arbitration decision:  “Delaware law is more contractarian than most, and Delaware courts will enforce the letter of the parties’ contract without regard for whether they have struck a good deal or bad deal . . . the result is not absurd or commercially unreasonable.”…

…The Piccininis, understandably, believe writing a cheque for $109mn is indeed “absurd” and “commercially unreasonable”. They have accused Kingswood of “bad faith” and “gamesmanship” in their court papers.

They will now appeal to the Delaware Supreme Court, pointing to a 2017 decision that said in a post-closing adjustment dispute, the legal system should aim to uphold the broader spirit of the contract instead of narrow contract definitions…

…Kingswood had believed, all along prior to signing and closing, that the gross SSI debt belonged on Save Mart’s main balance sheet. But they decided to keep quiet about that until after the deal closed.

One implication is that they were happy to close on the Piccininis’ terms, and winning on the SSI debt issue would be a bonus, given that there was no guarantee of winning the arbitration.

The firm’s equity check on the $240mn transaction was just $60mn. If Kingswood is eventually paid the $109mn, it will receive nearly two times its equity contribution by weaponising accounting and legal technicalities.

4. Don’t Be Afraid – Michael Batnick

All-time highs are interesting in the emotions they elicit. Some people might be euphoric as their accounts reach dollar amounts never seen before. Others might fear this is as good as it’s going to get and worry about a trap-door scenario.

Your emotional state might also depend on your asset allocation. If you’re sitting on a large cash pile, it’s understandable that you might be hesitant to go “all in” at a record price. It might not “feel” right.

The good news is the data doesn’t support those feelings. On average since 1970, the S&P 500 has done better 1, 3, and 5 years after making an all-time high than after a randomly chosen day.
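For anyone who wants to check this kind of claim themselves, here is a rough sketch of the calculation; the CSV file name and column names are placeholders, and using roughly 252 trading days per year is an approximation.

```python
# Sketch: compare forward returns after all-time highs vs. all days,
# assuming a daily price series with 'date' and 'close' columns
# (hypothetical file and column names).
import pandas as pd

px = (pd.read_csv("sp500_daily.csv", parse_dates=["date"])
        .set_index("date")["close"].sort_index())

is_ath = px == px.cummax()                 # days that set an all-time high

for years in (1, 3, 5):
    fwd = px.shift(-252 * years) / px - 1  # ~252 trading days per year
    print(f"{years}y fwd return | ATH days: {fwd[is_ath].mean():.1%}")
    print(f"{years}y fwd return | all days: {fwd.mean():.1%}")
```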

5. An Interview with Google Cloud CEO Thomas Kurian About Google’s Enterprise AI Strategy – Ben Thompson and Thomas Kurian

You did mention that, “People are moving out of proof-of-concept into actually doing products”. Is that actually happening? What are the actual use cases that companies are actually rolling out broadly as opposed to doing experiments on what might be possible?

TK: Broad-brush, Ben, we can break it into four major categories. One category is streamlining internal processes within the organization, streamlining internal processes. In finance, you want to automate accounts receivable, collections, and cashflow prediction. In human resources, you want to automate your human help desk as well as improve the efficiency with which you can do benefits matching, for example. In procurement and supply chain, you want, for example, to look at all my suppliers and their contracts with me and tell me which ones have indemnification and warranty protection, so I can drive more volume to those that give me indemnification and warranties and less to those that don’t. These are all practical cases we have customers live in deployment with.

Second is transforming the customer experience. Transforming the customer experience is how you market, how you merchandise, how you do commerce, how you do sales and service. An example is what Mercedes-Benz CEO Ola Källenius talked about: how they’re building a completely new experience for the way that they market and sell and service their vehicles.

Third is that some people are integrating it into their products, and when I say re-imagining their products, re-imagining their core products using AI. We had two examples of companies who are in the devices space. One is Samsung and the other one is Oppo, and they’re re-imagining the actual device itself using AI with all the multimodality that we provide.

There are quite a few companies now re-thinking that if a model can change the way that I see it, that I can process multimodal information. For example, in media we have people saying, “If your model can read as much information as it can, can it take a long movie and shrink it into highlights? Can I take a sports recording of the NCAA basketball final and say, ‘find me all the highlights by this particular player’?” and not have to have a human being sit there and splice the video, but have it do it and I can create the highlights reel really quickly. So there are lots of people re-imagining the product offerings that they have.

And finally, there are some people saying, “With the cost efficiency of this, I can change how I enter a brand new market because, for example, I can do personalized offers in a market where I may not have a physical presence, but I can do much higher conversion rate for customers with online marketing and advertising because now I can do highly tailored campaigns because the cost of creating the content is much lower.” So broad-brush, streamline the core processes and back office, transform the customer experience and it doesn’t mean call centers or chatbots, it can be actually transferring the product itself, transforming the nature of the product you build and enter new markets.

Is it fair to say then when you talk about, “Moving from proof-of-concept to actual production”, or maybe that’s not the words you used, but people are saying, “Okay, we’re going to build this” because this stuff’s not showing up yet, in the real world. Is it the case that, “We see that this could be valuable, now we’re in”, and that’s why you’re emphasizing the platform choice now because they’ve committed to AI broadly, and now it’s like, “Where are we going to build it”?

TK: We have people experimenting, but we also have people actually in live deployment and directing traffic. Orange, the telecom company, was talking about how many customers they’re handling online, and Discover Financial was talking about how their agents are actually using AI search and AI tools to discover information from policy and procedure documents live. So there are people literally running true traffic through these systems and actually using them to handle real customer workload.

Are you seeing the case in a lot of customers, or maybe you’re hearing from potential customers, that AI is rolling out, if that’s the right word, in an employee arbitrage situation? Where there are individual employees who are taking it on themselves to use these tools and they are personally benefiting from the increased productivity — maybe they’re doing less work or maybe they’re getting more done — and the companies want to capture that more systematically. Is that a theme that you’re seeing?

TK: We’re seeing three flavors. Flavor one is a company has, we’re going to try eight or nine, what they call customer journeys or use cases, we’re going to pick the three that we see as the maximum return, meaning value and value does not mean cost savings always. It could be, for example, we have one who is handling 1 million calls a day through our customer service system. Now a million calls a day, if you think about it, Ben, an average person can do about 250 calls a day, that’s a certain volume in an eight-hour day. If you handled a million, that is a lot of people, so the reality is that several of them were not being answered and people never called because the wait time was so long. So in that case, it was not about cost savings, it’s the fact that they’re able to reach many more customers than they could before. So that’s one. One part is people saying, “I have a bunch of scenarios, I’m going to pick the three”, and in many cases, they’re actually augmenting something they’re doing or doing something they couldn’t do before, that’s scenario one.

Scenario two was I have, for example, there’s a large insurance company that’s working with us. Today, when they do claims and risk calculation, it takes a long time to handle the claims and the risk, particularly the risk calculation, because there’s thousands of pages of documents, there’s a lot of spreadsheets going back and forth. They put it into Gemini and it was able to run the calculations much, much more quickly. So second is I’m picking a very high value use case for my organization, which is the core function, and I’m going to implement it because I can get a real competitive advantage. In their case, it’s the fact that they can both get more accurate scoring on the risk and they can also do a much more accurate job, faster job in responding.

And the third scenario is what you said. “Hey, we’ve got a bunch of people, we’re going to give it to a certain number of developers”. For example, our coding tool, “They are going to test it, they say it helps me generate much better unit tests, it helps me write better quality code”. Wayfair’s CTO was talking about what their experience is, and then they say, “Let’s go broadly”, so all three patterns are being seen…

Do you see AI, though, in all this talk about, “You need to choose a platform? Sure, our platform’s going to be open, you can use it anywhere” — but do you see this as a wedge to be like, “Okay, this is a reboot broadly for the industry as far as cloud goes, and sure, your data may be in AWS, or in Azure, or whatever it might be, but if you have a platform going forward, you should start with us”? Then maybe we’ll look up in ten, fifteen years, and all the center of gravity shifted to wherever the platforms are?

TK: For sure. I mean, it’s a change in the way that people make purchase decisions, right? Ten years ago, you were worried about commodity computing, and you were like, “Who’s going to give me the lowest cost for compute, and the lowest cost for storage, and the lowest cost for networking?”. Now the basis of competition has changed and we have a very strong position, given our capability both at the top, meaning offering a platform, offering models, et cetera, and building products that have long integrated models.

Just as an example, Ben, integrating a model into a product is not as easy as people think; Gmail has been doing that since 2015. On any given day, there are over 500 million operations that we run, and to do it well, when a partner talked about the fact that 75% of people who generate an image for slides actually end up presenting it, it’s because we have paid a lot of attention over the years on how to integrate it.

So we play at the top of the stack, and we have the infrastructure and scale to do it really well from a cost, performance, and global-scale perspective, and that changes the nature of the competition. So we definitely see this, as you said, as a reset moment for how customers think about their cloud decision.

If you’re talking about a lot of choices about models, and customers were over-indexed on choosing the correct model, that implies that models are maybe a commodity, and that we’ve seen with GPT-4 prices are down something like 90% since release. Is that a trend you anticipate continuing, and is it something that you want to push and actually accelerate?

TK: Models — whether they’re a commodity or not, time will tell, these are very early innings. All we’re pointing out is every month, there’s a new model from a new player, and the existing models get better on many different dimensions. It’s like trying to pick a phone based on a camera, and the camera’s changing every two weeks, right? Is that the basis on which you want to make your selection?

Well, but if you make that basis, then you might be locked into the operating system.

TK: That’s right, and so that’s why we say you should choose an open platform, and you should be able to use a collection of different models, because it’s changing, and don’t lock into a particular operating system at a time when the applications on top of it are changing, to use your analogy.

Why is your platform open as compared to others? Microsoft has announced you can use other models, not just OpenAI models. Amazon is sort of, to the extent you can ascertain a strategy, it’s like, “Look, we’re not committing to anything, you could do whatever you want.” Why do you feel comfortable saying, “No, we’re the open one,” and they’re not?

TK: Well, first of all, the completeness of our platform; Vertex has a lot more services than you can get with the other platforms. Secondly, in order to improve a platform, you have to have your own model, because there’s a bunch of things you do when you engineer services with that model.

I’ll give you a really basic example. You use a model, you decide to ground the answers. Grounding improves quality, but can also introduce latency. How do you make sure that when you’re grounding, you’re not serially post-processing a model’s answer to add latency? Unless you have your own model, you wouldn’t even get to that. So because we have our own model, we’re able to engineer these things, but we make them available as services with other models, so you can use enterprise grounding as a very specific example. There are lots of customers using it with Mistral and with Llama and with Anthropic.

Second thing, we are not just offering models, but we’re actually helping the third party go to customers with us. I met a lot of customers today jointly with [CEO] Dario [Amodei] from Anthropic, and it’s a commitment to make sure we’re not just giving you our infrastructure, we’re not just training, integrating a model into Vertex, we’re not just making it a first-class model, but we’re actually bringing it to clients together.

I think that’s what we mean by open. One of the other players has no models of their own, so naturally they’re offering a bunch of models, and the other player has outsourced their model development to a third party…

How important is that million-token context window in the story you are telling? My perception is, there’s a lot of stuff you could do if you build a lot of infrastructure around it, whether it be RAG or other implementations, but it feels like with Gemini 1.5 there are jack-of-all-trades possibilities that seem to open up to a much greater extent, and there’s a bit where, you had that compliance bit, the statements of work and they had to compare it to the 100-page compliance document. I got some comments like, “Maybe companies shouldn’t have 100-page compliance notebooks or whatever it might be”, but the reality is, that’s the case, the world has that. My perception of the keynote is, that was the killer feature, that seemed to undergird everything. Was that the correct perception?

TK: Yeah, there are two reasons. Just to be perfectly clear, Ben, the long context window allows you to do three things that are important. First of all, when you look at high definition video, for example, and other modalities, and just imagine you’re dumping a high definition video in and you want to create out of the NCAA final, which just happened, the highlight reel but you don’t want to specify every attribute about what you want spliced into the highlight reel. The model has to digest it and because it has to process it, it’s a fairly dense representation of the video because there are objects, there are people moving, there are actions, like I’m throwing a pass. They could be, I have my name on the back of my t-shirt, there could be a score like, “When did they change from 24 to 26 points? Did they score three pointers?”, so there are many, many, many dimensions. So reasoning becomes a lot better when you can take a lot more context, that’s one, and it’s particularly true of modality.

The second is, today people don’t use models to maintain state or memory, meaning they ask it a question, the next time they think, “Hey, it may not remember”, so when you’re able to maintain a longer context, you can maintain more state, and therefore you can do richer and richer things rather than just talk back-and-forth with a very simplistic interface. You see what I mean?

The third thing is, there are certainly complex scenarios, it’s the unfortunate reality, there’s lots of policies and procedure books that are even longer than what we showed, and so there are scenarios like that that we have to be able to deal with. But in the longer term, the real breakthrough is the following. Context length, if you can decouple the capabilities of the model and the latency to serve a model from the context length, then you can fundamentally change how quickly you can scale a model.

Is this ultimately, from your perspective, a question of infrastructure, and that just leans into Google’s biggest advantage?

TK: It’s a question of global infrastructure, but also optimizations at every layer in the infrastructure, which we can co-engineer with DeepMind…

Sundar Pichai mentioned in his video greeting, he emphasized the number of AI startups, and particularly AI unicorns using Google Cloud. To go back to the reboot idea, do you view the AI Era as a restart in terms of capturing the next generation of companies? I mean, obviously, AWS had a huge advantage here as far as general cloud computing, the entire mobile app ecosystem was by and large built on AWS. In the enterprise era, you have to deal with what’s there, what they’ve already dealt with, you have to have the integrations. Do you see yourself as having this as a big focus, “We’re going to own this era of startups”?

TK: Yes. And by the way, every one of those startups is being pursued by the other two, and the fact is that 90% of the unicorns and 60% of all AI-funded startups [use Google Cloud], up in each case by ten points in eight months, and they are the most discerning ones. I mean, just to be frank, for the unicorns, it is really the biggest cost of goods sold in their P&L.

So what’s the driver there?

TK: The efficiency of our infrastructure.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon (parent of AWS), Microsoft, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 April 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 14 April 2024:

1. Perplexity is ready to take on Google – Alex Heath and Aravind Srinivas

What’s it like on the frontlines of the AI talent war right now?

I made mistakes in chasing the wrong people. Recently there was a really senior backend engineer who ended up joining X.AI. He was talking to us, too.

I was talking to Patrick Collison for advice on this, and he said, “Why are you even in this race? Why are you trying to compete with these people? Go after people who want to actually build the stuff that you’re building and don’t chase AI clout.”

There are a lot of good engineers who are applying to us and Anthropic and OpenAI and X.AI and Character.ai. These are the top five choices of AI startups. And people normally just go to the highest bidder. Whoever has the highest valuation will be able to win this race all the time because, on paper, you’re always going to be able to offer the same amount of shares but the dollar value is going to be much higher…

...Have you taken any kind of lesson away from the Gemini diversity scandal? I saw you recently integrated photo generation into Perplexity.

Factfulness and accuracy is what we care about. Google has many other cultural things that they care about, and that’s why they made their products that way. They should only prioritize one aspect, which is giving an accurate answer. They don’t do that for whatever reasons. They have all these other people in the room trying to make decisions.

If I learned one thing, it’s that it’s better to be neutral. Don’t try to have any values you inject into the product. If your product is an answer engine, where people can ask questions and get answers, it better respond in a scholarly way. There’s always a nerd in your classroom who’s just always right, but you don’t hate them for having a certain political value, because they are just going to give you facts. That’s what we want to be. And Google’s trying to be something different. That’s why they got into trouble.

What are you hearing generally about the state of Google from people there right now?

The researchers are still pretty excited about what they’re doing. But the product team messes up their releases. The Gemini product team was fine-tuning all these models to put in the product. There’s a lot of bureaucracy, basically.

I know Sergey Brin being there is making things faster and easier for them. You might have seen the video that was circulating of him being at some hackathon. He brushed it [the Gemini diversity scandal] off as just some kind of a small bug, right?

It’s not a small bug. It’s actually poor execution. The image generation thing is actually very easy to catch in testing. They should have caught it in testing. When you consider Google as the place for de facto information and correctness, when they make mistakes it changes the way you perceive the company…

How much of your tech is in-house versus fine-tuning all these models that you work with? What’s your tech secret sauce?

In the beginning, we were just daisy-chaining GPT-3.5 and Bing. Now, we post-train all these open-source models ourselves. We also still use OpenAI’s model.

We are never going to do the full pre-training ourselves. It’s actually a fool’s errand at this point because it takes so much money to even get one good model by pre-training yourself. There are only four or five companies that are capable of doing that today. And when somebody puts out these open-source models, there’s no reason for you to go and recreate the whole thing.

There is a new term that has emerged in this field called post-training. It’s actually like fine-tuning but done at a much larger scale. We are able to do that and serve our models ourselves in the product. Our models are slightly better than GPT-3.5 Turbo but nowhere near GPT-4. Other than Anthropic and Gemini, nobody has actually gotten to that level yet.
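As a rough illustration of what fine-tuning (or, at larger scale, post-training) an open-source model looks like in code, here is a sketch using the Hugging Face stack; this is not Perplexity's actual pipeline, and the model name and data file below are placeholders.

```python
# Generic fine-tuning / "post-training" sketch with Hugging Face
# transformers -- illustrative only; model and dataset are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "mistralai/Mistral-7B-v0.1"          # any open base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token               # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder post-training data: a JSONL file with a "text" column.
data = load_dataset("json", data_files="post_training_data.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="post-trained",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

The "much larger scale" Srinivas describes is mostly a matter of data volume and compute; the code shape stays roughly this simple.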

How are you going to solve AI hallucination in your product? Can you?

The reason why we even have sources at the top of the answer is because we want to make sure that users have the power to go verify the answer. We precisely tell you which link to go to versus showing ten blue links and you not being sure which to read.

The other way is constantly improving the authority of which sources we use to cite the answer and then getting rid of the bad ones. When you don’t have sufficient information, it’s better to say you don’t know rather than saying something you made up.

2. Book Summary: Our Investing Strategy, who does the market smile upon – Made In Japan

He goes by the name of Tatsuro Kiyohara, who was the CIO of Tower Investment Management, which ran the flagship K-1 fund that compounded 20% annually during his 25-year run (that’s 9300%). Compare this to the TOPIX, which did an annualized return of roughly 3%.

But it’s not just the numbers that he posted that were inspiring; the journey to get there was a tumultuous one that would be almost impossible for us to replicate. He is built differently. Who else is willing to pour in almost their entire net worth when the fund is down 72% in an attempt to save his fund, not just for his own sake, but for the clients that decided to stick with him amid all the redemptions?..

…During his stint in New York, his clients included the familiar hedge funds we’d all heard of, one of which was Julian Robertson’s Tiger Management. Meeting Tiger was perhaps the first instance he decided that he wanted to do something with hedge funds; he would spend his days talking stocks at their office. Tiger appreciated him too – one time, he realized that Tiger was short a stock, Kawasaki Steel, and also realized Nomura was attempting to ‘promote’ the stock (what you call ‘pump’ these days). He almost had to fight with Tiger to convince them to exit, arguing that the stock was going up regardless of fundamentals, and they finally obliged. The stock 3xed not long after. No surprise that he was invited to Tiger’s annual party to be awarded best salesman of the year…

…The game is to figure out when you’re in the minority. This doesn’t always mean that because everyone is bullish, you being bullish isn’t a variant perception. If, for example, the market expects a company to grow by 10% a year for the next 5 years, and you believe it will be more like 30%, you are still in the minority.

It is thus more difficult to have a good investment idea in large caps because it’s harder to figure out what’s priced in.

From an opportunity cost standpoint for an individual investor, the return on time is so much better researching cheap micro & small-caps with the potential to grow. If you had an hour with the CEO of a large company or a small one, the likelihood you will get more valuable insights (your alpha) is much higher in the latter. In general, you also won’t lose much if you’re wrong here; these companies aren’t priced for growth anyway. For him personally, he tries to avoid investing in stocks that have already gone up as much as possible…

…One of his first investments with the fund that falls into this archetype was Nitori, which is a household name today but when he first invested nobody touched it. It was a Hokkaido-based company, and the furniture market in Japan was shrinking. What he realized though was that the market was very fragmented, and he saw Nitori as the one to take market share over time with an exceptional founder. Which proved to be correct. His investment in Nitori 10xed all the while the market halved. The lesson here was that even with a shrinking market, if you find the right company, you can generate strong returns. No doubt there are some diamonds in the rough…

…In the end, 90% of investing in small/micro-caps is about management.
Here’s what he looks for:

  1. Operators with a growth mindset
  2. Talented employees that are aligned with the CEO’s Vision and Mission
  3. Doesn’t get crushed by competitors
  4. A widening core competence (read = Moat) as it grows
  5. Not in an industry where you’re pulling forward demand (There is a finite pile of demand that once absorbed will be gone, he points out the Japanese M&A industry as one)
  6. The management is impeccable with their word, that is, they do what they say
  7. Companies that have positive feedback loops…

…With one company, every time he requested a meeting with Investor Relations, the CEO showed up without fail, which he found strange. Eventually, though, this business got caught in an accounting scandal and went under. Maybe if the CEO shows up too readily you need to be careful. Another business had zero interest in doing IR, showing up in their factory uniform, and wasn’t too friendly. One day, however, they showed up in suits! This business also didn’t do too well…

…There are mavericks among them, like the founder of Zensho Holdings (the operator of the beef bowl chain Sukiya). This was an unpopular stock when it IPO’d. The first thing he saw when visiting the founder’s office was a bench press. He was ‘ripped like Popeye’. At the meeting, all the founder talked about was how superior a food the ‘beef bowl’ (gyuudon) was. If Japanese people had eaten enough beef bowls and bench-pressed enough, Japan wouldn’t have lost the war (LOL).

One time, one of Kiyohara-san’s employees told this founder that he had also been going to the gym; the founder immediately challenged said employee and tested him with a ‘lariat’, a type of wrestling tackle. The key is to find a CEO who knows his wrestling moves.

The IR was also interesting. In its mid-term plan, the company even included the P/E multiple it should be trading at in 5 years…

…As he once reflected on his portfolio, he realized that looking at a business and its management was like looking at himself in the mirror. If assessing management correctly is the key to investing, also understand that there is a self-selection bias. If the Founder and CEO is that much more brilliant than you, you won’t even realize how brilliant he is. That said, you’d never invest in a ‘dumb’ CEO, so ultimately you end up selecting people ‘on your level’. This, it appears, is the reality of investing in microcaps…

…He was adamant that, whether large or small, he had no interest in buying an expensive business. If the P/E was too high, that was a pass.

Over the years he tried building various growth models and realized this had almost no benefit to making money, so he just stopped. It was a waste of time.

He screens for such companies by looking for a high net cash ratio, which is just net cash over market cap (so basically net-nets).

He also liked to invert the problem: by looking at the current P/E of the stock, you can figure out the kind of earnings growth it implied. No rocket science here – he tries to figure out the terminal multiple with a perpetual growth model. For example, if the risk-free rate is 2% and the P/E is 10x, that implies terminal growth of -8.2%, all else equal; if earnings growth was -3.1% instead, then the P/E should be 20x. (Yes, this is all negative growth.)
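Here is a sketch of that inversion, assuming the simple perpetuity relationship P/E = 1 / (r - g) and solving for g; the book's exact convention isn't spelled out, but this roughly reproduces the quoted figures.

```python
# Invert a perpetuity valuation to back out the implied growth rate.
# Assumes P/E = 1 / (r - g), i.e. g = r - E/P (an assumed convention).
def implied_growth(pe: float, risk_free: float) -> float:
    return risk_free - 1 / pe

for pe in (10, 20):
    g = implied_growth(pe, risk_free=0.02)
    print(f"P/E {pe}x with a 2% risk-free rate implies g = {g:+.1%}")
# P/E 10x -> roughly -8%, P/E 20x -> roughly -3%, in line with the
# -8.2% and -3.1% cited above.
```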

The 40-year average of the risk-free rate is about 1.7% in Japan, so that sounds fair to him. Also, he says, forget about the concept of equity risk premium – this is just a banker’s term to underwrite uncertainty. If you’re uncertain, just model that into your earnings projections…
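For readers who want to replicate the inversion, here is a sketch assuming the standard perpetuity relationship P/E = 1 / (r − g). With a 2% risk-free rate it lands close to, though not exactly on, the figures above, so his precise convention likely differs slightly:

```python
# Invert the perpetuity formula P/E = 1 / (r - g) to back out the growth
# rate a given multiple implies. (Assumption: this simple convention; the
# notes don't spell out the exact model, hence -8.0% here vs. -8.2% above.)

def implied_terminal_growth(pe: float, risk_free_rate: float) -> float:
    return risk_free_rate - 1.0 / pe

for pe in (10, 20):
    g = implied_terminal_growth(pe, risk_free_rate=0.02)
    print(f"P/E {pe}x -> implied perpetual growth {g:.1%}")
# P/E 10x -> implied perpetual growth -8.0%
# P/E 20x -> implied perpetual growth -3.0%
```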

…He doesn’t look at P/B, where investors may be calculating the liquidation price, which can be inaccurate.

The point he thinks many miss: if a company is loss-making, would a hypothetical buyer buy the assets at face value? And if the business is a decent, profitable one – no one’s going to be looking at the P/B; they’ll be fixated on the P/E…

…Risks of small/micro caps:

  • Illiquidity discount
  • Many businesses are small suppliers to a much larger company
  • It operates in an industry with low entry barriers
  • Limited talent within the organization
  • Succession issues and nepotism – the son of the owner can be a dumb ass
  • Because no one notices them – the likelihood of a fraud/scandal is higher
  • When the owner retires, this person may pay out a massive retirement bonus
  • Because it’s an owner-operator and harder to take over, no one keeps management in check, and they may screw up
  • When there is an accounting fraud, the damage will be large
  • They don’t have the resources to expand overseas
  • They have an incentive to keep their valuation low to minimize the inheritance tax.

3. Three reasons why oil prices are remarkably stable – The Economist

Shouldn’t oil prices be surging? War has returned to the Middle East. Tankers in the Red Sea—through which around 12% of seaborne crude is normally shipped—are under attack by Houthi militants. And OPEC, a cartel of oil exporters, is restricting production. Antony Blinken, America’s secretary of state, has invoked the spectre of 1973, when the Yom Kippur war led to an Arab oil embargo that quadrupled prices in just three months. But oil markets have remained calm, trading mostly between $75 and $85 per barrel for much of last year…

…Oil production is now less concentrated in the Middle East than it has been for much of the past 50 years. The region has gone from drilling 37% of the world’s oil in 1974 to 29% today. Production is also less concentrated among members of OPEC… That is partly because of the shale boom of the 2010s, which turned America into a net energy exporter for the first time since at least 1949…

…Another reason for calm is OPEC members’ ample spare production capacity (ie, the amount of oil that can be produced from idle facilities at short notice)…

…America’s Energy Information Administration (EIA) estimates that OPEC’s core members have around 4.5m barrels per day of spare capacity—greater than the total daily production of Iraq…

…The world still has a big appetite for oil: according to the EIA, demand hit a record in 2023 and will be higher still in 2024, thanks in part to growth in India. But that is unlikely to push prices much higher. Global growth is not at the levels seen in the early 2000s. China, long the world’s biggest importer of oil, is experiencing anaemic economic growth. Structural changes to its economy also make it less thirsty for the stuff: next year, for example, half of all new cars sold in the country are expected to be electric.

4. How We’ll Reach a 1 Trillion Transistor GPU – Mark Liu and H.S. Philip Wong

All those marvelous AI applications have been due to three factors: innovations in efficient machine-learning algorithms, the availability of massive amounts of data on which to train neural networks, and progress in energy-efficient computing through the advancement of semiconductor technology. This last contribution to the generative AI revolution has received less than its fair share of credit, despite its ubiquity.

Over the last three decades, the major milestones in AI were all enabled by the leading-edge semiconductor technology of the time and would have been impossible without it. Deep Blue was implemented with a mix of 0.6- and 0.35-micrometer-node chip-manufacturing technology. The deep neural network that won the ImageNet competition, kicking off the current era of machine learning, was implemented with 40-nanometer technology. AlphaGo conquered the game of Go using 28-nm technology, and the initial version of ChatGPT was trained on computers built with 5-nm technology. The most recent incarnation of ChatGPT is powered by servers using even more advanced 4-nm technology. Each layer of the computer systems involved, from software and algorithms down to the architecture, circuit design, and device technology, acts as a multiplier for the performance of AI. But it’s fair to say that the foundational transistor-device technology is what has enabled the advancement of the layers above.

If the AI revolution is to continue at its current pace, it’s going to need even more from the semiconductor industry. Within a decade, it will need a 1-trillion-transistor GPU—that is, a GPU with 10 times as many devices as is typical today…

…Since the invention of the integrated circuit, semiconductor technology has been about scaling down in feature size so that we can cram more transistors into a thumbnail-size chip. Today, integration has risen one level higher; we are going beyond 2D scaling into 3D system integration. We are now putting together many chips into a tightly integrated, massively interconnected system. This is a paradigm shift in semiconductor-technology integration.

In the era of AI, the capability of a system is directly proportional to the number of transistors integrated into that system. One of the main limitations is that lithographic chipmaking tools have been designed to make ICs of no more than about 800 square millimeters, what’s called the reticle limit. But we can now extend the size of the integrated system beyond lithography’s reticle limit. By attaching several chips onto a larger interposer—a piece of silicon into which interconnects are built—we can integrate a system that contains a much larger number of devices than what is possible on a single chip…

…HBMs are an example of the other key semiconductor technology that is increasingly important for AI: the ability to integrate systems by stacking chips atop one another, what we at TSMC call system-on-integrated-chips (SoIC). An HBM consists of a stack of vertically interconnected chips of DRAM atop a control logic IC. It uses vertical interconnects called through-silicon-vias (TSVs) to get signals through each chip and solder bumps to form the connections between the memory chips. Today, high-performance GPUs use HBM extensively…

…With a high-performance computing system composed of a large number of dies running large AI models, high-speed wired communication may quickly limit the computation speed. Today, optical interconnects are already being used to connect server racks in data centers. We will soon need optical interfaces based on silicon photonics that are packaged together with GPUs and CPUs. This will allow the scaling up of energy- and area-efficient bandwidths for direct, optical GPU-to-GPU communication, such that hundreds of servers can behave as a single giant GPU with a unified memory. Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry’s most important enabling technologies…

…We can see the trend already in server GPUs if we look at the steady improvement in a metric called energy-efficient performance. EEP is a combined measure of the energy efficiency and speed of a system. Over the past 15 years, the semiconductor industry has increased energy-efficient performance about threefold every two years. We believe this trend will continue at historical rates. It will be driven by innovations from many sources, including new materials, device and integration technology, extreme ultraviolet (EUV) lithography, circuit design, system architecture design, and the co-optimization of all these technology elements, among other things.

[Chart: largely thanks to advances in semiconductor technology, energy-efficient performance is on track to triple every two years (EEP units are 1/femtojoule-picoseconds).]

In particular, the EEP increase will be enabled by the advanced packaging technologies we’ve been discussing here. Additionally, concepts such as system-technology co-optimization (STCO), where the different functional parts of a GPU are separated onto their own chiplets and built using the best performing and most economical technologies for each, will become increasingly critical.
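For a sense of scale, here is a back-of-the-envelope extrapolation of that threefold-every-two-years trend, assuming it simply continues; the horizons below are my choice for illustration, not figures from the article.

```python
# Extrapolate the cited EEP trend (~3x every two years), assuming it holds.

def eep_multiplier(years: float, factor: float = 3.0, period: float = 2.0) -> float:
    return factor ** (years / period)

for years in (2, 6, 10):
    print(f"after {years:>2} years: ~{eep_multiplier(years):.0f}x")
# after  2 years: ~3x
# after  6 years: ~27x
# after 10 years: ~243x
```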

5. The illusion of moral decline – Adam Mastroianni

In psychology, anything worth studying is probably caused by multiple things. There may be lots of reasons why people think morality is declining when it really isn’t.

  • Maybe people say that morality is declining because they think it makes them look good. But in Part I, we found that people are willing to say that some things have gotten better (less racism, for instance). And people still make the same claims when we pay them for accuracy.
  • Maybe because people are nice to you when you’re a kid, and then they’re less nice to you when you’re an adult, you end up thinking that people got less nice over time. But people say that morality has declined since they turned 20, and that it’s declined in the past four years, and all that is true for old people, too.
  • Maybe everybody has just heard stories about how great the past is—like, they watch Leave It to Beaver and they go “wow, people used to be so nice back then.” But again, people think morality has declined even in the recent past. Also, who watches Leave It to Beaver?
  • We know from recent research that people denigrate the youth of today because they have positively biased memories of their own younger selves. That could explain why people blame moral decline on interpersonal replacement, but it doesn’t explain why people also blame it on personal change.

Any of these could be part of the illusion of moral decline. But they are, at best, incomplete.

We offer an additional explanation in the paper, which is that two well-known psychological phenomena can combine to produce an illusion of moral decline. One is biased exposure: people pay disproportionate attention to negative information, and media companies make money by giving it to us. The other is biased memory: the negativity of negative information fades faster than the positivity of positive information. (This is called the Fading Affect Bias; for more, see Underrated ideas in psychology).

Biased exposure means that things always look outrageous: murder and arson and fraud, oh my! Biased memory means the outrages of yesterday don’t seem so outrageous today. When things always look bad today but brighter yesterday, congratulations pal, you got yourself an illusion of moral decline.
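To see how these two biases combine, here is a toy calculation (my illustration, not the authors’ model). Every period produces the same fixed mix of events; exposure over-samples the bad ones, and negative affect fades faster than positive affect:

```python
# A toy model of the biased-exposure + biased-memory mechanism. The event
# mix never changes; only exposure and memory are biased. All numbers are
# illustrative assumptions.

P_BAD = 0.7        # biased exposure: most of what you see/hear is negative
FADE_BAD = 0.90    # per-year retention of negative affect (fades fast)
FADE_GOOD = 0.98   # per-year retention of positive affect (fades slowly)

def perceived_morality(years_ago: int) -> float:
    bad = P_BAD * (-1.0) * FADE_BAD ** years_ago
    good = (1 - P_BAD) * (+1.0) * FADE_GOOD ** years_ago
    return bad + good

for years_ago in (0, 5, 10, 20):
    print(f"{years_ago:>2} years ago: {perceived_morality(years_ago):+.2f}")
# Output drifts from -0.40 (today) to about +0.12 (20 years ago): the
# present looks bad and the past looks good, though nothing changed.
```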

We call this mechanism BEAM (Biased Exposure and Memory), and it fits with some of our more surprising results. BEAM predicts that both older and younger people should perceive moral decline, and they do. It predicts that people should perceive more decline over longer intervals, and they do. Both biased attention and biased memory have been observed cross-culturally, so it also makes sense that you would find the perception of moral decline all over the world.

But the real benefit of BEAM is that it can predict cases where people would perceive less decline, no decline, or even improvement. If you reverse biased exposure—that is, if people mainly hear about good things that other people are doing—you might get an illusion of moral improvement. We figured this could happen in people’s personal worlds: most people probably like most of the people they interact with on a daily basis, so they may mistakenly think those people have actually become kinder over time.

They do. In another study, we asked people to answer those same questions about interpersonal replacement and personal change that we asked in a previous study, first about people in general, and then about people that they interact with on a daily basis. When we asked participants about people in general, they said (a) people overall are less moral than they were in 2005, (b) the same people are less moral today than in 2005 (personal change) and (c) young people today are less moral than older people were in 2005 (interpersonal replacement). Just as they did before, participants told us that morality declined overall, and that both personal change and interpersonal replacement were to blame.

But we saw something new when we asked participants about people they know personally. First, they said individuals they’ve known for the past 15 years are more moral today. They said the young folks they know today aren’t as moral as the old folks they knew 15 years ago, but this difference was smaller than it was for people in general. So when you ask people about a group where they probably don’t have biased exposure—or at least not biased negative exposure—they report less moral decline, or even moral improvement.

The second thing that BEAM predicts is that if you turn off biased memory, the illusion of moral decline might go away. We figured this could happen if you asked people about times before they were born—you can’t have memories if you weren’t alive. We reran one of our previous studies, simply asking participants to rate people in general today, the year in which they turned 20, the year in which they were born, 20 years before that, and 40 years before that.

People said, basically, “moral decline began when I arrived on Earth”.

Neither of these studies means that BEAM is definitely the culprit behind the illusion of moral decline, nor that it’s the only culprit. But BEAM can explain some weird phenomena that other accounts can’t, and it can predict some data that other accounts wouldn’t, so it seems worth keeping around for now.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 07 April 2024)

Here are the articles for the week ending 07 April 2024:

1. China’s capitalist experiment – Michael Fritzell

I just read a great new book by analyst Anne Stevenson-Yang. It’s called Wild Ride and is available for pre-order on Amazon.

The book tells the story of China’s economic miracle from the late 1970s until today - how Deng Xiaoping’s reforms unleashed a wave of entrepreneurship and led to China’s economy becoming one of the largest in the world.

However, it also discusses some of the system’s fragilities and how the country now seems to be turning inwards again…

…China under Mao Zedong was a closed-off, repressive society. Meat was a once-in-a-week luxury. Cooking was done outside. And personal freedoms were more or less non-existent…

…After Mao died in 1976, a power struggle ensued. Ultimately, Mao’s former ally, Deng Xiaoping, emerged victorious from this struggle. One of his first tasks was to open up the economy to the outside world. For this, he would need hard currency.

Practical considerations took priority in those early years. When Deng Xiaoping travelled to the United States in 1979, he ordered an inventory of all hard currency in China’s banks. He came up with only US$38,000 - hardly enough to pay for his delegation.

This was a low point for the Chinese economy. Deng recognized that China needed exports. Japan, Korea, and Taiwan became wealthy by promoting the export of manufactured goods. So Deng adopted a twin strategy of promoting exports in special economic zones while shielding ordinary Chinese from foreign cultural influences…

…Deng’s special economic zones were newly incorporated entities acting as quasi-governments. What made them different was that their managers were rewarded for meeting targets focused on the scale of capital investment and gross tax revenues…

…Initially, foreign influence was kept at bay. Foreign nationals were required to live in special compounds, use separate medical facilities, and even use special currencies. Romantic relationships between foreigners and Chinese were forbidden as well.

The special economic zones in the southern part of Guangdong province, such as Shenzhen, were particularly successful. One reason was that they were near port facilities. But perhaps even more importantly, they had access to the financial powerhouse Hong Kong, with its banks and talented entrepreneurs – while, of course, also having access to hundreds of millions of workers from inland provinces.

In fact, Shenzhen became a model for the China that was about to develop. It was the first city to abolish the food coupon system, thus allowing residents to buy food with their own money. And residents were soon allowed to lease their own land…

…Another important part of Deng’s reforms was allowing farmers to grow whatever they pleased after meeting some quota. They could then sell any surplus in newly established markets. This unleashed immense rural income growth of 12% per year throughout the 1980s.

A similar system was later introduced to state-owned enterprises as well. They were now allowed to retain profits, either reinvesting them or paying them out as bonuses to employees. Managers suddenly realized they had incentives to increase revenues and profits, and some became wealthy…

…But beneath the surface, discontent was growing. Students were devouring books brought in from overseas. They were clamoring not only for economic gains but also for political reforms. By 1987, Beijing students regularly held marches from the university districts to Tian’anmen Square to protest against political restrictions…

…The crackdown on the student demonstrations in Beijing in June 1989 led to a significant political shift. For two years after the massacre, the country closed off, and dissidents were hunted down and jailed. Anyone who participated in the protests was either disappeared, jailed, demoted or unable to attend university or get a good job.

After the student protests, the Communist Party shifted its strategy to maintaining control. It upped its propaganda efforts, conveying that if the party were to collapse, China would end up in total anarchy.

In the aftermath of Tian’anmen, a communication system was established that improved the party’s control over the provinces. Tax collection and audits were tightened, and a criminal detection and surveillance system was developed…

…One of Deng’s buzzwords during this era was “to get rich is glorious” (致富光荣). You no longer had to be ashamed of pursuing wealth; it was promoted from the top down.

The Communist Party bet that as long as people felt their livelihoods improved, they would not rock the boat. The restive students who protested at Tian’anmen Square would now focus on economic opportunity rather than spiritual dissatisfaction.

After his comeback in the early 1990s, Deng picked out the young talent Zhu Rongji to push for further reforms. In a long list of achievements, Zhu Rongji managed to:

  • Cut the government bureaucracy in half
  • Privatize housing
  • Sell off 2/3 of the companies in the state sector
  • Unify the dual currencies used prior to 1994
  • Introduce a nationwide tax system
  • Take control of the appointment of all provincial-level governors…

…After the reforms of the 1990s, China’s economic growth really took off. Exporters in China’s coastal regions benefitted from the country’s admission into the WTO, and Chinese returnees started businesses left and right…

…It was also during the 2000s that the property boom really kicked into high gear. In the late 1990s, Zhu Rongji instituted reforms that allowed state-owned enterprises to sell worker housing back to tenants for a pittance. As prices rose throughout the 2000s, tenants now held significant household equity, which they could then leverage to buy new, even fancier, commodity housing.

A change in the tax structure also incentivized local governments to promote construction. In the mid-1990s, the central government established its own offices to collect taxes directly. In other words, local governments had less ability to raise taxes themselves, instead relying on remittances from the central government. Local governments thus became cash-poor.

To fund their spending programs, they instead set up local government financing vehicles (LGFVs), which used land as collateral for borrowing. And since they were government entities, they were seen as quasi-sovereign borrowers enjoying full access to loans from state banks. Over time, the number of LGFVs grew to over 10,000. They operate urban infrastructure, subway systems, water and gas utilities, etc. Some of them are profitable, but many of them are not…

…The privatization of China’s housing market, which provided collateral for new loans, created one of the biggest credit booms the world has ever seen. Later on, in just five years, more credit was created than the entire value of the US banking system…

…After the Great Financial Crisis of 2008, the Communist Party leadership unleashed a CNY 4 trillion stimulus program that brought forward demand for infrastructure and spending targets.

At this point, it was already becoming clear that the capital stock for infrastructure was starting to exceed that of most other developing or even developed economies. By 2012, China had 8x the length of highways per unit of GDP as Japan. At the time, more than 70% of China’s airports were failing to cover their own costs, even though such costs tend to be modest…

…Meanwhile, with the state pushing for big stimulus packages, the government increasingly directed economic resources. Concepts such as “advance of the state, retreat of the private sector” (国进民退) became more common, reflecting a shift in the economy away from private sector entrepreneurship…

…And indeed, with the emergence of Xi Jinping, the state has started to reassert control. State companies are now receiving most of the loans from China’s banks. State media is now talking of “national rejuvenation”, trying to unite the country around nationalist sentiment and acceptance of a “moderately prosperous lifestyle” (小康社会). This is a clear break from the era of Deng Xiaoping’s reforms when getting rich was perhaps the greatest virtue in life…

…Further, she believes that a Russia-Iran-China bloc is currently being formed and that China’s financial system could serve as a bedrock for trade within the bloc:

“If, however, China were someday to shrink its network of trading partners to other dictatorships like Russia and North Korea, its dedicated financial system could become the principal one used for trade among those nations.”

In other words, Anne believes that China is withdrawing from its informal pact with Western nations about open trade, with the experiment in Western-style capitalism that commenced in 1979 over. The Chinese economy is now morphing into a different system, one where the state reigns supreme and will become an influential partner in a new trading bloc formed by China’s current geopolitical allies.

2. 20 Lessons From 20 Years of Managing Money – Ben Carlson

1. Experiences shape your perception of risk. Your ability and need to take risk should be based on your stage in life, time horizon, financial circumstances and goals.

But your desire to take risk often trumps all that, depending on your life experiences. If you worked at Enron or Lehman Brothers or AIG or invested with Madoff, your appetite for risk will be forever altered.

And that’s OK as long as you plan accordingly.

2. Intelligence doesn’t guarantee investment success. Warren Buffett once wrote, “Investing is not a game where the guy with the 160 IQ beats the guy with the 130 IQ. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing.”

I’ve met so many highly educated individuals who are terrible investors. They can’t control their emotions because their academic pedigree makes them overconfident in their abilities.

Emotional intelligence is the true sign of investment smarts.

3. No one lives life in the long-term. Long-term returns are the only ones that matter but you have to survive a series of short-terms to get there.

The good strategy you can stick with in those short-terms is preferable to the perfect strategy you can’t stick with…

9. The biggest risks are always the same…yet different. The next risk is rarely the same as the last risk because every market environment is different.

On the other hand, the biggest mistakes investors make are often the same — timing the market, recency bias, being fearful when others are fearful and greedy when others are greedy and investing in the latest fads.

It’s always a different market but human nature is the constant…

16. Experience is not the same as expertise. Just because you’ve been doing something for a long time doesn’t mean you’re an expert.

I know plenty of experienced investors who are constantly fighting the last war to their own detriment.

How many people who “called” the 2008 crash completely missed the ensuing bull market? All of them?

How many investment legends turn into permabears the older they get because they fail to recognize how markets have changed over time?

Loads of investment professionals who have been in the business for many years make the same mistakes over and over again…

18. There is a big difference between rich and wealthy. Lots of rich people are miserable. These people are not wealthy, regardless of how much money they have.

There are plenty of people who wouldn’t be considered rich based on the size of their net worth who are wealthy beyond imagination because of their family, friends and general contentment with what they have.

19. Optimism should be your default. It saddens me to see an increasing number of cynical and pessimistic people every year.

I understand the world can be an unforgiving place and things will never be perfect but investing is a game where the optimists win.

3. 8 Google Employees Invented Modern AI. Here’s the Inside Story – Steven Levy

EIGHT NAMES ARE listed as authors on “Attention Is All You Need,” a scientific paper written in the spring of 2017. They were all Google researchers, though by then one had left the company…

…Recurrent neural networks struggled to parse longer chunks of text. Take a passage like Joe is a baseball player, and after a good breakfast he went to the park and got two hits. To make sense of “two hits,” a language model has to remember the part about baseball. In human terms, it has to be paying attention. The accepted fix was something called “long short-term memory” (LSTM), an innovation that allowed language models to process bigger and more complex sequences of text. But the computer still handled those sequences strictly sequentially—word by tedious word—and missed out on context clues that might appear later in a passage. “The methods we were applying were basically Band-Aids,” Uszkoreit says. “We could not get the right stuff to really work at scale.”

Around 2014, he began to concoct a different approach that he referred to as self-attention. This kind of network can translate a word by referencing any other part of a passage. Those other parts can clarify a word’s intent and help the system produce a good translation. “It actually considers everything and gives you an efficient way of looking at many inputs at the same time and then taking something out in a pretty selective way,” he says. Though AI scientists are careful not to confuse the metaphor of neural networks with the way the biological brain actually works, Uszkoreit does seem to believe that self-attention is somewhat similar to the way humans process language.

Uszkoreit thought a self-attention model could potentially be faster and more effective than recurrent neural nets. The way it handles information was also perfectly suited to the powerful parallel processing chips that were being produced en masse to support the machine learning boom. Instead of using a linear approach (look at every word in sequence), it takes a more parallel one (look at a bunch of them together). If done properly, Uszkoreit suspected, you could use self-attention exclusively to get better results…
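To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention in Python/NumPy – the operation at the heart of the paper the team went on to publish, though of course not Google’s production code. Every position is scored against every other position in a single matrix product, which is exactly the parallelism described above:

```python
import numpy as np

# Scaled dot-product self-attention, in miniature. Each row of X is one
# token's vector; every token attends to every token simultaneously.

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray,
                   Wv: np.ndarray) -> np.ndarray:
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # all-pairs relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # context-weighted mix

rng = np.random.default_rng(0)
seq_len, d = 6, 8                   # e.g. a six-word sentence, 8-dim vectors
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8): one new vector per word
```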

…The transformer crew set about building a self-attention model to translate text from one language to another. They measured its performance using a benchmark called BLEU, which compares a machine’s output to the work of a human translator. From the start, their new model did well. “We had gone from no proof of concept to having something that was at least on par with the best alternative approaches to LSTMs by that time,” Uszkoreit says. But compared to long short-term memory, “it wasn’t better.”

They had reached a plateau—until one day in 2017, when Noam Shazeer heard about their project, by accident. Shazeer was a veteran Googler—he’d joined the company in 2000—and an in-house legend, starting with his work on the company’s early ad system. Shazeer had been working on deep learning for five years and recently had become interested in large language models. But these models were nowhere close to producing the fluid conversations that he believed were possible.

As Shazeer recalls it, he was walking down a corridor in Building 1965 and passing Kaiser’s workspace. He found himself listening to a spirited conversation. “I remember Ashish was talking about the idea of using self-attention, and Niki was very excited about it. I’m like, wow, that sounds like a great idea. This looks like a fun, smart group of people doing something promising.” Shazeer found the existing recurrent neural networks “irritating” and thought: “Let’s go replace them!”

Shazeer’s joining the group was critical. “These theoretical or intuitive mechanisms, like self-attention, always require very careful implementation, often by a small number of experienced ‘magicians,’ to even show any signs of life,” says Uszkoreit. Shazeer began to work his sorcery right away. He decided to write his own version of the transformer team’s code. “I took the basic idea and made the thing up myself,” he says. Occasionally he asked Kaiser questions, but mostly, he says, he “just acted on it for a while and came back and said, ‘Look, it works.’” Using what team members would later describe with words like “magic” and “alchemy” and “bells and whistles,” he had taken the system to a new level.

“That kicked off a sprint,” says Gomez. They were motivated, and they also wanted to hit an upcoming deadline—May 19, the filing date for papers to be presented at the biggest AI event of the year, the Neural Information Processing Systems conference in December. As what passes for winter in Silicon Valley shifted to spring, the pace of the experiments picked up. They tested two models of transformers: one that was produced with 12 hours of training and a more powerful version called Big that was trained over three and a half days. They set them to work on English-to-German translation.

The basic model outperformed all competitors—and Big earned a BLEU score that decisively shattered previous records while also being more computationally efficient. “We had done it in less time than anyone out there,” Parmar says. “And that was only the beginning, because the number kept improving.”…

…TRANSFORMERS DID NOT instantly take over the world, or even Google. Kaiser recalls that around the time of the paper’s publication, Shazeer proposed to Google executives that the company abandon the entire search index and train a huge network with transformers—basically to transform how Google organizes information. At that point, even Kaiser considered the idea ridiculous. Now the conventional wisdom is that it’s a matter of time.

A startup called OpenAI was much faster to pounce. Soon after the paper was published, OpenAI’s chief researcher, Ilya Sutskever—who had known the transformer team during his time at Google—suggested that one of its scientists, Alec Radford, work on the idea. The results were the first GPT products. As OpenAI CEO Sam Altman told me last year, “When the transformer paper came out, I don’t think anyone at Google realized what it meant.”

The picture internally is more complicated. “It was pretty evident to us that transformers could do really magical things,” says Uszkoreit. “Now, you may ask the question, why wasn’t there ChatGPT by Google back in 2018? Realistically, we could have had GPT-3 or even 3.5 probably in 2019, maybe 2020. The big question isn’t, did they see it? The question is, why didn’t we do anything with the fact that we had seen it? The answer is tricky.”

Many tech critics point to Google’s transition from an innovation-centered playground to a bottom-line-focused bureaucracy. As Gomez told the Financial Times, “They weren’t modernizing. They weren’t adopting this tech.” But that would have taken a lot of daring for a giant company whose technology led the industry and reaped huge profits for decades. Google did begin to integrate transformers into products in 2018, starting with its translation tool. Also that year, it introduced a new transformer-based language model called BERT, which it started to apply to search the year after.

But these under-the-hood changes seem timid compared to OpenAI’s quantum leap and Microsoft’s bold integration of transformer-based systems into its product line. When I asked CEO Sundar Pichai last year why his company wasn’t first to launch a large language model like ChatGPT, he argued that in this case Google found it advantageous to let others lead. “It’s not fully clear to me that it might have worked out as well. The fact is, we can do more after people had seen how it works,” he said…

…Does Google miss these escapees? Of course, in addition to others who have migrated from the company to new AI startups. (Pichai reminded me, when I asked him about the transformer departures, that industry darling OpenAI also has seen defections: “The AI area is very, very dynamic,” he said.) But Google can boast that it created an environment that supported the pursuit of unconventional ideas. “In a lot of ways Google has been way ahead—they invested in the right minds and created the environment where we could explore and push the envelope,” Parmar says. “It’s not crazy that it took time to adopt it. Google had so much more at stake.”

Without that environment: no transformer. Not only were the authors all Google employees, they also worked out of the same offices. Hallway encounters and overheard lunch conversations led to big moments. The group is also culturally diverse. Six of the eight authors were born outside the United States; the other two are children of two green-card-carrying Germans who were temporarily in California and a first-generation American whose family had fled persecution, respectively.

4. In Depth: Local Governments Struggle to Tackle Mountain of Hidden Debt – Cheng Siwei, Wang Juanjuan, Zhang Yuzhe, Ding Feng and Zhang Yukun

The central government has been trying to address the problem of LGFV debt for years, mainly through piecemeal measures that had limited success. But in July, the Politburo vowed to formulate and implement a comprehensive strategy to resolve local government hidden debts.

These off-the-books liabilities, which include LGFV bonds with implicit official backing, have accumulated over the years to around 30 trillion to 70 trillion yuan according to some estimates, and become a threat to the country’s fiscal and financial stability and sustainability.

One of the main instruments being used to repay hidden debt in this round of debt resolution is special refinancing bonds — on-balance-sheet local government bonds whose proceeds are used to repay outstanding hidden debt. Issuance has stepped up significantly since early October after the Ministry of Finance launched a special refinancing bond swap program.

From October to December, almost all provincial-level regions on the Chinese mainland issued these special refinancing bonds, raising nearly 1.4 trillion yuan to repay hidden borrowings, according to calculations by analysts at Tianfeng Securities Co. Ltd. The regions include heavily indebted Guizhou province, which topped the list with issuance of 226.4 billion yuan.

Many regions have announced plans to issue more such bonds in February and March, with planned issuances totaling more than 100 billion yuan, the Tianfeng analysts wrote in a January report.

The campaign to resolve hidden debt has tightened rules for new debt issuance and cut some localities off from their previous financing channels, depriving them of resources to pay interest on hidden debt. The proceeds of special refinancing bonds cannot be used to make interest payments.

“The core issue now is that we can’t make our interest payments,” a source who works for an economic development zone in West China told Caixin, noting that without new financing, the fiscal revenue of the region can only sustain government agencies’ day-to-day operations and preferential policies for attracting businesses. He said his local government has stopped making all other payments, including those to project developers, to ensure it can meet interest payments on outstanding LGFV debt…

…The renewed push to bring hidden debt onto the books and restructure or swap LGFV debt, however, has reinforced the belief that the central government won’t allow LGFVs to default on their bonds, reviving investor sentiment. That’s led to a surge in demand for LGFV bonds over the past few months, even as the central government has repeatedly highlighted the need to stem any renewed buildup in hidden debt…

…Although LGFV bonds are back in hot demand, tightened oversight has made it more difficult for some vehicles, especially those with heavy debt burdens, to continue issuing new debt. This has curbed growth in hidden debt to some extent, but it has added to default risks of some LGFV bonds as there is less money available to make the interest repayments.

The central government ordered provincial officials to compile a list of LGFVs owned by local authorities in their jurisdictions…

…Obtaining new bank loans has become much harder for LGFVs on the list, as banks heed the central government’s instruction to prevent new LGFV debt.

Regarding existing LGFV debt, the State Council in September issued guidance that banks, among the most important creditors of LGFVs, should become involved in debt resolution in 12 provincial-level regions with high government leverage, which include Liaoning, Heilongjiang, and Jilin, the three rustbelt provinces in Northeast China. The guidance set out that banks should focus on restructuring or swapping existing loans, high-interest non-standard debt, and other types of borrowing.

5. Conviction and Quality – Josh Tarasoff

Conviction is no doubt the foundation of long-term business ownership. How is it formed? What is it like to have it? Why does it falter? In my experience there are two distinct kinds of conviction. Explicit conviction, as I call it, comes from having figured something out. It entails a useful prediction, like “our ETA is 5pm” or “majoring in economics will lead to better career prospects than majoring in philosophy.” There is an underlying logic to it, which can be explained and used to persuade. Implicit conviction, on the other hand, is exemplified by the trust one might have in a family member, a dear friend, a close colleague, to do the right thing, to get the job done, to come through. It is felt as opposed to believed. This kind of conviction doesn’t make predictions so much as align with what is good. It doesn’t theorize about goodness but rather knows it when it sees it…

…In the context of investing, one might develop the thesis that a particular company can capture X% market share, generate Y dollars in annual revenue, achieve Z% operating margins, and therefore has an intrinsic value within a certain range. One might have high confidence because of the presence of competitive advantages and management with a very good track record. One would have a range of expected returns from owning the shares over time. All of this would fall into the explicit category.

Sooner or later, the investment would encounter a confounding surprise. Perhaps execution turns choppy, a new competitive vector emerges out of nowhere, an exogenous crisis turns the world upside down, etc. Old projections are now in doubt, previous plans and strategies are being reworked, everything is less fun. These things are actually happening all the time – something explicit conviction has a way of tuning out! Only genuine and well-placed implicit conviction, a qualitative knowing that the company will do what it needs to and ought to do, is equipped to ably traverse this kind of terrain. Unlike analysis-based explicit conviction, implicit conviction comes from something deeper than the cause and effect we perceive in the unfolding of events—it is both analytical and, crucially, intuitive (about which more later)…

…While in everyday life implicit conviction arises naturally, in the context of investing I can’t help but feel it is somewhat alien. In part, this is because few companies are truly deserving. Even so, I suspect that implicit conviction is proffered by investors even less than it ought to be. It isn’t difficult to see why the investment industry is inhospitable to implicit conviction, and why its partner rules the roost. Implicit conviction forms of its own accord and cannot be planned. It defies quantification, eliciting the charge of being too “fuzzy” to matter. Nor can it be fully captured in words. Implicit conviction is impossible to transmit from analyst to portfolio manager or from portfolio manager to client, which is highly inconvenient for the business of managing money. It is primarily personal. It is quiet. By contrast, the appeal of the explicit is clear. Explicit conviction furnishes the comfort of knowability and modeled outcomes. It projects the legitimacy of diligence and precision. It is thought to be reliably manufactured via “repeatable process.” It is clever and self-assured…

…Nonetheless, because literal communication necessitates choosing a word, I will use “Quality” (capitalized to distinguish it from the ordinary sense of the term) to indicate the deeper-something on which implicit conviction is based. Using “Quality” in this way is consistent with my prior writing and pays homage to the work of Robert Pirsig, which was a formative influence.

Analysis plays an important but limited role in detecting Quality. For example, the following is a selection of (neither necessary nor sufficient) indicators that I have found to be suggestive of Quality in companies:

  • “Wow” customer experiences
  • Mission to solve an important problem
  • Domain mastery (the best at what they do)
  • First-principles-based thinking and invention
  • Unlimited ambition combined with no-nonsense realism
  • Overcapitalized balance sheet
  • Founder mentality (life’s work).

While carefully looking for indicators like these is helpful, I think it would be a misstep to attempt to systematize the search, constructing a Grand Unified Theory of Quality and attendant comprehensive processes for finding and evaluating it. Quality emerges from the complexity of the system in action; it is in the how rather than the what. Thus, when Quality is broken down into parts and analyzed, its essence is lost. This explains why analysis alone has trouble discerning the authentic from the artificial. Moreover, Quality frozen in a theory or process cannot be recognized in sufficiently new contexts, such as in a company that is novel to one’s experience or in the same company as it evolves (they always do!).

So where does that leave us? With intuition. Well-honed intuition does what analysis cannot by perceiving Quality directly, as opposed to through an intellectual process. What I suspect is happening in the direct perception of Quality is subconscious pattern recognition, based upon a dynamic, holistic experience of the thing in question. Of course, the ability to intuitively recognize patterns in a specific domain must be earned through experience and feedback; indeed, I have found that the value of my own intuition has grown (starting at zero) over many years. Interestingly, I also find that experiencing Quality in any one domain (e.g., music or meditation, to use examples that are dear to me) can be helpful for recognizing it in other domains (including business) because Quality’s nature is universal, even as its manifestations are necessarily particular.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Microsoft. Holdings are subject to change at any time.