All articles

Dispelling This One Misconception About Stock Market Peaks

Last week, on 16 July 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station. My friend Willie Keng, the founder of investor education website Dividend Titan, was hosting a segment for the radio show and we talked about a few topics:

  • The drivers behind the stock price performance of US banks (Hints: In the short term, banks are facing pressure in a few areas, namely, a lower net interest margin, weak demand for commercial loans, and a continued deterioration in the US office properties market; in the long run, it’s the health of the US economy that will be the key driver, and the economy still looks to be on solid footing even though there are some signs of a slowdown)
  • My views on Goldman Sachs’ latest results (Hints: Goldman produced strong growth in the second quarter of 2024 and as an investment bank, this may be a sign of activity in the financial markets warming up) 
  • US stocks from the financial sector that are on my radar (Hint: I have been interested in thrift conversions, which is a niche corner of the US banking industry; thrifts, which are small community banks in the USA, tend to carry low valuations and get acquired at relatively high valuations)
  • Salesforce’s latest round of layoffs (Hint: It’s likely to be part of the normal day-to-day decisions that management has to make to keep costs in check; Salesforce has been on a quest to improve its margins since late 2022 and has been successful in doing so)
  • The impact of artificial intelligence, or AI, on software-as-a-service businesses (Hint: There are multiple possible outcomes, although my current stance is that AI will be a net positive for SaaS businesses)
  • Why it’s so difficult to tell when the stock market will peak (Hint: When looking at important financial data – such as valuations, interest rates, and inflation – at the cusp of past bear markets in US stocks, no clear signal can be found)
  • How valuations impact long-term returns (Hint: In general, when valuations are high, long-term returns tend to be low; conversely, when valuations are low, long-term returns tend to be high)
  • What can investors do to help themselves ride through market cycles (Hint: It’s critical to constantly remind ourselves of what is important – the underlying long-term business performance of a stock)
  • The concept of the “destination” (Hint: The concept of the destination is the idea of focusing on the eventual returns we can earn from a business over a multi-year, perhaps even multi-decade, holding period, and ignoring what happens in between)

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Microsoft, and Salesforce. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 July 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 21 July 2024:

1. How a brush with death shaped my long game – Eric Markowitz

Last February, I opened my laptop and began writing a goodbye letter to my 18-month-old daughter.

“Dear Bea,” I began. “I want you to know how much I loved you…” I then carefully organized passwords to my computer, e-mail, and online brokerage accounts. My wife and I sat across from each other on the couch in stunned silence.

Hours earlier, I was told by ER doctors that I’d need emergency brain surgery to remove what they called a “rapidly enhancing lesion” in the center of my cerebellum, the part of my brain just above the brainstem. The lesion was about the size of a walnut.

At that point, doctors were unsure what it was. They explained it could either be a Stage 4 glioblastoma — terminal brain cancer — or an abscess that could pop at any point. If it was an abscess, the infection would likely prove fatal as well, given its proximity to my brainstem…

…That night, hours before the brain surgery, I lay in bed unable to sleep. I remember thinking about the crushing irony of my particular situation. For the last several years, I had built my professional identity around the idea of long-termism. I wrote a weekly newsletter about long-term investing; about compounding over many decades…

…And yet, here I was: 35 years old, and out of time. No more compounding. No more long-termism…

…At that precise moment, the idea of long-termism or “playing the long game” began to feel almost embarrassing — or ridiculous. The idea was like an act of hubris. The future isn’t earned; we’re lucky to experience it…

…Before this episode, I never had a significant health problem. But the truth is that I wasn’t living an entirely healthy, long-term-oriented lifestyle. I was constantly stressed at work. I had stopped exercising. I was glued to my phone — and to the market. In the months leading up to my condition, we were having a rough year, and it was all I could think about. I’d dream about stock prices. I’d wake up in a panic.

Despite the ideals of long-termism I professionally and publicly promoted, I was, in fact, living a lifestyle that was just the opposite. I was myopically focused on the short term — on success, on the day-to-day. I avoided seeing friends; my marriage was becoming strained. Things were unraveling…

…The craniotomy was a tough procedure. They removed a large chunk of skull in the back of my head, spread open my brain with forceps, and removed the lesion… 

…Finally — and it’s easy in hindsight to breeze over the days it took — the report came back conclusive: an infection. Not cancer.

Later, I’d find out that typical abscesses rupture after 10 days or so. Mine had been in my head for at least 4 weeks. No doctor could explain it. I had a ticking time bomb in my brain that simply didn’t explode. Maybe the detonator malfunctioned…

…When people ask about how the experience has changed me, I simply say I’m re-committed to playing the long game.

Playing the long game isn’t just about structure and process and systems that are designed to withstand the long-term: it’s about the joy and gratitude of getting to play the game in the first place. For me, up until that point in my life, I had been making short-term decisions that led to stress and burnout. And, in retrospect, my “always on” lifestyle likely led to my near-fatal brush with death. Stress and playing short-term games quite literally nearly killed me.

My focus was all on the wrong things.

Coming out of this experience, I proactively shifted my focus. I decided to make both personal and business decisions that would create an environment where the most important things in my life could flourish long after I was gone. I read more. I talked to new people. I made more effort in my relationships — I no longer think about getting through the day, but about what I’m building over the long run. I put down my phone. I made new connections. I asked, “how can I set up my life today to ensure my kids — and their kids — will be set up?” In business, I asked, “how can I set up my business today to ensure it exists in 50 years — or even 100 years?”

2. A borrower’s struggles highlight risk lurking in a surging corner of finance – Eric Platt and Amelia Pollard

Wall Street’s new titans have differed significantly in valuing the $1.7bn of debts they provided to workforce technology company Pluralsight, highlighting the risk that some private credit marks are untethered from reality…

…Private loans by their very nature rarely trade. That means fund managers do not have market data to rely on for objective valuations.

Instead they must draw on their own understanding of the value of the business, as well as from third-party valuation providers such as Houlihan Lokey and Kroll. They also can see how rivals are marking the debt in securities filings.

The funds share details of each individual business’s financial performance with its valuation provider, which then marks the debt. The fund’s board and audit committee ultimately sign off on those valuations…

…The loans to Pluralsight were extended in 2021, as part of Vista Equity Partners’ $3.5bn buyout of the company. It was a novel loan, based not on Pluralsight’s cash flows or earnings, but how fast its revenue was growing. Regulated banks are unable to provide this type of credit, which is deemed too risky. A who’s who of private credit lenders — including Blue Owl, Ares Management and Golub Capital — stepped in to fill the void.

The seven lenders to Pluralsight who report their marks publicly disclosed a broad range of valuations for the debt, with a Financial Times analysis showing the gulf widened as the company ran into trouble over the past year. The firms disclose the marks to US securities regulators within their publicly traded funds, known as BDCs, which offers a window into how their private funds may be valuing the debt.

Ares and Blue Owl marked the debt down to 84.9 cents and 83.5 cents on the dollar, respectively, as of the end of March. Golub had valued the loan just below par, at 97 cents on the dollar. The other four lenders, Benefit Street Partners, BlackRock, Goldman Sachs and Oaktree, marked within that range…

…The most conservative mark implies a loss across the lenders of nearly $280mn on the $1.7bn debt package. But Golub’s mark would imply a loss of just $50mn for the private lenders.
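To see the arithmetic: a lender carrying the debt at some fraction of par implies a loss equal to the face value times the discount to par. Here is a minimal sketch in Python, using the face value and marks from the reporting above (the helper function is just illustrative):

```python
# Loss implied by a lender marking debt below par.
def implied_loss(face_value: float, mark: float) -> float:
    """Loss implied by carrying debt at `mark`, a fraction of par."""
    return face_value * (1.0 - mark)

face = 1.7e9  # the $1.7bn Pluralsight debt package
for lender, mark in [("Blue Owl", 0.835), ("Ares", 0.849), ("Golub", 0.97)]:
    print(f"{lender}: implied loss of ${implied_loss(face, mark) / 1e6:,.0f}mn")
# Blue Owl's 83.5-cent mark implies ~$280mn of losses; Golub's 97-cent mark, ~$51mn.
```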

Some lenders have marked the loan down further since May, people familiar with the matter said.

Vista, for its part, started marking down its valuation of Pluralsight in 2022, cutting it to zero this year. Vista is expected to hand the keys to the business to the lenders in the coming weeks, with one person noting the two sides had made progress in recent talks…

…A publicly traded loan that changes hands below 80 cents on the dollar typically implies meaningful stress, a cue to investors of trouble. But as Pluralsight illustrated, that kind of mark never materialised until it became clear Vista might lose the business.

3. Private Equity’s Creative Wizardry Is Obscuring Danger Signs – Kat Hidalgo, Allison McNeely, Neil Callanan, and Eyk Henning

Even though buyout firms say they see green shoots in the M&A market, they’re deep into a third year of higher rates and scant opportunity to sell assets at decent prices, and they’ve been forced into a host of wheezes to keep things going: “Payment in kind” (PIK) lets PE-owned companies defer crippling interest payments in exchange for taking on even more costly debt; “net asset value” loans allow cash-strapped buyout firms to borrow against their holdings…

…The amount of distressed debt owed by portfolio businesses of the 50 biggest PE firms has climbed 18% since mid-March to $42.7 billion, according to data compiled by Bloomberg News using rankings from Private Equity International. “We expect defaults to go up,” Daniel Garant, executive vice president and global head of public markets at British Columbia Investment Management Corp., another Canadian pensions giant, told Bloomberg recently.

A key challenge for regulators is that much of PE’s borrowing was arranged with loose legal terms at a time when lenders were fighting for deals, making it easier today to use financial wizardry to keep sickly businesses alive.

“You don’t know if there are defaults because there are no covenants, right?” says Zia Uddin of US private credit firm Monroe Capital. “So you see a lot of amend and extend that may be delaying decisions for lenders.”

All this additional debt makes it tougher, too, for PE owners hoping for exits.

Take Advent International and Cinven. They took on heavy debts when buying TK Elevator, including a roughly €2 billion ($2.1 billion) PIK note they loaded onto the lift maker that’s swelled to about €3 billion, according to people with knowledge of the situation. The tranches carry an interest rate of 11%-12%…

…In Europe, most private credit borrowers have been turning to PIK when reworking debt obligations, according to data from Lincoln International. In the US, Bloomberg Intelligence reckoned in a February note that 17% of loans at the 10 largest business development companies — essentially vehicles for private credit funds — involved PIK…

…One way firms try to keep investors sweet is by borrowing against a portfolio of their own assets, known as a NAV loan, and using the cash to help fund payouts. NAV lenders sometimes charge interest in the mid to high teens, and some borrowers have used holiday homes, art and cars as collateral…

…The proliferation of NAV, PIK and similar has also deepened connections between PE firms and their credit cousins, a possible contagion risk if things go wrong. In the US almost 80% of private credit deal volume goes to private equity-sponsored firms, according to the Bank for International Settlements…

…CVC Capital Partners came up with a novel use of extra leverage during its March IPO of Douglas AG. It borrowed €300 million from banks, injecting it as equity in the German beauty retailer to strengthen its balance sheet, and pledging Douglas shares as collateral in a so-called margin loan, according to the offering’s prospectus.

A fall of 30% to 50% from the IPO price would trigger a margin call, according to people with knowledge of the matter who declined to be identified as the information is private. The stock is down about a quarter since the listing…

…A new BIS report warns that “a correction in private equity and credit could spark broader financial stress,” citing potential knock-on effects on the insurers that heavily invest in these funds and on banks as the “ultimate providers of liquidity.”

“Some features in the financial markets have probably postponed the impact of the rise in interest rates, for example fixed rates, longer maturities and so on,” Agustin Carstens, BIS’s general manager, told Bloomberg TV last week. “These can change, and will be changing in the near future.”

4. China’s subsidies create, not destroy, value – Han Feizi

A common narrative bandied about by the Western business press is that China’s subsidized industries destroy value because they are not profitable – from residential property to high-speed rail to electric vehicles to solar panels (the subject of the most recent The Economist meltdown).

If The Economist actually knows better and is just doing its usual anti-China sneer, then it is par for the course and we give it a pass. But if this opinion is actually held – and all indications are that it is – then we are dealing with something far more pernicious. 248 years after the publication of Adam Smith’s “The Wealth of Nations”, the West has lost the economic plot…

…To be unable to comprehend this crucial point is to never have properly understood Adam Smith. “The Wealth of Nations” was never about the pursuit of profits.

They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.

The entire point of enlightened self-interest was supposed to be the secondary/tertiary effects that improve outcomes for all.

It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.

What we want from the butcher, the brewer and the baker are beef, beer and bread, not for them to be fabulously wealthy shop owners. What China wants from BYD and Jinko Solar (and the US from Tesla and First Solar) should be affordable EVs and solar panels, not trillion-dollar market-cap stocks. In fact, mega-cap valuations indicate that something has gone seriously awry. Do we really want tech billionaires or do we really want tech?…

…The much-heralded multi-trillion dollar valuations of a handful of American companies (Microsoft, Apple, Nvidia, Alphabet, Amazon and Meta) – all of which will swear up and down and all day long that they are not monopolies – are symptoms of serious economic distortion. How much of their valuation is a result of innovation and how much is due to regulatory capture and anti-trust impotence?

It’s hard to say. China stomped on its tech monopolies and now manages to deliver similar if not superior products and services – able to make inroads into international markets (e.g. TikTok, Shein, Temu, Huawei, Xiaomi) – at consistently much lower prices.

The Western business press, confusing incentives with outcomes, lazily relies on stock markets to determine value creation. The market capitalization of a company is an important but entirely inadequate measure of economic value…

…What China has done in industry after industry is to flatten the supply curve by subsidizing hordes of producers. This spurs innovation, increases output and crushes margins. Value is not being destroyed; it’s accruing to consumers as lower prices, higher quality and/or more innovative products and services.

If you are looking for returns in the financial statements of China’s subsidized companies, you are doing it wrong. If China’s subsidized industries are generating massive profits, policymakers should be investigated for corruption.

A recent CSIS report estimated that China spent $231 billion on EV subsidies. While that is certainly a gross overestimation (the think tank’s assumption for EV sales tax exemption is much too high), we’ll go with it. That comes out at $578 per car when spread over all ~400 million cars (both EV and ICE) on China’s roads.

The result has been a Cambrian explosion of market entrants flooding China’s market with over 250 EV models. Unbridled competition, blistering innovation and price wars have blinged out China’s EVs with performance/features and lowered prices on all cars (both EV and ICE) by $10,000 to $40,000. Assuming average savings of $20,000 per car, Chinese consumers will pocket ~$500 billion of additional consumer surplus in 2024.

What multiple should we put on that? 10x? 15x? 20x? Yes, China’s EV industry is barely scraping a profit. So what? For a measly $231 billion in subsidies, China has created $5 to $10 trillion in value for its consumers. The combined market cap of the world’s 20 largest car companies is less than $2 trillion…
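For readers who want to retrace the arithmetic, here is a minimal sketch using the article’s figures; the ~25 million annual car sales is my assumption, backed out from the article’s $20,000-per-car savings and ~$500 billion surplus estimate:

```python
# Back-of-the-envelope consumer-surplus arithmetic from the article's figures.
subsidies = 231e9         # CSIS estimate of China's EV subsidies, in USD
cars_on_road = 400e6      # ~400 million cars (EV and ICE) on China's roads
savings_per_car = 20_000  # assumed average price reduction per car, in USD
annual_sales = 25e6       # assumption: ~25 million cars sold in China in 2024

print(f"Subsidy per car on the road: ${subsidies / cars_on_road:,.0f}")  # ~$578
surplus = savings_per_car * annual_sales
print(f"Consumer surplus in 2024: ${surplus / 1e9:,.0f} billion")        # ~$500bn
for multiple in (10, 15, 20):
    print(f"At {multiple}x: ${surplus * multiple / 1e12:.0f} trillion")  # $5-10tn
```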

…The more significant outcomes of industrial policy are externalities. And it is all about the externalities.

To name just a few, switching to EVs weans China off oil imports, lowers particulates and CO2 emissions, provides jobs for swarms of new STEM graduates and creates ultra-competitive companies to compete in international markets.

Externalities from the stunning collapse of solar panel prices may be even more transformative. Previously uneconomic engineering solutions may become possible from mass desalinization to synthetic fertilizer, plastics and jet fuel to indoor urban agriculture. China could significantly lower the cost of energy for the Global South with massive geopolitical implications.

The city of Hefei in backwater Anhui province has achieved spectacular growth in recent years through shrewd investments in high-tech industries (e.g. EVs, LCD, quantum computing, AI, robotics, memory chips)…

…While returns for traditional venture capital investments are dictated by company profits, the Hefei model is more flexible. Returns can be collected through multiple channels from taxing employment to upgrading workforces to increasing consumer surplus. The internal hurdle rate can be set lower if positive externalities are part of the incentive structure.

5. Dear AWS, please let me be a cloud engineer again – Luc van Donkersgoed

I’m an AWS Serverless Hero, principal engineer at an AWS-centric logistics company, and I build and maintain https://aws-news.com. It’s fair to say that I am very interested in everything AWS does. But I fear AWS is no longer interested in what I do.

This post is about AWS’ obsession with Generative AI (GenAI) and how it pushes away everything that makes AWS, well, AWS…

…Then 2024 came around, and somehow AWS’ focus on GenAI took on hysterical proportions. It started with the global AWS summits, where at least 80% of the talks were about GenAI. Then there was AWS re:Inforce – the annual security conference – which was themed “Security in the era of generative AI”…

…And this is the crux: AWS is now focused so strongly on GenAI that they seem not to care about anything else anymore – including everything that made developers love them and made them the leading cloud provider on almost every metric…

…I like GenAI. I use it extensively at work and for the AWS News Feed. I use ChatGPT to shape new ideas, Copilot to speed up development, and Claude to generate summaries. The point is that all these features add to an existing business. This business has customers, data, business rules, revenue, products, marketing, and all the other things that make a business tick. And most businesses had these things before 2022. GenAI allows us to add new features, and often faster than before. But GenAI has no value without an existing product to apply it to….

…But AWS and I are growing apart. I feel the things I value are no longer the things they value. By only talking about GenAI, they implicitly tell me databases are not important. Scalable infrastructure is not important. Maintainable applications are not important. Only GenAI is…

…In summary, AWS’ implicit messaging tells developers they should no longer focus on core infrastructure, and spend their time on GenAI instead. I believe this is wrong. Because GenAI can only exist if there is a business to serve. Many, if not almost all of us developers got into AWS because we want to build and support these businesses. We’re not here to be gaslighted into the “GenAI will solve every problem” future. We know it won’t.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

Why It’s So Difficult To Tell When The Stock Market Will Peak (Revised)

Many investors think that it’s easy to figure out when stocks will hit a peak. But it’s actually really tough to tell when a bear market will happen.

Note: This article is a copy of Why It’s So Difficult To Tell When The Stock Market Will Peak that I published more than four years ago on 21 February 2020. With the US stock market at new all-time highs, I thought it would be great to revisit this piece. The content in the paragraphs and table near the end of the article has been revised to include the latest valuation and returns data.

Here’s a common misconception I’ve noticed that investors have about the stock market: They think that it’s easy to figure out when stocks will hit a peak. Unfortunately, that’s not an easy task at all.

In a 2017 Bloomberg article, investor Ben Carlson showed the levels of various financial data at the start of each of the 15 bear markets that US stocks have experienced since World War II:

Source: Ben Carlson

The financial data that Carlson presented include valuations for US stocks (the trailing P/E ratio, the cyclically-adjusted P/E ratio, and the dividend yield), interest rates (the 10-year treasury yield), and the inflation rate. These are major things that the financial media and many investors pay attention to. (The cyclically-adjusted P/E ratio, or CAPE ratio, is calculated by dividing a stock’s price by the 10-year average of its inflation-adjusted earnings.)
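To make the CAPE definition concrete, here is a minimal sketch in Python; the price and earnings figures are hypothetical placeholders, not Shiller’s actual series:

```python
def cape(price: float, real_annual_earnings: list[float]) -> float:
    """Cyclically-adjusted P/E: price divided by the 10-year average
    of inflation-adjusted (real) annual earnings."""
    assert len(real_annual_earnings) == 10, "need 10 years of earnings"
    return price / (sum(real_annual_earnings) / 10)

# Hypothetical example: an index at 5,000 with real earnings averaging 142.5
real_eps = [120, 125, 130, 135, 140, 145, 150, 155, 160, 165]
print(round(cape(5_000, real_eps), 1))  # 35.1, near the S&P 500's current CAPE
```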

But these numbers are not useful in helping us determine when stocks will peak. Bear markets have started when valuations, interest rates, and inflation were high as well as low. This is why it’s so tough to tell when stocks will fall. 

None of the above is meant to say that we should ignore valuations or other important financial data. For instance, the starting valuation for stocks does have a heavy say in their eventual long-term return. This is shown in the chart below. It uses data from economist Robert Shiller on the S&P 500 from 1871 to June 2024 and shows the returns of the index against its starting valuation for 10-year holding periods. It’s clear that the S&P 500 has historically produced higher returns when it was cheap compared to when it was expensive.

Source: Robert Shiller data; my calculations

But even then, the dispersion in 10-year returns for the S&P 500 can be huge for a given valuation level. Right now, the S&P 500 has a cyclically-adjusted P/E ratio of around 35. The table below shows the 10-year annual returns that the index has historically produced whenever it had a CAPE ratio of more than 30.

Source: Robert Shiller data; my calculations

If it’s so hard for us to tell when bear markets will occur, what can we do as investors? It’s simple: We can stay invested. Despite the occurrence of numerous bear markets since World War II, the US stock market has still increased by 532,413% (after dividends) from 1945 to June 2024. That’s a solid return of 11.4% per year. Yes, bear markets will hurt psychologically. But we can lessen the pain significantly if we think of them as an admission fee for worthwhile long-term returns instead of a fine by the market-gods. 
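As a quick check on that arithmetic, a 532,413% cumulative gain compounded over the roughly 79.5 years from 1945 to June 2024 does work out to about 11.4% a year:

```python
# Convert a cumulative percentage gain into an annualized return.
cumulative_gain_pct = 532_413  # US stocks, 1945 to June 2024, after dividends
years = 79.5                   # assumption: start of 1945 to mid-2024

growth_multiple = 1 + cumulative_gain_pct / 100  # ending value of $1 invested
annualized = growth_multiple ** (1 / years) - 1
print(f"{annualized:.1%}")  # 11.4%
```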


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 July 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 14 July 2024:

1. Idea Brunch with “Made in Japan” – Edwin Dorsey and Made in Japan

For me, Japan is an interesting opportunity set because there’s a strong case to be made for several inflection points that are not all related. A few that come to mind:

  • Governance improvements: Japan always had a lot of companies with loads of cash on the balance sheet making them look ‘cheap’. The issue has always been that this cash was never for the shareholders so the market discounted this appropriately. In the last 1.5 years, however, the Tokyo Stock Exchange has cracked down on companies with weak governance/capital allocation policies and low valuation. They name and shame the companies that don’t try to improve their corporate value and are implementing a host of other measures to incentivize responsible capital allocation. I think this sends a signal to the global investor community that Japan is trying to become less of a value trap.
  • Interest rates/Inflation: Post 2008 Financial Crisis, Japan’s interest rates have been close to zero for over a decade. This is in a country that has been deflationary for so long and we’ve been gradually moving away from that. Inflation seems to be returning and interest rates are ‘normalizing.’ This could be the moment to wake up the animal spirits of Japan again, to take on more risk and for businesses to command pricing power. If inflation sustains itself at some level, it will no longer make rational sense for businesses and individuals to hold on to cash like they did in a deflationary economy where that was rewarded as their purchasing power increased. Now the opposite will happen which means they are incentivized to put the cash to work. This won’t just be businesses investing but also for individuals too. The government just made it way more attractive to do that through its new NISA scheme.
  • NISA: Japan has set up its new tax-free investment scheme for households called the Nippon Individual Savings Account (NISA). The first iteration was garbage but this one is promising. It was set up by the government to incentivize households to allocate their excess cash savings into the stock market. The share of household savings allocated to equities has been notoriously small, less than 20% or so. By providing more liquidity in the markets it could help the financial markets function better and also make it easier for institutions to participate in areas which were previously too illiquid.
  • Consolidation: I think we’re entering a phase of consolidation amongst Japanese SMEs, which have been the backbone of Japanese society. We have an issue where many aging owners are not able to find successors for their businesses. There’s been a stigma around M&A in the past but this is starting to melt away and M&A is becoming a viable option. We’re also starting to see more young talent flowing into the M&A space. Moreover, with low interest rates, we’re seeing increased interest from foreign PE firms as well – which all tells me that we’re at an interesting juncture for industry consolidation.
  • Digitalization: One thing that you’re starting to see after Covid is that the need for a more digital Japan has come to the forefront. We’ve been embarrassingly late to digital/software adoption but this was the turning point where we realized it was necessary. The government set up a Digital Agency to help adoption and provide various subsidy schemes to encourage the use of more software. We even have the term ‘DX’, short for Digital Transformation, now added to the lexicon. There’s also the ‘digital cliff’, as it is called here. A lot of IT systems being used by corporate Japan today are super old: something like more than 60% will be 20 years or older by 2025. So a lot of IT spending currently is going to maintaining these systems rather than building out new ones. Many people imagine Japan as this futuristic place, but you’ll be amazed how much paper we still use!…

…One of the contradictions I’ve felt about Japan is that large-cap growth in Japan gets priced at ridiculously high multiples. It’s not uncommon to see these things trade at 40 times P/E or higher. This is presumably because the cost of capital in Japan is low and, in a deflationary economy where the population is declining, growth is rare. However, when you look at these small companies in great competitive positions that are growing double digits with lots of room to grow, you can find them trading for single-digit earnings multiples! The delta is so big that I call this the ‘chasm’. If you look at some of the large-cap growth companies, these also traded at very low multiples early on, but as they continued to grow earnings per share, at some point brokers started to cover them, institutions started to pile in, the stock re-rated quite significantly, and that contradiction got resolved. Some of these large caps are expensive and can de-rate as interest rates rise, but the gap is large enough that I still think it’s more likely that these small companies will re-rate than the large caps de-rating down to where these small caps are valued.

2. The Last 72 Hours of Archegos – Ava Benny-Morrison and Sridhar Natarajan

An Archegos staffer relived the craziness of being in an airport security line while on a call with panicked banks, trying to head off catastrophe. A Credit Suisse trader described nabbing a Citi Bike on his day off to reach the office and untangle billions tied to Bill Hwang’s family office. And in the midst of it all, a junior Goldman Sachs manager recounted a call from the dying firm as it pleaded for the return of almost half a billion dollars it accidentally sent the lender.

Wall Street’s trial of the decade has offered vivid glimpses of the 72 hours that obliterated Hwang’s $36 billion fortune. One after another, Wall Streeters told a New York jury their version of how his secretive family office — and its pileup of wild wagers on jerry-rigged spreadsheets — ultimately crumbled and saddled banks with more than $10 billion in losses.

But it’s not mere scenes. Weeks of testimony have exposed cringeworthy misjudgments and costly blunders in various camps throughout the crisis — hardly Wall Street’s preferred image of calculated risk-taking. Bankers, for example, painfully acknowledged how they relied on sometimes-vague or evasive trust-me’s from Archegos while doling out billions in firepower for Hwang’s bets. That confidence melted into confusion that’s been replayed in the courtroom of a 90-year-old judge. Prosecutors are trying to make the case that Hwang manipulated the market and defrauded lenders…

…Jefferies calls CEO Rich Handler, who is on holiday in Turks and Caicos with a spicy margarita on the way. They tell him Archegos isn’t answering their calls. Handler says he’s going to get his cocktail and he wants Archegos positions gone and a tally of losses by the time he comes back. It was one of the few banks that escaped with minimal losses…

…As ViacomCBS and Discovery slump, Archegos capital plummets too. The family office is wiped out by the end of the day — just one week after Hwang gathered staff at his corporate apartment and talked about ways to grow the fund to $100 billion…

…Three years after the Archegos flameout exposed the audacity of Hwang’s investing, weeks of testimony have also served as an indictment of sorts of the system that enabled him.

Bank insiders on the witness stand have described extending billions of dollars in financing while relying on the equivalent of pinky promises to understand the size and shape of his portfolio, an approach that culminated with more than $10 billion in losses at a handful of lenders. Courtroom testimony and exhibits also revealed a lack of skepticism among those gatekeepers until it was far too late.

3. An Interview with Daniel Gross and Nat Friedman About Apple and AI – Ben Thompson, Daniel Gross, and Nat Friedman

Let’s start with the current belle of the ball, Apple. Apparently we have a new obvious winner from AI. In case you’re keeping track, I think Google was the obvious winner, then OpenAI was the obvious winner, then Microsoft, then Google again, then everyone just decided screw it, just buy Nvidia — I think that one still holds actually — and now we are to Apple, which by the way does not seem to be using Nvidia. Here’s a meta question: has anything changed in the broader environment where we can say with any sort of confidence, who is best placed and why, or is this just sort of the general meta, particularly in media and analysts like myself, running around like chickens with their heads cut off?

NF: I think one thing that really plays to Apple’s favor is that there seems to be multiple players reaching the same level of capabilities. If OpenAI had clearly broken away, such that they were 10 times better or even 2 times better than everyone else in terms of model quality, that would put Apple in a more difficult position. Apple benefits from the idea that either they can catch up or they have their choice of multiple players that they can work with, and it looks like we have somewhere between three and five companies that are all in it to win it and most of whom are planning to offer their models via APIs.

You have Google, OpenAI, Anthropic, you have X, you have Meta and so if you’re on the side of application building, generally this is great news because prices are going to keep dropping 90% per year, capabilities are going to keep improving. None of those players will have pricing power and you get to pick, or in Apple’s case, you can pick for now and have time to catch up in your own first-party capabilities. No one’s broken away or shown a dominant lead, at least in this moment between major model releases. We haven’t seen ChatGPT-5 yet, we haven’t seen Q* yet. Yeah, on current evidence, I think that’s good for people who are great at products, focus on products and applications and have massive distribution…

Yeah, I mean I was writing today, I wrote about Apple three times this week, but the latest one was I perceive there being two risk factors for Apple. One is what you just said, which is one of these models actually figures it out to such a great extent that Apple becomes the commodity hardware provider providing access to this model. They’ll have a business there, but not nearly as a profitable one as they’re setting up right now where the models are the commodity, that’s risk factor number one.

Risk factor number two is, can they actually execute on what they showed? Can this on-device inference work as well as they claim? Will using their own silicon work? I think it’s probably going to be relatively inefficient, but given their scale and the way that they can architect it, they can probably pull off having this one-to-one connection to the cloud. If they can do it, that’s great, but maybe they can’t do it. They’re doing a lot of new interesting stuff in that regard. Of those two risk factors, which do you think is the more important one?

DG: I don’t fully understand and I never fully have understood why local models can’t get really, really good, and I think that the reason often people don’t like hearing that is there’s not enough epistemic humility around how simple most of what we do is, from a caloric energy perspective, and why you couldn’t have a local model that does a lot of that. A human, I think, at rest is consuming like 100 watts maybe and an iPhone is using, I don’t know, 10 watts, but your MacBook is probably using 80 watts. Anyway, it’s within achievable confines to create something on a local model that has whatever the human-level ability is at synthesizing information.

What I don’t really know how to think about is what that means for the broader AI market, because at least as of now we obviously don’t fully believe that. We’re building all of this complicated data center capacity and we’re doing a lot of things in the cloud, which is in cognitive dissonance with this idea that local models can get really good. The economy is built around the intelligence of the mean, not the median. Most of the labor being done consists of fairly simple tasks, and I’ve yet to see any kind of mathematical refutation that local models can’t get really good. You still may want cloud models for a bunch of other reasons, and there’s still a lot of very high-end, high-complexity work that you’re going to want a cloud model for, chemistry, physics, biology, maybe even doing your tax return, but for basic stuff like knowing how to use your iPhone and summarizing web results, I basically don’t understand why local models can’t get really good.

The other thing I’d add in by the way that’s going to happen for free is there’s going to be a ton of work both on the node density side from TSMC, but also on the efficiency side from every single major AI lab, because even though they run their models in the cloud, or because they run their models in the cloud, they really care about their COGS. You have this process that’s happened pretty durably year-over-year, where a new frontier model is launched, it’s super expensive to run and then it’s distilled, quantized or compressed so that the COGS of that company are more efficient. Now if you continue to do that, yeah, you do sort of wonder, wait a minute, “Why can’t the consumer run this model?”. There’s a ton of economic pressure to make these models not just very smart, but very cheap to run. At the limit, I don’t know if it’s going to be like your Apple TV, sort of computer at home is doing the work, or literally it’s happening in your hands, but it feels like local models can become pretty powerful…

And where’s OpenAI in this? I analogized them to FedEx and UPS relative to Amazon, where Amazon just dumps the worst tasks on them that Amazon doesn’t want to do and they take all the easy stuff. But at the same time, one of my long-running theses is that OpenAI has the opportunity to be a consumer tech company and they just got the biggest distribution deal of all time. Where do you perceive their position today as opposed to last week?

DG: I don’t fully understand the value of the distribution from the Apple deal. Maybe it makes sense, maybe it’s the Yahoo-Google deal. I think the question in AI is, if you’re working on enterprise, that’s one thing. If you’re working on consumer, the old rules of capitalism apply and you need a disruptive user interface such that people remember to use your product versus the incumbents and maybe that was chat.openai.com.

Which is now chatgpt.com, by the way.

DG: Chatgpt.com, or maybe that’s not enough. I think you saw a hint, not necessarily of just how OpenAI, but all of these labs sort of see themselves going in their product announcement where they created a thing that you just talk to, and it’s quite possible that maybe that is sufficient to be a revolutionary new user interface to the point where they can create their own hardware, they can basically command the attention of customers.

But I sort of think the general rule in the handbook is, if you’re going to be in consumer, you want to be at the top of the value chain. I mean, certainly it’s a mighty and impressive company, but the deal with Apple doesn’t really signal top of value chain. So the question is, really the ancient question we’ve been asking ourselves on this podcast for years now, which is, “What is the new revolutionary user interface that actually causes a change in user behavior?”.

Does that mean that Google is the most well-placed? They have all the smartphone attributes that Apple does, they should have better technology as far as models go. Does it matter that they’re worse at product or trust, like they don’t have the flexible organization that you were detailing before? We spent a lot of time on Google the last time we talked, has anything shifted your view of their potential?

DG: I think it really all depends on whether you can make an experience, and it always has depended on whether you can make an experience that’s good enough to justify a change in user behavior.

I’d argue for example, that there was a period in time where even though the actual interface was pretty simple, generating high-quality images was enough to cause a dramatic shift in user behavior. Midjourney is Midjourney not because it has some beautiful angled bar to pinch-and-zoom thing. It’s just like that was the remarkable miracle that it had. It made really good images, and it gave it some sticking power. So it’s this tension between defaults and inferior product and new revolutionary experiences, and whether they have enough to break the calcification of the incumbent.

It’s quite possible that if no one has any new brilliant ideas, then Google, even though the models don’t seem to be as excellent, at least to the consumer’s eye, survives just because they have some Android user base, they certainly have Google.com. I will say the thing that has been surprising to me is while the technical capabilities of Google’s model seem impressive, the consumer implementation is actually I think worse than, “Just okay”. I thought their integration of language models into search was abysmal, sorry, to be totally frank. It was referencing Reddit comments that weren’t real facts, it’s not that hard to fix this sort of thing. So they need to be doing the bare minimum I think to maintain their status in the hierarchy. It’s possible they don’t do that, it’s possible that a new revolutionary user interface is also created, it’s also possible that they catch up and they bumble their way through it and they’re just fine.

But this is, I think the main question to the challenger labs, if they’re going in the direction of a consumer product is, “How do you make something that is so great that people actually leave the defaults?”, and I think we always underestimate how excellent you need to be. Enterprise things are a little bit different, by the way, and OpenAI is a very good lemonade stand just on enterprise dynamics, but consumer is in a way easier to reason about. You just have to have a miracle product and if that doesn’t happen, then yeah, maybe you should be long Google and Apple and the existing incumbents…

…NF: We’re in a bubble, in my opinion, no question. Like the early Internet bubble in some ways, not like it in other ways. But yeah, just look at the funding rounds and the capital intensity of all this, it’s crazy.

But bubbles are not bad for consumers, they’re bad for the investors who lose money in them, but they’re great for consumers, because you perform this big distributed search over what works and find out what does and even the failed companies leave behind some little sedimentary layer of progress for everyone else.

The example I love to give is Webvan, which was a grocery delivery service in the Internet bubble, and because they didn’t have mobile, they had to build their own warehouses because they couldn’t dispatch pickers to grocery stores, and they tried to automate those warehouses, and then because the Internet was so small, they didn’t have that much demand. There were not that many people ordering groceries on the web and so they failed and they incinerated a ton of capital and you could regard that as a total failure, except that some of the people at Webvan who worked on those warehouses went off to found Kiva Systems, which did warehouse automation robots, which Amazon bought, and then built tens of thousands of them, and so Webvan’s robot heritage is powering Amazon warehouses and some of those executives ended up running Amazon Fresh and they eventually bought Whole Foods and so all that led to a lot of progress for other people.

The other thing, of course, is that a lot of money gets incinerated and a lot of companies fail, but the technology moves forward and users get educated — putting URLs at the end of movie trailers, for instance, taught people what URLs were. And some great companies are built in the process, and it’s always a minority. It’s always a small minority, but it does happen. So yeah, I think we’re clearly in some kind of bubble, but I don’t think it’s unjustified. AI is a huge revolution and incredible progress will be made, and we should be grateful to venture capital for philanthropically funding a lot of the progress that we’ll all enjoy for decades…

It is interesting to think about in the context of human intelligence, like to what extent you look at a baby, you look at a kid and how they acquire knowledge. I’m most inspired to do more research on babies that are blind or babies that are deaf, how do they handle that decrease in incoming information in building their view of the world and model of the world? Is there a bit where we started out with the less capable models, but when we do add images, when we do add videos, is there just an unlock there that we’re underestimating because we’ve overestimated text all along? I’m repeating what you said, Nat.

NF: Yeah, Daniel was way ahead on this. I think Daniel said that in our first conversation together, and this is a really active area of research now, is how can we synthesize the chain of the internal monologue, the thinking and the dead ends and the chain of thought that leads to the answer that’s encoded in the text on the Internet.

There was the Quiet-STaR paper and the STaR paper from [Eric] Zelikman who’s now at xAI. I don’t know what relation if any of that bears to Q*, but that’s basically what he did is to use current models to synthesize chains of reasoning that lead to the right answers where you already know the answer and then take the best ones and fine-tune those and you get a lot more intelligence out of the models when you do that. By the way, that’s one of the things the labs are spending money on generating is, “Can I get a lawyer to sit down and generate their reasoning traces for the conclusions that they write and can that be fed into the training data for a model and then make the models better at legal reasoning because it sees the whole process and not just the final answer?” — so chain of thought was an important discovery and yet it’s not reflected in our training data as widely as it could be.

4. Mining for Money – Michael Fritzell

I read Trevor Sykes’ book The Money Miners recently. It’s a book about Australia’s 1968-70 speculative mining bubble. Consider it a historical reference book about a bygone era…

…The free market price of nickel started rising from early 1969 onwards, from £1,500 per ton in January to £2,000 by March.

After a nickel miner strike in Canada, the free market price skyrocketed to £4,250 per tonne and eventually £7,000. The nickel rally was on…

…The company that came to be associated with the nickel boom the most was a small Kambalda miner called Poseidon…

…Poseidon’s fortunes changed when it hired full-time prospector Ken Shirley - an old friend of Norm Shierlaw. Ken lived in a caravan, living a lifestyle of moving around the bush to make new discoveries. His travels took him to Mount Windarra north of Kalgoorlie. He discovered minerals and pegged 41 claims along an iron formation stretching 11 kilometers.

In April 1969, Shirley sent in samples from Mount Windarra for assay and found 0.5% copper and 0.7% nickel together with associated platinum. The consulting geologists who analyzed the sample called it “very encouraging” and “intensely interesting”…

…On 29 September, Poseidon’s directors made their first public announcement about the discovery at Windarra. It said that the second drill hole had encountered nickel and copper but didn’t mention anything about the grade.

Just a few days later, on 1 October, they issued a more comprehensive statement showing 3.6% nickel at depths of 145-185 feet. This meant that Poseidon had struck nickel - the biggest nickel discovery in the history of Australia.

The announcement sparked a massive rally in the price of Poseidon. On 2 October, speculators flooded the Sydney Stock Exchange building after hearing about Poseidon in the press. Many of them were unable to reach the trading floor. On that day, one of the boards collapsed, but prices continued to be updated on it while the staff refastened the ropes. Speculators didn’t want to miss an opportunity to buy…

…On 19 November 1969, Poseidon made an announcement confirming the strike length and width of the discovery. But strangely enough, it didn’t give any details about the assays from the drill holes. Despite the lack of information, the market took the report positively, causing Poseidon’s share price to rise further to AU$55.

Broker research departments issued reports, dreaming and imagining what Poseidon could be worth. These valuation exercises went along these lines:

  • If the strike length was 1,500 feet, the width was 65 feet, and the depth was 500 feet, that meant a total orebody of about 48 million cubic feet, assuming the orebody is a neat rectangular block
  • At 13 cubic feet of ore to the ton, that worked out to about three million tons of ore
  • With an average grade of 2.0-2.5% nickel, the orebody could contain about 70,000 tons of nickel
  • At an average price of AU$5,000 per ton, the orebody could be worth AU$350 million
  • There would also be costs involved, including for labor, equipment, finance, infrastructure, etc. Say around AU$200 million.
  • Over a mine life of 15 years, you could then treat the remaining AU$150 million of value as an income stream and figure that earnings would come to about AU$10 million per year
  • Capitalize that number, and you could have justified a share price of AU$60 for Poseidon. Others, like Panmure Gordon in London, ended up with a value of AU$380/share.

Using a forward P/E multiple against expected earnings from Mount Windarra, the price didn’t seem so high. And speculators therefore felt comfortable bidding up the price to even higher levels…
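As a rough sketch, the brokers’ back-of-the-envelope arithmetic from the list above can be retraced in a few lines of Python (the rounding follows the article’s figures):

```python
# Broker-style valuation of Poseidon's orebody, step by step.
volume_cuft = 1_500 * 65 * 500      # strike length x width x depth: ~48m cubic feet
ore_tons = 3_000_000                # at 13 cubic feet to the ton, rounded down
nickel_tons = 70_000                # applying the ~2.0-2.5% average nickel grade
gross_value = nickel_tons * 5_000   # AU$5,000 per ton of nickel: AU$350m
costs = 200_000_000                 # labor, equipment, finance, infrastructure
annual_earnings = (gross_value - costs) / 15  # over a 15-year mine life
print(f"AU${annual_earnings / 1e6:.0f}m per year")  # AU$10m, to be capitalized
```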

…At Poseidon’s annual meeting in December 1969, long queues also formed outside the event. When the doors opened, 500 people rushed into the building. But due to a lack of seats, about 200 of them had to stand at the back while the meeting went on.

At the AGM, a discussion started about a potential rights issue to fund future capital expenditures. Instead, a share placement was proposed to a select number of individuals at AU$5 per share - a massive discount to the then-prevailing share price of AU$100 - suggesting severe dilution without raising much capital.

This was a huge problem because Poseidon had struck nickel but lacked the capital to actually develop the mine.

A geologist speaking at the AGM mentioned that the zone in which the drilling had taken place indicated four million tons of ore. Participants flooded out of the meeting trying to calculate what 2.4% times 4 million tonnes might imply in terms of nickel resources. Enthusiasm boiled over.

Investors rushed out of the AGM to public telephone booths to call their brokers. From the start of the AGM to the end, the share price ran from AU$112 to AU$130. Once the press caught wind of the story, the price rallied further to AU$185.

No one rang a bell at the top of the market, but some lone voices expressed concern about how far the market had run:

  • A London stockbroker called R. Davie said that “A lot of Australian stocks, to put it mildly, are highly suspect”.
  • Melbourne firm A Holst & Co predicted that in a few years’ time, the majority of present “gambling stocks” would be bitter memories to those who continued to hold them.

In February 1970, Poseidon reached a market capitalization of AU$700 million, or about AU$10 billion in today’s money. This represented about 3x the market cap of the Bank of New South Wales. And one-third the value of BHP, even though Poseidon hadn’t even begun developing any mine…

…Poseidon’s stock price peaked at around AU$280 per share. The market was waiting for Poseidon to announce how it would fund the development of its mine in Windarra. Yet nothing was announced. Meanwhile, the share price started declining.

By the end of February, almost all other speculative stocks on the board had also fallen significantly, with some losing half their value.

What led to this sudden change in sentiment?

  • A major contributing factor was that nickel prices peaked and started declining from the late 1960s onwards. The higher prices had eventually provided an incentive to search for new orebodies, and mines started coming online in a number of new countries. World production of nickel skyrocketed.
  • At the end of 1969, there were 145 mining stocks listed in Sydney, compared with just 86 at the start of the year. And there were another 100 mining companies queuing up to float and eventually list on the exchange. Supply eventually met the demand for scrip.
  • Another factor was higher capital costs as Australian interest rates rose sharply
  • Yet another factor was rising inflation as the operating costs of a mine shot up

It didn’t help that Poseidon’s eventual grade came in a third lower than what was originally reported, falling from 3.6% to 2.4%. Combine that with much lower nickel prices and sharply higher development costs, and you have all the ingredients of a boom turning to bust…

…Looking back at the 1969-70 mining boom, not a single major deposit was discovered, though it is true that the AU$850 million raised during the boom did help fund the development of new mines.

In the subsequent five years, Poseidon turned out to be a massive disappointment to investors. It soon realized that it would need AU$50 million to develop its Windarra mine, yet it only had AU$2 million left in cash and liquid assets. The solution was to team up with Western Mining Corporation, which took a 50% stake in the project.

But Poseidon incurred debt in the process. It tried to deal with its debt problems by selling its stake in the mine. But nobody wanted to buy it. And so in 1976, Poseidon defaulted on its debt and was delisted from the Australian exchanges.

During the bankruptcy, Poseidon’s 50% interest in Windarra was sold to Shell Australia for AU$30 million. But by that time, nickel prices had declined so much that Windarra had become only marginally economic. With these lower nickel prices, Shell saw no way of making the mine financially viable and it therefore shut down Windarra in 1978. The Poseidon dream was gone.

Perhaps the biggest lesson from the bust was that most exploration companies fail. The book quoted one study from Ontario, Canada, on mining claims between 1907 and 1953. About 6,600 mining companies had been formed during those 46 years, but only 348 reached the production stage. Out of those, 294 failed to show a taxable profit. And only 54 companies ended up paying a dividend. In other words, the success rate was less than 1%.

5. Falkland Islands – The Next Big Thing? – Swen Lorenz

The 3,600 residents of the remote Falkland Islands could soon experience an “economic boom” that has the potential to “transform the islands’ entire economy”.

So reported the Daily Telegraph on 30 June 2024…

…The islands have since seen an initial oil exploration boom, and exploitable oil reserves were found in 2010. Sadly, the oil price fell off a cliff in 2014, which killed the prospect of actually producing oil in the Falklands. The share prices of the fledgling Falkland oil companies all fell over 90%, many went under altogether and disappeared from public markets…

…As the Daily Telegraph just reported:

“The Falkland Islands has opened the door to oil exploration in its waters for the first time in history, in a move that could trigger an economic boom for locals.

The territory’s ruling council has asked islanders if they will back the scheme to extract up to 500m barrels of oil from the Sea Lion field, 150 miles to the north.

Details of the scheme were released without fanfare in the Falkland Islands Gazette, an official government publication, signed off by Dr Andrea Clausen, director of natural resources for the Falkland Islands government.

‘A statutory period of consultation will run from June 24, 2024 to August 5, 2024… regarding Navitas’ proposals for the drilling of oil wells and offshore production from the Sea Lion field,’ it said.

…The field is thought to contain 1.7bn barrels of oil, making it several times bigger than Rosebank, the largest development planned for the UK’s own North Sea, estimated to hold 300m barrels.”

Are we about to see the Falkland Islands hype 2.0?…

…Now that Keir Starmer has wiped the floor with Rishi Sunak, will anything change?

It’s unlikely.

As the Daily Telegraph put it:

“Labour … has made accelerating the net zero transition a key part of its pitch to the electorate. Sir Keir Starmer’s party has promised to ban all new oil and gas exploration in British waters. This ban would not affect the Falklands, as it is the local administration there who have a say over drilling rights to surrounding waters.

Many within the Falklands government have wanted to make the islands a centre for oil production. John Birmingham, deputy portfolio holder for natural resources, MLA (Member of the Legislative Assembly), said: ‘Offshore hydrocarbons have the potential to be a significant part of our economy over the coming decades.’

In a statement, the Falklands Islands government said: ‘We have the right to utilise our own natural resources. The Falkland Islands operates its own national system of petroleum licensing, including exploration, appraisal and production activities related to its offshore hydrocarbon resources.”

It’s all taken a long time, but the investment thesis behind the Falkland Islands oil discoveries could finally play out.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Amazon, Meta Platforms, and Microsoft. Holdings are subject to change at any time.

Company Notes Series: Natural Resource Partners

Editor’s note: We’re testing out a new series for the blog, the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. Please give us your thoughts on the new series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!


Start of notes for Natural Resource Partners

Data as of 9 January 2024

Background on company

  • Company name: Natural Resource Partners LP
  • Ticker: NYSE: NRP
  • Structure: Publicly traded Delaware limited partnership formed in 2002
  • Natural Resource Partners LP’s operations are conducted through Opco and its operating assets are owned by its subsidiaries, where Opco refers to NRP (Operating) LLC, a wholly owned subsidiary of Natural Resource Partners LP. NRP (GP) LP is the general partner and has sole responsibility for conducting Natural Resource Partners LP’s business and for managing its operations. Because NRP (GP) LP is a limited partnership, its general partner, GP Natural Resource Partners LLC, conducts its business and operations; the Board of Directors and officers of GP Natural Resource Partners LLC also make the decisions for Natural Resource Partners LP. Robertson Coal Management LLC, a company wholly owned by Corbin Robertson, Jr., owns all of the membership interests in GP Natural Resource Partners LLC.
  • The senior executives who manage Natural Resource Partners LP are employees of Western Pocahontas Properties Limited Partnership or Quintana Minerals Corporation, which are both controlled by Corbin Robertson Jr.
  • Neither GP Natural Resource Partners LLC nor any of its affiliates receive any management fee or other compensation in connection with the management of Natural Resource Partners LP, apart from reimbursement for all direct and indirect expenses incurred on behalf of Natural Resource Partners LP.

Business

  • Natural Resource Partners LP has two segments: Mineral Rights, and Soda Ash
  • In 9M 2023, Natural Resource Partners LP’s total revenue was US$275.9 million, of which 79% was from Mineral Rights (US$217.3 million) and 21% was from Soda Ash (US$58.6 million). In 2022, Natural Resource Partners LP’s total revenue was US$389.0 million, of which 85% was from Mineral Rights (US$329.2 million) and 15% was from Soda Ash (US$59.8 million)

Business – Mineral Rights segment

  • The Mineral Rights segment consists of 13 million acres of mineral interests and other subsurface rights – including coal and other natural resources – across the US; if combined in a single tract, the ownership would cover roughly 20,000 square miles. The ownership provides critical inputs for the manufacturing of steel, electricity, and basic building materials, as well as opportunities for carbon sequestration and renewable energy. Natural Resource Partners is working to strategically redefine its business as a key player in the transitional energy economy in the years to come. Figure 1 below shows Natural Resource Partners LP’s geographic distribution of its ownership. 
Figure 1
  • Under the Mineral Rights segment, Natural Resource Partners LP does not mine, drill, or produce minerals. Instead, the limited partnership leases its acreage to companies engaged in the extraction of minerals in exchange for royalties and various other fees. The royalties are generally a percentage of the gross revenue received by lessees (the companies that extract the minerals), and are typically supported by a floor price and minimum payment obligation that protects Natural Resource Partners LP during significant price or demand declines. The majority of the Mineral Rights segment’s revenue comes from royalties related to the sale of coal. Of the Mineral Rights segment’s US$217.3 million in revenue in 9M 2023, US$170.8 million came from Coal Royalty revenue, so Coal Royalty revenue was 62% of Natural Resource Partners LP’s total revenue in 9M 2023; of the Mineral Rights segment’s US$329.2 million in revenue in 2022, US$227.0 million came from Coal Royalty revenue, so Coal Royalty revenue was 58% of Natural Resource Partners LP’s total revenue in 2022. Natural Resource Partners LP’s coal is primarily located in the Appalachia Basin, the Illinois Basin, and the Northern Powder River Basin. Natural Resource Partners LP’s coal-related leases are typically long-term in nature – at end-2022, two-thirds of royalty-based leases had initial terms of 5 to 40 years, with substantially all lessees having the option to extend the lease for additional terms. Leases include the right to renegotiate royalties and minimum payments for the additional terms. 
  • Figure 2 below shows all the other revenue sources for the Mineral Rights segment in 9M 2023 and 9M 2022:
Figure 2
  • There are two kinds of coal, and Natural Resource Partners LP participates in both in its Mineral Rights segment:
    • Metallurgical coal, or met coal, is used to fuel blast furnaces that forge steel and is the primary driver of Natural Resource Partners LP’s long-term cash flows. Met coal is a high-quality, cleaner coal that generates exceptionally high temperatures when burned and is an essential element in the steel manufacturing process. Natural Resource Partners LP’s met coal is located in the Northern, Central and Southern Appalachian regions of the United States.
    • Thermal coal, sometimes referred to as steam coal, is used in the production of electricity. The amount of thermal coal produced in the US has been falling over the last decade as energy providers shift to natural gas and, to a lesser extent, alternative energy sources such as geothermal, wind, and solar. Management believes thermal coal’s long-term secular decline will continue. This, together with the long-term strength of the met coal business and Natural Resource Partners LP’s carbon neutral initiatives, means that thermal coal will be a diminishing contributor to Natural Resource Partners LP’s business in the future. The vast majority of the limited partnership’s thermal coal is located in Illinois, and the operations there are some of the most cost-efficient mines east of the Mississippi River. The remainder of Natural Resource Partners LP’s thermal coal is located in Montana, the Gulf Coast and Appalachia.
    • Met coal tends to be priced higher than thermal coal.
    • In 2022, 70% of Natural Resource Partners LP’s Coal Royalty revenues and approximately 45% of coal royalty sales volumes were derived from metallurgical coal.
    • Figure 3 shows the types of coal production of Natural Resource Partners LP from various properties in 2022, and Figure 4 shows the limited partnership’s significant coal royalty properties in 2022.
Figure 3

Figure 4

  • Under the Mineral Rights segment, Natural Resource Partners LP also participates in the sequestration of carbon dioxide underground. Similar to its Coal Royalty business, Natural Resource Partners LP only plans to lease acreage to companies that will conduct carbon dioxide sequestration. Natural Resource Partners LP owns approximately 3.5 million acres of specifically reserved subsurface rights in the southern US with the potential for permanent sequestration of greenhouse gases. The carbon capture utilization and storage industry is in its infancy but a few facts are clear. A sequestration project requires acreage possessing unique geologic characteristics, close proximity to sources of industrial-scale greenhouse gas emissions, and the appropriate form of legal title that grants the acreage owner the right to sequester emissions in the subsurface. Although carbon sequestration rights and ownership continue to evolve, management believes that Natural Resource Partners LP owns one of the largest acreages in the USA with potential for carbon sequestration activities. In 2022 Q1, Natural Resource Partners LP leased its first acreages (75,000 acres) for subsurface carbon dioxide sequestration in underground pore space in southwest Alabama, with the potential to store over 300 million metric tons of carbon dioxide; in October of 2022, the second subsurface carbon dioxide sequestration lease was signed, this time for 65,000 acres of pore space near southeast Texas, with an estimated storage capacity of at least 500 million metric tons of carbon dioxide. At end-2022, Natural Resource Partners LP had 140,000 acres of pore space under lease for carbon dioxide sequestration, with estimated carbon dioxide storage capacity of 800 million metric tons.

Business – Soda Ash segment

  • The Soda Ash segment consists of a 49% non-controlling equity interest in Sisecam Wyoming, a trona ore mining and soda ash production business located in the Green River Basin of Wyoming. Sisecam Wyoming mines trona and processes it into soda ash that is sold both in the USA and internationally into the glass and chemicals industries.
  • Sisecam Resources LP runs Sisecam Wyoming and owns the other 51%. Natural Resource Partners LP is not involved in the day-to-day operation of Sisecam Wyoming, although Natural Resource Partners LP is able to appoint – and has appointed – 3 of the 7 members of Sisecam Wyoming’s Board of Managers.
    • In December 2021, Sisecam Resources LP changed majority owners. Before this, Sisecam Wyoming was named Ciner Wyoming, and Sisecam Resources LP was named Ciner Resources LP. Under the terms of the transaction, Ciner Enterprises Inc, which controlled 74% of Ciner Resources LP, effectively sold 60% of its interests in Ciner Resources LP to Sisecam Chemicals USA Inc, an indirect subsidiary of Turkish conglomerate Türkiye Şişe ve Cam Fabrikalari A.Ş. Ciner Resources LP subsequently changed its name to Sisecam Resources LP. 
    • In February 2023, Sisecam Resources LP announced that it would be fully acquired by Sisecam Chemicals Resources LLC, which is, in turn, 60% owned by Sisecam Chemicals USA Inc. The acquisition price is US$25 per unit for all the units of Sisecam Resources LP that were not controlled by Sisecam Chemicals USA Inc (from the above, Sisecam Chemicals USA Inc already controlled 60% of Sisecam Resources LP – see Appendix for more). Sisecam Resources LP’s total unit count as of 31 March 2023 was 19.8 million, so Sisecam Resources LP was valued by Sisecam Chemicals USA Inc at US$495 million. Sisecam Resources LP’s only business interest is its 51% stake in Sisecam Wyoming; so if Sisecam Resources LP was valued at US$495 million, the whole of Sisecam Wyoming would be worth US$971 million, and Natural Resource Partners LP’s 49% stake in Sisecam Wyoming would be worth US$476 million (see the sketch after this list).
  • Sisecam Wyoming is one of the largest and lowest-cost producers of soda ash in the world, serving a global market from its facility in the Green River Basin of Wyoming. The Green River Basin geological formation holds the largest, and one of the highest-purity, known deposits of trona ore in the world; in fact, the vast majority of the world’s accessible trona is located in the Green River Basin. Trona is a naturally occurring soft mineral, also known as sodium sesquicarbonate, and consists primarily of sodium carbonate (or soda ash), sodium bicarbonate, and water. Sisecam Wyoming processes trona ore into soda ash, which is an essential raw material in flat glass, container glass, detergents, chemicals, paper and other consumer and industrial products.
  • Around 30% of global soda ash is produced by processing trona, with the remainder being produced synthetically through chemical processes. Synthetic production of soda ash is more expensive than trona-based production. In addition, trona-based production consumes less energy and produces fewer undesirable by-products than synthetic production.
  • Sisecam Wyoming’s Green River Basin surface operations are situated on approximately 2,360 acres in Wyoming (of which, 880 acres are owned by Sisecam Wyoming), and its mining operations consist of approximately 24,000 acres of leased and licensed subsurface mining area. 
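As flagged above, here’s a minimal sketch of how the February 2023 acquisition price translates into the implied value of Natural Resource Partners LP’s stake; all of the inputs come from the notes.

```python
# Implied-valuation chain: acquisition price for Sisecam Resources LP's
# units -> value of Sisecam Wyoming -> value of NRP's 49% stake.

price_per_unit = 25.0           # US$ per Sisecam Resources LP unit
units_outstanding = 19_800_000  # unit count as of 31 March 2023

resources_lp_value = price_per_unit * units_outstanding   # US$495m

# Sisecam Resources LP's only business interest is its 51% stake in
# Sisecam Wyoming, so gross the value up to 100%...
sisecam_wyoming_value = resources_lp_value / 0.51         # ~US$971m

# ...and take Natural Resource Partners LP's 49% share of that
nrp_stake_value = sisecam_wyoming_value * 0.49            # ~US$476m
print(f"Implied value of NRP's stake: US${nrp_stake_value / 1e6:.0f}m")
```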

Business – Customers

  • There is customer concentration for the whole of Natural Resource Partners LP, and also for the Soda Ash segment.
  • Natural Resource Partners LP’s revenue from (1) Alpha Metallurgical Resources was US$102.4 million in 2022, which accounted for 37% of the year’s total revenue and (2) Foresight Energy Resources was US$65.6 million, which accounted for 24% of the year’s total revenue.
  • For the Soda Ash segment, the two largest customers of Sisecam Wyoming are distributors in its export network that collectively made up 26% of its total gross revenue.

Business – Commodity prices

  • Even though Natural Resource Partners LP’s royalty fees are typically supported by a floor price and minimum payment obligation that protects Natural Resource Partners LP during significant price or demand declines, the limited partnership is still affected by price swings in commodity prices.
  • Met coal and thermal coal prices both reached record highs in 2022; met coal prices were the primary driver of Natural Resource Partners LP’s strong Mineral Rights segment performance in 2022. See Table 1 below for the Mineral Rights segment’s performance in 2022.
  • In 9M 2023, met coal and thermal coal prices were both below record highs seen in 2022 – the Mineral Rights segment saw a dip in performance in 9M 2023, as shown in Table 1.
Table 1

Management

  • Corbin Robertson, Jr, 75, has served as CEO and Chairman of the Board of Directors of GP Natural Resource Partners LLC since 2002; GP Natural Resource Partners LLC has managed Natural Resource Partners LP since its formation and listing in 2002.
  • 2015 was a tough year for Natural Resource Partners LP as commodity prices crashed and it had too much debt. Since then, Natural Resource Partners LP has dramatically improved its financial health. See Figures 5, 6, and 7.
Figure 5
Figure 6
Figure 7

Valuation

  • Unit price of Natural Resource Partners LP: US$96.93
  • Market cap of Natural Resource Partners LP: US$1.225 billion
  • Enterprise value of Natural Resource Partners LP: US$1.41 billion
  • Value of Natural Resource Partners LP’s stake in Sisecam Wyoming is US$476 million; deducting this from the enterprise value, the market is assigning a value of around US$938 million to the Mineral Rights segment
  • Trailing free cash flow as of 30 Sep 2023 is US$304 million (the lion’s share comes from the Mineral Rights segment, since most of net income is from the segment), so the Mineral Rights segment is valued at just around 3x FCF. Worth noting that Natural Resource Partners LP’s FCF has been relatively stable since 2015 – see Figure 8
  • In Figure 6 above, it is worth noting that Natural Resource Partners LP’s aim is to “retire all permanent debt, redeem all the 12% preferred equity, and eliminate all outstanding warrants, all of which will require approximately US$325 million.” 
  • On the 12% preferred equity, Natural Resource Partners LP issued US$250 million of the preferred equity units in March 2017 at a price of US$1,000 per preferred equity unit. The preferred equity is convertible to common units, but Natural Resource Partners LP can choose to redeem the preferred equity for cash. The outstanding balance of the preferred equity as of 30 September 2023 is US$72 million. At the original US$250 million issue size, the preferred equity carried US$30 million in annual coupon payments; fully redeeming the remaining US$72 million balance would save US$8.6 million in annual coupon payments, and this adds directly to free cash flow. (A sketch of the valuation arithmetic follows Figure 8 below.)
Figure 8
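Here’s the promised sketch of the sum-of-the-parts and coupon arithmetic; every input is taken from the bullets above (the notes’ US$938 million figure presumably reflects an unrounded enterprise value).

```python
# Sum-of-the-parts view of Natural Resource Partners LP, per the notes.

enterprise_value = 1_410_000_000   # US$1.41b
sisecam_stake = 476_000_000        # implied value of the Soda Ash stake

# Value the market is implicitly assigning to the Mineral Rights segment
mineral_rights_value = enterprise_value - sisecam_stake   # ~US$938m per the notes

trailing_fcf = 304_000_000         # mostly generated by Mineral Rights
print(f"Mineral Rights at {mineral_rights_value / trailing_fcf:.1f}x FCF")  # ~3x

# 12% coupon on the preferred equity: US$30m a year on the original
# US$250m issue, US$8.6m a year on the US$72m still outstanding
print(f"Coupon saved if fully redeemed: US${0.12 * 72_000_000 / 1e6:.1f}m")
```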

 

Appendix

Chart showing Sisecam Wyoming and Sisecam Resources LP’s ownership structure before and after the February 2023 announcement of the acquisition by Sisecam Chemicals USA


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 07 July 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 07 July 2024:

1. Etched is Making the Biggest Bet in AI – Etched

In 2022, we made a bet that transformers would take over the world.

We’ve spent the past two years building Sohu, the world’s first specialized chip (ASIC) for transformers (the “T” in ChatGPT).

By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either.

But for transformers, Sohu is the fastest chip of all time. It’s not even close.

With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs…

…By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling. We are living in the largest infrastructure buildout of all time.

Scaling the next 1,000x will be very expensive. The next-generation data centers will cost more than the GDP of a small nation. At the current pace, our hardware, our power grids, and pocketbooks can’t keep up…

…Santa Clara’s dirty little secret is that GPUs haven’t gotten better, they’ve gotten bigger. The compute (TFLOPS) per area of the chip has been nearly flat for four years…

…No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.

Suddenly, that’s changed:

  • Unprecedented Demand: Before ChatGPT, the market for transformer inference was ~$50M, and now it’s billions. All big tech companies use transformer models (OpenAI, Google, Amazon, Microsoft, Facebook, etc.).
  • Convergence on Architecture: AI models used to change a lot. But since GPT-2, state-of-the-art model architectures have remained nearly identical! OpenAI’s GPT-family, Google’s PaLM, Facebook’s LLaMa, and even Tesla FSD are all transformers…

…We believe in the hardware lottery: the models that win are the ones that can run the fastest and cheapest on hardware. Transformers are powerful, useful, and profitable enough to dominate every major AI compute market before alternatives are ready…

  • …As models scale from $1B to $10B to $100B training runs in the next few years, the risk of testing new architectures skyrockets. Instead of re-testing scaling laws and performance, time is better spent building features on top of transformers, such as multi-token prediction.
  • Today’s software stack is optimized for transformers. Every popular library (TensorRT-LLM, vLLM, Huggingface TGI, etc.) has special kernels for running transformer models on GPUs. Many features built on top of transformers aren’t easily supported in alternatives (ex. speculative decoding, tree search).
  • Tomorrow’s hardware stack will be optimized for transformers. NVIDIA’s GB200s have special support for transformers (TransformerEngine). ASICs like Sohu entering the market mark the point of no return. Transformer killers will need to run on GPUs faster than transformers run on Sohu. If that happens, we’ll build an ASIC for that too!…

…On GPUs and TPUs, software is a nightmare. Handling arbitrary CUDA and PyTorch code requires an incredibly complicated compiler. Third-party AI chips (AMD, Intel, AWS, etc.) have together spent billions on software to little avail.

But since Sohu only runs transformers, we only need to write software for transformers!

Most companies running open-source or internal models use a transformer-specific inference library like TensorRT-LLM, vLLM, or HuggingFace’s TGI. These frameworks are very rigid – while you can tweak model hyperparameters, changing the underlying model code is not really supported. But this is fine – since all transformer models are so similar (even text/image/video ones), tweaking the hyperparameters is all you really need.

2. Evolution of Databases in the World of AI Apps – Chips Ahoy Capital

Transactional Database vendors like MDB focus on storing and managing large volumes of transactional data. MDB also offers Keyword Search & rolled out Vector Search (albeit late vs competitors). Historically, MDB’s Keyword Search has not been as performant as ESTC’s in use cases utilizing large data sets or complex search queries & has less comprehensive Search features than ESTC…

…A vector database stores data as high-dimensional vectors rather than traditional rows and columns. These vectors represent items in a way that captures their semantic meaning, making it possible to find similar items based on proximity in vector space.

Real-World Example:

Imagine you have an online store with thousands of products. Each product can be converted into a vector that captures its attributes, like color, size, and category. When a customer views a product, the vector database can quickly find and recommend similar products by calculating the nearest vectors. This enables highly accurate and personalized recommendations.

In essence, a vector database helps in efficiently retrieving similar items, which is particularly useful in applications like recommendation systems & image recognition…
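To make the idea concrete, here’s a minimal sketch of similarity search, assuming products have already been embedded as vectors. A real vector database would use approximate nearest-neighbour indexes; the brute-force cosine similarity and the toy embeddings below are only illustrative placeholders.

```python
# Find the products closest in vector space to a query product.
import numpy as np

# Hypothetical product embeddings; in practice these come from a model
products = {
    "red t-shirt":  np.array([0.9, 0.1, 0.3]),
    "blue t-shirt": np.array([0.8, 0.2, 0.35]),
    "red mug":      np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = products["red t-shirt"]
# Rank every other product by proximity in vector space
ranked = sorted(
    ((name, cosine_similarity(query, vec))
     for name, vec in products.items() if name != "red t-shirt"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # the blue t-shirt scores highest, so it gets recommended
```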

…RAG combines the strengths of Vector Search and generative AI models to provide more accurate and contextually relevant responses. Here’s how it works: 1) A user submits a query 2) the system converts the query into a vector and retrieves relevant documents or data from the vector database based on similarity 3) the retrieved documents are fed into a generative AI model (LLM), which generates a coherent and contextually enriched response using the provided data.
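A minimal sketch of those three steps, where embed, vector_db.search, and llm.generate are hypothetical stand-ins for a real embedding model, vector database, and LLM API:

```python
# The three RAG steps: embed the query, retrieve similar documents,
# then generate a response grounded in the retrieved context.

def answer_with_rag(query: str, vector_db, llm, embed, top_k: int = 3) -> str:
    # 1) + 2) convert the query into a vector and retrieve similar documents
    query_vector = embed(query)
    documents = vector_db.search(query_vector, top_k=top_k)
    # 3) feed the retrieved documents to a generative model as context
    context = "\n".join(documents)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm.generate(prompt)
```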

Multimodal models integrate multiple data types (text, images, audio) for comprehensive understanding and generation. It is crucial for vector databases to support multimodal data to enable more complex and nuanced AI applications. PostGres is a dominant open source vendor in the database market (scored #1 as most used Vector DB in recent Retool AI survey) but on it’s own it does NOT seem to include native support for multi-modality in it’s Vector Search. This limits the use cases it can be applied or used to without using an extension or integration to other solutions…

…Simple AI Use Cases:

Similarity Search has been one of the first and most prominent use cases of using GenAI. When a query is made, the database quickly retrieves items that are close in vector space to the query vector. This is especially useful in applications like recommendation engines &  image recognition where finding similar items is crucial. These use cases have been in POC since last year, and are starting to move into production later this year.

Complex AI Use Cases:

Enter Generative Feedback Loop! In a Generative Feedback Loop, the database is not only used for Retrieval of data (main use case in Similarity Search). But it also provides Storage of Generated Data. The database in this case stores new data generated by the AI model if deemed valuable for future queries. This in my view changes the relationship that the AI Application has with a database as it then has to store data back in. A key example for Generative Feedback Loop is an Autonomous Agent…

…An AI autonomous agent and a database work together to perform complex tasks efficiently. The relationship between a database and an AI Agent at first seems similar to other use cases, where the database holds all necessary data and the AI Agent queries the database to retrieve relevant information needed to perform its tasks.

The key difference here is the Learning and Improvement aspect of AI Agents. Instead of just containing historical data, the database has been updated with new data from user interactions and agent activities. The AI Agent then uses this new data to refine its algorithms, improving its performance over time…

…A real life example could be an E-commerce Chatbot. The customer buys a product and leaves a review for that product. The database then updates the new purchase and feedback data, and the AI Agent learns from this feedback to improve future recommendations. In this scenario, the database is not just being queried for data, but it is storing data back from the interaction, the AI Agent is learning from this, creating what is referred to as a Generative Feedback Loop.
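Here’s a minimal sketch of that loop; the db and agent objects and their methods are hypothetical stand-ins, not any particular vendor’s API:

```python
# Generative Feedback Loop: the database is read from (retrieval) and
# written back to (storage of new interaction data), and the agent
# learns from what flows back in.

def handle_purchase(db, agent, customer_id: str, product_id: str, review: str):
    # Retrieval: the agent queries existing data to serve the customer
    history = db.query("purchases", customer_id=customer_id)
    # Storage: new purchase and feedback data flow back into the database
    db.insert("purchases", {"customer": customer_id, "product": product_id})
    db.insert("reviews", {"customer": customer_id, "text": review})
    # Learning: the agent refines future recommendations from the new data
    agent.update_recommendations(customer_id, history, review)
```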

3. The Big Bad BREIT Post – Phil Bak

So here it is, our analysis of Blackstone’s Real Estate Income Trust. The data presented is as of the original publication in June 2023. It should be noted that over the past year everything has played out as we warned, including the gating of Starwood’s SREIT. Last thing I’ll say: I’d have much preferred to be wrong…

…Given the vital role that “NAV” plays in fundraising and performance reporting, it’s surprising that a greater amount of transparency is not provided by sponsors into their valuation methodology. Remind me again why they don’t provide a comprehensive explanation for each input in the DCF model?  Contrary to popular assumption, NAV is not based on appraisals that utilize sales comparisons. Instead, it’s based on an opaque discounted cash flow (DCF) methodology that is based on assumptions that are at the discretion of the sponsor who realizes fee streams pegged to the asset values they assign.

BREIT’s self-reported performance is – by their own admission – “not reliable.” Why we didn’t take a closer look at it before is as much a mystery as how they compute it. Management can’t just pull numbers out of thin air, and they’ve done nothing illegal, but they have a lot of discretion on where they estimate share values to be.

According to their prospectus, Blackstone values the fund itself once a month; then once a year it brings in an outsider who prepares a valuation based on their direction. But in its March 28, 2023 prospectus amendment, BREIT removed the steps in bold.  (1) a third-party appraisal firm conducts appraisals and renders appraisal reports annually; (2) an independent valuation advisor reviews the appraisal reports for reasonableness; (3) the advisor (Blackstone) receives the appraisal reports and based in part on the most recent appraisals, renders an internal valuation to calculate NAV monthly; (4) the independent valuation advisor reviews and confirms the internal valuations prepared by the advisor. (5) BREIT will promptly disclose any changes to the identity or role of the independent valuation advisor in its reports publicly filed with the SEC.

The verbiage in their disclosures doesn’t suggest that their calculation will be better than relying on market prices. The highlighted portions seem to be saying that Blackstone uses baseless returns in their SEC filings. They are not using a methodology prescribed by the SEC or any regulatory body. They do not adhere to any accounting rules or standards. Nor is their monthly NAV calculation audited by an independent public accounting firm. Blackstone uses it solely to determine the price at which the fund will redeem and sell shares. The NAV also happens to dictate the fees they can earn…

…One of BREIT’s big selling points was the ability to get a dividend of around 4% when interest rates were near zero, but the fund cannot – and has never been able to – cover the dividend payment. The current Class S distribution of 3.74% and Class I yield of 4.6% aren’t fully earned based on a key REIT cash-flow measure: Available Funds from Operations (AFFO). AFFO is used to approximate the recurring free cash flow from an income producing real estate vehicle and calculate the dividend coverage.

Blackstone reports AFFO, but their reported number is janky. It omits the management fees they charge.  Their rationale is that they have not taken their fees in cash but instead converted their $4.6 billion in fees into I-Shares, which is a class of BREIT shares that has no sales cost load.  But their election to accept shares is optional, the shares they receive are fully earned and they can redeem their shares at stated NAV.  What’s more, they have redemption priority over other BREIT investors; there is no monthly or quarterly redemption limitation.  Blackstone has already redeemed $658 million in shares.

BREIT’s AFFO also omits recurring real estate maintenance capital expenditures and stockholder servicing fees which are part of the sales load. Computing an AFFO more consistent with public company peers would result in a payout ratio for the first half of 2023 of more than 250%.

BREIT, unlike most big public REITs, has only covered about 13% of their promised dividend distribution. There’s not a single year in which they could cover their payment if everybody elected to receive it. Since inception, the company has delivered $950 million in AFFO and declared $7.3 billion in distributions.  That’s a stunning 768% dividend payout ratio…

…BREIT is levered approximately 49% against NAV and closer to 60% as measured against cost – the average cost of BREIT’s secured borrowings stands at approximately 5.5 % before hedges so the cost of their debt exceeds the yield. There are few ways you can turn these numbers into a double digit return.  Rents would have to go to the moon. The only way there can be positive leverage over a holding period (IRR) is if there is a shedload of positive income growth. And that’s exactly what BREIT has baked in the valuation cake. Interest rates went up so the NPV should be way down but – in a fabulous coincidence – future cash flow expectations went up by just enough to offset it. The numerator where revenue growth shows up made up for the rise in rates in the denominator…

…Here’s the BREIT Story in a nutshell: They’ve reported an annual return since inception for its Class S investors north of 10% with real estate investments that have a gross current rate of return of less than 5% on their cost.  They’ve been buying assets at a 4% cap rate, paying a 4.5% dividend and reporting 10+% returns. And nobody has called bullshit…

…By taking BREIT’s current NOI and dividing it by the NAV, investors can compute the implied cap rate on BREIT’s portfolio as they are valuing it – and compare it with public REITs. Interest rates have moved 200-300 basis points in recent months, and in public markets elevated cap rates have driven a 25% decline in values. A recent analysis of two vehicles in the non-traded REIT space concluded that both funds are being valued at implied cap rates of approximately 4.0%, when publicly traded REITs with a similar property sector and geographic mix are trading at an implied cap rate closer to 5.75%. Applying that 5.75% cap rate to BREIT would result in a reduction in shareholder NAV of more than 50%. The current valuation of roughly $14.68/share should be closer to $7-8/share.
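To see why a roughly 30% repricing of assets can cut NAV by around half, here’s a minimal sketch. The NOI and debt figures are hypothetical placeholders; only the 4.0% and 5.75% cap rates and the roughly 49% leverage against NAV come from the passage.

```python
# Cap-rate repricing: value = NOI / cap_rate, so the same NOI at a higher
# cap rate means a smaller asset value -- and leverage makes the hit to
# NAV (the equity) even larger, because the debt doesn't shrink.

noi = 4.0                      # hypothetical NOI, sized so that...
assets_before = noi / 0.040    # ...assets are 100.0 at a 4.0% cap rate
debt = 33.0                    # hypothetical debt, ~49% of the NAV below
nav_before = assets_before - debt              # 67.0

assets_after = noi / 0.0575    # same NOI at the public-market cap rate, ~69.6
nav_after = assets_after - debt                # ~36.6

print(f"Asset value change: {assets_after / assets_before - 1:+.0%}")  # ~-30%
print(f"NAV change:         {nav_after / nav_before - 1:+.0%}")        # ~-45%
```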

4. Grant Mitchell — The Potential of AI Drug Repurposing – Jim O’Shaughnessy and Grant Mitchell

[Grant:] I was leading teams that were really pioneering the use of large medical record databases to identify subpopulations where a drug might perform better, might be higher in efficacy or better in safety. And we realized that that’s really, in a way, it’s kind of drug repurposing. It’s taking a drug and finding a population where it works a little bit better in a drug that already exists.

And as David was working in the lab and I was working in the data, we kind of came together and we say, “Can we automate what we’ve done? Can we scale what we’ve done in just one disease?” And given the explosion and the amount of data that exists out there and the improvements in the way that we can harmonize and integrate the data into one place, and then the models that have been built to analyze that data, we thought that maybe it would be possible. And we would check in every few years. 2016, 2017, it wasn’t really possible. We had this dream for a long time. 2018, 2019 is probably when I was talking to you and I was thinking about can we do this?

And really, lately it’s become possible, especially with, like I said before, more data, structured better. You have models like these large language models that are able to digest all of medical literature, output it in a structured fashion, compile it into a biomedical knowledge graph, these really interesting ways to display and analyze this kind of data. And ultimately, that’s how Every Cure was formed, was the concept that the drugs that we have are not fully utilized to treat every disease that they possibly can, and we can utilize artificial intelligence to unlock their life-saving potential.

Jim: Just so incredibly impressive. And a million questions spring to mind. As you know, my oldest sister, Lail, died of lupus. And when you said the cytokine storm, she had a kind of similar thing where she would go into remission, and then there’d be a massive attack, and it wasn’t like clockwork like your colleague’s, but when she died in 1971, it was like nobody knew very much at all about the disease. And in this case, did you find that the cure that worked for your colleague, was that transferable to other people with this similar disease?

Grant: Yeah, so the cure that worked for him, we studied his blood, we sampled his lymph nodes, we did immunohistochemistry and flow cytometry and basically found that their cytokines were elevated, another molecule called VEGF was elevated, there’s T cell activation. This all pointed towards something called the mTOR pathway. And started looking at different drugs that would hit that pathway, settled on a drug called Sirolimus. Sirolimus has been around for decades. It’s actually isolated from a fungus found in the soil on Easter Island. It’s amazing, right? And it shuts down the overactivation of this pathway that leads to this cascade that causes this whole cytokine storm.

For David it works perfectly, and it also works for about a third of the other patients that have a disease like David. And so that’s resulted in the benefit to countless thousands and thousands of patients’ lives. It’s a pretty thrilling and satisfying and motivating thing to be able to figure something like that out and to be able to do it, they have the opportunity to do it more and at scale and have the opportunity to save potentially millions of lives is a huge motivation for my team…

…[Grant:] So we couldn’t quite piece it together, and it was really an aha moment that this should be designed as a nonprofit, and it should be an AI company, because if you want to build the world’s best AI platform for drug repurposing, you’re going to need the world’s best dataset to train it, and you’re not going to get your hands on all the data that you want to get your hands on if you’re a competitor to all these people that are trying to use this data.

So we’re collaborative. We’re non-competitive. We are not profit-seeking. Our primary goal is to relieve patient suffering and save patient lives. So I’ll get to your question about how we’re utilizing that kind of resiliency data that I mentioned before. But first I’m going to help you understand how we use it. I’m going to describe the kind of data set that we’re constructing, and it’s something called a biomedical knowledge graph. It’s well known in the areas and the fields that we’re in, but maybe not a commonly known term to the layman, but it’s effectively a representation in 3D vector space of all of the biomedical knowledge we have as humanity, every drug, every target, every protein, every gene, every pathway, cell type, organ system, et cetera, and how they relate to different phenotypes, symptoms, and diseases.

And so every one of those biomedical concepts that I just described would be represented as a node, and then every relationship that that concept has with another relationship, like a drug treats a disease, there would be an edge. They call it a semantic triple. Drug, treats, disease. So you’ve got a node, an edge, and a node. And imagine a graph of every known signaling molecule and protein and a concept you can imagine, tens of millions of nodes, even more edges, representing all of human knowledge in biology. And that’s what multiple people have constructed. Actually, NIH funded a program called the NCATS Translator Program where a number of these knowledge graphs have been constructed. Other groups are doing it. A lot of private companies have their own. We are compiling them and integrating it with an integration layer that kind of takes the best from the top public ones, and then layers in additional proprietary data that we get from other organizations or data that we generate on our own.
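A minimal sketch of that node-edge-node idea, using illustrative triples rather than real Every Cure data (the GENE_X, PROTEIN_Z, and DISEASE_Y entries are placeholders):

```python
# A tiny biomedical knowledge graph built from semantic triples.
from collections import defaultdict

triples = [
    ("sirolimus", "inhibits", "mTOR pathway"),
    ("mTOR pathway", "associated_with", "cytokine-storm disease"),
    ("GENE_X", "associated_with", "DISEASE_Y"),         # hypothetical
    ("PROTEIN_Z", "protective_against", "DISEASE_Y"),   # hypothetical
]

# Build an adjacency-list view: node -> list of (edge label, node)
graph = defaultdict(list)
for head, relation, tail in triples:
    graph[head].append((relation, tail))

# Walking two hops from a drug surfaces a candidate repurposing link:
# drug -> pathway -> disease
for relation, pathway in graph["sirolimus"]:
    for relation2, disease in graph[pathway]:
        print(f"sirolimus --{relation}--> {pathway} --{relation2}--> {disease}")
```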

And the example that you just mentioned, a company that is working on tracking genetic diseases and groups of people with the same genetic disease and looking at subpopulations within that group where there might be some resilience to the mutation, and then studying their genome to say, “Okay, what other proteins are being transcribed that might be protective against this mutation?”, and then going out and designing drugs that might mimic that protection. Well, how’s that data going to fit into my knowledge graph? Well, you can imagine that now if I have the data set that they’re working with, I know that there’s a mutation that results in a disease. So a gene associated with disease, that’s a node, an edge, and a node. And I also know that this other protein is protective of that disease.

So that’s just information that goes into the graph. And the more truth that I put into that graph, the more I can train that graph to identify patterns of successful examples of a drug working for a disease, and then it can try and find that pattern elsewhere where it either identifies nodes and edges that should already be connected or are connected in our knowledge base but no one has actually acted on, or it can maybe even generate a hypothesis on a totally new edge that is novel and has never been considered by experts before. So to answer your question again: we’re not doing that work ourselves, but we integrate the knowledge from that work so it can train our models and so we can pursue drug repurposing ideas…

…[Grant:] We’re not designing novel compounds. We think that there’s so much low-hanging fruit with the 3000 drugs that already exist that we are going to spend years and years unlocking the life-saving potential of those. And the reason why we’re focused there is because that is the fastest way to save human lives. If you develop a novel compound, you have to go all the way through the entire clinical development of an approval process. IND, phase one, phase two, phase three trials. This takes years and years and hundreds of millions of dollars, whereas in certain scenarios in drug repurposing, just like with my co-founder David, within weeks of us coming up with the hypothesis that this drug might work for him, as long as we could find a physician that would prescribe it to him, it went directly into his human body just weeks later.

So that brings me to this issue that I think we’re going to see, and you as an investor might make yourself aware of, is that there’s going to be lots and lots of failures in the world of AI-driven drug discovery. And that’s because not only are you an AI company that’s generating hypotheses, you’re also a biotech company that has to validate a novel compound and bring it all the way through the clinic through clinical trials and through regulatory approvals and into patients. So here you are an AI company, you’ve hired up your team of 50 data scientists and experts, and you come up with your hypothesis and you say, “Okay, great.”

You’re not Amazon that gets to A/B test where they’re going to put a button on the user interface and then they get feedback by the end of the day and okay, move the button here instead of here. When you come up with your hypothesis after your AI team says, “Okay, this is what the drug we’re going to move forward with,” you now have to go through potentially 10 years and hundreds of millions of dollars of additional development. So you don’t know if your AI team built anything of value. You don’t have that validation feedback loop that you do in other AI consumer-based organizations. So now you’re juggling sustaining an AI corporation that doesn’t have a feedback loop while you have to also pay for the clinical development of a drug. And so it’s a tension that’s hard, hard to manage.

And drug repurposing solves that tension. It allows us to go from hypothesis to validation in a much tighter feedback loop. So what we’re doing is something that both helps patients in the fastest and cheapest way possible, but also, the happy accident is that we push forward the field of data-driven drug discovery because we can inform our models in a faster feedback loop…

…[Grant:] One thing I learned when I was at Quantum Black and at McKinsey, where we would go up against other machine learning organizations: I remember one time they put us head to head with another group and they said, “Okay, whoever comes up with the best insights in the next three months, we’re going to pick to go with a longer contract going forward.” Two seemingly similar teams were working on the same dataset, yet we came up with totally different recommendations than the other team did, and the actual differentiator between the teams was that we had five medical degrees on our team – not just a bunch of data scientists, but data scientists plus medical experts. And in every step of the way that you’re building these knowledge graphs and designing these algorithms, you’re interfacing with medical expertise to make sure you imbue it with clinical understanding, with biological rationale of how this is actually going to work and how to interpret the typically really messy medical data.

And so if you think about the matrix that we’re producing, this heat map of 3,000 drugs cross-referenced with 22,000 diseases creates 66 million possibilities, and we then score those possibilities from zero to one, and normalize them across the whole landscape. That’s a tricky thing to do: for drug A for disease X compared to drug B for disease Y, how do you compare the possibilities of each of those from zero to one? So we create that normalized score, and then we start looking at the highest scores and then filter down from there to say, “Okay, of all the highest probability of success opportunities here, which ones are going to impact patients the most, and which ones can we prove out quickly and efficiently in a low-cost trial with a few patients and high signal, so we can do this in three to six to 12 months instead of five-year trial times?”

And the thing to think about, back to the comment about we need medical expertise highly integrated with what we’re doing is that even if you take the top thousand scores there, you’re still in the 0.001% of the highest ranking of scores, and now you got to pick amongst your thousand to get down to the top five. To get down to the top one, what is my first shot on goal going to be? That better be successful for all the things that I’m working on here, and it better help patients and really better work. So the AI can’t do that. You need a really smart head of translational science to make that last sort of decision of what’s going to go into patients and how it’s all going to work…
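A minimal sketch of that scoring-and-filtering step, with random numbers standing in for the platform’s real model scores:

```python
# 3,000 drugs x 22,000 diseases = 66 million pairs, each scored,
# normalised to [0, 1], then filtered to a shortlist for expert review.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((3_000, 22_000), dtype=np.float32)  # hypothetical scores

# Normalise across the whole landscape so pairs are comparable
scores = (scores - scores.min()) / (scores.max() - scores.min())

# Keep the top 1,000 pairs -- still only ~0.0015% of all 66 million
top_flat = np.argsort(scores, axis=None)[-1_000:]
drug_idx, disease_idx = np.unravel_index(top_flat, scores.shape)
print(f"Shortlist of {len(top_flat)} drug-disease pairs for expert review")
```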

… [Grant:] we’re a nonprofit because we want to build the world’s best AI platform and we need the best data set to do it, to save as many lives as we possibly can with drugs that already exist. So since the drugs already exist, it’s kind of a funny thing. I say we’re the smallest and the biggest pharma company in the world. We’re the biggest because every single drug that already exists is in our pipeline. We’re the smallest because we don’t own any of them. And then we take those drugs and we go after diseases that are totally neglected by the pharmaceutical industry. So, by design, it has to be a nonprofit.

5. How Bull Markets Work – Ben Carlson

Halfway through the year, the S&P 500 was up 15.3%, including dividends.

Despite these impressive gains, the bull market has been relatively boring this year.

There have been just 14 trading days with gains of 1% or more. There has been just a single 2% up day in 2024. And there have been only 7 days that were down 1% or worse.

Small moves in both directions.
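For anyone who wants to reproduce these counts, here’s a minimal sketch using dummy prices; in practice you would load the S&P 500’s 2024 daily closes.

```python
# Count big up and down days from a series of daily closing prices.
import pandas as pd

prices = pd.Series([5000.0, 5055.0, 5050.0, 4990.0, 5105.0, 5110.0])  # dummy closes
returns = prices.pct_change().dropna()

print("Up 1%+ days:  ", (returns >= 0.01).sum())
print("Up 2%+ days:  ", (returns >= 0.02).sum())
print("Down 1%+ days:", (returns <= -0.01).sum())
```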

Bull markets are typically boring like this. Uptrends tend to be these slow, methodical moves higher. Bull markets don’t make for good headlines because they’re made up of gradual improvements.

Bear markets, on the other hand, are where the excitement happens. Downtrends are full of both big down days and big up days…

…The best and worst days happen at the same time because volatility clusters. Volatility clusters because investors overreact to the upside and the downside when emotions are high…

…It’s also interesting to note that even though the S&P 500 is having a boring year, it doesn’t mean every stock in the index is having a similar experience.

While the S&P is up more than 15%, there are 134 stocks down 5% or worse, and 85 stocks down 10% or more, so far this year.

Stock market returns are concentrated in the big names this year, but it’s normal for many stocks to go down in a given year.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, and Tesla. Holdings are subject to change at any time.

No, Dividends Are Great

Dividends are the fruits of our investments and are what makes investing in companies so profitable.

In recent times, large American technology companies such as Meta Platforms, Salesforce, and Alphabet have initiated a dividend.

It’s easy to imagine that their shareholders would be pleased about it, but this isn’t always the case. Some shareholders are actually disappointed by the dividend announcements. They think that the companies have nowhere else to invest their capital and are thus returning it to shareholders. In other words, they think that the companies’ growth potential has stalled.

But I see things differently. Dividends are ultimately what we, as shareholders, invest in a company for. Long-term shareholders are here to earn a cash stream from investing in companies. This is akin to building your own business that generates profits you can cash out and enjoy. As such, dividends are the fruits of our investment.

And just because a company has started paying a dividend does not mean it can’t grow its earnings. Just look at some of the dividend aristocrats that have grown their earnings over a long span of time. There are many companies that can generate high returns on invested capital. This means that they can pay out a high proportion of their earnings as dividends and still continue to grow.

Dividends can compound too

Investors who don’t want to spend the dividend a company is paying can put it to use by reinvesting it.

When a company does not pay a dividend, its shareholders have to rely on management to invest the company’s profits. When there’s a dividend, shareholders can invest it in whatever way they believe gives them the highest risk-adjusted return available. Moreover, a company’s management team may not be the best capital allocators around – in such a case, when the company generates excess cash, management may invest it in a way that does not generate good returns. When a company pays a dividend, shareholders can make their own decisions and do not have to rely on management’s capital allocation skills.

And if you think the company was better off buying back shares, you can simply buy shares of that company with your dividends. This will have a similar effect to share buybacks as it will increase your stake in the company.

What’s the catch?

Dividends have some downsides though. 

Compared to buybacks, reinvesting dividends to buy more shares may be slightly less effective as shareholders may have to pay tax on those dividends. For example, Singapore-based investors who buy US stocks have to pay a 30% withholding tax on all US-company dividends.
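A minimal sketch of that tax drag, comparing the same pre-tax cash return routed through buybacks versus a dividend taxed at 30% before reinvestment; the 3% yield and 20-year horizon are hypothetical illustrations, and price returns are ignored so the tax effect stands alone.

```python
# Withholding-tax drag on reinvested dividends versus buybacks.

pre_tax_yield = 0.03      # 3% of the position returned to holders each year
withholding_tax = 0.30    # US withholding rate for Singapore-based investors
years = 20

# Buyback route: the full 3% compounds inside the position
buyback_growth = (1 + pre_tax_yield) ** years

# Dividend route: only 70% of each payout survives to be reinvested
dividend_growth = (1 + pre_tax_yield * (1 - withholding_tax)) ** years

print(f"Buyback route:  {buyback_growth:.2f}x")   # ~1.81x
print(f"Dividend route: {dividend_growth:.2f}x")  # ~1.52x
```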

The other downside is that there’s more work for shareholders. If management was reinvesting prudently and not paying dividends, shareholders wouldn’t need to make a decision. But with dividends, shareholders have to decide where and when to reinvest them. This said, dividends give shareholders more options for where their capital can be invested, instead of leaving everything to management. That is a tradeoff I would happily take.

Don’t fret

Dividends are good. It’s funny that I even need to say this.

Dividends are the fruits of our investments and are what makes investing in companies so profitable. Without them, we would just be traders of companies, and not investors.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Alphabet, Meta Platforms, and Salesforce. Holdings are subject to change at any time.

What We’re Reading (Week Ending 30 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 30 June 2024:

1. An Interview with Scale AI CEO Alex Wang About the Data Pillar for AI – Ben Thompson and Alex Wang

When you saw that there was going to be a third pillar, yet no one was there, did you have any particular insights on how that would work, or was it just a matter of, “There’s a problem space that needs solving, and we’ll figure out how to solve it in the future”?

AW: Yeah. Probably the most formative, immediate experience was that I was training a neural network at the time on a single GPU in Google Cloud using TensorFlow, and it was a neural network that detected emotion based on a photo of someone’s face. All I did, basically, was take the tutorial for ImageNet, so literally the tutorial code for a very different image recognition algorithm, and then I just swapped out the data set and pressed “Enter”. Then 12 hours later, I had a neural network that smashed all of the other methods on this problem of recognizing emotion from images.

So the takeaway there is actually, data is what matters most.

AW: Yeah. From problem to problem, data is the only thing that varies, is maybe the better way to put it, and as a programmer, you kind of realize, “Oh, actually data is what’s doing all the actual programming and my insight into the problem doesn’t actually matter, it’s just all embedded in the data set that the model ends up getting trained on”.

So I think, A) I knew that data was very important. I remember this realization, the model ended at some performance, and I was like, “Okay, I’ve got to make this model better,” and so then I was like, “Okay, how am I going to improve on this data set?”, and then there was the second light bulb, which is that this is an incredibly painful process. You open up all the images and then you go through and you just look at, “Okay, are the labels for all the images correct?”, and then you’re like, “Okay, what new images should I get to pull into this?”, and then, “How am I going to get those labeled?”, and so all of the core operations, so to speak, of updating or changing or improving the data set were incredibly painful.

So I started the company in 2016, and this was an era where there was a broad-based recognition that platforms, particularly developer platforms that made very ugly things very easy were good businesses. It was already clear that AWS was ridiculously successful as a business, the most successful enterprise business that had ever existed, and then Stripe, it was also clearly recognized that Stripe was very successful, and so as a student of those companies realized that, “Hey, we should take this incredibly messy and complicated thing that exists today, and then figure out how to turn that into a beautiful developer UX and if we can accomplish that, then there’s a lot of value to be had here”.
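To make Wang’s “swap the data set” point concrete, here is roughly what that workflow looks like in modern training code; this is our sketch in PyTorch (he used TensorFlow), and the folder path, dataset, and model are hypothetical stand-ins:

```python
# Our sketch in PyTorch (Wang used TensorFlow); "data/emotions" is a
# hypothetical folder of labeled face images. The training loop is generic
# boilerplate: to change tasks, you only swap the dataset line.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("data/emotions", transform=tfm)  # the only task-specific line
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(data.classes))  # stand-in architecture
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # "tutorial code", unchanged across tasks
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```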

There’s a lot to unpack there. Just as a broader philosophical point, do you think that insight about data still holds? So it’s not just that there are three pillars, compute, algorithm, and data, but actually data is the most important; just like you saw before, is it more complicated now, or is it even more the case?

AW: Yeah, I think it’s proving to be more and more the case. I was at an event with a lot of other AI CEOs recently, and one of the dinner conversations was, “Okay, compute, power, data: which do you run out of first?”, and the consensus answer around the room was data. I think the data wall has become, over the past few months, a pretty commonly debated topic: “Are we hitting a data wall in LLM development, or are we just fundamentally coming up against the limits of data?” Even on the most liberal assumptions (let’s assume that you really did train on all human-generated text, which no sensible person does because you filter out all the bullshit), even then we will run out by 2027 or 2028.

So just overall in terms of the sheer amount of data that’s necessary to keep up with scaling, we’re very clearly hitting some meaningful wall, and then if you look at a lot of the model performance improvements as of late, or sort of the big gains in models, my personal view is that a lot of that actually boils down to data, and innovations on how to use data, and innovations on basically the data-intensive parts of the AI stack…

How have the needs of the market shifted then? You mentioned that you were getting at this before and I interrupted. You start out with images for self-driving cars, today it’s all about these text-based models. What is entailed in going from images to text?

AW: We had an interesting mid-step here, which is, broadly speaking, I think the shift as the models have increased in intelligence is towards greater levels of expertise. But basically, we started with autonomous vehicles, and then starting about 2020 we actually started working with the US government, and this was driven by the fact that I grew up in Los Alamos and realized that AI is likely a very important technology for our security.

We can do a side bit here, you wrote a very interesting piece on Substack in 2022, The AI War and How to Win It. Give me your thesis here and why you think it’s a big deal.

AW: Yeah, I think that the basic gist is, first, if you look at the long arc of human history, it is punctuated by war. In some sense, human history is all about war, and then if you look at the history of war, the history of war in some sense is all about technology. If you look particularly at the transitions from World War I to World War II to later wars, the Gulf War for example, the most significant bit, so to speak, or the largest factor in how these wars end up playing out, really is access to technology. Obviously this is deep in my upbringing: I grew up in Los Alamos, where basically every year you have a multi-day history lesson on Los Alamos National Lab and the origins thereof.

So then you think about, “Okay, what are the relevant technologies today that are being built?”, and there’s a host of technologies I think are important, hypersonic missiles, space technology, et cetera. But AI is, you could very easily make the case, that it is the most important. If you could solve problem solving, then all of a sudden you have this incredibly powerful advantage.

If you believe that AI is really important for hard power, for American hard power, which I think is very important for ensuring that our way of life continues, then the most shocking thing for me was going through and looking at the things that the CCP [Chinese Communist Party] were saying about AI. There are CCP officials who have very literally said, “We believe that AI is our opportunity to become the military superpower of the world”. Roughly speaking, they said, “Hey, the Americans are not going to invest enough into AI, and so we’ll disrupt them by investing proportionally more into AI, and if we do so, even though we spend a lot less on our military, we will leapfrog them in capability”. As a startup person, I think this is the core Innovator’s Dilemma, the core disruptive thesis: the CCP basically had a disruptive thesis on war powered by artificial intelligence.

This is basically the idea that you’re going to have these autonomous vehicles, drones, whatever, of all types controlled by AI, versus the US having these very sophisticated but operated by humans sort of systems, and the US will fall into the trap of seeking to augment those systems instead of starting from scratch with the assumption of fully disposable hardware.

AW: Yeah, I think there is at its core two main theses. One is perfect surveillance and intelligence in the sort of CIA form of intelligence, and this I think is not that hard to believe. Obviously, in China, they implemented cross-country facial recognition software as their first killer AI app, and it doesn’t take that much to think, “Okay, if you have that, then just extend the line and you have more or less full information about what’s happening in the world” and so that I think is not too hard to imagine.

Then the hot war scenario is, to your point, yeah, autonomous drone swarms on land, in the air, or at sea that are able to coordinate perfectly and outperform any human.

I think when people hear AI, they think about the generative AI, LLMs, OpenAI, whatever it might be, and assume that’s a US company, Google’s a US company, et cetera, and so the US is ahead. This is obviously thinking about AI more broadly as an autonomous operator. Is the US ahead or what’s your perception there?

AW: I think that on a pure technology basis, yes, the US is ahead. China’s caught up very quickly. There’s two very good open source models from China. One is YiLarge, which is the model from Kai-Fu Lee‘s company, 01.ai. And then the other one is Qwen 2, which is out of Alibaba and these are two of the best open source models in the world and they’re actually pretty good.

Do they use Scale AI data?

AW: No, we don’t serve any Chinese companies for basically the same reasons that we’re working with the US military. YiLarge is basically a GPT-4 level model that they open-sourced and actually performs pretty well, so I think that on the technology plane, I think the US is ahead and by default I think the US will be maintaining a lead.

There’s an issue which Leopold Aschenbrenner recently called a lot of attention to, which is lab security. So we have a lead, but it doesn’t matter if it can all be espionaged away, basically, and there’s this case recently of this engineer from Google, Linwei Ding, who stole the secrets of TPU v6 and all these other secrets.

And wasn’t discovered for six months.

AW: Yeah, it wasn’t discovered for six months and also the way he did it was that he copy-pasted the code into Apple Notes and then exported to a PDF, and that was able to circumvent all the security controls.

So how does this tie into this middle stage for you of starting to sign government contracts? What were those about?

AW: Yeah, so I basically realized, and the punchline of what I was going through was that the United States was, by default, going to be bad at integrating AI into national security and into the military and a lot of this is driven by, for a while — this is less true now, but for a while — tech companies actively did not want to help the DOD and did not actively want to help US military capabilities based on ideology and whatnot, and even now the DOD and the US government are not really that great at being innovative and have a lot of bureaucracy that prevent this. So I decided basically like, “Hey, Scale, we’re an AI company, we should help the US government”.

We started helping them and we started working with them on all of their data problems that they needed to train specialized image detectors or specialized image detection algorithms for their various use cases, and this was the first foray into an area that required a lot of expertise to be able to do effectively, because at its core, the US government has a lot of data types and a lot of data that are very, very specialized. These are specialized sensors that they pay for, they’re looking at things that generally speaking the general population doesn’t care about, but they care a lot about — movement of foreign troops and the kinds of things that you might imagine military cares about — and so required data that was reflective of all of the tradecraft and nuance and capabilities that were necessary, so this was one of the first areas.

We actually have a facility in St. Louis, which has people who are by and large trained to understand all this military data to do this labeling.

So this was a clear separation then from your worldwide workforce?

AW: Yeah, exactly. It was a clear break in the sense that we went from doing problems that almost anyone in the world could, with enough effort, do effectively and do well (almost like the Uber driver, a very broad marketplace view), to something that required niche expertise and niche capability to do extremely well.

This sort of phase transition of data — there’s sort of a realization for us that, “Oh, actually in the limit almost all of the data labeling, almost all the data annotation, is going to be in the specialized form”, because the arc of the technology is: first we’re going to build up all this general capability (that’s the initial phase), but then all the economic value is going to come from specializing it into all these individual use cases and industries and capabilities, and it flowing into all the niches of the economy…

So where does synthetic data come into this?

AW: Yeah, synthetic is super fascinating. So I think that this has become super popular because we’re hitting a data wall; in some ways the most seductive answer to the data wall is, “Oh, we’ll just generate data to blow past the data wall”, generating data synthetically using models themselves. I think the basic result is that, at a very high level, synthetic data is useful, but it has a pretty clear ceiling, because at its core you’re using one model to produce data for another model, so it’s hard to blow past the ceiling of your original model at a very fundamental level.

It’s a compressed version of what went into the original model.

AW: Yeah, exactly. It’s a very good way to compress insight from one model to get to another model, but it’s not a way to push the frontier of AI, so to speak…
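The “compression” point can be shown with a toy sketch (ours, not Wang’s): a “student” fit purely to a “teacher’s” outputs recovers the teacher almost exactly, but it holds no information beyond it:

```python
import random

# Toy "teacher": a stand-in for a trained model (our illustration).
def teacher(x: float) -> float:
    return 2.0 * x + 1.0

# Synthetic data: one model's outputs become another model's labels.
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
synthetic = [(x, teacher(x)) for x in xs]

# "Student": fit a line to the teacher's outputs by least squares.
n = len(synthetic)
mean_x = sum(x for x, _ in synthetic) / n
mean_y = sum(y for _, y in synthetic) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in synthetic)
         / sum((x - mean_x) ** 2 for x, _ in synthetic))
intercept = mean_y - slope * mean_x

# The student converges on the teacher (slope ~2, intercept ~1) but holds
# no information beyond it: the teacher is the ceiling.
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```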

So basically this is a huge problem everyone is running into, it’s incredibly hard to solve, and so someone is going to need to solve it, and you’ve been working on it for eight to ten years or however long it’s been. The thesis seems fairly straightforward, even if the margins are not necessarily going to be Nvidia-style margins, given that you have to use hundreds of thousands of humans to do that.

AW: Yeah and I think the other key nuance here, the other interesting thing, is today our revenue is 1% of Nvidia’s because, by and large, the budgets are mostly allocated towards compute. I think as with any portfolio optimization problem, in time, if data is actually the biggest problem, the percent of budgets that are allocated to data versus compute will slowly shift over time. So we don’t have to be half the budgets, even if we get to 5% of the budgets or 10% of the budgets versus 1% of the budgets, then there’s a pretty incredible growth story for data.

2. My Stock Valuation Manifesto – Vishal Khandelwal

1. I must remember that all valuation is biased. I will reach the valuation stage after analyzing a company for a few days or weeks, and by that time I’ll already be in love with my idea. Plus, I wouldn’t want my research effort to go to waste (commitment and consistency). So, I will start justifying valuation numbers.

2. I must remember that no valuation is dependable because all valuation is wrong, especially when it is precise (like target price of Rs 1001 or Rs 857). In fact, precision is the last thing I must look at in valuation. It must be an approximate number, though based on facts and analysis.

3. I must know that any valuation method that goes beyond simple arithmetic can be safely avoided. If I need more than four or five variables or calculations, I must avoid that valuation method…

…10. I must remember that good quality businesses often don’t stay at good value for a long time, especially when I don’t already own them. I must prepare in advance to identify such businesses (by maintaining a watchlist) and buy them when I see them priced at or near fair values without bothering whether the value will become fairer (often, they do).

11. I must remember that good quality businesses sometimes stay priced at or near fair value after I’ve already bought them, and sometimes for an extended period of time. In such times, it’s important for me to remain focused on the underlying business value than the stock price. If the value keeps rising, I must be patient with the price even if I need to wait for a few years (yes, years!)…

…13. Ultimately, it’s not how sophisticated I am in my valuation model, but how well I know the business and how well I can assess its competitive advantage. If I wish to be sensible in my investing, I must know that most things cannot be modeled mathematically but have more to do with my own experience in understanding businesses.

14. When it comes to bad businesses, I must know that it is a bad investment however attractive the valuation may seem. I love how Charlie Munger explains that – “a piece of turd in a bowl of raisins is still a piece of turd”…and…“there is no greater fool than yourself, and you are the easiest person to fool.”

3. I Will F****** Piledrive You If You Mention AI Again – Nikhil Suresh

I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders…

…Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your business’s software supply chain. Your managed security provider is probably using some algorithms baked up in a lab to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtle neck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught…

…Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f****** catastrophe…

…A friend of mine was invited by a FAANG organization to visit the U.S a few years ago. Many of the talks were technical demos of impressive artificial intelligence products. Being a software engineer, he got to spend a little bit of time backstage with the developers, whereupon they revealed that most of the demos were faked. The products didn’t work. They just hadn’t solved some minor issues, such as actually predicting the thing that they’re supposed to predict. Didn’t stop them spouting absolute gibberish to a breathless audience for an hour though! I blame not the engineers, who probably tried to actually get the damn thing to work, but the lying blowhards who insisted that they must make the presentation or presumably be terminated.

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India…

…I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.

However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.

The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms because a market algorithm works out that humans can be converted into gloobnar, a novel epoxy which is in great demand amongst the aliens the next galaxy over for fixing their equivalent of coffee machines. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound. However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company’s bottom line in this scenario, so we won’t pay it any more attention.

A second outcome is that it turns out that the current approach does not scale in the way that we would hope, for myriad reasons. There isn’t enough data on the planet, the architecture doesn’t work the way we’d expect, the thing just stops getting smarter, context windows are a limiting factor forever, etc. In this universe, some industries will be heavily disrupted, such as customer support.

In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I’ll use once every five years. I was actually happy about this, and it’s a real edge over other applications. But if you don’t have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don’t actually have any internal documentation worth retrieving. Fix. Your. Shit.

The final outcome is that these fundamental issues are addressed, and we end up with something that actually can do things like replace programming as we know it today, or be broadly identifiable as general intelligence.

In the case that generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future. Is that clear now? Having your team type in import openai does not mean that you are at the cutting-edge of artificial intelligence no matter how desperately you embarrass yourself on LinkedIn and at pathetic borderline-bribe award ceremonies from the malign Warp entities that sell you enterprise software. Your business will be disrupted exactly as hard as it would have been if you had done nothing, and much worse than it would have been if you just got your fundamentals right. Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this. If we thread the needle between moderate impact and asteroid-wiping-out-the-dinosaurs impact, everything will be changed forever and your tepid preparations will have all the impact of an ant bracing itself very hard in the shadow of a towering tsunami.
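One way to see the RAG point above concretely: the retrieval step can be as simple as the sketch below (a hypothetical illustration, not any real system’s API), which is exactly why it cannot rescue documentation that doesn’t exist or isn’t worth reading:

```python
# A deliberately minimal retrieval step (the "R" in RAG); everything here
# is a hypothetical illustration, not any real system's API.
def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    terms = set(query.lower().split())
    # Score each document by how many query terms it shares.
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))[:k]

docs = ["How to rotate the API key", "Lunch menu from 2019", "TODO: write docs"]
print(retrieve("rotate the api key", docs, k=1))
# Retrieval can only surface documents that exist and are worth reading.
```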

4. Palmer Luckey and Anduril want to shake up armsmaking – Schumpeter (The Economist)

The war in Ukraine has been a proving ground for these sorts of weapons—and for Mr Luckey’s company. He visited Kyiv two weeks into the war. “What we’ve been doing was tailored for exactly the type of fight that’s going on and exactly what we predicted was going to happen,” he argues, pointing to three lessons.

One is the importance of drones that can navigate and strike autonomously, even in the face of heavy jamming of their signals and obscurants like metal-filled smoke clouds. Many existing drones have struggled with this, says Mr Luckey, because they lack “multi-modal” sensors, such as optical and infrared cameras, to substitute for GPS, and do not have enough built-in computing power to use the latest object-recognition algorithms.

Second is the observation that software is eating the battlefield. Imagine that Russia begins using a new type of jammer. Mr Luckey says that the data can be sent back immediately to generate countermeasures, which are then remotely installed on weapons at the front line without having to change any hardware. A recent study by the Royal United Services Institute, a think-tank in London, noted that drones in Ukraine needed to have their software, sensors and radios updated every six to 12 weeks to remain viable. Anduril, claims Mr Luckey, is “literally pushing new updates…every single night”.

His third lesson from Ukraine is that weapons must be built in vast quantities—and therefore cheaply. He laments that Russia produces shells and missiles far more cheaply than America does: “The US is now on the wrong side of an issue that we were on the right side of during the Cold War.” Anduril makes much of the fact that its production processes are modelled not on big aerospace firms, but automotive ones.

5. What It Really Takes to Build an AI Datacenter – Joe Weisenthal, Tracy Alloway, and Brian Venturo

Tracy (19:48):

Can I ask a really basic question? And we’ve done episodes on this, but I would be very interested in your opinion: why does it feel like customers, and AI customers in particular, are so, I don’t know if addicted is the right word, but so devoted to Nvidia chips? What is it about them specifically that is so attractive? How much of it is due to the technology versus, say, the interoperability?

Brian (20:18):

So you have to understand that when you’re an AI lab that has just started, and it’s an arms race in the industry to deliver products and models as fast as possible, it’s an existential risk to you if your infrastructure becomes your Achilles heel. Nvidia has proven to be a number of things. One is that they’re the engineers of the best products. They are an engineering organization first, in that they identify and solve problems, they push the limits, and they’re willing to listen to customers and help you solve problems and design things around new use cases. But it’s not just creating good hardware, it’s creating good hardware that scales and that they can support at scale.

And when you’re building these installations that are hundreds of thousands of components on the accelerator side and the InfiniBand link side, it all has to work together well. And when you go to somebody like Nvidia that has done this for so long at scale with such engineering expertise, they eliminate so much of that existential risk for these startups. So when I look at it and I see some of these smaller startups saying, “we’re going to go a different route”, I’m like, what are you doing? You’re taking so much risk for no reason here. This is a proven solution, it’s the best solution, and it has the most community support. Go the easy path, because the venture you’re embarking on is hard enough.

Tracy (21:41):

Is it like the old, what was that old adage? No one ever got fired for buying Microsoft. Or was it IBM? Yeah, something like that.

Brian (21:50):

The thing here is that it’s not even that nobody’s getting fired for buying the tried-and-true but slower-moving thing; here, nobody’s getting fired for buying the tried-and-true and best-performing, bleeding-edge thing. So I look at the folks that are buying and investing in other products almost as if they have a chip on their shoulder and are going against the grain just to do it.

Joe (22:14):

There are competitors to Nvidia that claim cheaper or more application-specific chips. I think Intel came out with something like that. First of all, from the CoreWeave perspective, are you all in on Nvidia hardware?

Brian (22:31):

We are.

Joe (22:32):

Could that change?

Brian (22:33):

The party line is that we’re always going to be driven by customers, right? And we’re going to be driven by customers to the chip that is most performant, provides the best TCO, and is best supported. Right now, and in what I think is the foreseeable future, I believe that is strongly Nvidia…

…Joe (23:30):

What about Meta with PyTorch and all their chips?

Brian (23:33):

So their in-house chips, I think they have those for very, very specific production applications, but they’re not really general purpose chips. And I think that when you’re building something for general purpose, where there has to be flexibility in the use case, then while you can go build a custom ASIC to solve very specific problems, I don’t think it makes sense to invest in those as a five-year asset if you don’t necessarily know what you’re going to do with it…

…Joe (25:31):

Let’s talk about electricity. This has become this huge talking point that this is the major constraint and now that you’re becoming more vertically integrated and having to stand up more of your operations, we talked to one guy formerly at Microsoft who said one of the issues is that there may be a backlash in some communities who don’t want their scarce electricity to go to data centers when they could go to household air conditioning. What are you running into right now or what are you seeing?

Brian (25:58):

So we’ve been very, very selective on where we put data centers. We don’t have anything in Ashburn, Virginia, and the Northern Virginia market I think is incredibly saturated. There’s a lot of growing backlash in that market around power usage, and just thinking about how you get enough diesel trucks in there to refill generators if there’s a prolonged outage. So I think there are some markets where it’s just like, okay, stay away from that. And when grids have issues (and that market hasn’t really had an issue yet), it becomes an acute problem immediately.

Just think about the Texas power market crisis back in, I think it was 2021, where the grid wasn’t really set up to be able to handle the frigid temperatures, and they had natural gas valves that were freezing off at the natural gas generation plants, which didn’t allow them to come online and produce electricity no matter how high the price was, right?

So there are going to be these acute issues that people and regulators are going to learn from to make sure they don’t happen again. And we’re siting our data centers in markets where we think the grid infrastructure is capable of handling them. And it’s not just a question of whether there’s enough power; there are other things too.

AI workloads are pretty volatile in how much power they use, and they’re volatile because every 15 or 30 minutes you effectively stop the job to save the progress you’ve made. It’s so expensive to run these clusters that you don’t want to lose hundreds of thousands of dollars of progress. So they take a minute to do what’s called checkpointing, where they write the current state of the job back to storage, and at that checkpointing time your power usage basically goes from a hundred percent to like 10%, and then it goes right back up again when it’s done saving.

So that load volatility on a local market will create either voltage spikes or voltage sags. A voltage sag is what causes a brownout, the kind we used to see a lot when people turned their air conditioners on. It’s thinking through, okay, how do I ensure that my AI installation doesn’t cause a brownout during checkpointing, just when people are turning their air conditioners on?

That’s the type of stuff that we’re thoughtful around: how do we make sure we don’t do this, right? And in talking to Nvidia, with their engineering expertise, they’re working on this problem as well, and they’ve solved this for the next generation. So it’s everything from: is there enough power there? What’s the source of that power? How clean is it? How do we make sure that we’re investing in solar and the like in the area, so that we’re not just taking power from the grid? And when we’re using that power, how is it going to impact the consumers around us?
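The checkpointing rhythm described above is easy to picture as a training loop; in this hypothetical sketch (ours, not CoreWeave’s), compute runs near full power between checkpoints and load drops while state is written out:

```python
import time

# Hypothetical stand-ins: in a real job these would saturate GPUs and write
# terabytes of model/optimizer state to storage, respectively.
CHECKPOINT_EVERY = 100   # steps between checkpoints

def train_step():        # runs near 100% power draw
    time.sleep(0.001)

def save_checkpoint():   # power drops to ~10% while state is written out
    time.sleep(0.01)

for step in range(1, 1001):
    train_step()
    if step % CHECKPOINT_EVERY == 0:
        # Thousands of accelerators pausing together is what shows up on
        # the local grid as a sudden load sag, then a spike on resume.
        save_checkpoint()
```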


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Microsoft, and Tencent. Holdings are subject to change at any time.

How Nvidia Passed Microsoft In Market Cap To Become The Most Valuable Public Company & More

Last week, on 19 June 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station. My friend Willie Keng, the founder of investor education website Dividend Titan, was hosting a segment for the radio show and we talked about a few topics:

  • Singapore Telecommunications’ and KKR’s joint-investment of S$1.75 billion in ST Telemedia Global Data Centres (Hints: Singtel’s share of the initial investment is S$400 million and should not cause Singtel to struggle financially in any way if it does not work out; ST Telemedia Global Data Centres has a portfolio of 95 data centres, but it is a private company so it’s hard to tell how much value Singtel will be getting in exchange)
  • The drivers behind Nvidia’s rise to surpass Microsoft in market cap to become the most valuable public company in the world (US$3.3 trillion market cap), and the potential risks and challenges the company might face (Hints: In my view, Nvidia’s rise is driven by the interplay of enthusiasm over AI and the company’s excellent business results; the risks faced by the company include potential pricing-pressure from a key supplier, and competing products from its main customers) 
  • How I identify value opportunities in the US stock market when market indices are at record levels (Hint: The way to look for opportunities is to look at a stock as a piece of a business and figure out the economic value of the underlying business)
  • How I manage investing risks (Hint: It starts with defining what risk is, and isn’t) 

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 23 June 2024:

1. The C Word – Jonathan Clements

ON SUNDAY MORNING, May 19, I was enjoying croissants and coffee with Elaine at the kitchen table, while watching the neighborhood sparrows, finches, cardinals and squirrels have their way with the bird feeder. All was right in our little world, except I was a little wobbly when walking—the result, I suspected, of balance issues caused by an ear infection.

It was going to be a busy week, and I figured that it would be smart to get some antibiotics inside me, even if visiting the urgent care clinic on Sunday might be more expensive than contacting my primary care physician on Monday and perhaps having to go in for an appointment.

Long story short, I ended the day in the intensive care unit of a local hospital, where the staff discovered lung cancer that’s metastasized to my brain and a few other spots. This, as you might imagine, has meant a few changes in my life, and there will be more to come.

I have no desire for HumbleDollar to become HumbleDeathWatch. But my prognosis is not good. I’ve had three brain radiation treatments and I started chemotherapy yesterday, but these steps are merely deferring death and perhaps not for very long. I’ll spare you the gory medical details. But as best I can gather, I may have just a dozen okay months ahead of me…

The cliché is true: Something like this makes you truly appreciate life. Despite those bucket-list items, I find my greatest joy comes from small, inexpensive daily pleasures: that first cup of coffee, exercise, friends and family, a good meal, writing and editing, smiles from strangers, the sunshine on my face. If we can keep life’s less admirable emotions at bay, the world is a wonderful place.

We can control risk, but we can’t eliminate it. I’ve spent decades managing both financial risk and potential threats to my health. But despite such precautions, sometimes we get blindsided. There have been few cancer occurrences in my family, and it’s never been something I had reason to fear. Chance is a cruel mistress.

It’s toughest on those left behind. I’ll be gone, but Elaine and my family will remain, and they’ll have to navigate the world without me. I so want them to be okay, financially and emotionally, and that’s driving many of the steps I’m now taking…

Life’s priorities become crystal clear. Even at this late stage, I believe it’s important to have a sense of purpose, both professionally and personally. I can’t do much about the fewer years, and I have no anger about their loss. But I do want the time ahead to be happy, productive and meaningful.

2. Central Banking from the Bottom Up – Marc Rubinstein

From his office a few blocks from the River Rhine in Dusseldorf, Theo Siegert had been scouring the world for investment opportunities. His research process had thrown up an under-appreciated banking stock headquartered across the border in Switzerland, and he started building a stake. Siegert knew a bit about the banking business – he was already a non-executive director of Deutsche Bank – but this stock was different. In his home country, as in many others, central banks tend not to trade freely on the stock exchange. Not so in Switzerland. Before long, Siegert had become the largest shareholder of the Schweizerische Nationalbank, the Swiss National Bank…

…It would be difficult for the Swiss National Bank to pursue its mandate – ensuring that money preserves its value and the economy develops favorably – if it also had to pander to the demands of private shareholders. So it limits private shareholders to voting just 100 of their shares – equivalent to a 0.1% position – leaving Siegert with 4,910 shares on which he is ineligible to vote. And it caps the dividend at 15 Swiss Francs a share, equivalent to a 0.4% yield at today’s price of 3,850 Swiss Francs. Of the remaining distributable net profit, a third accrues to the central government and two-thirds to regional cantonal governments.

As a result, the 10.4 kilograms of gold per share the bank carries and its 1.2 million Swiss Francs of overall net assets per share (at March valuations) remain out of grasp for private shareholders. At best, the stock is a safe haven, providing a preferred return in a strong currency, with no counterparty risk…

…The trouble was, 2022 wasn’t a good year for asset prices, leaving the Swiss National Bank highly exposed…

…Having earned 174 billion Swiss Francs cumulatively over the prior thirteen years, the Swiss National Bank lost 133 billion Swiss Francs in a single year in 2022, equivalent to 17% of GDP. It canceled its dividend for only the second time in over 30 years, signaling that there is risk in a 0.40% dividend after all.

And although asset markets recovered in 2023, strength in the Swiss Franc during the year – partly driven by the bank selling down some of its foreign assets – led to a record foreign exchange hit, triggering another overall loss (of 3 billion Swiss Francs) and another canceled dividend. Fortunately, 2024 has so far been better and, as of the first quarter, over 40% of the two-year loss has been recovered…

…In some cases, such large losses have eaten into capital, leaving many central banks operating on negative equity. As a private sector analyst, this looks frightening, but explicit government support makes it moot. Even before the current spate of losses, some central banks, including those in Chile, the Czech Republic, Israel and Mexico, carried on their business for years with negative capital. A study from the Bank for International Settlements concludes that none of them compromised on their ability to fulfill their mandate.

Because it maintains both a distribution reserve to carry forward some profit and a currency reserve that is not distributable, the Swiss National Bank did not slip into negative equity despite its large loss. At the end of 2023, its equity to asset ratio stood at 7.9% and by the end of March, it was up to 14.3%. That contrasts with the Federal Reserve, which has $43 billion of capital supporting $7.3 trillion of assets, not including almost a trillion dollars of unrealized losses.

But going forward, the business of central banking will grow more challenging. Not only do higher rates expose central banks to losses related to assets purchased in the past, they also make it difficult to generate net interest income on the current balance sheet. Seigniorage income still persists but the falling use of cash may erode it in future years. Meanwhile, commercial bank deposits – which form the bulk of a central bank’s liabilities (449 billion Swiss Francs in the case of the Swiss National Bank, compared with 76.3 billion Swiss Francs of banknotes) – are typically remunerated at market rates, which are higher than yields on legacy securities. Central banks are paying a floating rate while locked into a (lower) fixed rate on their assets.

The challenge is evident in a closer look at the Swiss National Bank. In the era of negative interest rates, it earned income on sight deposits it held on behalf of commercial banks. In 2021, the last full year of negative rates, that income was 1.2 billion Swiss Francs. Having raised rates to 1.50%, the relationship flipped and the central bank began paying interest to commercial banks, which in 2023 amounted to 10.2 billion Swiss Francs. With the yield on Swiss Franc-denominated securities still low, net interest income on the book came to a negative 8.7 billion Swiss Francs…
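Some quick back-of-the-envelope checks on the figures quoted in this excerpt:

```python
# Rough arithmetic behind figures quoted above (CHF and USD, from the text).
print(f"SNB dividend cap yield: {15 / 3850:.2%}")        # ~0.39%, the "0.4%"
print(f"Fed capital vs assets: {43e9 / 7.3e12:.2%}")     # ~0.59% of assets
print(f"2022 loss vs 13-year profits: {133 / 174:.0%}")  # ~76% given back
```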

…From its most recent high of 7,900 Swiss Francs at the beginning of 2022, the Swiss National Bank stock price has halved. Against its muted profit outlook, this is no surprise: The golden era of central bank profitability is likely over…

…For others, though, it’s fine. As the general manager of the Bank for International Settlements noted last year, “Unlike businesses, central banks are designed to make money only in the most literal sense.” Viewing central banks as stocks is instructive, but fortunately for the economy at large, there is more to them than that.

3. Reports of the petrodollar system’s demise are ‘fake news’ – here’s why – Joseph Adinolfi

Earlier this week, reports circulating widely on social-media platforms like X offered up a shocking proclamation: A 50-year-old agreement between the U.S. and Saudi Arabia requiring that the latter price its crude-oil exports in U.S. dollars had expired on Sunday.

The collapse of the accord would inevitably deal a fatal blow to the U.S. dollar’s status as the de facto global reserve currency, various commentators on X opined. Surely, financial upheaval lay ahead…

…But as speculation about an imminent end to the U.S. dollar’s global dominance intensified, several Wall Street and foreign-policy experts emerged to point out a fatal flaw in this logic: The agreement itself never existed…

…The agreement referred to by Donovan is the United States-Saudi Arabian Joint Commission on Economic Cooperation. It was formally established on June 8, 1974, by a joint statement issued and signed by Henry Kissinger, the U.S. secretary of state at the time, and Prince Fahd, the second deputy prime minister (and later king and prime minister) of Saudi Arabia, according to a report found on the Government Accountability Office’s website.

The agreement, as initially envisioned, was intended to last five years, although it was repeatedly extended. The rationale for such a deal was pretty straightforward: Coming on the heels of the 1973 OPEC oil embargo, both the U.S. and Saudi Arabia were eager to flesh out a more formal arrangement that would ensure each side got more of what it wanted from the other.

The surge in oil prices following the OPEC embargo was leaving Saudi Arabia with a surplus of dollars, and the Kingdom’s leadership was eager to harness this wealth to further industrialize its economy beyond the oil sector. At the same time, the U.S. wanted to strengthen its then-nascent diplomatic relationship with Saudi Arabia, while encouraging the country to recycle its dollars back into the U.S. economy…

…According to Donovan and others who emerged on social media to debunk the conspiracy theories, a formal agreement demanding that Saudi Arabia price its crude oil in dollars never existed. Rather, Saudi Arabia continued accepting other currencies – most notably the British pound – for its oil even after the 1974 agreement on joint economic cooperation was struck. It wasn’t until later that year that the Kingdom stopped accepting the pound as payment.

Perhaps the closest thing to a petrodollar deal was a secret agreement between the U.S. and Saudi Arabia reached in late 1974, which promised military aid and equipment in exchange for the Kingdom investing billions of dollars of its oil-sales proceeds in U.S. Treasurys, Donovan said. The existence of this agreement wasn’t revealed until 2016, when Bloomberg News filed a Freedom of Information Act request with the National Archives…

…Still, the notion that the petrodollar system largely grew organically from a place of mutual benefit – rather than some shadowy agreement established by a secret cabal of diplomats – remains a matter of indisputable fact, according to Gregory Brew, an analyst at Eurasia Group…

…Even more importantly as far as the dollar’s reserve status is concerned, the currency or currencies used to make payments for oil are of secondary importance. What matters most when it comes to the dollar maintaining its role as the world’s main reserve currency is where oil exporters like Saudi Arabia decide to park their reserves, Donovan said.

4. On the Special Relativity of Investment Horizons – Discerene Group

We believe that it is hard for corporate executives to think long-term if they are overwhelmingly rewarded for short-term results. In their paper, “Duration of Executive Compensation,” Radhakrishnan Gopalan, Todd Milbourn, Fenghua Song, and Anjan Thakor developed a metric for “pay duration.” It quantifies the average duration of compensation plans of all the executives covered by an executive intelligence firm’s survey of 2006-2009 proxy statements. The average pay duration for all executives across the 48 industries in their sample was just 1.22 years. We think that such performance-based compensation duration borders on the absurd for leaders of ostensibly multi-decade institutions buffeted by so many factors beyond their short-term control.
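As we read the paper’s metric, cash pay counts for zero years while equity grants are weighted by their vesting periods, so a rough version of the calculation looks like this hypothetical sketch:

```python
# A sketch of the pay-duration idea (our reading of the metric; numbers
# hypothetical): cash counts for zero years, equity for its vesting period.
def pay_duration(cash: float, grants: list[tuple[float, float]]) -> float:
    """grants: (value, vesting_years) pairs for stock/option awards."""
    total_pay = cash + sum(value for value, _ in grants)
    weighted_years = sum(value * years for value, years in grants)
    return weighted_years / total_pay

# $1m cash plus $1m of stock vesting over 3 years -> a 1.5-year duration.
print(pay_duration(cash=1_000_000, grants=[(1_000_000, 3.0)]))
```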

Perhaps unsurprisingly, incentives drive behavior. Executive-pay duration was longer in firms that spent more on R&D, firms with a higher proportion of independent board directors, and firms with better stock-price performance. Conversely, firms that offered shorter pay duration to their CEOs were more likely to boost short-term earnings with abnormal accruals of operating expenses.

In a survey of 401 US CFOs conducted by John Graham, Campbell Harvey, and Shiva Rajgopal, 80% of survey participants reported that they would decrease discretionary spending on R&D, advertising, and maintenance to meet earnings targets. 55.3% said that they would delay starting a new project to meet an earnings target, even if such a delay entailed a sacrifice of value. 96.7% prefer smooth to bumpy earnings paths, keeping total cash flows constant. One CFO said that “businesses are much more volatile than what their earnings numbers would suggest.” 78% of survey participants would sacrifice real economic value to meet an earnings target.

Likewise, Daniel Bergstresser and Thomas Philippon have found that the more a CEO’s overall compensation is tied to the value of his/her stock, the more aggressively he/she tends to use discretionary “accruals” to affect his/her firm’s reported performance…

…According to the World Economic Forum and International Monetary Fund, the average holding period of public equities in the US has fallen from >5 years in 1975 to ~10 months in 2022…

…Another effect of short-termism has been to encourage firms to shed or outsource functions formerly considered to be critical to businesses, including R&D, manufacturing, sales, and distribution, thus creating atomized and fragile slivers of businesses that nevertheless often command illogically lofty valuations. For example, in recent times, aerospace, pharmaceuticals, and software companies that do not attempt to sustain going-concern investments and instead seek to continually acquire other companies in order to hollow out such companies’ engineering, R&D, and/or sales/distribution teams — thereby eliminating all possible sources of competitive advantage — have been feted as “asset-light” and “high-ROIC” poster children of their respective industries.

5. An Interview with Terraform Industries CEO Casey Handmer About the Solar Energy Revolution – Ben Thompson and Casey Handmer

But let’s dig into this solar thing. What is driving the cost curve decrease that was forecasted in 2011 and that attracted you? And that has absolutely manifested over the last 10 years, famously exceeding every official projection for future costs. It always ends up being cheaper, faster than people realize. What is the driver of that?

CH: Well, so actually even Ramez Naam’s predictions were too conservative. No one, back then, predicted that solar would get as cheap as it has now. If you look at the DOE’s predictions in 2012 for how long it would take for us to get to current solar costs, their best guesses were 2150, and I don’t know if I’ll live that long.

So of course their entire roadmap for decarbonization didn’t include this, but now we have it. Can we use it? Yes, we sure as hell can, and we sure as hell should, because it’s a massive gift: we don’t have to de-grow in order to stop emitting pollution into the atmosphere. We can build our way out of the climate crisis by just increasing energy consumption and making energy cheaper for everyone.

In terms of how it gets cheaper, well, essentially, as I say, once the technology is inside the tent of capitalism, it’s generating value for people. It tends to attract wealth, it tends to attract capital, and that capital can be used to do things like hire manufacturing process engineers, and they’re very, very clever and they work very hard, particularly probably hundreds of thousands of engineers working at various solar factories in China right now. And sooner or later, they will find every possible configuration of matter necessary to force the price down. So same as with Moore’s law, essentially, we’ve just seen steady improvements.

Yeah, I was going to ask, is this an analogy to Moore’s law or is it actually the same sort of thing? Moore’s law is not a physical law, it is a choice by companies and individuals to keep pushing down that curve. Number one, what I get from you is that’s the same sort of concept here, but number two, are the actual discoveries actually similar to what’s going on?

CH: Yeah, actually to a large extent because it’s a silicon-based technology.

Right, exactly.

CH: There’s a lot of commonality there, but I think Moore’s law is not a law of nature, it’s what we call a phenomenological law, an emergent law. But basically all it says is there’s a positive feedback loop between cost reductions, increases in demand, increase in production, and cost reductions. So provided that the increase in demand, the induced demand as a result of the cost reduction, exceeds the cost reduction for the next generation of technology, you have a positive feedback loop. Otherwise, it’ll converge at some point, right? You’ll achieve maybe a 10x cost reduction and then it’ll stop, and we start to hit diminishing returns on all these technologies. But if you look at Moore’s law, it’s actually a series of maybe 20 or 30 different overlapping technology curves that kind of form this boundary of technology throughout time, and you see the same thing in solar technology if you really look under the hood and see what’s going on.

But yeah, the fundamental thing is there’s just enormous demand for solar at lower and lower prices and so manufacturers are justified in investing the capital they need in order to hit those prices and then the feedback mechanism keeps going. Solar manufacturing itself is a brutally competitive business which is both good and bad, it means like if you decide that you want to compete in solar, you don’t have to be at it for 50 years in order to compete. If you can capitalize, you can build a solar factory and if you’re smart enough and you work hard enough, in five years you can be in the top 20 manufacturers globally which is huge. Talking about billions of dollars of revenue every year just because everyone’s existing capital stock gets depreciated really quickly.
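The feedback loop Handmer describes can be captured in a toy model where cost falls with cumulative production (Wright’s law) and cheaper output induces more demand; the learning rate and elasticity below are our assumptions, chosen only to show the loop compounding:

```python
import math

# Toy model (assumptions ours, not Handmer's): costs fall with cumulative
# production (Wright's law), lower costs induce more demand, and more
# demand adds to cumulative production.
learning_rate = 0.20               # ~20% cost drop per doubling of output
b = -math.log2(1 - learning_rate)  # Wright's-law exponent, ~0.322
elasticity = 2.0                   # induced-demand strength (toy value)

cost, cumulative, demand = 1.0, 1.0, 1.0
for year in range(10):
    cumulative += demand                        # this year's installs
    new_cost = cumulative ** -b                 # learning curve
    demand *= (cost / new_cost) ** elasticity   # cheaper -> more demand
    cost = new_cost
    print(f"year {year}: cost {cost:.3f}, demand {demand:.2f}")
```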

Right. But to your point, it’s also commodity then, right? So how do you actually build a sustainable business?

CH: Well, picks and shovels essentially. So actually one of the things that we like to say at Terraform, and I’m jumping the gun slightly here, is that Terraform’s product essentially is a machine that converts solar power into oil and gas, so it bridges these two technology spans. It allows you to arbitrage essentially economically unproductive land that would otherwise just be getting hot in the sun. You throw some solar panels on there, that’s your computing hardware, but that’s not very useful on its own, right? I could hand you an H100, but it doesn’t do anything for you until you’ve got software to run on it; the software allows the raw computing power of that H100 to become useful for an end consumer…

Actually let’s run through some of the objections to solar power and then I think that will inherently get to some of these things. So we talked about the nuclear bit, what happens when the sun doesn’t shine?

CH: Yeah, so we’re actually seeing this in California right now. It creates a time arbitrage, right? If you have the ability to store power during the day and then release it during the night, you can make an incredible amount of money, and that’s why we’ve seen battery deployments in California, for example, increase by, I think, a factor of 10 in the last four years. The effect of that is it’s basically allowing people to transport power, or transport energy, through time, in much the same way that power lines, transmission lines, allow people to transport electricity through space.
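As a back-of-envelope illustration of that time arbitrage, here is the daily economics of a single grid battery. All prices and parameters are our own assumptions, not numbers from the interview.

```python
# Hypothetical day/night arbitrage for one grid-scale battery.
# All figures are illustrative assumptions.
capacity_mwh   = 100    # assumed battery energy capacity
round_trip_eff = 0.88   # assumed lithium ion round-trip efficiency
midday_price   = 20     # $/MWh, assumed solar-saturated midday price
evening_price  = 120    # $/MWh, assumed evening peak price

cost_to_charge   = capacity_mwh * midday_price                     # buy cheap at noon
revenue_at_night = capacity_mwh * round_trip_eff * evening_price   # sell dear at peak
daily_profit     = revenue_at_night - cost_to_charge
print(f"daily arbitrage profit: ${daily_profit:,.0f}")  # ~$8,560/day on these assumptions
```

The wider the midday-to-evening price spread, the better the battery pays, which is why solar-heavy grids like California’s pull in storage so quickly.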

So what is happening with the battery cost curve? Because if that’s sort of an essential component to make this happen-

CH: Same thing, same story.

For the same reasons?

CH: Exactly the same reasons, same story. Battery manufacturing is probably a little bit more complex and not quite as well-developed as silicon solar panel manufacturing, but we’re seeing year-on-year growth of battery manufacturing that’s well over 100%, so it’s actually growing faster than solar. The cost improvement is not quite as steep, but it’s easily 5% or 10% per year depending on which technology you’re looking at.

In 2021, for example, it was extremely confidently predicted that lithium ion batteries would never get under $100 per kilowatt hour at the cell level and the pack level, and of course Tesla was widely mocked for claiming that they would ultimately be able to get below $100 per kilowatt hour at the pack level. But then, I think in January this year or December last year, a Chinese manufacturer came out with a sodium ion battery cell at $56 per kilowatt hour, so that’s roughly a 2x reduction in cost on top of what is already considered cutting edge, and we just go down from there.

Now, sodium ion batteries might not be perfectly suited for all kinds of applications, but we know they’re cheaper to produce than lithium ion batteries, and they’re more than capable of doing the sort of load shifting required to store power during the day and then use it in the evening.

Are we in a situation already, or do we still have a bit to go, where the combined weighted cost of solar, which is much cheaper than nuclear as you talked about, plus batteries, which sound like they’re still more expensive now, is already lower?

CH: Yeah, so again, just look at the data, right: the market reveals its preference. CleanTechnica ran an article almost five years ago now showing that in Texas they were developing battery plants 10:1 compared to gas peaker plants. Texas runs its own grid under slightly different rules, where you can basically just build and connect, and then the grid can force you to curtail if they’ve got overproduction, but that typically means it’s a more liquid market. And even Texas, which is certainly not ideologically committed to solar, incidentally deployed more solar this year than California did.

Yeah, I was going to say.

CH: Texas also has the cheapest natural gas in the history of the universe, but they’re deploying battery packs over gas peaker plants 10:1…

…CH: But I just want to say there’s a conception that solar and batteries are only on the grid because they’re massively subsidized and they’re actually screwing everything up. That’s actually not true. Solar and batteries are what’s keeping the grid working right now; they’re the only thing providing expanded capacity.

The major challenge with additional solar development, particularly here in the States, is we now have this ten-year backlog or kind of development queue before you can connect your solar array to the grid, and the reason for that is the grid is old and it’s kind of overwhelmed, and it’s not able to transport all that power effectively to market.

Of course, one solution to this is just to build more grid. Another solution is to put some batteries on the grid. And the third solution is basically just to build batteries and solar wherever you can, which is actually working really well.

Then obviously what Terraform is doing is taking this otherwise unutilized capacity for solar development and pouring it into another aspect of our civilization’s absolutely unquenchable thirst for energy. Just to give you some hard numbers here: roughly a third of U.S. energy is consumed in the form of electricity, and about two-thirds in the form of oil and gas. So even if we successfully electrified huge amounts of ground transportation and also moved the entire electricity grid to, say, wind, solar, a bit of nuclear, some batteries, and maybe some geothermal, so completely decarbonized the grid, that would only deal with about a third of the economy. Two-thirds of the economy still runs on oil and gas, and that’s what Terraform is here to try and deal with.

One more question on the batteries.

CH: Yeah.

There’s always been, or the common refrain has been: we need a battery breakthrough, we need something completely new. You mentioned sodium ion, but even in terms of lithium ion, is your expectation going forward that the technology we have (sure, it’d be great to get a breakthrough) actually has way more improvement left in it, and that what we have will carry us a long way?

CH: Lithium ion batteries are already amazing. I mean, they’ve been around for about 35 years now, I think they were first commercialized for Panasonic camcorders or something, and even then they were extremely compelling. They pushed NiCAD [nickel-cadmium], the previous battery chemistry, out of the market almost instantaneously in numerous applications. They’re more than good enough.

You say, “Well, I’d like a battery breakthrough”. Why? “Because I want to run my supersonic electric jet off batteries.” Well, good luck with that. But for all ground transportation purposes, for static backups, for all these kinds of applications, not only is the technology already great, it’s got a 30-year history of manufacturing at scale. We know how to make it safe, we know how to make it cheap; it’s extremely compelling, and the numbers speak for themselves.

Battery manufacturing capacity expansion is not just happening for no reason; there’s enormous untapped demand for batteries. The way I like to think of it is: what’s your per capita lithium ion allocation? Maybe in 1995 you might have had a Nokia 3210, actually that would be after 1995, with a small lithium ion battery in it. So you’ve got 10 grams of lithium ion battery per person, and nowadays my family has two electric cars, and that’s probably most of our batteries.

Yeah, now we have laptops, we have computers.

CH: But in terms of the bulk mass, it’s like 400 kilograms per person or something for people to have electric cars, and then there’s a static backup battery in your house, and maybe your per capita share of the grid-scale batteries, and so on. I think it could easily scale to a couple of tons of lithium ion battery per person, particularly in the more energy-intensive parts of the United States.

Is that a large number? No, not really. I easily have a couple of tons per person in terms of steel just in my cars. I easily have probably 50 tons of concrete per person in terms of my built environment. I don’t actually think this is a particularly large number, I just think it’s unusual to see in such a short span of time some product go from the size of your thumb to the size of a large swimming pool, a large hot tub or something like that, in terms of your per capita allocation.
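A rough tally of that per-capita framing, using our own assumed masses rather than CH’s figures:

```python
# Hypothetical per-capita lithium ion battery mass for one household.
# All masses and the household size are illustrative assumptions.
household_kg = {
    "two EV packs": 2 * 450,             # assumed ~450 kg of pack per EV
    "home backup battery": 120,          # assumed wall-mounted static backup
    "phones and laptops": 2,
    "share of grid-scale storage": 100,  # assumed per-household share
}
PEOPLE = 4  # assumed household size
per_capita_kg = sum(household_kg.values()) / PEOPLE
print(f"~{per_capita_kg:.0f} kg of battery per person")  # ~280 kg, vs ~10 g in the late 1990s
```

That lands in the hundreds of kilograms per person, the same order of magnitude as the 400 kg figure above; more vehicles and a bigger grid share push it toward the couple-of-tons ceiling.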

Where are we at as far as availability of, say, lithium, or all the various rare minerals or rare earths that go into both solar and batteries?

CH: Yeah, I mean, again, I’m not a super expert on batteries, but the cure for high prices is high prices. Lithium is the third element in the periodic table, and it’s abundant in the Earth’s crust; there’s no shortage of it. You could argue there’s a shortage of lithium refining capacity in the United States, particularly if you’re concerned about strategic vulnerability.

It’s like the rare earth thing, right? Rare earths are not actually rare. It’s just the actual ability to refine them.

CH: They’re super common, and actually solar solves that. It turns out that you can separate rare earth elements electrocatalytically using cheap solar power, with significantly lower environmental impact and much lower cost than traditional refining, and I have some friends working on that.

It is certainly true that with batteries, people are concerned about cobalt. Actually, I have some cobalt here; here’s a cube of cobalt on my desk. Cobalt is a fabulous metal, but there’s not a huge amount of it necessarily. It’s not scarce like gold, but the mining situation is not quite sorted out. At the same time, almost all the major battery manufacturers use almost no cobalt right now, because they’re able to adapt their processes to optimize their costs towards the cheaper materials.

Capitalism solves this; we don’t have to worry too much about it. There are literally hundreds of thousands of chemists out there solving this problem right now. You don’t have to lose sleep over it; it is a completely commoditized production system…

What happens with old solar panels and old batteries? Obviously this is an objection to nuclear, the waste, though the good thing with nuclear waste is that there’s really not that much of it. We’re talking about the deployment of massive amounts of solar panels, all these batteries. Where are we at in 10, 20 years if this build-out happens? Is that a potential issue?

CH: I’m not too worried about it. And again, you need to look at your waste stream on a per capita basis. If we deployed as many solar panels as I want to, how many would you end up disposing of? I think if you ground them up, it’d be one garbage bag per person per year. For a suburban family, we probably have 1,000 garbage bags of trash every year that get landfilled.

But to talk about specifics: batteries, I think, are prime targets for recycling, because the materials in them are essentially, as Elon Musk once said, a super concentrated form of the raw materials you need to make batteries. There are multiple companies out there, including Redwood Materials, that are doing exclusively battery recycling, or battery component recycling, which is super obvious. That said, as battery production increases, even if you recycle all the old batteries, they will only supply 1% of the input stream or something. But I just don’t see a future where we have giant piles of batteries lying around.

Then as far as solar panels go, they’re like a layer of silicon dioxide, which is glass, a layer of silicon, which used to be glass, and then a layer of silicon dioxide and maybe some aluminum around the edges. Well, you can strip off the aluminum and recycle that trivially, we’ve been recycling aluminum for 100 years, and the glass is glass. You can grind it up and landfill it, it’s basically sand.

People will say, “Oh, what about cadmium or something?” Well, First Solar uses a cadmium telluride process to make their solar panels. But again, the amounts involved are trivial; they’re inert, they’re solid, they can’t run or leach or anything like that, so I’m not too worried about it. As far as the sort of trash that humans routinely landfill goes, solar panels would actually significantly increase the purity of our dumps, because they’re so inert compared to everything else…

…CH: One of the things I like to say is that oil and gas is so common in our civilization that it’s invisible, because every single thing that you see with your eyes is a surface that’s reflecting light, and it’s usually pigmented or made of plastic, and that pigment or plastic is made of oil or natural gas. So unless you go outside and look at a tree, which is ultimately made of a kind of plastic also derived from sunlight and air, it’s extremely difficult to lay your eyes on anything that’s not made of hydrocarbons. So obviously, we’re extremely bullish about growth.

Now it could be the case that there’s zero growth. It could be the case that the oil and gas industry just motors along at about $8 trillion of revenue per year, which is about $1 billion per hour. So just in the time we’ve been talking, it’s $1 billion, which is just insane. But I actually think that once we unlock these cheaper forms of hydrocarbons that it will promote substantial growth, particularly in the energy-intensive industries.
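As a quick sanity check on that per-hour figure:

$$\frac{\$8 \times 10^{12} \text{ per year}}{8{,}760 \text{ hours per year}} \approx \$0.9 \times 10^{9} \text{ per hour} \approx \$1 \text{ billion per hour}$$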

So just to underscore the vision here: I get really, really fired up about this, because when I think of aviation and how amazing it is, we’ve only had it as a species for about a hundred years, and it’s only really been something we can enjoy in jet transport for maybe 50 years. And the people who routinely fly on aircraft, and I know that you’re one of them, and myself, it’s probably only 50 million people on Earth who’ve ever had the experience of flying in a jet more than, I don’t know, 10 times in their life. Wouldn’t it be incredible if that number were 500 million, or 5 billion? But getting there from here on fossil fuels emits a lot of CO₂, and it also requires a huge amount of fuel. Aviation currently consumes about 2% of the world’s oil and gas just to fly less than 1% of the world’s population around, so obviously we need to bring on a new source of fuel.

So when you think, well, what is a nice climate-positive version of aviation? Is it the European model, where we force airlines to make customers pay for carbon sequestration or carbon credits or something like that, which is either extremely expensive or extremely fraudulent or both, and in any case makes aviation more expensive and less accessible, more exclusive? Or do we say, “Why don’t we solve both these problems at once, and just bring online an enormous new supply of high-quality, cheap natural gas for the future liquefied natural gas powered supersonic aircraft?”

It just happens to be carbon-neutral at the same time, so you don’t have to worry about CO₂ emissions; it’s not polluting the atmosphere with new CO₂ from the crust. And instead of Boeing producing 500 aircraft a year, Boeing and maybe a few more startups can be producing 10,000 aircraft per year to service this kind of massive explosion in demand driven by economic expansion. That is a sick vision, that is so cool, we should absolutely do this as quickly as we can.

I think whether or not Terraform plays a huge role in this process, and I certainly intend for it to (currently we’re leading it), the economics make it inevitable that we’re going to switch over to synthetic fuel sooner or later. And when we do, it’s going to get really, really cheap, because we’re running it off solar power, and when it gets really, really cheap, we’re going to do amazing aviation and other energy applications, increase manufacturing, maybe do a little bit of geo-engineering on the side to keep things in check, increase water supply in dry areas, and so on. Why wait until 2060? We could have this done in 2040 if we just apply ourselves the right way and find the right business model…

How does it work? Give the non-physicist overview of how Terraform works.

CH: Yeah, sure. So from a customer’s perspective on the outside, essentially what a Terraformer does is it allows you to build your own oil and gas well in your backyard, regardless of the fact that you don’t own a drill rig, and in fact you don’t live anywhere near where oil and gas occurs naturally, which is again pretty cool. But how does it work under the hood? Well, it consumes electricity and most of that electricity gets used locally.

Actually, I should state that the Terraformer itself sits in the solar array, and that’s to reduce the cost of transmission of electricity, which would be absolutely prohibitive in this case. The electricity gets used to capture CO₂ from the air and to split water into hydrogen and oxygen. We throw the oxygen away, like trees do, and we take the hydrogen and react it in a classical, old-school chemical reactor with the CO₂ to produce methane and water. Then we can separate the water out, because it condenses at a much higher temperature than the methane, and we’re left with methane plus a little bit of leftover CO₂ and hydrogen and a tiny bit of water vapor. That’s natural gas, right?
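Spelled out, the process CH describes is the classic electrolysis-plus-Sabatier route; the equations below are our rendering of it, not something recited in the conversation:

$$\begin{aligned}
4\,\mathrm{H_2O} &\rightarrow 4\,\mathrm{H_2} + 2\,\mathrm{O_2} && \text{(electrolysis; the oxygen is vented)} \\
\mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} && \text{(Sabatier methanation)} \\
\mathrm{CO_2} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{O_2} && \text{(net: the reverse of burning methane)}
\end{aligned}$$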

Actually, when you get natural gas out of the ground, if you did have a drill rig and you did live in a place where natural gas occurs, you drill a hole in the ground and gas comes out. Well, now you’ve got to build a wellhead and a bunch of other stuff that’s actually really complicated, and you might have a blowout, and then what comes out of the ground is between 10 and 80% natural gas, with a bunch of other contaminants on top of that which have to be removed before you can sell it.

We don’t have that problem. What we produce is the pure product. It’s really compellingly elegant, the way we do this. There’s no geology risk, and it’s plug-and-play: once you plug it in, it just generates a predictable amount of gas every day for however long the system lasts, which is most likely measured in decades.

In this case, you don’t have a battery capital cost; I presume it only runs when the sun’s out, right?

CH: Yeah, that’s absolutely correct. And I’ll say for anyone who’s considering doing a hardware tech startup, well, there is basically a recipe that we’ve stumbled upon for taking any existing industry and then applying it to solar power and getting the benefit of that extremely cheap power.

The first is you have to get the CapEx way, way down, because your utilization is low: you’re only using your plant maybe 25% of the time, so you have to get the cost down by at least a factor of four. Then on top of that, you also have to make it compatible with the sun coming up and going down, so time variability, which is difficult but not impossible. We have many processes in our everyday lives that we routinely throttle up and down, so you understand this intuitively. But it sounds impossible, of course: “I just want a chemical reactor that’s 1/10 the size and 1/4 the cost, and I can ramp it up and down”.

Well, the way you make this work is you just use more power. So you say, “Well, I don’t care about efficiency quite as much because my power is so cheap”, and that’s what makes it easy. But if you can do this, then you have —

You have to change that core assumption. Almost every invention today is about increasing the efficient use of power, and the whole point of solar is, “What if we assume power is basically infinite, but bounded by time? Then what would we do?”

CH: It’s like how cycles in your computer or on your cell phone are basically free…
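CH’s factor of four falls out of a simple amortization argument. If the capital cost per unit of product scales as plant CapEx C divided by utilization u, then a plant that runs only while the sun shines (u of roughly 0.25) matches a round-the-clock plant (u of roughly 1) only when

$$\frac{C_{\text{solar}}}{0.25} \le \frac{C_{\text{baseload}}}{1.0} \quad\Longrightarrow\quad C_{\text{solar}} \le \frac{C_{\text{baseload}}}{4}.$$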

Desalination seems like a potentially massive win here, and very pertinent to the American West, for example. It’s the same idea: if you assume energy is infinite, we’re not short of water on Earth, we’re short of water without salt.

CH: That’s right, yeah. I mean, there are some places where it’d be relatively difficult to transport even fresh water from the ocean, but in California that’s not the case. California is at the end of the Colorado River, which is declining, and California of course has senior water rights; we take about 5 million acre-feet of water per year.

So unlike Terraform, which is definitely developing new proprietary technology in-house (it’s quite exciting), with solar desalination you don’t need any new technology. You just go and build a plant, essentially with stuff you can buy off the shelf. How much would it cost to build a plant that could substitute for 100% of California’s water extraction from the Colorado River, essentially doubling Southern California’s water supply, while at the same time allowing you to fix the Salton Sea, set up a massive light-metals industry, and a bunch of other things?
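As a sketch of the scale involved: the 5 million acre-feet figure is CH’s; the reverse-osmosis energy intensity and the solar capacity factor below are our assumptions, not numbers from the conversation.

```python
# Rough sizing of solar-powered seawater desalination to replace California's
# Colorado River draw. Energy intensity and capacity factor are assumptions.
ACRE_FOOT_M3 = 1233.5      # cubic meters per acre-foot
intake_af    = 5_000_000   # acre-feet per year (CH's figure)
kwh_per_m3   = 3.5         # assumed seawater reverse-osmosis energy intensity
solar_cf     = 0.25        # assumed solar capacity factor

volume_m3  = intake_af * ACRE_FOOT_M3       # ~6.2e9 m^3 per year
energy_twh = volume_m3 * kwh_per_m3 / 1e9   # ~22 TWh per year
avg_gw     = energy_twh * 1e3 / 8760        # ~2.5 GW of average draw
solar_gw   = avg_gw / solar_cf              # ~10 GW of nameplate solar
print(f"{energy_twh:.0f} TWh/yr -> ~{solar_gw:.0f} GW of solar nameplate")
```

On these assumptions, doubling Southern California’s water supply is on the order of a 10 GW solar project, large but not outlandish by today’s deployment rates.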


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Tencent, and Tesla. Holdings are subject to change at any time.