What We’re Reading (Week Ending 18 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 18 August 2024:

1. This Is How Treasury Really Funds Trillions of Dollars of US Debt – Joe Weisenthal, Tracy Alloway, and Amar Reganti

Tracy (08:40):

So when I think about debt issuance, I used to cover corporate credit, and so I think about, you know, being a treasurer at a large multinational like an Apple or a Microsoft or whatever, and the decision making process there where, you know, if I decide there are favorable market conditions, I might go out and work with my bankers and decide to issue some debt. What is the difference between being a treasurer at a big company versus being US Treasury?

Amar (09:19):

Oh, a vast difference, right? And I too started on the other side, as a corporate portfolio manager in the bond market. You’d look at companies coming to the market; they either needed cash or were being opportunistic. For the US government and for the debt management office, it’s very different. You are always going to be issuing at various points on the curve, whether or not at that point it’s, what I would call, tactically a good thing. And you know, this goes into that regular and predictable issuance cycle. And the point there, and this is how we get to cost, which is again different from how corporates measure cost, is that by being consistent, by helping this ecosystem thrive, you’re going to create a liquidity premium, right? Because there is this regular and predictable nature to your issuance cycle, people understand they’re not going to be surprised; the availability of securities is going to be well calibrated to what the environment needs.

And when I said environment or ecosystem, I meant the entire ecosystem. You want to serve as broad and diversified a group of investors as possible. And that includes people who will actively short your securities, right? Because that provides a supply outside of auction cycles for people to buy and also helps stimulate repo markets and so on. So you want to be sure that you aren’t attempting to use pure price on what’s on the yield curve as a point on why or how you should issue.

Now, I want to be a little careful. There is a quantitative framework that Treasury has and it’s a model that, you know, a number of people collaborated on. Credit goes to people like Brian Sack, Srini Ramaswamy, Terry Belton, Kris Dawsey, a number of others who built this model. And it sort of gives a sense of, okay, historically, based on a number of inputs, where has Treasury benefited the most by issuing. But that’s like an important guidepost, but the more important part is the qualitative feedback that Treasury hears from its dealers, from investors, from central bank reserve managers who hold vast amounts of Treasuries. And that all also feeds in, along with the [Treasury] Borrowing Advisory Committee (TBAC), into making issuance decisions…

…Joe (16:05):

Also, Tracy, just to add onto that, we have an inverted yield curve. So, theoretically, if you wanted to borrow at the low, you know, one could say ‘Oh look, it’s cheaper to borrow at the long end, why are you selling all these bills when actually the cheapness is at the long end?’

Tracy (16:18):

So this is the very essence of the current controversy. What is happening — and I know you’re not at Treasury now — but what is happening when the Treasury comes out with that kind of decision?

Amar (16:28):

Okay. So the first kind of framework you want to think about is, and you had asked this initially, is how do they make these directional issuance decisions? Well, the first thing is that Treasury does look at long-term averages of where it is in its weighted average maturity, right? Like when you add all these securities together, what’s sort of the average maturity? And historically, it’s been around 60 [or] 61 months. Treasury is well above that right now. It’s around 71 months. So it’s actually pretty, pretty high up.
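The weighted average maturity Amar cites can be thought of as a par-weighted average of months to maturity across all outstanding securities. A minimal sketch, with entirely hypothetical amounts and maturities:

```python
# Weighted average maturity (WAM): the outstanding-amount-weighted average
# of months to maturity. All figures below are hypothetical.

def weighted_average_maturity(tranches):
    """tranches: list of (outstanding_amount, months_to_maturity) pairs."""
    total = sum(amount for amount, _ in tranches)
    return sum(amount * months for amount, months in tranches) / total

# A toy mix: bills (6 months), notes (60 months), bonds (240 months), in $bn
portfolio = [(6_000, 6), (12_000, 60), (4_000, 240)]
print(weighted_average_maturity(portfolio))  # 78.0 months
```

Shifting issuance toward bills pulls this number down; terming out debt pushes it up, which is how Treasury can end up well above its historical ~60-month average.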

Tracy (16:57):

Which, just to be clear, most people would say that’s a good thing, right? You want to term out your debt?

Amar (17:02):

Maybe if you’re a corporate treasurer you might want to do that, but there’s a lot of arguments that you actually don’t want to term out your debt.

Tracy (17:10):

Oh, interesting.

Amar (17:10):

So, the first is that yes, the curve is inverted. If you decided to move issuance that way, chances are you could uninvert the curve. I’m not saying that’s definitive; it depends on how much, or how likely, you know, what else is happening in markets. The second thing is that, as in a previous episode, I thought Josh Younger explained it really well: you could roll these three-month bills, you know, all the way out to 10 years, or you could issue a 10 year.

And if you’re sort of risk neutral, there’s no savings, right? Or there’s no gain or savings. It just means that forwards get realized and it’s effectively the same thing. So when Treasury does that, you’re effectively making a tactical rates call that 10 year rates or 30 year rates won’t go substantially lower. That’s the first thing. The second thing is that the sheer amount that you can put on the 10 and 30 year is going to be less than what you can put in the bills market. Now that’s just absent anything that the Federal Reserve is doing. That’s just generally true, right? Like it’s just a broader and bigger, it tends to be a broader and bigger market.
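Amar’s risk-neutrality point, echoing Josh Younger, is that if forward rates are realized, rolling short bills costs the same as locking in the long rate today. A toy two-period sketch, with hypothetical rates:

```python
# If forwards are realized, rolling bills and issuing long cost the same,
# by construction. Rates are hypothetical, for illustration only.

r1 = 0.05    # one-period rate today
f12 = 0.03   # one-period rate, one period forward, implied by the curve

# The curve-consistent two-period spot rate
two_period_spot = ((1 + r1) * (1 + f12)) ** 0.5 - 1

cost_rolling = (1 + r1) * (1 + f12)         # roll bills, forwards realized
cost_locking = (1 + two_period_spot) ** 2   # lock the two-period rate today

print(abs(cost_rolling - cost_locking) < 1e-12)  # True: same financing cost
```

The savings from issuing bills under an inverted curve only materialize if short rates actually fall by less than the forwards already imply, which is the "tactical rates call" Amar describes.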

Joe (18:19):

The shorter end.

Tracy (18:20):

Yeah, there’s more demand for shorter-dated securities.

Amar (18:22):

Yeah. But the third thing is that what Treasury really is trying to do is look around across the ecosystem and say, ‘Hey, where should we be feeding securities to over time if we are kind of taking a risk neutral sort of approach to this? That we’re not extrapolating what forward curves are going to be. We don’t know any more than a typical rate strategist or someone. We know what we don’t know about how market rates evolve over time. So because of that, our job is to help issue securities to where the biggest pools of capital are, because that’s how you issue risk-free securities and keep up the health and demand for, and liquidity of, your asset class.’ So the biggest pool of money now, in particular, is still at the front end, right? The amount of reserves that have been created is really dramatic.

2. Investing success comes down to one word: focus – Chin Hui Leong

Buffett does the same thing. On his table, he keeps a tray labelled, in capital letters, “TOO HARD”, a strategically placed reminder that most of the opportunities which cross his desk belong in that tray.

Now pause and think about that for a moment. Buffett is widely lauded for his investment smarts and long investing experience. In other words, it would be ridiculous to suggest that he has trouble understanding any company.

But Buffett knows better than that. Despite his ability, he is smart enough to know that there are many companies out there that he does not understand and should not touch. We would be wise to do the same…

…There’s an unfortunate adage in news broadcasting: If it bleeds, it leads. Said another way, negative headlines tend to get almost all of the attention while positive news gets buried in the process.

It’s true in investing as well. When Facebook reported a loss of a million daily active users (DAUs) in early 2022, the reaction from news outlets and analysts was deafening, with some even suggesting Facebook was on its last legs as a social network.

But since reporting the loss, the social network has gained over 180 million DAUs by 2023. Do you hear about these positive gains in the media? No, you don’t.

This example tells you one thing: You have to be proactive in searching for positive trends within the company.

And that means looking past its current problems and homing in on the parts which are not said out loud. For instance, at the end of 2021, Meta was far from a dying business. In fact, the social media company had nearly US$48 billion on its balance sheet after generating US$39 billion in free cash flow during the year.

3. The Seven Virtues of Great Investors – Jason Zweig

Curiosity is the first investing virtue. It’s what enables you to find and develop all the others…. Ordinary investors are afraid of what they don’t know, as if they are navigating the world with those antique maps that labeled uncharted waters with the warning “here be dragons.” Great investors are afraid of what they do know, because they realize it might be biased, incomplete or wrong. So they never deviate from their lifelong, relentless quest to learn more…

…without independence, investors are doomed to mediocrity. What’s your single most valuable asset as an investor? Your mind! If you let other people do your thinking for you, you’ve traded away your greatest asset — and made your results and your emotions hostage to the whims of millions of strangers. And those strangers can do the strangest things…

…Making a courageous investment “gives you that awful feeling you get in the pit of the stomach when you’re afraid you’re throwing good money after bad,” says investor and financial historian William Bernstein of Efficient Frontier Advisors in Eastford, Conn.

4. Integration and Android – Ben Thompson

Yesterday Google announced its ninth iteration of Pixel phones, and as you might expect, the focus was on AI. It is also unsurprising that the foundation of Osterloh’s pitch at the beginning of the keynote was about integration. What was notable is that the integration he focused on actually didn’t have anything to do with Pixel at all, but rather Android and Google:

We’re re-imagining the entire OS layer, putting Gemini right at the core of Android, the world’s most popular OS. You can see how we’re innovating with AI at every layer of the tech stack: from the infrastructure and the foundation models, to the OS and devices, and the apps and services you use every day. It’s a complete end-to-end experience that only Google can deliver. And I want to talk about the work we’re doing to integrate it all together, with an integrated, helpful AI assistant for everyone. It changes how people interact with their mobile devices, and we’re building it right into Android.

For years, we’ve been pursuing our vision of a mobile AI assistant that you can work with like you work with a real life personal assistant, but we’ve been limited by the bounds of what existing technologies can do. So we’ve completely rebuilt the personal assistant experience around our Gemini models, creating a novel kind of computing help for the Gemini era.

The new Gemini assistant can go beyond understanding your words, to understanding your intent, so you can communicate more naturally. It can synthesize large amounts of information within seconds, and tackle complex tasks. It can draft messages for you, brainstorm with you, and give you ideas on how you can improve your work. With your permission, it can offer unparalleled personalized help, accessing relevant information across your Gmail Inbox, your Google calendar, and more. And it can reason across personal information and Google’s world knowledge, to provide just the right help and insight you need, and it’s only possible through advances we made in Gemini models over the last six months. It’s the biggest leap forward since we launched Google Assistant. Now we’re going to keep building responsibly, and pushing to make sure Gemini is available to everyone on every phone, and of course this starts with Android.

This may seem obvious, and in many respects it is: Google is a services company, which means it is incentivized to serve the entire world, maximizing the leverage on its costs, and the best way to reach the entire world is via Android. Of course that excludes the iPhone, but the new Gemini assistant isn’t displacing Siri anytime soon!

That, though, gets at why the focus on Android is notable: one possible strategy for Google would have been to make its AI assistant efforts exclusive to Pixel, which The Information reported might happen late last year; the rumored name for the Pixel-exclusive assistant was “Pixie”. I wrote in Google’s True Moonshot:

What, though, if the mission statement were the moonshot all along? What if “I’m Feeling Lucky” were not a whimsical button on a spartan home page, but the default way of interacting with all of the world’s information? What if an AI Assistant were so good, and so natural, that anyone with seamless access to it simply used it all the time, without thought?

That, needless to say, is probably the only thing that truly scares Apple. Yes, Android has its advantages over iOS, but they aren’t particularly meaningful to most people, and even for those that care — like me — they are not large enough to give up on iOS’s overall superior user experience. The only things that drive meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie would be good enough to drive iPhone users to switch, there is at least a path to where it does exactly that.

Of course Pixel would need to win in the Android space first, and that would mean massively more investment by Google in go-to-market activities in particular, from opening stores to subsidizing carriers to ramping up production capacity. It would not be cheap, which is why it’s no surprise that Google hasn’t truly invested to make Pixel a meaningful player in the smartphone space.

The potential payoff, though, is astronomical: a world with Pixie everywhere means a world where Google makes real money from selling hardware, in addition to services for enterprises and schools, and cloud services that leverage Google’s infrastructure to provide the same capabilities to businesses. Moreover, it’s a world where Google is truly integrated: the company already makes the chips, in both its phones and its data centers, it makes the models, and it does it all with the largest collection of data in the world.

This path does away with the messiness of complicated relationships with OEMs and developers and the like, which I think suits the company: Google, at its core, has always been much more like Apple than Microsoft. It wants to control everything, it just needs to do it legally; that the best manifestation of AI is almost certainly dependent on a fully integrated (and thus fully seamless) experience means that the company can both control everything and, if it pulls this gambit off, serve everyone.

The problem is that the risks are massive: Google would not only be risking search revenue, it would also estrange its OEM partners, all while spending astronomical amounts of money. The attempt to be the one AI Assistant that everyone uses — and pays for — is the polar opposite of the conservative approach the company has taken to the Google Aggregator Paradox. Paying for defaults and buying off competitors is the strategy of a company seeking to protect what it has; spending on a bold assault on the most dominant company in tech is to risk it all.

I’ve referenced this piece a few times over the last year, including when Osterloh, the founding father of Pixel, took over Android as well. I said in an Update at the time:

Google has a very long way to go to make [Google’s True Moonshot] a reality, or, frankly, to even make it a corporate goal. It will cost a lot of money, risk partnerships, and lower margins. It is, though, a massive opportunity — the maximal application of AI to Google’s business prospects — and it strikes me as a pretty big deal that, at least when it comes to the org chart, the Pixel has been elevated above Android.

In fact, though, my takeaway from yesterday’s event is the opposite: Android still matters most, and the integration Google is truly betting on is with the cloud.

5. Signature Bank – why the 36,000% rise in 7 months? – Swen Lorenz

In case you don’t remember, Signature Bank had gotten shipwrecked in March 2023, alongside the other infamous “crypto-deposit banks”, Silvergate Bank and First Republic Bank. Its stock had to be considered worthless, at least by conventional wisdom.

However, between October and December 2023, the share price suddenly rose from 1 cent to USD 1.60. Buyers were hoovering up shares, sometimes several million in a single day.

The stock then doubled again and reached USD 3.60, and with heavy trading…

…On 12 March 2023, New York authorities closed the bank. Because of its size, the US government considered a collapse a systemic risk, which enabled the FDIC to step in and guarantee all deposits after all. Whereas deposit holders were going to be made whole, those investors who held equity or bonds issued by Signature Bank were going to lose their entire investment. Within one week, the majority of the bank’s deposits and loans were taken over by New York Community Bancorp (ISIN US6494451031, NYCB), which is the usual way to dispose of a failed banking operation…

…Not all of Signature Bank’s assets were transferred to New York Community Bancorp. When the bank closed its doors, it had USD 107bn of assets. Of that, only USD 47bn were transferred to New York Community Bancorp – basically, the part of the bank’s portfolio that was deemed a worthwhile business. A portfolio with a remaining USD 60bn of loans would remain in receivership, and it was earmarked for a gradual unwinding.

In September 2023, the FDIC sold another USD 28bn of the bank’s assets to Flagstar Bank.

The remaining USD 32bn of loans comprised mortgages made against commercial real estate and rent-regulated apartment buildings in New York – asset classes that are not exactly in favour with investors.

However, the FDIC knew that it was going to release more value from these remaining loans if it allowed them to continue to maturity. The government entity needed help, though, to get the job done, and it had to deliver some evidence that letting this portfolio run off over time was indeed the best way to minimise losses and maximise proceeds.

To this end, the FDIC put these remaining loans into joint venture entities. Minority stakes in these entities were then offered to private equity companies and other financial investors…

…These financial investors paid the equivalent of 59-72 cents on the dollar…

…For the FDIC to be made whole on the remaining USD 32bn portfolio of loans, it needs to recover 85% of the outstanding amounts. If the recovery rate of these remaining USD 32bn of loans comes out higher than 85%, there will be money left over to go towards holders of the bank’s bonds, preference shares, and ordinary shares.

How could any external investor come up with an estimate for the likely recovery rate?…

…It’s all down to the default rate and the so-called severity.

The default rate is the percentage of loans where the debtor won’t be able to make a repayment in full.

Severity is the percentage loss suffered when a debtor is not able to make a repayment in full. E.g., a debtor may not be able to pay back the entire mortgage but just 75%. In that case, the severity is 25%…

…The resulting estimate of an 8% loss on the loan portfolio means that 92% of the loan book will be recovered. Given that the FDIC’s claims only make up 85% of the loan book, this means there will be money left over to go towards the holders of Signature Bank’s bonds, preference shares, and ordinary shares.
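The loss arithmetic the author walks through is simply default rate times severity, compared against the FDIC's 85% claim. A minimal sketch; the 32% default rate and 25% severity below are hypothetical inputs chosen only so that their product matches the article's 8% loss estimate:

```python
# Expected loss = default rate x severity (loss given default).
# The inputs are hypothetical, picked to reproduce the article's 8% figure.

def portfolio_loss(default_rate, severity):
    return default_rate * severity

loss = portfolio_loss(default_rate=0.32, severity=0.25)  # 0.08
recovery = 1 - loss                                      # 0.92 of the book

fdic_claim = 0.85                  # FDIC is made whole at 85% recovery
residual = recovery - fdic_claim   # ~0.07 left for bonds, prefs, equity
print(round(residual * 32e9 / 1e9, 2))  # ~2.24 ($bn, on the USD 32bn book)
```

Any recovery above the 85% threshold flows to the bank's security holders, which is why the estimate of the default rate and severity is the whole ballgame for these investors.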

This money is not going to be available immediately since most loans run out in 5-7 years. This gives the managers of these loan portfolios time to work towards maximising how much debtors can repay…

…The FDIC is first in line to receive the money that comes in. According to Goodwin’s estimate, the FDIC’s claims will be paid off in full at the end of 2027.

From that point on, the bonds, preference shares, and ordinary shares will have a value again, as they entitle the holder to a share in the remaining leftover proceeds.

For the ordinary shares, Goodwin estimates USD 600m to be left over, which will become available in about five years’ time. When discounting this sum by 20% p.a., Signature Bank has a fair market cap of USD 223m.
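The discounting step can be checked directly. Note that USD 600m discounted at 20% p.a. for exactly five years gives roughly USD 241m; the article's USD 223m implies a horizon a bit over five years (about 5.4 years, an inference on our part), consistent with "about five years' time":

```python
# Present value of the estimated residual for ordinary shareholders.

def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

print(round(present_value(600, 0.20, 5)))     # 241 (USD m, at exactly 5 years)
print(round(present_value(600, 0.20, 5.43)))  # 223 (USD m), the article's figure
```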


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 11 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 11 August 2024:

1. Ted Weschler Case Study – DirtCheapStocks

To set the stage – Weschler’s Valassis purchases started in 2008 and ended in 2010.

Markets were in free fall in the back half of 2008. The S&P 500 traded down 12% in the first six months of the year. This was already a blow to investors. But things were about to get much worse. In the second half of the year, the S&P would trade down another 26%. 2008 was the worst year for the S&P since the 1930s. Investors were scared. The country was frozen…

…There was blood in the streets, no doubt, but market participants were getting the investment opportunity of a lifetime. Weschler bought the bulk of his Valassis shares in the 4th quarter of 2008.

Valassis was a direct mail marketing company. It made the coupons that come in the daily paper along with the other marketing material sent directly to your mailbox. Junk mail, basically.

But this junk mail has a reasonably high conversion rate. There’s a reason it shows up in our mailbox daily.

In early 2007, Valassis had purchased ADVO, the direct mail business. The purchase of ADVO doubled the size of the company, taking revenues from $1 billion to $2 billion. ADVO was acquired for $1.2B, financed almost entirely by debt. Prior to the ADVO acquisition, Valassis operated with only ~$115MM of net debt. Debt grew 10x overnight. The company levered up – big time…

…Valassis stock was destroyed in late 2008. Shares traded as high as $16.80 in the second quarter. At the lows of the fourth quarter, shares dipped to $1.05. A 94% drop…

…Weschler began buying in the fourth quarter of 2008. The stock price at that time ranged from $1.05 to $8.73. I don’t know exactly what he paid, but the stock fell hard on volume. Weschler was able to purchase 6.24% (or 3,000,000 shares) of the business in the quarter. We’ll assume he paid ~$3/share…

…Valassis was trading at a ridiculously cheap price. This underscores how afraid investors were in the moment. At some point in the fourth quarter, shares dropped as low as $1.05 – meaning someone paid less than one times free cash flow for this business.

Shares were cheap on a market cap basis, but when considering the heavy debt burden, they looked a lot more expensive…

…The 8.25% Senior Notes weren’t due until 2015. So at the time Weschler was buying, he would’ve known the company had ~7 years before that debt was to be repaid/refinanced. The 2015 notes required no scheduled principal repayment prior to maturity…

…Term loan B matured in 7 years, and required minimal principal payments…

…Long story short, the business had 7 years of cash flow generation before it would need to reconsider its debt situation. EBIT, even in the depths of the recession, was enough to cover interest expense. At the end of 2008, Valassis was in compliance with all of its covenants…

…Here’s the cash flow statement from 2009 – 2011:…

  • …Operating cash flow is consistently positive.
  • There is minor capex, leaving loads of excess cash.
  • All free cash flow was used for debt repayment and stock repurchases…

…In February 2014, Harland Clarke Holdings acquired Valassis for $34.05/share.

Weschler’s 2008 purchases would’ve compounded at a rate of 52.5% for a little less than 6 years…
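The 52.5% figure is a straightforward compound-annual-growth-rate calculation from the assumed ~$3 entry to the $34.05 buyout. A sketch; the 5.75-year holding period is our assumption, consistent with "a little less than 6 years":

```python
# CAGR from entry price to exit price over the holding period.

def cagr(begin, end, years):
    return (end / begin) ** (1 / years) - 1

# ~$3 assumed cost (Q4 2008) to the $34.05 buyout (Feb 2014), ~5.75 years
print(round(cagr(3.00, 34.05, 5.75) * 100, 1))  # ~52.6, close to the article's 52.5%
```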

…We don’t know exactly what Weschler was thinking when he bought his shares. But I’d guess the combination of an extremely cheap price, favorable debt repayment schedule and consistent cash flow were the deciding factors.

2. What Bill Ackman Got Wrong With His Bungled IPO – Jason Zweig

This week, Bill Ackman, the hedge-fund billionaire who has 1.4 million followers on X, had to pull the plug on his new fund before it could launch its initial public offering.

That’s because he’d organized his proposed Pershing Square USA, or PSUS, as a closed-end fund…

…Ackman, who has styled himself as a crusader for the investing public, could have tried using his new vehicle to shatter the status quo on fees. Instead, it would have cemented the status quo.

The fund’s 2% annual management fee, which Ackman was going to waive for the first year, would have been competitive at a hedge fund—but far more costly than at market-tracking ETFs.

Then there was the load, or sales charge, of 1.5% for individual investors and somewhat lower for institutions—an irksome cost of admission that people no longer have to pay on most other assets…

…If demand is high, closed-end shares can trade at a premium, or more than the sum of their parts known as net asset value. Usually, they trade at a discount, or less than what the portfolio is worth. The lower a fund’s return and the higher its expenses, the deeper the discount will tend to go.

According to the Investment Company Institute, more than 80% of closed-end funds recently traded at discounts. Stock funds were trading at almost 10% less than their net asset value; bond funds, about 9% below their NAV.
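The premium/discount mechanics Zweig describes reduce to a one-line ratio of market price to net asset value. A minimal sketch with hypothetical numbers:

```python
# Premium (positive) or discount (negative) of a closed-end fund to its NAV.

def premium_discount(price, nav):
    return price / nav - 1

# A fund trading at $9.05 against a $10.00 NAV sits at a 9.5% discount,
# in line with the ~10% average discount on stock funds cited above
print(round(premium_discount(9.05, 10.00) * 100, 1))  # -9.5
```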

Typically, a closed-end fund doesn’t issue new shares after its IPO; nor does it redeem, or buy your shares back. Instead, you have to buy from, or sell to, another investor. That means new buyers don’t increase the fund’s capital, and sellers don’t decrease it…

…That’s why the firms that run them call closed-end funds “evergreen assets,” or permanent capital.

Over the decades, a few great investors have used that structure to enrich their shareholders rather than to fill their own pockets…

…Those examples suggest to me that Ackman missed an opportunity to innovate.

It was institutions, not individual investors, that balked at the potential discount on his fund.

What if Ackman instead had bypassed the investment bankers and their 1.5% sales load, offering the fund directly to individuals only, commission-free? And what if he’d set a reasonable management fee of, say, 0.5%?

Such an innovative, self-underwritten deal is likely feasible, several securities lawyers say, but would have been more expensive for Ackman than a conventional IPO…

…In the past few weeks, the New York Stock Exchange and Cboe Global Markets’ BZX Exchange separately proposed rule changes that would eliminate the requirement for closed-end funds to hold annual meetings for shareholders.

Good luck trying to get a lousy fund to hire a new manager if you can’t even vote your disapproval without somehow convening a special meeting.

Boaz Weinstein, founder of Saba Capital Management, an activist hedge-fund manager that seeks to narrow the discounts on closed-end funds, calls the exchanges’ rule proposals “some of the most shocking disenfranchisement efforts against closed-end fund shareholders in over 100 years.”

3. How to Build the Ultimate Semiconductor for LLMs – Joe Weisenthal, Tracy Alloway, Reiner Pope, and Mike Gunter

Joe (17:30):

I know there’s always this sort of cliché when talking about tech, they’re like, oh, Google and Facebook, they can just build this and they’ll destroy your little startup. They have an infinite amount of money, except that doesn’t actually seem to happen in the real world as much as people on Twitter expect it to happen.

But can you just sort of give a sense of maybe the business and organizational incentives for why a company like Google doesn’t say, “oh, this is a hundred billion dollar market, NVIDIA’s worth three and a half trillion or $3 trillion. Let’s build our own LLM-specific chips.” Why doesn’t that happen at these large hyperscaler companies that presumably have all the talent and money to do it?

Mike (18:13):

So Google’s TPUs are primarily built to serve their internal customers, and Google’s revenue for the most part comes from Google search, and in particular from Google search ads; Google search ads is a customer of the TPUs. It’s a relatively difficult thing to say that, for the hundreds of billions of dollars of revenue that we’re making, we’re going to make a chip that doesn’t really support that particularly well and instead focuses on this, at this point, unproven-in-terms-of-revenue market.

And it’s not just ads, but there are a variety of other customers. For instance, you may have noticed how Google is pretty good at identifying good photos and doing a whole variety of other things that are supported in many cases by the TPUs.

Reiner (19:06):

I think one of the other things too that we see in all chip companies in general, or companies producing chips, is that because producing chips is so expensive, you end up in this place where you really want to put all your resources behind one chip effort, because the thinking is that there’s a huge amount of return on investment in making this one thing better rather than fragmenting your efforts. Really what you’d like to do in this situation, where there’s a new emerging field that might be huge or might not, but it’s hard to say yet, is maybe spin up a second effort on the side and have a skunk works, see how it works.

Joe (19:37):

Yeah that’s right. That would be amazing just to let Reiner, or just let the two of you go have your own little office somewhere else.

Reiner (19:44):

Yeah. Organizationally, it’s often challenging to do, and we see this across all companies. Every chip company really has essentially only one mainstream chip product that they’re iterating on and making better and better over time…

…Joe (21:49):

Let’s get to MatX. Tell us the product that you’re designing and how it fundamentally will differ from the offerings on the market, most notably from Nvidia.

Reiner (22:01):

So we make chips, and in fact racks and clusters, for large language models. So when you look at NVIDIA’s GPUs (you already talked about all of this), there’s the original background in gaming, this brief movement in Ethereum, and then even within AI, they’re doing small models and large models. So what that translates to, and you can think of it as the rooms of the house or something, is that they have a different room for each of those different use cases, so different circuitry in the chip for all of these use cases. And the fundamental bet is that if you say, look, I don’t care about that. I’m going to do a lousy job if you try and run a game on me, or I’m going to do a lousy job if you want to run a convolutional network on me, but if you give me a large model with very large matrices, I’m going to crush it. That’s the bet that we’re making at MatX. So we spend as much of our silicon as we can on making this work. There’s a lot of detail in making all of this work out, because you need not just the matrix multiplication, but all of the memory bandwidths and communication bandwidths and the actual engineering things to make it pan out. But that’s the core bet.

Tracy (23:05):

And why can’t Nvidia do this? So Nvidia has a lot of resources. It has that big moat as we were discussing in the intro, and it has the GPUs that are already in production and it’s working on new ones. But why couldn’t it start designing an LLM focused chip from scratch?

Mike (23:23):

Right. So you talked about NVIDIA’s moat, and that moat has two components. One component is that they build the very best hardware, and I think that is the result of having a very large team that executes extremely well and makes good choices about how to serve their market. They also have a tremendous software moat, and both of these moats are important to different sets of customers. They have a very broad, deep software ecosystem based on CUDA that allows it…

Tracy (23:59):

Oh yeah, I remember this came up in our discussion with Coreweave.

Mike (24:03):

Yeah. And so that allows customers who are not very sophisticated, who don’t have gigantic engineering budgets themselves, to use NVIDIA’s chips and be efficient at it. So the thing about a moat is that not only does it in some sense keep other people out, it also keeps you in. So insofar as they want to keep their software moat, their CUDA moat, they have to remain compatible with CUDA, and compatibility with CUDA requires certain hardware structures. So Nvidia has lots and lots of threads. They have a very flexible memory system. These things are great for being able to flexibly address a whole bunch of different types of neural net problems, but they all cost in terms of hardware, and they’re not necessarily the choices – in fact, not the choices – that you would want to make if you were aiming specifically at an LLM. So in order to be fully competitive with a chip that’s specialized for LLMs, they would have to give up all of that. And Jensen himself has said that the one non-negotiable rule in our company is that we have to be compatible with CUDA.

Joe (25:23):

This is interesting. So the challenge for them of spinning out something totally different is that it would be outside the family. So it’s outside the CUDA family, so to speak. And

Tracy (25:35):

Meanwhile, you already have PyTorch and Triton waiting in the wings, I guess…

…Joe (39:00):

Tell us about customers, because I’ve heard this – we’re all trying to find some alternative to Nvidia, whether it’s to reduce energy costs, or just reduce costs in general, or to be able to access chips at all, since not everyone can get them. There are only so many chips getting made. But when you talk to theoretical customers: A, who do you imagine as your customers? Is it the OpenAIs of the world? Is it the Metas of the world? Is it labs that we haven’t heard of yet that could only get into this if there were more focused, lower-cost options? And then B, what are they asking for? What do they say – you know what, we’re using NVIDIA right now, but we would really like X or Y in the ideal world?

Reiner (39:48):

So there’s a range of possible customers in the world. The way that we see it – or a way you can divide them up, and how we choose to do it – is by the ratio of engineering time they’re putting into their work versus the amount of compute spend they’re putting in. So the ideal customer, in general, for a hardware vendor who’s trying to make the absolute best but not necessarily easiest-to-use hardware, is a company that is spending a lot more on their computing power than on engineering time, because then it’s a really good trade-off: maybe I spend a bit more engineering time to make your hardware work, but I get a big saving on my computing costs. So a company like OpenAI would obviously be a slam dunk.

There are many more companies as well. Among the companies that meet this criterion of spending many times more on compute than on engineering, there’s actually a set of maybe 10 or 15 large language model labs that are not as well known as OpenAI – think Character.AI, Cohere, Mistral, and many other companies like that.

So the common thing that we hear from those companies – all of which are spending hundreds of millions of dollars on compute – is: I just want better FLOPS per dollar. That’s actually the single deciding factor. And that’s primarily the reason they’re deciding today on NVIDIA’s products rather than some of the other products in the market: the FLOPS per dollar of those products is the best you can buy. When you give them a spec sheet, the first thing they’re going to look at is just: what’s the most floating point operations I can run on this chip? And you can rule out 90% of products right there on the basis of, okay, it just doesn’t meet that bar. But then after that, you go through the more detailed analysis of saying, okay, I’ve got these floating point operations, but is the rest going to work out? Do I have the bandwidths and the interconnect? But for sure the number one criterion is that top-line FLOPS.

Joe (41:38):

When we talk about delivering more FLOPS per dollar, what are you aiming for? What is the current benchmark FLOPS per dollar? And then are we talking, like, can it be done 90% cheaper? What do you think is realistic in terms of coming to market with something meaningfully better on that metric?

Reiner (41:56):

So NVIDIA’s Blackwell, in their FP4 format, offers 10 petaFLOPS in that chip, and that chip sells for ballpark $30,000 to $50,000, depending on many factors. That is about a factor of two to four better than the previous generation NVIDIA chip, which was Hopper. Part of that factor comes from going to lower precision, from 8-bit to 4-bit; in general, lowering precision has been one of the best ways to improve the FLOPS you can pack into a certain amount of silicon. And some of it also comes from other factors, such as cost reductions that NVIDIA has been deploying. So that’s the benchmark for where NVIDIA is now. You need to be at least integer multiples better than that to compete with the incumbent – at least two or three times better on that metric, we would say. But then of course, if you’re designing for the future, you have to compete against the next generation after that too. So you want to be many times better than the future chip, which isn’t out yet. That’s the thing you aim for.
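Reiner’s benchmark lends itself to a quick back-of-the-envelope calculation. The sketch below simply turns the figures quoted above – roughly 10 petaFLOPS of FP4 compute at a ballpark price of $30,000 to $50,000 per chip – into a FLOPS-per-dollar range and applies the “two to three times better” bar he mentions. The prices are the transcript’s rough numbers, not official list prices, and the variable names are our own.

```python
# FLOPS-per-dollar arithmetic using the rough figures from the
# transcript (illustrative, not official pricing).
BLACKWELL_PETAFLOPS = 10                  # FP4 throughput per chip, per the transcript
PRICE_LOW, PRICE_HIGH = 30_000, 50_000    # ballpark price range in USD

def petaflops_per_dollar(petaflops, price_usd):
    return petaflops / price_usd

# Best and worst case for the incumbent, depending on realized price
blackwell_best = petaflops_per_dollar(BLACKWELL_PETAFLOPS, PRICE_LOW)
blackwell_worst = petaflops_per_dollar(BLACKWELL_PETAFLOPS, PRICE_HIGH)

# Reiner's bar: a challenger needs to be at least 2-3x better on this
# metric, measured here against the incumbent's best case
challenger_target = 3 * blackwell_best
```

On these numbers, Blackwell delivers somewhere between 0.0002 and roughly 0.00033 petaFLOPS per dollar, so a challenger aiming for 3x would need on the order of 0.001 petaFLOPS per dollar – which is why Reiner stresses designing against the next generation, not the current one.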

Joe (42:56):

Is there anything else that we should sort of understand about this business that we haven’t touched on that you think is important?

Mike (43:03):

One thing, given that this is Odd Lots: I think the reason that Sam Altman is going around the world talking about trillions of dollars of spend is that he wants to move the expectations of all of the suppliers up. As we observed in the semiconductor shortage, if the suppliers are preparing for a certain amount of demand – and, famously, the auto manufacturers canceled their orders as a result of COVID and then found that demand was much, much larger than they expected – it takes a very long time to catch up. A similar thing happened with NVIDIA’s H100. TSMC was actually perfectly capable of keeping up with demand for the chips themselves, but the chips for these AI products use a very special kind of packaging called CoWoS, which puts the compute chips very close to the memory chips and hence allows them to communicate very quickly.

And the capacity for CoWoS was limited, because TSMC built it with a particular expectation of demand, and when the H100 became such a monster product, their CoWoS capacity wasn’t able to keep pace. So the supply chain tends to work really well if you predict accurately, and if you predict badly on the low side, you end up with these shortages. On the other hand, because the manufacturing companies have very high CapEx, they’re fairly loath to predict badly on the high side, since that leads to spending a bunch of money on capital equipment that they’re unable to recover.

4. The Impact of Fed Rate Cuts on Stocks, Bonds & Cash – Ben Carlson

It can be helpful to understand what can happen to the financial markets when the Fed raises or lowers short-term rates.

The reason for the Fed rate cut probably matters more than the rate cut itself.

If the Fed is cutting rates in an emergency fashion, like they did during the Great Financial Crisis, that’s a different story than the Fed cutting because the economy and inflation are cooling off…

…Most of the time stocks were up. The only times the S&P 500 was down substantially a year later occurred during the 1973-74 bear market, the bursting of the dot-com bubble and the 2008 financial crisis.

It’s been rare for stocks to be down three years later and the market has never been down five years after the initial rate cut.

Sometimes the Fed cuts because we are in or fast approaching a recession, but that’s not always the case…

…Average returns have been better when no recession occurs but the disparity isn’t as large as you would assume.

“Most of the time the stock market goes up, but sometimes it goes down” applies to Fed rate cuts just like it does to every other point in time.

Obviously, every rate cut cycle is different. This time it’s going to happen with stocks at or near all-time highs, big gains from the bottom of a bear market, a presidential election, and the sequel to Gladiator coming out this fall.

5. Enough! This Is How the Sahm Rule Predicts Recessions (Transcript Here) – Joshua Brown and Claudia Sahm

Brown (02:11): I’ve been around for a long time and I had not heard about the Sahm Rule, but apparently it’s something that you created in 2019. The first person to mention it to me was Nick Koulos, which he did on the show. And I guess there’s a lot of relevance in talking about it now, because we’re trying to figure out if the Fed is staying too tight and if the good economy we’ve had is going to start slipping away before the Fed can start easing – and that’s why everyone’s talking about the Sahm Rule.

I want to try to explain it very succinctly, and you tell me if I’m missing anything about how the Sahm Rule works that’s important to the discussion. The Sahm Rule is a recession indicator you came up with about five years ago. Basically what you’re doing is calculating the three-month moving average of the national unemployment rate – so not just last month’s print; you take the last three prints and average them – and you’re comparing that to the lowest three-month moving average for the unemployment rate that we’ve had over the last 12 months. Do I have that? Okay, you’re nodding.

Sahm (03:28): That’s the formula. We’re there.

Brown (03:29): Okay. If the current three-month average is 0.5 percentage points or more above the lowest three-month average from the last 12 months, that would signal the early stages of a recession – and we could talk about how early – but that would be the “trigger”. And I’m so excited to have you on today because, as of the last employment report we got, the three-month average is now just barely more than 0.5 percentage points above the lowest three-month average that we’ve had, and therefore the Sahm Rule is in effect…
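The formula Brown describes is simple enough to sketch in a few lines of Python. This is an illustrative implementation of the rule as stated in the conversation, not official code; the function name and the sample unemployment series are invented for the example.

```python
def sahm_rule_gap(unemployment_rates):
    """Gap between the latest 3-month moving average of the unemployment
    rate and the lowest 3-month moving average over the last 12 months.

    `unemployment_rates` is a monthly series in percent, oldest first,
    with at least 14 observations. A gap of 0.50 percentage points or
    more is the Sahm Rule's recession trigger.
    """
    # 3-month moving averages, one per month from the 3rd month on
    ma3 = [sum(unemployment_rates[i - 2:i + 1]) / 3
           for i in range(2, len(unemployment_rates))]
    current = ma3[-1]
    lowest_12m = min(ma3[-12:])  # lowest 3-month average in the last 12 months
    return current - lowest_12m

# Hypothetical series: a year of 3.5% unemployment, then a drift upward
rates = [3.5] * 12 + [3.7, 3.9, 4.1, 4.3]
gap = sahm_rule_gap(rates)   # 4.1 - 3.5 = 0.6
triggered = gap >= 0.5       # True: the rule would be "in effect"
```

On real data one would feed in the official monthly unemployment rate series; the published real-time Sahm Rule indicator follows the same arithmetic.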

…Brown (06:30): So according to your work the Sahm Rule, I guess on a back test, would have accurately signalled every actual recession we’ve had since the 1970s, without the false positives that can occur outside of recessions. This is in some ways similar to my friend Professor Cam Harvey, who was trying to figure out why the inverted yield curve has been so accurate in predicting recessions and so far has not had a false positive either. Some would say recent history has been the false positive, but he would argue “I’m still on the clock.” But it’s interesting that you created this for fiscal policy while working at the Fed.

Sahm (07:20): So as one of the analysts who covered consumer spending in 2008, I was trying to understand what consumers were doing with, say, their rebate checks or later tax credits. The Fed works around the edges: in the staff’s forecast there are estimates of what fiscal policy does to the economy, and the Fed can take that into consideration when it does monetary policy. It may seem a little counterintuitive, but understanding consumers is a very important piece of the health of the economy. But I will say, having watched that episode made me want to help improve the policy for next time. The Sahm Rule was part of a policy volume in early 2019 on automatic stabilizers of all kinds – it was just one piece of it. It comes from the back test; I’m looking at history. Since then, it did pass the test of 2020, calling that recession with flying colours, but anyone could have done that. Yet there are some very unusual circumstances in this cycle, and – in my opinion – I do not think the US economy is in a recession, despite what the Sahm Rule is stating right now…

…Sahm (13:23): There are two basic reasons the unemployment rate goes up. One, there’s weakening demand for workers, and the unemployment rate goes up. That’s very consistent with recessionary dynamics. That’s bad and it builds; there’s momentum. That’s where the Sahm Rule historically gets its accuracy from. The other reason the unemployment rate can increase is an increase in the supply of workers. In general, the unemployment rate can get pushed around. It’s even worse right now for the Sahm Rule, because early in the pandemic we had millions of workers drop out of the labour force, just walk away. Then, because they didn’t all come back as quickly as, say, customers did, we ended up with labour shortages. The unemployment rate got pushed down, probably unsustainably, because we just didn’t have enough workers. Then in recent years we’ve had a surge in immigration, and we had a good labour market, so people were coming in from the sidelines. So we’ve had two rather notable changes in the labour supply.

I think what we’ve learned – and this is a broad lesson – is that anytime we have really abrupt, dramatic changes, the adjustments can take a long time. So now, as we have these immigrants coming in, this is solving the labour shortage. That is a very good thing; having a larger labour force, particularly as we have many people ageing out, helps keep us growing. That’s a good thing. But in the interim, while they’re still searching for jobs, things have slowed down some in terms of adding jobs. That causes the unemployment rate to drift up. Now, if it’s just about that supply adjustment, it’s temporary, and at the end of it it’s a good thing, because we’ve got more workers. But we’ve had recessions when there were expansions in the labour force, like in the 1970s, so I don’t want to act like everything is okay just because we have more workers now. It’s just that the Sahm Rule – and again, as you point out, it’s right at the cusp of its historical trigger – has a lot going on under the hood…

…Sahm (19:52): The Sahm Rule itself, even in real time, has false positives. And then there’s just this bigger conversation that history might not repeat. The one thing on Barry’s point is that there are cases – you have to go further back in history – where we go into a recession with a low, or lower, unemployment rate than now. It is not recent. And we have a mix – I talked a lot about the labour supply; that’s definitely in the mix. I spent some time looking at that 0.5. When we get across that threshold, what do the contributions from the different types of unemployed look like? You can be unemployed because you were laid off, which Barry mentioned, because you’re a new entrant to the workforce, or because you left a job. We see quite a bit of variation in the contributions. It is true that right now there’s more of the entrants – the new job seekers, the people coming back to the labour force. They’re a bigger contributor to getting across that 0.5 threshold than in most recessions. But you can go back to the ’70s, when the labour force picture was not that different. So it’s hard to pull it out. I’m not in the ironclad camp – a recession is not a given – nor do I read the history that tightly. And yet I think there are real risks. As with Barry, when people were saying in 2022 “a recession is coming” or “we need a recession,” I was adamant – I’ve never had a recession call in this whole time. I was kind of close when we got to Silicon Valley Bank, but I have not had a recession call. And part of what I could say in 2022 was: look at the labour market, look at consumers. We are still in a position of strength – but much less so now. And the momentum is not good.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 August 2024)


Here are the articles for the week ending 04 August 2024:

1. No More EU Fines for Big Tech – John Loeber

The EU takes an aggressive stance toward American Big Tech. Citing concerns about privacy and monopolization, it has enacted countless regulations, and fined Google and Meta billions of dollars. In the last six months, EU regulators have kicked this motion into overdrive:

  • They adopted the Digital Markets Act (DMA), which they used to immediately open investigations into Apple, Google, and Meta.
  • They adopted the AI Act to constrain AI applications.
  • They slapped Apple with a $2B fine.
  • In July alone, they opened antitrust proceedings against Nvidia, antitrust investigations into Google, and threatened to fine Twitter over seemingly trivial Blue Checks.

The posture is clear: the EU is not satisfied with the bloodletting-to-date and is raising its demands from Big Tech. The AI Act and DMA both may assess penalties as a percentage of global turnover, and are so broad in scope that European regulators are emboldened to pursue tech giants for practically limitless amounts of money…

…The EU’s framework goes so far as to assess fines on a percentage of global turnover:

  • GDPR: up to 4% of global turnover (top-line revenue);
  • AI Act: up to 7% of global turnover;
  • DMA: up to 20% of global turnover;

These just keep getting more expensive! The idea of issuing fines based on global revenue for local violations of law is a brazen stretch of legal convention:

  1. Penalties must be commensurate with damages;
  2. Courts may assert their authority only over subjects in their jurisdiction.

The legal convention would be for the EU to assess fines based on EU revenues, not global revenues. Permitting fines based on global revenue would set disastrous precedent: if the EU can set fines based on global revenue, why can’t any other country? Any other big market with a little bit of leverage could try to extract a slice of the pie. Why shouldn’t India, which has ~500M Meta users, start fining Meta for 10% of its global revenue? Why shouldn’t Brazil do the same? Or Nigeria? And why should they keep their fines to Big Tech? Why don’t they fine Exxon Mobil for a percentage of global revenue?
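To see why the basis of the fine matters so much, here is a toy calculation. The revenue figure is a made-up round number, the 7% EU share echoes the article’s later Apple estimate, and the caps are the GDPR/AI Act/DMA percentages listed above.

```python
# Toy comparison: fines capped as a share of global turnover vs. the
# conventional basis of turnover in the fining jurisdiction.
global_revenue = 400e9   # hypothetical $400bn global turnover
eu_share = 0.07          # EU ~7% of revenue, per the article's Apple example
eu_revenue = global_revenue * eu_share   # $28bn

caps = {"GDPR": 0.04, "AI Act": 0.07, "DMA": 0.20}

for law, cap in caps.items():
    fine_global_basis = cap * global_revenue  # the EU framework's approach
    fine_eu_basis = cap * eu_revenue          # jurisdiction-bound convention
    print(f"{law}: ${fine_global_basis / 1e9:.0f}bn (global basis) "
          f"vs ${fine_eu_basis / 1e9:.1f}bn (EU-only basis)")
```

On these assumptions, even the DMA’s 20% cap applied to EU-only revenue ($5.6bn) is smaller than the GDPR’s 4% cap applied to global revenue ($16bn) – which is exactly the leverage the article objects to.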

Permitting this scope would set terrible precedent, and it has no legal legitimacy. Not only must Big Tech refuse to comply, but the US must reject it as a matter of national interest and international order…

…The EU might account only for 7% of Apple’s global revenue. 7% is still a big market, but Apple is by no means dependent on it. Especially considering the exceptionally high level of operational headache in complying with European requirements, if it comes to be Apple’s view that the fines-as-percentage-of-global-revenue cannot be avoided, then it may be rational to pull out…

…The EU doesn’t have true local alternatives. If it pursues Nvidia on antitrust grounds, does it really want Nvidia GPUs to be replaced by, say, Huawei GPUs? Does it want Facebook to be replaced by VK? If EU regulators are motivated by concerns over unaccountable, outside influences, I might suggest that American Big Tech is still their best option…

…Never forget: these Big Tech products are, for the most part, cloud services. They can simply be turned off remotely, from one minute to the next. Hypothetically, if Big Tech were to coordinate, play true hardball, and shut off EU-facing products, the EU economy would grind to a halt overnight. Imagine the fallout from hundreds of millions of people suddenly not having email anymore. Without AWS, GCP, Azure, etc. things simply wouldn’t work. We live in a digital world; the dependencies are everywhere. It’d be like when OPEC constrained oil supply in the 70s, except percolating much more deeply and instantaneously throughout economies.

Of course, it’s very unlikely for Big Tech to withdraw from the EU entirely. That would be drastic. The reality is subtler, and we’re seeing it play out right now: Meta is not making its multimodal Llama models available in the EU. Apple isn’t going to bring Apple Intelligence to the EU. These are important, state-of-the-art products. If you believe at all that AI is promising or important, then EU businesses and consumers will suffer from not having access to them…

…Maybe multimodal Llama AI is not important for EU consumers today. But what if the best radiology AI assistant gets built on Llama AI — and EU patients can’t have access? Or an EU business needs the best-in-class AI to remain globally competitive? What if Apple Intelligence can automatically call an ambulance for you if you have a heart attack — but not in the EU?…

…The EU must compete or cooperate. Either one is fine. But it would be ill-advised to continue the current regime of low-grade economic harassment of its nominal allies by syphoning off fines and imposing obnoxious requirements.

2. 4 Key Lessons Learnt in Legacy Planning – Christopher Tan

In the plans that clients want us to put in place for them, one of the common requests is for structures that prevent their children from squandering their inheritance. This is not limited to young beneficiaries, but extends to beneficiaries who can be as old as in their 50s!

The lack of trust is largely due to many of these children not needing to work for the good life that they have been enjoying from a young age…

…But it is not that parents do not know this. No sensible parent starts off their parenting journey with the intention of spoiling their children to such an extreme. It usually begins in a small way, unintentionally, incrementally, and by the time they realise what they might have done, it is too late.

When we give our children too many good things in life, especially when they are still young, we deny them the opportunity to learn the importance of delayed gratification and we do not allow them to foster resilience and independence, which can cause them to have a self-entitlement mentality…

…When I first started my firm in 2001, this new “baby” began to consume me and took time away from my wife and two young children.

Well-meaning friends warned me not to chase wealth at the expense of my family. “But I am not even trying to be richer. I am just trying to survive!” I retorted. Finally, it came to a point in my life where I did not have a relationship with my family.

Thankfully, I realised it early enough to turn around. Otherwise, I would have lost my family…

…In all my work with my clients, I have realised that behind every legacy and estate plan, there is a message of love. Unfortunately, this is lost in the legal documents and structures that are put in place.

I have always encouraged my clients to share their gifting plans with their beneficiaries. Share not just the “what and how” of the plan but also share the “why”.

But as Asians, some of us may not be so willing to communicate our emotions so openly, especially before our passing. In this case, one can consider using the Letter of Wishes (LOW).

The LOW is a non-legally binding document by the settlor to guide the protectors and trustees on how they wish their assets to be managed. But instead of writing it like an instruction manual, write it like a love letter to your loved ones.

3. Nike: An Epic Saga of Value Destruction – Massimo Giunco

A month ago. June 28th, 2024. Nike Q2 24 financial results. $25bn of market cap lost in a day ($70bn in 9 months). 130 million shares exchanged in the stock market (13 times the average number of daily transactions). The lowest share price since 2018, –32% since the beginning of 2024.

It wasn’t a Wall Street session. It was the judgement day for Nike.

The story started on January 13th, 2020, when John Donahoe became CEO of Nike, replacing Mark Parker. Together with Heidi O’Neill, who became President of Consumer, Product and Brand, he began immediately to plan the transformation of the company.

A few months later, after his first tour around the Nike world, the CEO announced – via email – his decisions (using the formula “dear Nike colleagues, this is what you asked for…”):

1) Nike will eliminate categories from the organization (brand, product development and sales)

2) Nike will become a DTC-led company, ending the wholesale leadership.

3) Nike will change its marketing model, centralizing it and making it data driven and digitally led…

Clearly, one important support came from the brand investments. The marketing org. dramatically changed its demand creation model and pumped – over the years – billions of dollars into performance marketing/programmatic adv to buy (and the word “buy” is the proper one, otherwise I would have used “earn”) a fast-growing traffic to the ecommerce platform (we will talk about that later).

After a few quarters of good results (as I said, inflated by the long tail of the pandemic and the slow resurrection of the B&M business), things started to take unexpected directions. Among them:

a) Nike – which had always been a wholesale business, working on a well-established “futures” system – did not have the knowledge and discipline to manage the shift operationally. Magically (well, not so magically), inventory started to blow up, as all the data-driven predictions (the “flywheel”…) were simply inconclusive and the supply chain broke up. As the quarterly earnings releases showed, the inventory level on May 31st, 2021 was $6.5bn. On May 31st, 2022, it was $8.5bn. On November 30th, 2022, it reached $10bn. Nike no longer knew what to produce, when to produce it, or where to ship it. Action plans to solve the over-inventory issues planted the seed of margin erosion, as Nike started to discount more and more on its own channels – especially Nike.com (we will talk later about it)…

…The CEO of Nike doesn’t come from the industry. So he probably underestimated consumer behavior and the logic behind the marketplace mechanisms of sport sneakers and apparel distribution. Or wasn’t aware of them. In the end, he is a poorly advised “data driven guy”, whatever that means. It is more difficult to understand why the President of Consumer, Product and Brand – a veteran of the industry, one of the creators of the Women’s category at Nike, a professional with immense knowledge of the company and the business – approved and endorsed all of this. Maybe an excess of confidence. Or pure and simple miscalculation… hard to know…

What happened in 2020? Well, the brand team shifted from brand marketing to digital marketing and from brand enhancing to sales activation. All in. Because of that, the CMO of that time made a few epic moves:

a) a shift from CREATE DEMAND to SERVE AND RETAIN DEMAND, which meant that most of the investment was directed to those who were already Nike consumers (or “members”).

b) massive growth of programmatic adv investment (as of 2021, to drive traffic to Nike.com, Nike started investing double or more of the share of resources usually invested in other brand activities into programmatic adv and performance marketing). The former CMO was evidently ignoring the growing academic literature on the inefficiencies of investment in performance marketing/programmatic advertising – due to fraud, the rising costs of mediators, and declining consumer response to those activities. These were the very considerations leading other large B2C companies – like Unilever and P&G – to reduce those kinds of demand-creation investments in the same exact period… Because of that, Nike invested a material amount of dollars (billions) into something that was less effective but easier to measure, versus something that was more effective but harder to measure. In conclusion: an impressive waste of money.

c) elevation of Brand Design and demotion of Brand Communication. Basically, style over breakthrough creativity. To feed the digital marketing ecosystem, one of the historic functions of the marketing team (brand communications) was “de facto” absorbed and marginalized by the brand design team, which took the leadership in marketing content production (together with the mar-tech “scientists”). Nike didn’t need brand creativity anymore, just a polished, never-stopping supply chain of branded stuff…

Obviously, the former CMO had decided to ignore “How Brands Grow” by Byron Sharp, Professor of Marketing Science, Director of the Ehrenberg-Bass Institute, University of South Australia. Otherwise, he would have known that: 1) if you focus on existing consumers, you won’t grow. Eventually, your business will shrink (as it is “surprisingly” happening right now). 2) Loyalty is not a growth driver. 3) Loyalty is a function of penetration. If you grow market penetration and market share, you grow loyalty (and usually revenues). 4) If you try to grow only loyalty (and LTV) of existing consumers (spending an enormous amount of money and time to get something that is very difficult and expensive to achieve), you don’t grow penetration and market share (and therefore revenues). As simple as that…

He made “Nike.com” the center of everything and diverted focus and dollars to it. Because of all that, Nike hasn’t made a history-making brand campaign since 2018, as the Brand organization had to become a huge sales activation machine. An example? The infamous “editorial strategy” – you can see its effects if you visit its archive, the Nike channel on YouTube, or any Nike account on Instagram – generated a regurgitation of thousands of micro-useless-insignificant pieces of content, costly and mostly ineffective, all produced to feed the bulimic digital ecosystem, aimed at driving traffic to a platform that converts a tiny (and when I say tiny, I mean really tiny…) fraction of the consumers who arrive there and disappoints (or ignores) all the others.

4. Getting bubbly – Owen A. Lamont

Is the U.S. stock market currently in an AI-fueled bubble? That’s the question I asked back in March, and my answer was “No, not even close.” Since then, new data has come in, and my answer has changed. As of July 2024, I still think we’re not in a bubble, but now we are getting close.

Here are my previously discussed Four Horsemen:

  • First Horseman, Overvaluation: Are current prices at unreasonably high levels according to historical norms and expert opinion?
  • Second Horseman, Bubble beliefs: Do an unusually large number of market participants say that prices are too high, but likely to rise further?
  • Third Horseman, Issuance: Over the past year, have we seen an unusually high level of equity issuance by existing firms and new firms (IPOs), and unusually low levels of repurchases?
  • Fourth Horseman, Inflows: Do we see an unusually large number of new participants entering the market?

What I said before was, “As of March 2024, we may perhaps hear the distant hoofbeats of the First Horseman (overvaluation), who has not traveled far since he last visited us, but there is no sign yet of the other three.”

What’s changed is the Second Horseman, who is now trotting into view. But there’s still no sign of the other two horsemen; for the aggregate U.S. stock market, we see neither issuance nor inflows…

… The table shows that, as has been widely reported, CAPE is very high today and has only been higher around prior bubbles in 2021 and 1999. The market ain’t cheap.

The only point I want to make is that the 2021 bubble was different from 1999/2000 in one key respect: interest rates. In 1999, both nominal and real rates were high and the excess CAPE yield was negative, implying that there was an obvious alternative to investing in overpriced stocks. In 2021, in contrast, both nominal and real rates were very low and the excess CAPE yield was positive, so that one could argue that stocks were fairly priced relative to bonds.

Today looks closer to 1999 than to 2021: a stock market that looks high relative to bond markets. So in that sense, today’s market looks more bubbly than 2021, though less bubbly than 1999…

…Talking to academic economists in mid-July 2024, I got a 1998ish vibe. When I asked them if they thought the market is overvalued, they almost all said yes, sometimes adding “of course” or “definitely” and mentioning megacap tech stocks. I don’t think the overvaluation sentiment among finance professors is as strong and uniform as it was in 1999, but it is far stronger than it was in 2021.

I’m guessing the gap between public and private utterances mostly reflects the slow pace of academic research. There were many economists studying stock market overvaluation in 1999 because the market had been overvalued for years. In contrast, today we see mostly visceral reactions to high prices as opposed to formal analysis…

I previously showed a table with survey data from Yale’s U.S. Stock Market Confidence Indices,[5] and I said that in order for the Second Horseman to be present:

“I need 65% or more respondents agreeing that ‘Stock prices in the United States, when compared with measures of true fundamental value or sensible investment value, are too high.’”

Below, I show an updated table where I have just added a new row for July 2024. We are not quite at my proposed threshold of 65%, but we’ve reached 61%, mighty close. With 61% of individual investors saying the market is overvalued but 75% saying that the market is going up, it appears that bubble beliefs are emerging…

…Other evidence suggests bubble beliefs emerging within specific segments of the market. For example, a recent survey found that 84% of retail investors expected the tech sector to outperform in the second half of 2024, but 61% said AI-related stocks were overvalued.

5. Does the Stock Market Care Who the President Is? – Ben Carlson

I took a look back at every president since Herbert Hoover to see how bad stock market losses have been for each four-year term in office…

…Every president saw severe corrections or bear markets on their watch. The average loss over all four-year terms was 30 percent. The average loss under a Republican administration was 37 percent while the average loss under the Democrats was 24 percent. But these differences don’t really tell you much about the two parties. The stock market does not care about Republicans or Democrats.

For example, if you look at the stock market performance under both Republicans and Democrats going back to 1853, two full presidential terms before Lincoln took office, the performance is fairly similar. Total returns under Democrats were 1,340 percent, while total returns under Republicans were 1,270 percent.

Presidents have far less control over the markets than most people would have you believe. There are no magical levers they can pull to force stocks to rise or fall. Policy decisions often affect the economy with a lag. And the economy and stock market are rarely operating in lock-step. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 July 2024)

Here are the articles for the week ending 28 July 2024:

1. Open Source AI Is the Path Forward – Mark Zuckerberg

In the early days of high-performance computing, the major tech companies of the day each invested heavily in developing their own closed source versions of Unix. It was hard to imagine at the time that any other approach could develop such advanced software. Eventually though, open source Linux gained popularity – initially because it allowed developers to modify its code however they wanted and was more affordable, and over time because it became more advanced, more secure, and had a broader ecosystem supporting more capabilities than any closed Unix. Today, Linux is the industry standard foundation for both cloud computing and the operating systems that run most mobile devices – and we all benefit from superior products because of it.

I believe that AI will develop in a similar way. Today, several tech companies are developing leading closed models. But open source is quickly closing the gap. Last year, Llama 2 was only comparable to an older generation of models behind the frontier. This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.

Today we’re taking the next steps towards open source AI becoming the industry standard. We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models…

…Many organizations don’t want to depend on models they cannot run and control themselves. They don’t want closed model providers to be able to change their model, alter their terms of use, or even stop serving them entirely. They also don’t want to get locked into a single cloud that has exclusive rights to a model. Open source enables a broad ecosystem of companies with compatible toolchains that you can move between easily…

…Developers can run inference on Llama 3.1 405B on their own infra at roughly 50% the cost of using closed models like GPT-4o, for both user-facing and offline inference tasks…

…One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it’s clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing…

… I expect AI development will continue to be very competitive, which means that open sourcing any given model isn’t giving away a massive advantage over the next best models at that point in time…

…The next question is how the US and democratic nations should handle the threat of states with massive resources like China. The United States’ advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies. Our adversaries are great at espionage; stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don’t lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.

2. How a long-term approach to stock investments pays off in spades – Chin Hui Leong

Let’s look at the S&P 500’s performance between May 2004 and May 2024, a 20-year period which produced an average annual return of 10.2 per cent.

Here’s the shocker: If you missed the market’s 10 best days, your double-digit gains would shrink to only 6 per cent per year. If you missed the top 20 days, your returns would plummet to a mere 3.3 per cent, barely keeping up with inflation.

But don’t bet on timing your entry either. During this period, seven of the 10 best days occurred within 15 days of the 10 worst days. In other words, unless you can day trade with precision multiple times in a row, you are better off just holding your stocks through the volatility…
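The point about missing the best days can be illustrated with a quick sketch. The daily returns below are randomly generated, not the article's actual S&P 500 data, but they show the mechanics: stripping out even a handful of the best single days drags down a 20-year annualized return.

```python
import numpy as np

# Illustrative only: hypothetical daily returns for ~20 years of trading days,
# not actual S&P 500 data.
rng = np.random.default_rng(42)
daily = rng.normal(0.0004, 0.012, 252 * 20)

def annualized(returns, years=20):
    # Compound the daily returns, then convert the total to a per-year rate.
    total = np.prod(1 + returns)
    return total ** (1 / years) - 1

def miss_best(returns, n):
    # Drop the n largest single-day gains, keeping everything else.
    worst_first = np.sort(returns)  # ascending, so the best days are at the end
    return worst_first[:-n] if n else returns

for n in (0, 10, 20):
    print(f"missing the best {n:2d} days: {annualized(miss_best(daily, n)):6.2%} per year")
```

Because compounding is multiplicative, the order of the returns doesn't matter, which is why sorting before dropping the top days is harmless here.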

…Here’s another thing. History has shown that the longer you hold, the better your chances of reaping a positive return. From 1980 to 2023, the S&P 500 delivered positive returns in 33 out of the 43 years.

For the math geeks, that’s a win rate of over 76 per cent, far better than a coin flip. To top it off, there hasn’t been a single 20-year period since 1950 where the stock market has seen negative returns…

…While compounding is powerful, blindly buying any stock isn’t the answer. Many are not worthy to be held over long periods. Quality is the key. For a stock to compound, you need its underlying business to be built to last…

…What if you are wrong in your assessment of a business?…

..I submit to you that the lessons you learn holding a stock for the long term will far outweigh any other lessons you pick up from the stock market. Each stock, whether it turns out to be a winner or loser, will provide invaluable lessons you can apply in the future.

As you learn more over time, you’ll get better at picking the right stocks to hold. After all, as the late Nelson Mandela once said: “I never lose, I either win or I learn.”

3. What We Can Learn From The Oil Market – 1980 – Gene Hoots

Autumn 1980, the energy sector was 33% of the S&P 500 Index. Two personal incidents illustrate the mindset about energy that we now know was an unjustified mania…

…One investment advisor visited me in the fall of 1980. He had recently been an Assistant Secretary in the Department of Energy in Washington. Clearly, he was better informed than most about the world oil market. His company was overweight in oil stocks, and he laid out their case.

Oil had hit a new high, $39 a barrel in June. A few weeks before, he had met with the Saudi Oil Minister, Sheik Zaki Yamani. Everyone in the world was listening to Yamani who was setting Saudi oil prices; Yamani seemed to be the most powerful man in the world. My advisor said that in his meeting, Yamani “had personally assured that by April 1981 oil would hit $100 a barrel” – 2 ½ times the current price – a frightening thought…

… I gave my annual pension fund report to the RJR board finance committee. This year, taking my cue from the very conservative Capital Guardian Trust advisors, I (cautiously) stated my concern that oil stocks were becoming too big a part of the market. I did NOT say that oil stocks would decline, rather, that they might not be a bargain relative to other stocks. No sooner had I made the comment than one of the directors interrupted and asked, “Did you say oil stocks are going down?” His tone made it clear that he strongly disagreed with what I had said. I clarified and moved on with my talk, but the board clearly thought that I was completely wrong about oil…

…Spring 1981, the price of crude was far below $100 a barrel, even a bit below $39. Oil would not reach $100 until February 2008, 27 years later. When it comes to major economic and market inflection points, there are no experts!…

…Over the next two years, oil stocks dropped on average 35-50% and many of the smaller companies went bankrupt. Forty-three years later, the Energy Sector is 3.6% of the S&P 500. $100 invested in the energy stocks at the end of 1980 would have returned $493 and $100 in everything else would have returned $5,787 – 3.5% vs. 9.8% annually (without dividends).
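As a sanity check on the figures above, the quoted dollar totals can be converted back into compound annual growth rates. This is a sketch assuming a roughly 43-year window (end of 1980 through 2023); small differences from the article's quoted 3.5% and 9.8% come down to period and rounding conventions.

```python
# Turn a long-run total return into a compound annual growth rate (CAGR).
# Dollar figures are the article's; the exact 43-year window is an assumption.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

years = 43
print(f"energy stocks:   $100 -> $493   ~ {cagr(100, 493, years):.1%} per year")
print(f"everything else: $100 -> $5,787 ~ {cagr(100, 5787, years):.1%} per year")
```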

4. Sometimes a cut is just a cut – Josh Brown

When is a rate cut not an emergency rate cut? When it’s a “celebratory rate cut” – a term coined by Callie Cox, whom you should be subscribed to immediately by the way.

Callie’s making the point that sometimes the Federal Reserve cuts because they can and they should – policy is overly restrictive relative to current conditions. And sometimes they cut because they have to – an emergency cut with even more emergency cuts to come later…

…The rate cutting cycles that stand out in our memories are the emergency ones. So there is a reflex in market psychology where we automatically equate cutting cycles with oncoming recessions. We need to stop that nonsense…

…Interest rate cuts have not historically meant a “slam dunk” recession call. Sometimes a cut is just a cut. The Y axis is S&P 500 performance rebased to 100 on the left scale and on the right scale it’s the date of the first interest rate cut of the cycle. The X axis is days after the first cut. You can plainly see that in many cases after the first cut we did not have a recession (the blue lines). There are even some instances where we did have a recession (red lines) but stock market performance did not go negative from the time of the first cut.

Which means the range of outcomes after the initial cut is all over the place. Crafting a narrative for what will happen to either the stock market or the economy (or both) as a result of the initial interest rate cut is an exercise in telling fairy tales.

5. AFC on the Road – Turkmenistan – Asia Frontier Capital

We decided to visit Turkmenistan in May 2024 after the third AFC Uzbekistan Fund Tour. Turkmenistan borders Uzbekistan to the west and happens to be one of the least visited countries in the world, with what is purported to be one of the ten hardest visas in the world to obtain…

…Upon receiving the invitation letter for our visa from the tour agency we used in Turkmenistan, we went to the Turkmen embassy in Tashkent. Warned of how chaotic the embassy is and how long it could take, along with a customary light interrogation, we were prepared to be patient. However, our interaction at the embassy was the polar opposite.

We provided our invitation letter and visa form along with our passports, and the gentleman on the other side of the glass said to wait five minutes. This not being our first time dealing with a government agency in this part of the world, we knew “5 minutes” often means 30 minutes or one hour. However, after approximately five minutes we were called and given our passports with our shiny green Turkmen visas pasted inside…

…The day after our May 2024 AFC Uzbekistan Fund Tour, we took the evening Afrosiyob (fast train) which takes four hours from Tashkent to Bukhara, arriving around 23:00. We took in the sights of the ancient city around midnight. For anyone going to Uzbekistan, Bukhara is a must see, much more so than Samarkand, especially as the old city is lit up at night.

The following morning at 06:30 we were picked up by a taxi for the two-hour drive to the Uzbek-Turkmen border, where we exited the taxi and continued on foot. The border was easy to cross on the Uzbek side, taking five minutes as there was only us and a group of four Chinese tourists. We crossed no-man’s land in a minivan to the Turkmen side, where we took a Covid-19 PCR test (just a money-making opportunity) which cost USD 33 each. Then we proceeded to the Turkmen immigration building in another, this time Soviet, minivan (nicknamed a “bukhanka” as it is shaped like the Soviet loaf of bread of the same name). There we met our Turkmen tour guide for the next four days (foreigners cannot travel freely in Turkmenistan, save for a 72-hour transit visa) and completed our customs declaration forms (which were not in English). Officials then took our fingerprints and checked each piece of luggage thoroughly, and we finally boarded another bukhanka to the border exit. There, after a final confirmation from a border guard that we had our visas stamped, we entered the parking lot, surrounded by the sprawling Karakum desert (which covers 80% of Turkmenistan).

We then took a twenty-minute drive to the nearby city of Türkmenabat, formerly Novy Chardzhou, the second largest city in the country with a population of ~250,000, for a quick lunch before a back-breaking four-hour drive in our modern Japanese 4-wheel-drive SUV to the ancient city of Merv, on one of several roads that resembled the moon (probably a similar experience to riding in the back of a dump truck full of rocks). On the drive, we passed a handful of wandering camels, some large petrochemical facilities (Turkmenistan hosts the world’s fourth largest natural gas reserves behind Russia, Iran, and Qatar), and hundreds of trucks with either Iranian, Turkish, or local number plates. We suspected that all the Iranian and Turkish trucks were in transit to Uzbekistan.

About 2 hours into the journey, a brand new, nicely paved 4-lane highway (resembling a German Autobahn) appeared parallel to our “tank track” road, with a few trucks on it from time to time. After a short while, we innocently asked our tour guide why we couldn’t use it too, and his answer was “it costs money”. To our surprise, after a few minutes our driver drove off the “tank tracks” and followed another SUV which led us to the Autobahn. For about 30 minutes we were able to drive at about 120 km/h (instead of the maximum 50 km/h on the “tank tracks”) and realized that this road was actually still closed, as construction works were taking place from time to time. Finally, we had to exit the Autobahn since a bridge was still under construction, and a dirt track led us back to the normal road. However, before entering the normal road we had to pass by a guard (he was obviously a construction worker) and our driver handed him the equivalent of 50 US cents as an “informal toll”…

…The former President of Turkmenistan, Gurbanguly Berdimuhamedov, is famous for his obsession with Guinness World Records. So it is only natural that at Ashgabat International Airport we encountered our first such world record, that of the world’s largest bird-shaped (seagull) building (according to Guinness World Records) with a wingspan of 364 meters.

The passenger terminal is also host to the world’s largest carpet, at 705 square meters. Opened in 2016, the airport is as modern as anything you see in Istanbul or Hong Kong. As we departed the airport, we passed by the world’s biggest fountain complex and thereafter we stopped to take a photo; our first glimpse of the ostentatious capital. We then drove to the Sports Hotel which is part of a massive complex built for the “2017 Asian Indoor and Martial Arts Games”, where the stadium, clearly visible from our hotel rooms, showcased the world’s largest statue of a horse…

…Only a few days before travelling to Turkmenistan, our broker in Uzbekistan casually told us over dinner that the country “seems to have had” a stock exchange, but its website (https://www.agb.com.tm/en/) had not worked for the last 2 years and emails he sent to them were never answered, so he was not sure if the stock exchange was still operating. We were therefore very surprised when we found the exchange’s website on Google and saw that it was operating again and updated (even in English) with new information and price quotations. The next day we wrote an official email to the CEO of the Ashgabat Stock Exchange, but as of the day of publishing this travel report we have never received a reply – what do you expect? Naturally, we asked our tour guide if we could visit the stock exchange and try to arrange a meeting, which of course we were denied: “you are travelling on a tourist visa and not with a business visa”, we were told…

…One of the most fascinating things about the stock exchange, and about Turkmenistan generally, is the country’s exchange rate.

The official exchange rate is 3.5 manats to 1 USD. However, the black-market rate is 19.5 manats to the USD. If you order something in your hotel and charge it to your room, say a coffee for 40 manats, you will be billed at the official rate leading it to cost USD 11.42. However, if you pay cash, that coffee’s price collapses all the way down to a more normal USD 2.05…

…What is typical in many countries is a difference in hotel pricing between locals and foreigners. Our hotel, the Sports Hotel, costs approximately USD 85 per person per night. However, for a local, a suite costs 170 manats, or USD 8.71 at the black-market rate. And no, that is not a typo!
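The dual-rate arithmetic in the examples above is straightforward; the 3.5 and 19.5 rates and the manat prices are the ones quoted in the article, and tiny differences from the quoted USD figures are just rounding.

```python
# Converting manat prices to USD at Turkmenistan's two exchange rates
# (official vs. black market), per the figures quoted above.
OFFICIAL = 3.5        # manat per USD, official rate
BLACK_MARKET = 19.5   # manat per USD, street rate

def usd_cost(manat_price, rate):
    return manat_price / rate

coffee = 40   # manats: charged to the room vs. paid in cash
print(f"coffee billed at official rate: ${usd_cost(coffee, OFFICIAL):.2f}")
print(f"coffee paid in cash:            ${usd_cost(coffee, BLACK_MARKET):.2f}")

suite = 170   # manats per night, local pricing
print(f"local suite, paid in cash:      ${usd_cost(suite, BLACK_MARKET):.2f}")
```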

Before returning to the hotel, we visited the modern shopping mall opposite our hotel in order to stock up on food and alcohol in an upscale supermarket. The shopping mall was full of local shops – and no international brands with the exception of LC Waikiki.

In the supermarket most of the goods were from either local, Iranian or Turkish companies. There were only a few international brands, but the big U.S. brands and European brands were almost all missing – just a few infamous German brands (no Ricola or Lindt chocolate for Thomas)…

…As we drove out of the ghost town that is Ashgabat, we crossed a bridge into a neighborhood with traditional homes that look similar to what you see in the rest of Central Asia, where it appeared the majority of Ashgabat’s population (about 1 million) actually lives. There was traffic, the bus stops and buses were full, and some of the houses were very beautiful, while none of the construction was white marble!

As we drove further on the highway, it became increasingly obvious that we were moving further from the stage the President had set, for the infrastructure grew worse and worse until we were again driving on roads that resembled the moon (little did we know how much worse the road would get).


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Meta Platforms and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 July 2024)

Here are the articles for the week ending 21 July 2024:

1. How a brush with death shaped my long game – Eric Markowitz

Last February, I opened my laptop and began writing a goodbye letter to my 18-month-old daughter.

“Dear Bea,” I began. “I want you to know how much I loved you…” I then carefully organized passwords to my computer, e-mail, and online brokerage accounts. My wife and I sat across from each other on the couch in stunned silence.

Hours earlier, I was told by ER doctors that I’d need emergency brain surgery to remove what they called a “rapidly enhancing lesion” in the center of my cerebellum, the part of my brain just above the brainstem. The lesion was about the size of a walnut.

At that point, doctors were unsure what it was. They explained it could either be a Stage 4 glioblastoma — terminal brain cancer — or an abscess that could pop at any point. If it was an abscess, the infection would likely prove fatal as well, given its proximity to my brainstem…

…That night, hours before the brain surgery, I lay in bed unable to sleep. I remember thinking about the crushing irony of my particular situation. For the last several years, I had built my professional identity around the idea of long-termism. I wrote a weekly newsletter about long-term investing; about compounding over many decades…

…And yet, here I was: 35 years old, and out of time. No more compounding. No more long-termism…

…At that precise moment, the idea of long-termism or “playing the long game” began to feel almost embarrassing — or ridiculous. The idea was like an act of hubris. The future isn’t earned; we’re lucky to experience it…

…Before this episode, I never had a significant health problem. But the truth is that I wasn’t living an entirely healthy, long-term-oriented lifestyle. I was constantly stressed at work. I had stopped exercising. I was glued to my phone — and to the market. In the months leading up to my condition, we were having a rough year, and it was all I could think about. I’d dream about stock prices. I’d wake up in a panic.

Despite the ideals of long-termism I professionally and publicly promoted, I was, in fact, living a lifestyle that was just the opposite. I was myopically focused on the short term — on success, on the day-to-day. I avoided seeing friends; my marriage was becoming strained. Things were unraveling…

…The craniotomy was a tough procedure. They removed a large chunk of skull in the back of my head, spread open my brain with forceps, and removed the lesion… 

…Finally — and it’s easy in hindsight to breeze over the days it took — the report came back conclusive: an infection. Not cancer.

Later, I’d find out that typical abscesses rupture after 10 days or so. Mine had been in my head for at least 4 weeks. No doctor could explain it. I had a ticking time bomb in my brain that simply didn’t explode. Maybe the detonator malfunctioned…

…When people ask about how the experience has changed me, I simply say I’m re-committed to playing the long game.

Playing the long game isn’t just about structure and process and systems that are designed to withstand the long-term: it’s about the joy and gratitude of getting to play the game in the first place. For me, up until that point in my life, I had been making short-term decisions that led to stress and burnout. And, in retrospect, my “always on” lifestyle likely led to my near-fatal brush with death. Stress and playing short-term games quite literally nearly killed me.

My focus was all on the wrong things.

Coming out of this experience, I proactively shifted my focus. I decided to make both personal and business decisions that would create an environment where the most important things in my life could flourish long after I was gone. I read more. I talked to new people. I made more effort in my relationships — I no longer think about getting through the day, but what I’m building over the long run. I put down my phone. I made new connections. I asked, “how can I set up my life today to ensure my kids — and their kids — will be set up?” In business, I asked, “how can I set up my business today to ensure it exists in 50 years — or even 100 years?”

2. A borrower’s struggles highlight risk lurking in a surging corner of finance – Eric Platt and Amelia Pollard

Wall Street’s new titans have differed significantly in valuing the $1.7bn of debts they provided to workforce technology company Pluralsight, highlighting the risk that some private credit marks are untethered from reality…

…Private loans by their very nature rarely trade. That means fund managers do not have market data to rely on for objective valuations.

Instead they must draw on their own understanding of the value of the business, as well as from third-party valuation providers such as Houlihan Lokey and Kroll. They also can see how rivals are marking the debt in securities filings.

The funds share details of each individual business’s financial performance with its valuation provider, which then marks the debt. The fund’s board and audit committee ultimately sign off on those valuations…

…The loans to Pluralsight were extended in 2021, as part of Vista Equity Partners’ $3.5bn buyout of the company. It was a novel loan, based not on Pluralsight’s cash flows or earnings, but how fast its revenue was growing. Regulated banks are unable to provide this type of credit, which is deemed too risky. A who’s who of private credit lenders — including Blue Owl, Ares Management and Golub Capital — stepped in to fill the void.

The seven lenders to Pluralsight who report their marks publicly disclosed a broad range of valuations for the debt, with a Financial Times analysis showing the gulf widened as the company ran into trouble over the past year. The firms disclose the marks to US securities regulators within their publicly traded funds, known as BDCs, which offers a window into how their private funds may be valuing the debt.

Ares and Blue Owl marked the debt down to 84.9 cents and 83.5 cents on the dollar, respectively, as of the end of March. Golub had valued the loan just below par, at 97 cents on the dollar. The other four lenders, Benefit Street Partners, BlackRock, Goldman Sachs and Oaktree, marked within that range…

…The most conservative mark implies a loss across the lenders of nearly $280mn on the $1.7bn debt package. But Golub’s mark would imply a loss of just $50mn for the private lenders.
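The gulf between those marks is easy to reproduce. Here is a minimal sketch of the implied-loss arithmetic, assuming (as the article's figures suggest) that the loss is simply (1 − mark) × face value applied uniformly across the $1.7bn package:

```python
# Implied losses on the $1.7bn Pluralsight debt package at different marks.
# Simplifying assumption: loss = (1 - mark) * face value, applied to the whole package.
FACE_VALUE_MN = 1_700  # $1.7bn, in millions of dollars

marks = {
    "Blue Owl": 0.835,  # most conservative public mark, end of March
    "Ares": 0.849,
    "Golub": 0.970,     # just below par
}

for lender, mark in marks.items():
    loss_mn = (1 - mark) * FACE_VALUE_MN
    print(f"{lender}: marked at {mark:.1%} -> implied loss ~${loss_mn:.0f}mn")
```

Blue Owl's 83.5-cent mark implies a loss of roughly $280mn, against roughly $50mn at Golub's 97 cents — the spread the article describes.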

Some lenders have marked the loan down further since May, people familiar with the matter said.

Vista, for its part, started marking down its valuation of Pluralsight in 2022, cutting it to zero this year. Vista is expected to hand the keys to the business to the lenders in the coming weeks, with one person noting the two sides had made progress in recent talks…

…A publicly traded loan that changes hands below 80 cents on the dollar typically implies meaningful stress, a cue to investors of trouble. But as Pluralsight illustrated, that kind of mark never materialised until it became clear Vista might lose the business.

3. Private Equity’s Creative Wizardry Is Obscuring Danger Signs – Kat Hidalgo, Allison McNeely, Neil Callanan, and Eyk Henning

Even though buyout firms say they see green shoots in the M&A market, they’re deep into a third year of higher rates and scant opportunity to sell assets at decent prices, and they’ve been forced into a host of wheezes to keep things going: “Payment in kind” (PIK) lets PE-owned companies defer crippling interest payments in exchange for taking on even more costly debt; “net asset value” loans allow cash-strapped buyout firms to borrow against their holdings…

…The amount of distressed debt owed by portfolio businesses of the 50 biggest PE firms has climbed 18% since mid-March to $42.7 billion, according to data compiled by Bloomberg News using rankings from Private Equity International. “We expect defaults to go up,” Daniel Garant, executive vice president and global head of public markets at British Columbia Investment Management Corp., another Canadian pensions giant, told Bloomberg recently.

A key challenge for regulators is that much of PE’s borrowing was arranged with loose legal terms at a time when lenders were fighting for deals, making it easier today to use financial wizardry to keep sickly businesses alive.

“You don’t know if there are defaults because there are no covenants, right?” says Zia Uddin of US private credit firm Monroe Capital. “So you see a lot of amend and extend that may be delaying decisions for lenders.”

All this additional debt makes it tougher, too, for PE owners hoping for exits.

Take Advent International and Cinven. They took on heavy debts when buying TK Elevator, including a roughly €2 billion ($2.1 billion) PIK note they loaded onto the lift maker that has since swelled to about €3 billion, according to people with knowledge of the situation. The tranches carry an interest rate of 11%-12%…

…In Europe, most private credit borrowers have been turning to PIK when reworking debt obligations, according to data from Lincoln International. In the US, Bloomberg Intelligence reckoned in a February note that 17% of loans at the 10 largest business development companies — essentially vehicles for private credit funds — involved PIK…

…One way firms try to keep investors sweet is by borrowing against a portfolio of their own assets, known as a NAV loan, and using the cash to help fund payouts. NAV lenders sometimes charge interest in the mid to high teens, and some borrowers have used holiday homes, art and cars as collateral…

…The proliferation of NAV, PIK and similar has also deepened connections between PE firms and their credit cousins, a possible contagion risk if things go wrong. In the US almost 80% of private credit deal volume goes to private equity-sponsored firms, according to the Bank for International Settlements…

…CVC Capital Partners came up with a novel use of extra leverage during its March IPO of Douglas AG. It borrowed €300 million from banks, injecting it as equity in the German beauty retailer to strengthen its balance sheet, and pledging Douglas shares as collateral in a so-called margin loan, according to the offering’s prospectus.

A fall of 30% to 50% from the IPO price would trigger a margin call, according to people with knowledge of the matter who declined to be identified as the information is private. The stock is down about a quarter since the listing…

…A new BIS report warns that “a correction in private equity and credit could spark broader financial stress,” citing potential knock-on effects on the insurers that heavily invest in these funds and on banks as the “ultimate providers of liquidity.”

“Some features in the financial markets have probably postponed the impact of the rise in interest rates, for example fixed rates, longer maturities and so on,” Agustin Carstens, BIS’s general manager, told Bloomberg TV last week. “These can change, and will be changing in the near future.”

4. China’s subsidies create, not destroy, value – Han Feizi

A common narrative bandied about by the Western business press is that China’s subsidized industries destroy value because they are not profitable – from residential property to high-speed rail to electric vehicles to solar panels (the subject of the most recent The Economist meltdown).

If The Economist actually knows better and is just doing its usual anti-China sneer, then it is par for the course and we give it a pass. But if this opinion is actually held – and all indications are that it is – then we are dealing with something far more pernicious. 248 years after the publication of Adam Smith’s “The Wealth of Nations”, the West has lost the economic plot…

…To be unable to comprehend this crucial point is to never have properly understood Adam Smith. “The Wealth of Nations” was never about the pursuit of profits.

They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.

The entire point of enlightened self-interest was supposed to be the secondary/tertiary effects that improve outcomes for all.

It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.

What we want from the butcher, the brewer and the baker are beef, beer and bread, not for them to be fabulously wealthy shop owners. What China wants from BYD and Jinko Solar (and the US from Tesla and First Solar) should be affordable EVs and solar panels, not trillion-dollar market-cap stocks. In fact, mega-cap valuations indicate that something has gone seriously awry. Do we really want tech billionaires or do we really want tech?…

…The much-heralded multi-trillion dollar valuations of a handful of American companies (Microsoft, Apple, Nvidia, Alphabet, Amazon and Meta) – all of which will swear up and down and all day long that they are not monopolies – are symptoms of serious economic distortion. How much of their valuation is a result of innovation and how much is due to regulatory capture and anti-trust impotence?

It’s hard to say. China stomped on its tech monopolies and now manages to deliver similar if not superior products and services – able to make inroads into international markets (e.g. TikTok, Shein, Temu, Huawei, Xiaomi) – and always at much lower prices.

The Western business press, confusing incentives with outcomes, lazily relies on stock markets to determine value creation. The market capitalization of a company is an important but entirely inadequate measure of economic value…

…What China has done in industry after industry is to flatten the supply curve by subsidizing hordes of producers. This spurs innovation, increases output and crushes margins. Value is not being destroyed; it’s accruing to consumers as lower prices, higher quality and/or more innovative products and services.

If you are looking for returns in the financial statements of China’s subsidized companies, you are doing it wrong. If China’s subsidized industries are generating massive profits, policymakers should be investigated for corruption.

A recent CSIS report estimated that China spent $231 billion on EV subsidies. While that is certainly a gross overestimation (the think tank’s assumption for EV sales tax exemption is much too high), we’ll go with it. That comes out to $578 per car when spread over all ~400 million cars (both EV and ICE) on China’s roads.

The result has been a Cambrian explosion of market entrants flooding China’s market with over 250 EV models. Unbridled competition, blistering innovation and price wars have blinged out China’s EVs with performance/features and lowered prices on all cars (both EV and ICE) by $10,000 to $40,000. Assuming average savings of $20,000 per car, Chinese consumers will pocket ~$500 billion of additional consumer surplus in 2024.

What multiple should we put on that? 10x? 15x? 20x? Yes, China’s EV industry is barely scraping a profit. So what? For a measly $231 billion in subsidies, China has created $5 to $10 trillion in value for its consumers. The combined market cap of the world’s 20 largest car companies is less than $2 trillion…
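As a back-of-envelope check on the article's numbers (the ~25 million annual car sales needed to get from $20,000 per car to ~$500 billion is an implied assumption on our part, not a figure stated in the piece):

```python
# Reproducing the article's subsidy and consumer-surplus arithmetic.
subsidy_total = 231e9   # CSIS estimate of China's EV subsidies, USD
cars_on_road = 400e6    # ~400 million cars (EV and ICE) on China's roads
print(f"Subsidy per car: ${subsidy_total / cars_on_road:,.0f}")  # ~$578

avg_saving = 20_000     # assumed average price reduction per car
annual_sales = 25e6     # implied assumption: $500bn / $20,000 per car
surplus = avg_saving * annual_sales
print(f"Annual consumer surplus: ${surplus / 1e9:,.0f}bn")       # ~$500bn

# Capitalising that surplus at the article's suggested multiples:
for multiple in (10, 15, 20):
    print(f"{multiple}x -> ${surplus * multiple / 1e12:,.1f}tn")
```

The 10x-20x range on ~$500 billion of annual consumer surplus is what produces the article's $5 to $10 trillion figure.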

…The more significant outcomes of industrial policy are externalities. And it is all about the externalities.

To name just a few, switching to EVs weans China from oil imports, lowers particulates and CO2 emissions, provides jobs for swarms of new STEM graduates and creates ultra-competitive companies to compete in international markets.

Externalities from the stunning collapse of solar panel prices may be even more transformative. Previously uneconomic engineering solutions may become possible from mass desalinization to synthetic fertilizer, plastics and jet fuel to indoor urban agriculture. China could significantly lower the cost of energy for the Global South with massive geopolitical implications.

The city of Hefei in backwater Anhui province has achieved spectacular growth in recent years through shrewd investments in high-tech industries (e.g. EVs, LCD, quantum computing, AI, robotics, memory chips)…

…While returns for traditional venture capital investments are dictated by company profits, the Hefei model is more flexible. Returns can be collected through multiple channels from taxing employment to upgrading workforces to increasing consumer surplus. The internal hurdle rate can be set lower if positive externalities are part of the incentive structure.

5. Dear AWS, please let me be a cloud engineer again – Luc van Donkersgoed

I’m an AWS Serverless Hero, principal engineer at an AWS-centric logistics company, and I build and maintain https://aws-news.com. It’s fair to say that I am very interested in everything AWS does. But I fear AWS is no longer interested in what I do.

This post is about AWS’ obsession with Generative AI (GenAI) and how it pushes away everything that makes AWS, well, AWS…

…Then 2024 came around, and somehow AWS’ focus on GenAI took on hysterical proportions. It started with the global AWS summits, where at least 80% of the talks were about GenAI. Then there was AWS re:Inforce – the annual security conference – which was themed “Security in the era of generative AI”…

…And this is the crux: AWS is now focused so strongly on GenAI that they seem not to care about anything else anymore – including everything that made developers love them and made them the leading cloud provider on almost every metric…

…I like GenAI. I use it extensively at work and for the AWS News Feed. I use ChatGPT to shape new ideas, Copilot to speed up development, and Claude to generate summaries. The point is that all these features add to an existing business. This business has customers, data, business rules, revenue, products, marketing, and all the other things that make a business tick. And most businesses had these things before 2022. GenAI allows us to add new features, and often faster than before. But GenAI has no value without an existing product to apply it to…

…But AWS and I are growing apart. I feel the things I value are no longer the things they value. By only talking about GenAI, they implicitly tell me databases are not important. Scalable infrastructure is not important. Maintainable applications are not important. Only GenAI is…

…In summary, AWS’ implicit messaging tells developers they should no longer focus on core infrastructure, and spend their time on GenAI instead. I believe this is wrong. Because GenAI can only exist if there is a business to serve. Many, if not almost all, of us developers got into AWS because we want to build and support these businesses. We’re not here to be gaslighted into the “GenAI will solve every problem” future. We know it won’t.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 July 2024)

Here are the articles for the week ending 14 July 2024:

1. Idea Brunch with “Made in Japan” – Edwin Dorsey and Made in Japan

For me, Japan is an interesting opportunity set because there’s a strong case to be made for several inflection points that are not all related. A few that come to mind:

  • Governance improvements: Japan always had a lot of companies with loads of cash on the balance sheet making them look ‘cheap’. The issue has always been that this cash was never for the shareholders, so the market discounted this appropriately. In the last 1.5 years, however, the Tokyo Stock Exchange has cracked down on companies with weak governance/capital allocation policies and low valuations. They name and shame the companies that don’t try to improve their corporate value and are implementing a host of other measures to incentivize responsible capital allocation. I think this sends a signal to the global investor community that Japan is trying to become less of a value trap.
  • Interest rates/Inflation: Post-2008 Financial Crisis, Japan’s interest rates have been close to zero for over a decade. This is in a country that has been deflationary for so long, and we’ve been gradually moving away from that. Inflation seems to be returning and interest rates are ‘normalizing.’ This could be the moment to wake up the animal spirits of Japan again, to take on more risk and for businesses to command pricing power. If inflation sustains itself at some level, it will no longer make rational sense for businesses and individuals to hold on to cash like they did in a deflationary economy, where that was rewarded as their purchasing power increased. Now the opposite will happen, which means they are incentivized to put the cash to work. This won’t just be businesses investing but individuals too. The government just made it way more attractive to do that through its new NISA scheme.
  • NISA: Japan has set up its new tax-free investment scheme for households called the Nippon Individual Savings Account (NISA). The first iteration was garbage but this one is promising. It was set up by the government to incentivize households to allocate their excess cash savings into the stock market. Household savings allocated to equities have been notoriously small, less than 20% or so. By providing more liquidity in the markets, it could help the financial markets function better and also make it easier for institutions to participate in areas which were previously too illiquid.
  • Consolidation: I think we’re entering a phase of consolidation amongst Japanese SMEs, which have been the backbone of Japanese society. We have an issue where many aging owners are not able to find successors for their businesses. There’s been a stigma around M&A in the past but this is starting to melt away, and M&A is becoming a viable option. We’re also starting to see more young talent flowing into the M&A space. Moreover, with low interest rates, we’re seeing increased interest from foreign PE firms as well – which all tells me that we’re at an interesting juncture for industry consolidation.
  • Digitalization: One thing that you’re starting to see after Covid is that the need for a more digital Japan has come to the forefront. We’ve been embarrassingly late to digital/software adoption but this was the turning point where we realized it was necessary. The government set up a Digital Agency to help adoption and provide various subsidy schemes to encourage the use of more software. We even have the term ‘DX’, short for Digital Transformation, now added to the lexicon. There’s also the ‘digital cliff’ as it is called here. A lot of IT systems being used by corporate Japan today are super old: something like more than 60% will be 20 years or older by 2025. So a lot of IT spending currently is going to maintaining these systems rather than building out new ones. Many people imagine Japan as this futuristic place, but you’ll be amazed how much paper we still use!…

…One of the contradictions I’ve felt about Japan is that large-cap growth in Japan gets priced at ridiculously high multiples. It’s not uncommon to see these things trade at 40 times P/E or higher. This is presumably because the cost of capital in Japan is low and in a deflationary economy where the population is declining, growth is rare. However, when you look at these small companies in great competitive positions that are growing double digits with lots of room to grow, you can find them trading for single-digit earnings multiples! The delta is so big that I call this the ‘chasm’. If you look at some of the large-cap growth companies, these also traded at very low multiples early on but as they continued to grow earnings per share at some point brokers start to cover it, institutions start to pile in and the stock re-rates quite significantly and that contradiction gets resolved. Some of these large caps are expensive and can de-rate as interest rates rise, but the gap is large enough that I still think it’s more likely that these small companies will re-rate than the large caps de-rating down to where these small caps are valued.

2. The Last 72 Hours of Archegos – Ava Benny-Morrison and Sridhar Natarajan

An Archegos staffer re-lived the craziness of being in an airport security line while on a call with panicked banks, trying to head off catastrophe. A Credit Suisse trader described nabbing a Citi Bike on his day off to reach the office and untangle billions tied to Bill Hwang’s family office. And in the midst of it all, a junior Goldman Sachs manager recounted a call from the dying firm as it pleaded for the return of almost half a billion dollars it accidentally sent the lender.

Wall Street’s trial of the decade has offered vivid glimpses of the 72 hours that obliterated Hwang’s $36 billion fortune. One after another, Wall Streeters told a New York jury their version of how his secretive family office — and its pileup of wild wagers on jerry-rigged spreadsheets — ultimately crumbled and saddled banks with more than $10 billion in losses.

But it’s not mere scenes. Weeks of testimony have exposed cringeworthy misjudgments and costly blunders in various camps throughout the crisis — hardly Wall Street’s preferred image of calculated risk-taking. Bankers, for example, painfully acknowledged how they relied on sometimes-vague or evasive trust-me’s from Archegos while doling out billions in firepower for Hwang’s bets. That confidence melted into confusion that’s been replayed in the courtroom of a 90-year-old judge. Prosecutors are trying to make the case that Hwang manipulated the market and defrauded lenders…

…Jefferies calls CEO Rich Handler, who is on holiday in Turks and Caicos with a spicy margarita on the way. They tell him Archegos isn’t answering their calls. Handler says he’s going to get his cocktail and he wants Archegos positions gone and a tally of losses by the time he comes back. It was one of the few banks that escaped with minimal losses…

…As ViacomCBS and Discovery slump, Archegos capital plummets too. The family office is wiped out by the end of the day — just one week after Hwang gathered staff at his corporate apartment and talked about ways to grow the fund to $100 billion…

…Three years after the Archegos flameout exposed the audacity of Hwang’s investing, weeks of testimony have also served as an indictment of sorts of the system that enabled him.

Bank insiders on the witness stand have described extending billions of dollars in financing while relying on the equivalent of pinky promises to understand the size and shape of his portfolio, an approach that culminated with more than $10 billion in losses at a handful of lenders. Courtroom testimony and exhibits also revealed a lack of skepticism among those gatekeepers until it was far too late.

3. An Interview with Daniel Gross and Nat Friedman About Apple and AI – Ben Thompson, Daniel Gross, and Nat Friedman

Let’s start with the current belle of the ball, Apple. Apparently we have a new obvious winner from AI. In case you’re keeping track, I think Google was the obvious winner, then OpenAI was the obvious winner, then Microsoft, then Google again, then everyone just decided screw it, just buy Nvidia — I think that one still holds actually — and now we are to Apple, which by the way does not seem to be using Nvidia. Here’s a meta question: has anything changed in the broader environment where we can say with any sort of confidence, who is best placed and why, or is this just sort of the general meta, particularly in media and analysts like myself, running around like chickens with their heads cut off?

NF: I think one thing that really plays to Apple’s favor is that there seems to be multiple players reaching the same level of capabilities. If OpenAI had clearly broken away, such that they were 10 times better or even 2 times better than everyone else in terms of model quality, that would put Apple in a more difficult position. Apple benefits from the idea that either they can catch up or they have their choice of multiple players that they can work with, and it looks like we have somewhere between three and five companies that are all in it to win it and most of whom are planning to offer their models via APIs.

You have Google, OpenAI, Anthropic, you have X, you have Meta and so if you’re on the side of application building, generally this is great news because prices are going to keep dropping 90% per year, capabilities are going to keep improving. None of those players will have pricing power and you get to pick, or in Apple’s case, you can pick for now and have time to catch up in your own first party capabilities. The fact that no one’s broken away or shown a dominant lead, at least in this moment, between major model releases. We haven’t seen ChatGPT-5 yet, we haven’t seen Q* yet. Yeah, on current evidence, I think that’s good for people who are great at products, focus on products and applications and have massive distribution…

Yeah, I mean I was writing today, I wrote about Apple three times this week, but the latest one was I perceive there being two risk factors for Apple. One is what you just said, which is one of these models actually figures it out to such a great extent that Apple becomes the commodity hardware provider providing access to this model. They’ll have a business there, but not nearly as a profitable one as they’re setting up right now where the models are the commodity, that’s risk factor number one.

Risk factor number two is, can they actually execute on what they showed? Can this on-device inference work as well as they claim? Will using their own silicon, and I think it’s probably going to be relatively inefficient, but given their scale and the way that they can architect it, they can probably pull it off having this one-to-one connection to the cloud. If they can do it, that’s great, but maybe they can’t do it. They’re doing a lot of new interesting stuff in that regard. Of those two risk factors, which do you think is the more important one?

DG: I don’t fully understand and I never fully have understood why local models can’t get really, really good, and I think that the reason often people don’t like hearing that is there’s not enough epistemic humility around how simple most of what we do is, from a caloric energy perspective, and why you couldn’t have a local model that does a lot of that. A human, I think, at rest is consuming like 100 watts maybe and an iPhone is using, I don’t know, 10 watts, but your MacBook is probably using 80 watts. Anyway, it’s within achievable confines to create something that has whatever the human level ability is, it’s synthesizing information on a local model.

What I don’t really know how to think about is what that means for the broader AI market, because at least as of now we obviously don’t fully believe that. We’re building all of this complicated data center capacity and we’re doing a lot of things in the cloud which is in cognitive dissonance with this idea that local models can get really good. The economy is built around the intelligence of the mean, not the median. Most of the labor is being done that is fairly simple tasks, and I’ve yet to see any kind of mathematical refutation that local models can’t get really good. You still may want cloud models for a bunch of other reasons, and there’s still a lot of very high-end, high-complexity work that you’re going to want a cloud model for, chemistry, physics, biology, maybe even doing your tax return, but for basic stuff like knowing how to use your iPhone and summarizing web results, I basically don’t understand why local models can’t get really good.

The other thing I’d add in by the way that’s going to happen for free is there’s going to be a ton of work both on the node density side from TSMC, but also on the efficiency side from every single major AI lab, because even though they run their models in the cloud, or because they run their models in the cloud, they really care about their COGS. You have this process that’s happened pretty durably year-over-year, where a new frontier model is launched, it’s super expensive to run and then it’s distilled, quantized or compressed so that the COGS of that company are more efficient. Now if you continue to do that, yeah, you do sort of wonder, wait a minute, “Why can’t the consumer run this model?”. There’s a ton of economic pressure to make these models not just very smart, but very cheap to run. At the limit, I don’t know if it’s going to be like your Apple TV, sort of computer at home is doing the work, or literally it’s happening in your hands, but it feels like local models can become pretty powerful…

And where’s OpenAI in this? I analogized them to FedEx and UPS relative to Amazon, where Amazon just dumps the worst tasks on them that Amazon doesn’t want to do and they take all the easy stuff. But at the same time, one of my long-running theses is that OpenAI has the opportunity to be a consumer tech company and they just got the biggest distribution deal of all time. Where do you perceive their position today as opposed to last week?

DG: I don’t fully understand the value of the distribution from the Apple deal. Maybe it makes sense, maybe it’s the Yahoo-Google deal. I think the question in AI is, if you’re working on enterprise, that’s one thing. If you’re working on consumer, the old rules of capitalism apply and you need a disruptive user interface such that people remember to use your product versus the incumbents and maybe that was chat.openai.com.

Which is now chatgpt.com, by the way.

DG: Chatgpt.com, or maybe that’s not enough. I think you saw a hint, not necessarily of just how OpenAI, but all of these labs sort of see themselves going in their product announcement where they created a thing that you just talk to, and it’s quite possible that maybe that is sufficient to be a revolutionary new user interface to the point where they can create their own hardware, they can basically command the attention of customers.

But I sort of think the general rule in the handbook is, if you’re going to be in consumer, you want to be at the top of the value chain. I mean, certainly it’s a mighty and impressive company, but the deal with Apple doesn’t really signal top of value chain. So the question is, really the ancient question we’ve been asking ourselves on this podcast for years now, which is, “What is the new revolutionary user interface that actually causes a change in user behavior?”.

Does that mean that Google is the most well-placed? They have all the smartphone attributes that Apple does, they should have better technology as far as models go. Does it matter that they’re worse at product or trust, like they don’t have the flexible organization that you were detailing before? We spent a lot of time on Google the last time we talked, has anything shifted your view of their potential?

DG: I think it really all depends on whether you can make an experience, and it always has depended on whether you can make an experience that’s good enough to justify a change in user behavior.

I’d argue for example, that there was a period in time where even though the actual interface was pretty simple, generating high-quality images was enough to cause a dramatic shift in user behavior. Midjourney is Midjourney not because it has some beautiful angled bar to pinch-and-zoom thing. It’s just like that was the remarkable miracle that it had. It made really good images, and it gave it some sticking power. So it’s this tension between defaults and inferior product and new revolutionary experiences, and whether they have enough to break the calcification of the incumbent.

It’s quite possible that if no one has any new brilliant ideas that Google, even though the models don’t seem to be as excellent, at least to the consumer’s eye, that they survive just because they have some Android user base, they certainly have Google.com. I will say the thing that has been surprising to me is while the technical capabilities of Google’s model seem impressive, the consumer implementation is actually I think worse than, “Just okay”. I thought their integration of language models into search was abysmal, sorry, to be totally frank. It was referencing Reddit comments that weren’t real facts, it’s not that hard to fix this sort of thing. So they need to be doing the bare minimum I think to maintain their status in the hierarchy. It’s possible they don’t do that, it’s possible that a new revolutionary user interface is also created, it’s also possible that they catch up and they bumble their way through it and they’re just fine.

But this is, I think the main question to the challenger labs, if they’re going in the direction of a consumer product is, “How do you make something that is so great that people actually leave the defaults?”, and I think we always underestimate how excellent you need to be. Enterprise things are a little bit different, by the way, and OpenAI is a very good lemonade stand just on enterprise dynamics, but consumer is in a way easier to reason about. You just have to have a miracle product and if that doesn’t happen, then yeah, maybe you should be long Google and Apple and the existing incumbents…

…NF: We’re in a bubble, in my opinion, no question. Like the early Internet bubble in some ways, not like it in other ways. But yeah, just look at the funding rounds and the capital intensity of all this, it’s crazy.

But bubbles are not bad for consumers, they’re bad for the investors who lose money in them, but they’re great for consumers, because you perform this big distributed search over what works and find out what does and even the failed companies leave behind some little sedimentary layer of progress for everyone else.

The example I love to give is Webvan, which was a grocery delivery service in the Internet bubble, and because they didn’t have mobile, they had to build their own warehouses because they couldn’t dispatch pickers to grocery stores, and they tried to automate those warehouses, and then because the Internet was so small, they didn’t have that much demand. There were not that many people ordering groceries on the web and so they failed and they incinerated a ton of capital and you could regard that as a total failure, except that some of the people at Webvan who worked on those warehouses, went off to found Kiva Systems, which did warehouse automation robots, which Amazon bought, and then built tens of thousands of them, and so Webvan’s robot heritage is powering Amazon warehouses and some of those executives ended up running Amazon Fresh and they eventually bought Whole Foods and so all that led to a lot of progress for other people.

The other thing, of course, is that a lot of money gets incinerated and a lot of companies fail, the technology moves forward, the user — putting URLs at the end of movie trailers, people learned about URLs, but some great companies are built in the process and it’s always a minority. It’s always a small minority, but it does happen. So yeah, I think we’re clearly in some kind of bubble, but I don’t think it’s unjustified. AI is a huge revolution and incredible progress will be made, and we should be grateful to venture capital for philanthropically funding a lot of the progress that we’ll all enjoy for decades…

It is interesting to think about in the context of human intelligence, like to what extent you look at a baby, you look at a kid and how they acquire knowledge. I’m most inspired to do more research on babies that are blind or babies that are deaf, how do they handle that decrease in incoming information in building their view of the world and model of the world? Is there a bit where we started out with the less capable models, but when we do add images, when we do add videos, is there just an unlock there that we’re underestimating because we’ve overestimated text all along? I’m repeating what you said, Nat.

NF: Yeah, Daniel was way ahead on this. I think Daniel said that in our first conversation together, and this is a really active area of research now, is how can we synthesize the chain of the internal monologue, the thinking and the dead ends and the chain of thought that leads to the answer that’s encoded in the text on the Internet.

There was the Quiet-STaR paper and the STaR paper from [Eric] Zelikman who’s now at xAI. I don’t know what relation if any of that bears to Q*, but that’s basically what he did is to use current models to synthesize chains of reasoning that lead to the right answers where you already know the answer and then take the best ones and fine-tune those and you get a lot more intelligence out of the models when you do that. By the way, that’s one of the things the labs are spending money on generating is, “Can I get a lawyer to sit down and generate their reasoning traces for the conclusions that they write and can that be fed into the training data for a model and then make the models better at legal reasoning because it sees the whole process and not just the final answer?” — so chain of thought was an important discovery and yet it’s not reflected in our training data as widely as it could be.
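The STaR-style loop Nat describes (sample candidate reasoning chains, keep only the ones that land on the known answer, fine-tune on the survivors) can be sketched in a few lines. The `generate_rationales` stub and the toy arrow format below are placeholders for a real model, not the paper's actual implementation:

```python
# Minimal sketch of a STaR-style filtering loop: sample reasoning chains
# with the current model, keep only the chains that end in the known
# correct answer, and use the survivors as fine-tuning data.

def generate_rationales(question):
    # Placeholder for sampling chain-of-thought attempts from a model.
    # Here we fake one correct and one incorrect attempt per question.
    return [f"step-by-step reasoning -> {question['answer']}",
            "step-by-step reasoning -> wrong answer"]

def rationale_answer(rationale):
    # In this toy format, the answer is whatever follows the final arrow.
    return rationale.rsplit("->", 1)[-1].strip()

def star_filter(dataset):
    """Keep only (question, rationale) pairs whose chain reaches the
    known correct answer; these become fine-tuning examples."""
    kept = []
    for q in dataset:
        for r in generate_rationales(q):
            if rationale_answer(r) == q["answer"]:
                kept.append((q["text"], r))
    return kept

dataset = [{"text": "2+2?", "answer": "4"}, {"text": "3*3?", "answer": "9"}]
training_pairs = star_filter(dataset)
# Only the chains that reached the right answer survive the filter.
```

The point of the filter is the last step: the surviving (question, rationale) pairs are exactly the "reasoning traces" that the labs are now paying experts to produce by hand.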

4. Mining for Money – Michael Fritzell

I read Trevor Sykes’ book The Money Miners recently. It’s a book about Australia’s 1968-70 speculative mining bubble. Consider it a historical reference book about a bygone era…

…The free market price of nickel started rising from early 1969 onwards, from £1,500 per ton in January to £2,000 by March.

After a nickel miner strike in Canada, the free market price skyrocketed to £4,250 per tonne and eventually £7,000. The nickel rally was on…

…The company that came to be associated with the nickel boom the most was a small Kambalda miner called Poseidon…

…Poseidon’s fortunes changed when it hired full-time prospector Ken Shirley - an old friend of Norm Shierlaw. Ken lived in a caravan, moving around the bush to make new discoveries. His travels took him to Mount Windarra north of Kalgoorlie. He discovered minerals and pegged 41 claims along an iron formation stretching 11 kilometers.

In April 1969, Shirley sent in samples from Mount Windarra for assay and found 0.5% copper and 0.7% nickel together with associated platinum. The consulting geologists who analyzed the sample called it “very encouraging” and “intensely interesting”…

…On 29 September, Poseidon’s directors made their first public announcement about the discovery at Windarra. It said that the second drill hole had encountered nickel and copper but didn’t mention anything about the grade.

Just a few days after, on 1 October, they issued a more comprehensive statement showing 3.6% nickel at depths of 145-185 feet. This meant that Poseidon had struck nickel - the biggest nickel discovery in the history of Australia.

The announcement sparked a massive rally in the price of Poseidon. On 2 October, speculators flooded the Sydney Stock Exchange building after hearing about Poseidon in the press. Many of them were unable to reach the trading floor. On that day, one of the boards collapsed, but prices continued to be updated on it while the staff refastened the ropes. Speculators didn’t want to miss an opportunity to buy…

…On 19 November 1969, Poseidon made an announcement confirming the strike length and width of the discovery. But strangely enough, it didn’t give any details about the assays from the drill holes. Despite the lack of information, the market took the report positively, causing Poseidon’s share price to rise further to AU$55.

Broker research departments issued reports, dreaming and imagining what Poseidon could be worth. These valuation exercises went along these lines:

  • If the strike length was 1,500 feet, the width was 65 feet, and the depth was 500, that meant a total orebody of 48 million cubic feet, assuming the orebody is a neat rectangular block
  • The orebody contained 13 cubic feet to the ton, which meant about three million tons of ore
  • With an average grade of 2.0-2.5% nickel, the orebody could contain about 70,000 tons of nickel
  • At an average price of AU$5,000 per ton, the orebody could be worth AU$350 million
  • There will also be costs involved, including for labor, equipment, finance, infrastructure, etc. Say around AU$200 million.
  • Over a mine life of 15 years, you could then spread the remaining AU$150 million of value over time and figure on earnings of about AU$10 million per year
  • Capitalize that number, and you could have justified a share price of AU$60 for Poseidon. Others, like Panmure Gordon in London, ended up with a value of AU$380/share.
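The brokers' arithmetic above can be reproduced in a few lines; every input below is one of the article's quoted assumptions, not a verified figure:

```python
# Back-of-envelope Poseidon valuation, following the broker arithmetic
# above. Every input is an assumption quoted in the article, not a
# verified figure.

orebody_cuft = 1_500 * 65 * 500          # strike x width x depth, ~48m cubic feet
tons_of_ore = orebody_cuft / 13          # 13 cubic feet to the ton, ~3.75m tons
                                         # (the brokers called it "about three million")
nickel_tons = round(3_000_000 * 0.0225, -4)   # ~2.0-2.5% grade -> ~70,000 tons
gross_value = nickel_tons * 5_000        # AU$5,000 per ton -> AU$350m
net_value = gross_value - 200_000_000    # less ~AU$200m of costs -> AU$150m
earnings = net_value / 15                # 15-year mine life -> AU$10m a year
```

Capitalizing that AU$10 million at a generous multiple is how the AU$60-per-share figures were justified; the whole chain rests on a neat rectangular orebody and a nickel price that never falls.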

Using a forward P/E multiple against expected earnings from Mount Windarra, the price didn’t seem so high. And speculators therefore felt comfortable bidding up the price to even higher levels…

…At Poseidon’s annual meeting in December 1969, long queues also formed outside the event. When the doors opened, 500 people rushed into the building. But due to a lack of seats, about 200 of them had to stand at the back while the meeting went on.

At the AGM, a discussion started about a potential rights issue to fund future capital expenditures. Instead, a share placement was proposed to a select number of individuals at AU$5 per share - a massive discount to the then-prevailing share price of AU$100 - suggesting severe dilution without raising much capital.

This was a huge problem: Poseidon had struck nickel but didn’t have enough capital to actually develop the mine.

A geologist speaking at the AGM mentioned that the zone in which the drilling had taken place indicated four million tons of ore. Participants flooded out of the meeting trying to calculate what 2.4% times 4 million tonnes might imply in terms of nickel resources. Enthusiasm boiled over.

Investors rushed out of the AGM to public telephone booths to call their brokers. From the start of the AGM to the end, the share price ran from AU$112 to AU$130. Once the press caught wind of the story, the price rallied further to AU$185.

No one rang a bell at the top of the market, but some lone voices expressed concern about how far the market had run:

  • A London stock broker called R. Davie said that “A lot of Australian stocks, to put it mildly, are highly suspect”.
  • Melbourne firm A Holst & Co predicted that in a few years’ time, the majority of present “gambling stocks” would be bitter memories to those who continued to hold them.

In February 1970, Poseidon reached a market capitalization of AU$700 million, or about AU$10 billion in today’s money. This represented about 3x the market cap of the Bank of New South Wales, and one-third the value of BHP, even though Poseidon hadn’t even begun developing any mine…

…Poseidon’s stock price peaked at around AU$280 per share. The market was waiting for Poseidon to announce how it would fund the development of its mine in Windarra. Yet nothing was announced. Meanwhile, the share price started declining.

By the end of February, almost all other speculative stocks on the board had also fallen significantly, with some losing half their value.

What led to this sudden change in sentiment?

  • A major contributing factor was that nickel prices peaked and started declining. The earlier high prices had provided an incentive to search for new orebodies, and mines started coming online in a number of new countries. World production of nickel skyrocketed.
  • At the end of 1969, there were 145 mining stocks listed in Sydney, compared with just 86 at the start of the year. And there were another 100 mining companies queuing up to float and eventually list on the exchange. Supply eventually met the demand for scrip.
  • Another factor was higher capital costs as Australian interest rates rose sharply
  • Yet another factor was rising inflation as the operating costs of a mine shot up

It didn’t help that Poseidon’s eventual grade was almost half what was originally reported, with the grade falling from 3.6% to 2.4%. Combine that with much lower nickel prices and sharply higher development costs, and you have all the ingredients of a boom turning to bust…

…Looking back at the 1969-70 mining boom, not a single major deposit was discovered. Though it is true that the AU$850 million raised during the boom did help fund the development of new mines.

In the subsequent five years, Poseidon turned out to be a massive disappointment to investors. It soon realized that it would need AU$50 million to develop its Windarra mine, yet it only had AU$2 million left in cash and liquid assets. The solution was to team up with Western Mining Corporation, which took a 50% stake in the project.

But Poseidon incurred debt in the process. It tried to deal with its debt problems by selling its stake in the mine. But nobody wanted to buy it. And so in 1976, Poseidon defaulted on its debt and was delisted from the Australian exchanges.

During the bankruptcy, Poseidon’s 50% interest in Windarra was sold to Shell Australia for AU$30 million. But by that time, nickel prices had declined so much that Windarra had become only marginally economic. With these lower nickel prices, Shell saw no way of making the mine financially viable and it therefore shut down Windarra in 1978. The Poseidon dream was gone.

Perhaps the biggest lesson from the bust was that most exploration companies fail. The book quoted one study from Ontario Canada on mining claims between 1907 and 1953. About 6,600 mining companies had been formed during those 46 years, but only 348 reached production stage. Out of those, 294 failed to show a taxable profit. And only 54 companies ended up paying a dividend. In other words, the success rate was less than 1%.

5. Falkland Islands – The Next Big Thing? – Swen Lorenz

The 3,600 residents of the remote Falkland Islands could soon experience an “economic boom” that has the potential to “transform the islands’ entire economy”.

So reported the Daily Telegraph on 30 June 2024…

…The islands have since seen an initial oil exploration boom, and exploitable oil reserves were found in 2010. Sadly, the oil price fell off a cliff in 2014, which killed the prospect of actually producing oil in the Falklands. The share prices of the fledgling Falkland oil companies all fell over 90%, many went under altogether and disappeared from public markets…

…As the Daily Telegraph just reported:

“The Falkland Islands has opened the door to oil exploration in its waters for the first time in history, in a move that could trigger an economic boom for locals.

The territory’s ruling council has asked islanders if they will back the scheme to extract up to 500m barrels of oil from the Sea Lion field, 150 miles to the north.

Details of the scheme were released without fanfare in the Falkland Islands Gazette, an official government publication, signed off by Dr Andrea Clausen, director of natural resources for the Falkland Islands government.

‘A statutory period of consultation will run from June 24, 2024 to August 5, 2024… regarding Navitas’ proposals for the drilling of oil wells and offshore production from the Sea Lion field,’ it said.

The territory’s ruling council has asked islanders if they will back the scheme to extract up to 500m barrels of oil from the Sea Lion field, 150 miles to the north. …. The field is thought to contain 1.7bn barrels of oil, making it several times bigger than Rosebank, the largest development planned for the UK’s own North Sea, estimated to hold 300m barrels.”

Are we about to see the Falkland Islands hype 2.0?…

…Now that Keir Starmer has wiped the floor with Rishi Sunak, will anything change?

It’s unlikely.

As the Daily Telegraph put it:

“Labour … has made accelerating the net zero transition a key part of its pitch to the electorate. Sir Keir Starmer’s party has promised to ban all new oil and gas exploration in British waters. This ban would not affect the Falklands, as it is the local administration there who have a say over drilling rights to surrounding waters.

Many within the Falklands government have wanted to make the islands a centre for oil production. John Birmingham, deputy portfolio holder for natural resources, MLA (Member of the Legislative Assembly), said: ‘Offshore hydrocarbons have the potential to be a significant part of our economy over the coming decades.

In a statement, the Falklands Islands government said: ‘We have the right to utilise our own natural resources. The Falkland Islands operates its own national system of petroleum licensing, including exploration, appraisal and production activities related to its offshore hydrocarbon resources.”

It’s all taken a long time, but the investment thesis behind the Falkland Islands oil discoveries could finally play out.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Amazon, Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 07 July 2024)


Here are the articles for the week ending 07 July 2024:

1. Etched is Making the Biggest Bet in AI – Etched

In 2022, we made a bet that transformers would take over the world.

We’ve spent the past two years building Sohu, the world’s first specialized chip (ASIC) for transformers (the “T” in ChatGPT).

By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either.

But for transformers, Sohu is the fastest chip of all time. It’s not even close.

With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs…

…By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling. We are living in the largest infrastructure buildout of all time.

Scaling the next 1,000x will be very expensive. The next-generation data centers will cost more than the GDP of a small nation. At the current pace, our hardware, our power grids, and pocketbooks can’t keep up…

…Santa Clara’s dirty little secret is that GPUs haven’t gotten better, they’ve gotten bigger. The compute (TFLOPS) per area of the chip has been nearly flat for four years…

…No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.

Suddenly, that’s changed:

  • Unprecedented Demand: Before ChatGPT, the market for transformer inference was ~$50M, and now it’s billions. All big tech companies use transformer models (OpenAI, Google, Amazon, Microsoft, Facebook, etc.).
  • Convergence on Architecture: AI models used to change a lot. But since GPT-2, state-of-the-art model architectures have remained nearly identical! OpenAI’s GPT-family, Google’s PaLM, Facebook’s LLaMa, and even Tesla FSD are all transformers…

…We believe in the hardware lottery: the models that win are the ones that can run the fastest and cheapest on hardware. Transformers are powerful, useful, and profitable enough to dominate every major AI compute market before alternatives are ready…

  • …As models scale from $1B to $10B to $100B training runs in the next few years, the risk of testing new architectures skyrockets. Instead of re-testing scaling laws and performance, time is better spent building features on top of transformers, such as multi-token prediction.
  • Today’s software stack is optimized for transformers. Every popular library (TensorRT-LLM, vLLM, Huggingface TGI, etc.) has special kernels for running transformer models on GPUs. Many features built on top of transformers aren’t easily supported in alternatives (ex. speculative decoding, tree search).
  • Tomorrow’s hardware stack will be optimized for transformers. NVIDIA’s GB200s have special support for transformers (TransformerEngine). ASICs like Sohu entering the market mark the point of no return. Transformer killers will need to run on GPUs faster than transformers run on Sohu. If that happens, we’ll build an ASIC for that too!…

…On GPUs and TPUs, software is a nightmare. Handling arbitrary CUDA and PyTorch code requires an incredibly complicated compiler. Third-party AI chips (AMD, Intel, AWS, etc.) have together spent billions on software to little avail.

But since Sohu only runs transformers, we only need to write software for transformers!

Most companies running open-source or internal models use a transformer-specific inference library like TensorRT-LLM, vLLM, or HuggingFace’s TGI. These frameworks are very rigid – while you can tweak model hyperparameters, changing the underlying model code is not really supported. But this is fine – since all transformer models are so similar (even text/image/video ones), tweaking the hyperparameters is all you really need.

2. Evolution of Databases in the World of AI Apps – Chips Ahoy Capital

Transactional Database vendors like MDB focus on storing and managing large volumes of transactional data. MDB also offers Keyword Search & rolled out Vector Search (albeit late vs competitors). Historically, MDB Keyword Search has not been as performant as ESTC in use cases utilizing large data sets or complex search queries & has less comprehensive Search features than ESTC…

…A vector database stores data as high-dimensional vectors rather than traditional rows and columns. These vectors represent items in a way that captures their semantic meaning, making it possible to find similar items based on proximity in vector space.

Real-World Example:

Imagine you have an online store with thousands of products. Each product can be converted into a vector that captures its attributes, like color, size, and category. When a customer views a product, the vector database can quickly find and recommend similar products by calculating the nearest vectors. This enables highly accurate and personalized recommendations.

In essence, a vector database helps in efficiently retrieving similar items, which is particularly useful in applications like recommendation systems & image recognition…
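The retrieval step described above can be sketched minimally with cosine similarity over made-up three-dimensional product vectors (real systems use learned embeddings with hundreds of dimensions):

```python
import math

# Minimal nearest-neighbor retrieval over a toy "vector store": each
# product is a vector, and recommendations are the items whose vectors
# lie closest (by cosine similarity) to the one being viewed. The
# three-dimensional vectors are made up for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = {
    "red_tshirt": [0.9, 0.1, 0.0],   # e.g. color / size / category features
    "red_hoodie": [0.8, 0.3, 0.1],
    "blue_jeans": [0.1, 0.7, 0.9],
}

def recommend(query_item, k=1):
    q = store[query_item]
    scored = [(name, cosine(q, v)) for name, v in store.items() if name != query_item]
    scored.sort(key=lambda t: -t[1])
    return [name for name, _ in scored[:k]]

# Viewing the red t-shirt surfaces the red hoodie, its nearest neighbor.
```

A production vector database replaces the brute-force scan with an approximate index so the nearest-neighbor lookup stays fast across millions of items, but the idea is the same.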

…RAG combines the strengths of Vector Search and generative AI models to provide more accurate and contextually relevant responses. Here’s how it works: 1) A user submits a query 2) the system converts the query into a vector and retrieves relevant documents or data from the vector database based on similarity 3) the retrieved documents are fed into a generative AI model (LLM), which generates a coherent and contextually enriched response using the provided data.
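Those three steps can be sketched end to end. The bag-of-words `embed` and the templating `generate` below are toy stand-ins for a real embedding model and LLM, kept only to show the shape of the pipeline:

```python
import re

# Sketch of the three RAG steps described above. The bag-of-words
# "embedding" and the templating "LLM" are toy stand-ins for real
# models, kept only to show the shape of the pipeline.

DOCS = [
    "Returns are accepted within 30 days of purchase.",
    "Shipping to Europe takes 5 to 7 business days.",
]

def embed(text):
    # Step 2a: turn text into comparable features (here: a word set).
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, k=1):
    # Step 2b: rank documents by similarity to the query (word overlap).
    q = embed(query)
    return sorted(DOCS, key=lambda d: -len(q & embed(d)))[:k]

def generate(query, context):
    # Step 3: a real system would feed query + retrieved context to an LLM.
    return f"Based on: '{context[0]}', here is an answer to: '{query}'"

query = "When are returns accepted?"          # Step 1: the user's query
answer = generate(query, retrieve(query))
```

The division of labor is the point: the database supplies fresh, specific facts, and the generative model supplies the fluent, contextual response around them.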

Multimodal models integrate multiple data types (text, images, audio) for comprehensive understanding and generation. It is crucial for vector databases to support multimodal data to enable more complex and nuanced AI applications. Postgres is a dominant open source vendor in the database market (scored #1 as the most-used Vector DB in a recent Retool AI survey), but on its own it does NOT seem to include native support for multi-modality in its Vector Search. This limits the use cases it can be applied to without using an extension or integration with other solutions…

…Simple AI Use Cases:

Similarity Search has been one of the first and most prominent use cases of using GenAI. When a query is made, the database quickly retrieves items that are close in vector space to the query vector. This is especially useful in applications like recommendation engines &  image recognition where finding similar items is crucial. These use cases have been in POC since last year, and are starting to move into production later this year.

Complex AI Use Cases:

Enter the Generative Feedback Loop! In a Generative Feedback Loop, the database is not only used for Retrieval of data (the main use case in Similarity Search), but also for Storage of Generated Data. The database in this case stores new data generated by the AI model if deemed valuable for future queries. This in my view changes the relationship that the AI Application has with a database, as it then has to store data back in. A key example of a Generative Feedback Loop is an Autonomous Agent…

…An AI autonomous agent and a database work together to perform complex tasks efficiently. The relationship between a database and an AI Agent at first seems similar to other use cases, where the database holds all necessary data and the AI Agent queries the database to retrieve relevant information needed to perform its tasks.

The key difference here is the Learning and Improvement aspect of AI Agents. Instead of just containing historical data, the database has been updated with new data from user interactions and agent activities. The AI Agent then uses this new data to refine its algorithms, improving its performance over time…

…A real life example could be an E-commerce Chatbot. The customer buys a product and leaves a review for that product. The database then updates the new purchase and feedback data, and the AI Agent learns from this feedback to improve future recommendations. In this scenario, the database is not just being queried for data, but it is storing data back from the interaction, the AI Agent is learning from this, creating what is referred to as a Generative Feedback Loop.
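That store-back relationship can be sketched as follows; the product names and the scoring rule are purely illustrative:

```python
# Sketch of the generative-feedback-loop pattern: the database is not
# just read from, it also receives interaction data back, and later
# answers draw on that accumulated data. All names and the scoring
# rule are illustrative.

store = {"reviews": {}}   # stand-in for the database

def record_feedback(product, review):
    # Write path: interaction data is stored back, not just queried.
    store["reviews"].setdefault(product, []).append(review)

def recommend(product):
    # Read path: later recommendations reflect the stored feedback.
    reviews = store["reviews"].get(product, [])
    liked = sum("good" in r for r in reviews)
    return "recommend" if reviews and liked > len(reviews) / 2 else "deprioritize"

record_feedback("mug", "good value, good handle")
record_feedback("mug", "good color")
# The loop closes: stored feedback now shapes the next recommendation.
```

The distinguishing feature versus plain similarity search is the write path: each interaction enriches the store that future queries will run against.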

3. The Big Bad BREIT Post – Phil Bak

So here it is, our analysis of Blackstone’s Real Estate Income Trust. The data presented is as-of the original publication of June 2023. It should be noted that over the past year everything has played out as we warned, including the gating of Starwood’s SREIT. Last thing I’ll say: I’d have much preferred to be wrong…

…Given the vital role that “NAV” plays in fundraising and performance reporting, it’s surprising that a greater amount of transparency is not provided by sponsors into their valuation methodology. Remind me again why they don’t provide a comprehensive explanation for each input in the DCF model?  Contrary to popular assumption, NAV is not based on appraisals that utilize sales comparisons. Instead, it’s based on an opaque discounted cash flow (DCF) methodology that is based on assumptions that are at the discretion of the sponsor who realizes fee streams pegged to the asset values they assign.

BREIT’s self-reported performance is – by their own admission – “not reliable.” Why we didn’t take a closer look at it before is as much a mystery as how they compute it. Management can’t just pull numbers out of thin air, and they’ve done nothing illegal, but they have a lot of discretion on where they estimate share values to be.

According to their prospectus, Blackstone values the fund itself once a month; then once a year it brings in an outsider who prepares a valuation based on their direction. But in its March 28, 2023 prospectus amendment, BREIT removed the steps in bold.  (1) a third-party appraisal firm conducts appraisals and renders appraisal reports annually; (2) an independent valuation advisor reviews the appraisal reports for reasonableness; (3) the advisor (Blackstone) receives the appraisal reports and based in part on the most recent appraisals, renders an internal valuation to calculate NAV monthly; (4) the independent valuation advisor reviews and confirms the internal valuations prepared by the advisor. (5) BREIT will promptly disclose any changes to the identity or role of the independent valuation advisor in its reports publicly filed with the SEC.

The verbiage in their disclosures doesn’t suggest that their calculation will be better than relying on market prices. The highlighted portions seem to be saying that Blackstone uses baseless returns in their SEC filings. They are not using a methodology prescribed by the SEC or any regulatory body. They do not adhere to any accounting rules or standards. Nor is their monthly NAV calculation audited by an independent public accounting firm. Blackstone uses it solely to determine the price at which the fund will redeem and sell shares. The NAV also happens to dictate the fees they can earn…

…One of BREIT’s big selling points was the ability to get a dividend of around 4% when interest rates were near zero, but the fund cannot – and has never been able to – cover the dividend payment. The current Class S distribution of 3.74% and Class I yield of 4.6% aren’t fully earned based on a key REIT cash-flow measure: Available Funds from Operations (AFFO). AFFO is used to approximate the recurring free cash flow from an income producing real estate vehicle and calculate the dividend coverage.

Blackstone reports AFFO, but their reported number is janky. It omits the management fees they charge.  Their rationale is that they have not taken their fees in cash but instead converted their $4.6 billion in fees into I-Shares, which is a class of BREIT shares that has no sales cost load.  But their election to accept shares is optional, the shares they receive are fully earned and they can redeem their shares at stated NAV.  What’s more, they have redemption priority over other BREIT investors; there is no monthly or quarterly redemption limitation.  Blackstone has already redeemed $658 million in shares.

BREIT’s AFFO also omits recurring real estate maintenance capital expenditures and stockholder servicing fees which are part of the sales load. Computing an AFFO more consistent with public company peers would result in a payout ratio for the first half of 2023 of more than 250%.

BREIT, unlike most big public REITs, has only covered about 13% of their promised dividend distribution. There’s not a single year in which they could cover their payment if everybody elected to receive it. Since inception, the company has delivered $950 million in AFFO and declared $7.3 billion in distributions.  That’s a stunning 768% dividend payout ratio…

…BREIT is levered approximately 49% against NAV and closer to 60% as measured against cost – the average cost of BREIT’s secured borrowings stands at approximately 5.5% before hedges so the cost of their debt exceeds the yield. There are few ways you can turn these numbers into a double digit return. Rents would have to go to the moon. The only way there can be positive leverage over a holding period (IRR) is if there is a shedload of positive income growth. And that’s exactly what BREIT has baked in the valuation cake. Interest rates went up so the NPV should be way down but – in a fabulous coincidence – future cash flow expectations went up by just enough to offset it. The numerator where revenue growth shows up made up for the rise in rates in the denominator…

…Here’s the BREIT Story in a nutshell: They’ve reported an annual return since inception for its Class S investors north of 10% with real estate investments that have a gross current rate of return of less than 5% on their cost.  They’ve been buying assets at a 4% cap rate, paying a 4.5% dividend and reporting 10+% returns. And nobody has called bullshit…

…By taking BREIT’s current NOI and dividing it by the NAV, investors can compute the implied cap rate on BREIT’s portfolio as they are valuing it – and compare it with public REITs. Interest rates have moved 200-300 basis points in recent months, and in public markets elevated cap rates have driven a 25% decline in values. A recent analysis of two vehicles in the non-traded REIT space concluded that both funds are being valued at implied cap rates of approximately 4.0% when publicly traded REITs with a similar property sector and geographic are trading at an implied cap rate closer to 5.75% . Applying that 5.75% cap rate to BREIT would result in a reduction in shareholder NAV of more than 50%. The current valuation of roughly $14.68/ share should be closer to $7-8/share.
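The repricing argument can be reproduced roughly in a few lines. It assumes "levered 49% against NAV" means debt of about 0.49x NAV, and that asset value scales inversely with the cap rate (value = NOI / cap rate); both are this sketch's simplifications, not BREIT's disclosed math:

```python
# Rough reproduction of the cap-rate repricing argument above. Assumes
# "levered 49% against NAV" means debt of ~0.49x NAV, and that asset
# value scales inversely with the cap rate (value = NOI / cap rate).
# Both are this sketch's simplifications, not BREIT's disclosed math.

nav_per_share = 14.68
debt = 0.49                        # per 1.00 of NAV
assets = 1.00 + debt               # gross asset value per 1.00 of NAV

old_cap, new_cap = 0.0400, 0.0575
assets_repriced = assets * old_cap / new_cap   # gross value falls ~30%
nav_repriced = assets_repriced - debt          # leverage amplifies the hit

new_share_value = nav_per_share * nav_repriced # lands in the $7-8 range cited above
```

The key mechanic is the last subtraction: debt is fixed, so a ~30% decline in gross asset value becomes a ~45-50% decline in equity NAV.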

4. Grant Mitchell — The Potential of AI Drug Repurposing – Jim O’Shaughnessy and Grant Mitchell

[Grant:] I was leading teams that were really pioneering the use of large medical record databases to identify subpopulations where a drug might perform better, might be higher in efficacy or better in safety. And we realized that that’s really, in a way, a kind of drug repurposing. It’s taking a drug that already exists and finding a population where it works a little bit better.

And as David was working in the lab and I was working in the data, we kind of came together and said, “Can we automate what we’ve done? Can we scale what we’ve done in just one disease?” And given the explosion in the amount of data that exists out there, and the improvements in the way that we can harmonize and integrate the data into one place, and then the models that have been built to analyze that data, we thought that maybe it would be possible. And we would check in every few years. 2016, 2017, it wasn’t really possible. We had this dream for a long time. 2018, 2019 is probably when I was talking to you and thinking, can we do this?

And really, lately it’s become possible, especially with, like I said before, more data, structured better. You have models like these large language models that are able to digest all of medical literature, output it in a structured fashion, compile it into a biomedical knowledge graph, these really interesting ways to display and analyze this kind of data. And ultimately, that’s how Every Cure was formed, was the concept that the drugs that we have are not fully utilized to treat every disease that they possibly can, and we can utilize artificial intelligence to unlock their life-saving potential.

Jim: Just so incredibly impressive. And a million questions spring to mind. As you know, my oldest sister, Lail, died of lupus. And when you said the cytokine storm, she had a kind of similar thing where she would go into remission, and then there’d be a massive attack, and it wasn’t like clockwork like your colleague’s, but when she died in 1971, it was like nobody knew very much at all about the disease. And in this case, did you find that the cure that worked for your colleague, was that transferable to other people with this similar disease?

Grant: Yeah, so the cure that worked for him: we studied his blood, we sampled his lymph nodes, we did immunohistochemistry and flow cytometry, and basically found that his cytokines were elevated, another molecule called VEGF was elevated, and there was T cell activation. This all pointed towards something called the mTOR pathway. We started looking at different drugs that would hit that pathway, and settled on a drug called Sirolimus. Sirolimus has been around for decades. It’s actually isolated from a fungus found in the soil on Easter Island. It’s amazing, right? And it shuts down the overactivation of this pathway that leads to the cascade that causes this whole cytokine storm.

For David it works perfectly, and it also works for about a third of the other patients that have a disease like David’s. And so that’s resulted in benefits to countless thousands and thousands of patients. It’s a pretty thrilling and satisfying and motivating thing to be able to figure something like that out, and having the opportunity to do it more, at scale, and to save potentially millions of lives is a huge motivation for my team…

…[Grant:] So we couldn’t quite piece it together, and it was really an aha moment that this should be designed as a nonprofit, and it should be an AI company, because if you want to build the world’s best AI platform for drug repurposing, you’re going to need the world’s best dataset to train it, and you’re not going to get your hands on all the data that you want to get your hands on if you’re a competitor to all these people that are trying to use this data.

So we’re collaborative. We’re non-competitive. We are not profit-seeking. Our primary goal is to relieve patient suffering and save patient lives. So I’ll get to your question about how we’re utilizing that kind of resiliency data that I mentioned before, but first I’m going to help you understand how we use it. I’m going to describe the kind of data set that we’re constructing: something called a biomedical knowledge graph. It’s well known in the fields that we’re in, but maybe not a commonly known term to the layman. It’s effectively a representation in 3D vector space of all of the biomedical knowledge we have as humanity (every drug, every target, every protein, every gene, every pathway, cell type, organ system, et cetera) and how they relate to different phenotypes, symptoms, and diseases.

And so every one of those biomedical concepts that I just described would be represented as a node, and then for every relationship that that concept has with another concept, like a drug treats a disease, there would be an edge. They call it a semantic triple: drug, treats, disease. So you’ve got a node, an edge, and a node. And imagine a graph of every known signaling molecule and protein and concept you can imagine, tens of millions of nodes, even more edges, representing all of human knowledge in biology. And that’s what multiple people have constructed. Actually, the NIH funded a program called the NCATS Translator Program where a number of these knowledge graphs have been constructed. Other groups are doing it. A lot of private companies have their own. We are compiling them and integrating them with an integration layer that kind of takes the best from the top public ones, and then layers in additional proprietary data that we get from other organizations or data that we generate on our own.
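The node-edge-node structure Grant describes can be sketched with a minimal in-memory graph. This is a toy illustration of semantic triples, not Every Cure’s actual platform; the edges below are loosely based on the Sirolimus story earlier in the interview, and "Disease X" is a placeholder name.

```python
# A toy biomedical knowledge graph stored as semantic triples
# (subject, predicate, object): node, edge, node.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # (subject, predicate) -> set of object nodes
        self.triples = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.triples[(subject, predicate)].add(obj)

    def objects(self, subject, predicate):
        return self.triples[(subject, predicate)]

kg = KnowledgeGraph()
# Real graphs hold tens of millions of nodes drawn from literature,
# trials, and lab data; these three edges are illustrative.
kg.add("Sirolimus", "inhibits", "mTOR pathway")
kg.add("mTOR pathway", "drives", "cytokine storm")
kg.add("cytokine storm", "manifestation_of", "Disease X")

# A repurposing hypothesis is a candidate new "treats" edge
# (Sirolimus -> treats -> Disease X) inferred from the path above.
```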

And the example that you just mentioned: a company that is working on tracking genetic diseases and groups of people with the same genetic disease, looking at subpopulations within that group where there might be some resilience to the mutation, then studying their genome to say, “Okay, what other proteins are being transcribed that might be protective against this mutation?”, and then going out and designing drugs that might mimic that protection. Well, how’s that data going to fit into my knowledge graph? Well, you can imagine that now, if I have the data set that they’re working with, I know that there’s a mutation that results in a disease. So a gene associated with a disease: that’s a node, an edge, and a node. And I also know that this other protein is protective against that disease.

So that’s just information that goes into the graph. And the more truth that I put into that graph, the more I can train it to identify patterns of successful examples of a drug working for a disease, and then it can try and find that pattern elsewhere, where it either identifies nodes and edges that should already be connected, or are connected in our knowledge base but no one has actually acted on, or it can maybe even generate a hypothesis on a totally new edge that is novel and has never been considered by experts before. So to answer your question again: we’re not doing that work ourselves, but we integrate the knowledge from that work so it can train our models and so we can pursue drug repurposing ideas…

…[Grant:] We’re not designing novel compounds. We think that there’s so much low-hanging fruit with the 3,000 drugs that already exist that we are going to spend years and years unlocking the life-saving potential of those. And the reason why we’re focused there is because that is the fastest way to save human lives. If you develop a novel compound, you have to go all the way through the entire clinical development and approval process: IND, phase one, phase two, phase three trials. This takes years and years and hundreds of millions of dollars, whereas in certain scenarios in drug repurposing, just like with my co-founder David, within weeks of us coming up with the hypothesis that this drug might work for him, as long as we could find a physician that would prescribe it to him, it went directly into his human body.

So that brings me to an issue that I think we’re going to see, and that you as an investor might want to be aware of: there are going to be lots and lots of failures in the world of AI-driven drug discovery. And that’s because not only are you an AI company that’s generating hypotheses, you’re also a biotech company that has to validate a novel compound and bring it all the way through the clinic, through clinical trials and regulatory approvals, and into patients. So here you are, an AI company, you’ve hired up your team of 50 data scientists and experts, and you come up with your hypothesis and you say, “Okay, great.”

You’re not Amazon, which gets to A/B test where they’re going to put a button on the user interface and then gets feedback by the end of the day: okay, move the button here instead of here. When you come up with your hypothesis, after your AI team says, “Okay, this is the drug we’re going to move forward with,” you now have to go through potentially 10 years and hundreds of millions of dollars of additional development. So you don’t know if your AI team built anything of value. You don’t have that validation feedback loop that you do in other AI consumer-based organizations. So now you’re juggling sustaining an AI corporation that doesn’t have a feedback loop while you also have to pay for the clinical development of a drug. And so it’s a tension that’s hard, hard to manage.

And drug repurposing solves that tension. It allows us to go from hypothesis to validation in a much tighter feedback loop. So what we’re doing not only helps patients in the fastest and cheapest way possible but also, as a happy accident, pushes forward the field of data-driven drug discovery, because we can inform our models in a faster feedback loop…

…[Grant:] One thing I learned when I was at QuantumBlack and at McKinsey is, we would go up against other machine learning organizations. I remember one time they put us head to head with another group and they said, “Okay, whoever comes up with the best insights in the next three months, we’re going to pick to go with a longer contract going forward.” And two seemingly similar teams were working on the same dataset. We came up with totally different recommendations than the other team did, and the actual differentiator between the teams was that we had five medical degrees on our team: not just a bunch of data scientists, but data scientists plus medical experts. And every step of the way that you’re building these knowledge graphs and designing these algorithms, you’re interfacing with medical expertise to make sure you imbue it with clinical understanding, with biological rationale of how this is actually going to work and how to interpret the typically really messy medical data.

And so if you think about the matrix that we’re producing, this heat map of 3,000 drugs cross-referenced with 22,000 diseases creates 66 million possibilities, and we then score those possibilities from zero to one and normalize them across the whole landscape. That’s a tricky thing to do: drug A for disease X compared to drug B for disease Y, how do you compare the possibilities of each of those on a zero-to-one scale? So we create that normalized score, and then we start looking at the highest scores and then filter down from there to say, “Okay, of all the highest-probability-of-success opportunities here, which ones are going to impact patients the most, and which ones can we prove out quickly and efficiently in a low-cost trial with a few patients and high signal, so we can do this in three to six to 12 months instead of five-year trial times?”

And the thing to think about, back to the comment about needing medical expertise highly integrated with what we’re doing, is that even if you take the top thousand scores there, you’re still in the 0.001% of the highest-ranking scores, and now you’ve got to pick amongst your thousand to get down to the top five, to get down to the top one: what is my first shot on goal going to be? That had better be successful, for all the things that I’m working on here, and it had better help patients and really better work. So the AI can’t do that. You need a really smart head of translational science to make that last sort of decision of what’s going to go into patients and how it’s all going to work…
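The score-and-filter funnel Grant describes (score every drug-disease pair, normalize to a common zero-to-one scale, then hand the top of the list to medical experts) can be sketched as follows. The scores here are random placeholders standing in for model output, and the matrix is shrunk from 3,000 × 22,000 for readability.

```python
# Sketch of the scoring funnel: score every (drug, disease) pair,
# min-max normalize across the whole matrix so pairs are comparable,
# then rank and shortlist the top candidates for expert review.

import random

random.seed(0)
drugs = [f"drug_{i}" for i in range(30)]         # stands in for ~3,000
diseases = [f"disease_{j}" for j in range(220)]  # stands in for ~22,000

# Placeholder scores; the real platform derives these from a model
# trained on the biomedical knowledge graph.
raw = {(d, x): random.random() for d in drugs for x in diseases}

lo, hi = min(raw.values()), max(raw.values())
scores = {pair: (v - lo) / (hi - lo) for pair, v in raw.items()}

# The shortlist the AI hands to the translational-science team.
shortlist = sorted(scores, key=scores.get, reverse=True)[:10]
```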

… [Grant:] we’re a nonprofit because we want to build the world’s best AI platform, and we need the best data set to do it, to save as many lives as we possibly can with drugs that already exist. So since the drugs already exist, it’s kind of a funny thing: I say we’re the smallest and the biggest pharma company in the world. We’re the biggest because every single drug that already exists is in our pipeline. We’re the smallest because we don’t own any of them. And then we take those drugs and we go after diseases that are totally neglected by the pharmaceutical industry. So by design it has to be a nonprofit.

5. How Bull Markets Work – Ben Carlson

Halfway through the year, the S&P 500 was up 15.3%, including dividends.

Despite these impressive gains the bull market has been relatively boring this year.

There have been just 14 trading days with gains of 1% or more. There has been just a single 2% up day in 2024. And there have only been 7 days of down 1% or worse.

Small moves in both directions.
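Day-count statistics like the ones above come straight from a series of daily total returns. A minimal sketch (the sample returns below are made up for illustration, not actual 2024 S&P 500 data):

```python
# Count trading days whose move meets a threshold, up or down.

def count_moves(daily_returns, threshold):
    up = sum(1 for r in daily_returns if r >= threshold)
    down = sum(1 for r in daily_returns if r <= -threshold)
    return up, down

# Made-up daily returns for illustration.
sample = [0.004, 0.012, -0.011, 0.021, 0.003, -0.002, 0.015]

up_1pct, down_1pct = count_moves(sample, 0.01)  # moves of 1% or more
up_2pct, _ = count_moves(sample, 0.02)          # 2% up days
```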

Bull markets are typically boring like this. Uptrends tend to be these slow, methodical moves higher. Bull markets don’t make for good headlines because they’re made up of gradual improvements.

Bear markets, on the other hand, are where the excitement happens. Downtrends are full of both big down days and big up days…

…The best and worst days happen at the same time because volatility clusters. Volatility clusters because investors overreact to the upside and the downside when emotions are high…

…It’s also interesting to note that even though the S&P 500 is having a boring year, it doesn’t mean every stock in the index is having a similar experience.

While the S&P is up more than 15%, there are 134 stocks down 5% or worse, while 85 stocks are down 10% or more so far this year.

Stock market returns are concentrated in the big names this year, but it’s normal for many stocks to go down in a given year.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 30 June 2024)


Here are the articles for the week ending 30 June 2024:

1. An Interview with Scale AI CEO Alex Wang About the Data Pillar for AI – Ben Thompson and Alex Wang

When you saw that there was going to be a third pillar, yet no one was there, did you have any particular insights on how that would work, or was it just a matter of, “There’s a problem space that needs solving, and we’ll figure out how to solve it in the future”?

AW: Yeah. Probably the most formative, immediate experience was that I was training a neural network at this time on a single GPU in Google Cloud using TensorFlow, and it was a neural network that detected emotion based on a photo of someone’s face. All I did basically was take the tutorial for ImageNet, so literally the tutorial code for a very different image recognition algorithm, and then I just swapped out the data set and pressed “Enter”. Then 12 hours later, I had a neural network that smashed any of the other methods on this problem of recognizing emotion from images.

So the takeaway there is actually, data is what matters most.

AW: Yeah. From problem to problem, data is the only thing that varies, is maybe the better way to put it, and as a programmer, you kind of realize, “Oh, actually data is what’s doing all the actual programming and my insight into the problem doesn’t actually matter, it’s just all embedded in the data set that the model ends up getting trained on”.

So I think, A) I knew that data was very important. I remember this realization: the model ended up at some performance, and I was like, “Okay, I’ve got to make this model better,” and so then I was like, “Okay, how am I going to improve on this data set?”, and then there was the second light bulb, which is that this is an incredibly painful process. You open up all the images and then you go through and you just look at, “Okay, are the labels for all the images correct?”, and then you’re like, “Okay, what new images should I get to pull into this?”, and then, “How am I going to get those labeled?”, and so all of the core operations, so to speak, of updating or changing or improving the data set were incredibly painful.

So I started the company in 2016, and this was an era where there was a broad-based recognition that platforms, particularly developer platforms that made very ugly things very easy, were good businesses. It was already clear that AWS was ridiculously successful as a business, the most successful enterprise business that had ever existed, and it was also clearly recognized that Stripe was very successful, and so as a student of those companies I realized, “Hey, we should take this incredibly messy and complicated thing that exists today, and then figure out how to turn that into a beautiful developer UX, and if we can accomplish that, then there’s a lot of value to be had here”.

There’s a lot to unpack there. Just as a broader philosophical point, do you think that insight about data still holds? So it’s not just that there are three pillars, compute, algorithms, and data, but actually data is the most important. Just like you saw before, is it more complicated now, or is it even more the case?

AW: Yeah, I think it’s proving to be more and more the case. I was at an event with a lot of other AI CEOs recently, and one of the dinner conversations was, “Okay, compute, power, data: which do you run out of first?”, and the consensus answer around the room was data. I think the data wall has become, over the past few months, a pretty commonly debated topic: “Are we hitting a data wall in LLM development, or are we just fundamentally coming up against the limits of data?” Even under the most liberal assumptions, let’s assume that you really did train on all human-generated text, which no sensible person does because you filter out all the bullshit; even then we will run out by 2027, 2028.

So just overall, in terms of the sheer amount of data that’s necessary to keep up with scaling, we’re very clearly hitting some meaningful wall. And then if you look at a lot of the model performance improvements as of late, or sort of the big gains in models, my personal read is that a lot of that actually boils down to data, and innovations in how to use data, and innovations in basically the data-intensive parts of the AI stack…

How have the needs of the market shifted then? You mentioned that you were getting at this before and I interrupted. You start out with images for self-driving cars, today it’s all about these text-based models. What is entailed in going from images to text?

AW: We had an interesting mid-step here. Broadly speaking, I think the shift as the models have increased in intelligence is towards greater levels of expertise. But basically, we started with autonomous vehicles, and then starting about 2020 we actually started working with the government, the US government, and this was driven because I grew up in Los Alamos and realized that AI is likely a very important technology for our security.

We can do a side bit here: you wrote a very interesting piece on Substack in 2022, The AI War and How to Win It. Give me your thesis and why you think it’s a big deal.

AW: Yeah, I think that the basic gist is, first, if you look at the long arc of human history, it is punctuated by war. In some sense, human history is all about war, and then if you look at the history of war, the history of war in some sense is all about technology. If you look particularly at the transitions from World War I to World War II to later wars, the Gulf War for example, the most significant bit, so to speak, or the largest factor in how these wars end up playing out, really is access to technology. Obviously this is deep in my upbringing; I grew up in Los Alamos, and basically every year you have a multi-day history lesson on Los Alamos National Lab and its origins.

So then you think about, “Okay, what are the relevant technologies today that are being built?”, and there’s a host of technologies I think are important, hypersonic missiles, space technology, et cetera. But AI is, you could very easily make the case, that it is the most important. If you could solve problem solving, then all of a sudden you have this incredibly powerful advantage.

If you believe that AI is really important for hard power, for American hard power, which is very important for ensuring that our way of life continues, then the most shocking thing for me was going through and looking at the things that the CCP [Chinese Communist Party] were saying about AI. There are CCP officials who have very literally said, “We believe that AI is our opportunity to become the military superpower of the world”. Roughly speaking, they said, “Hey, the Americans are not going to invest enough into AI, and so we’ll disrupt them by investing more into AI proportionally, and if we do so, even though we spend a lot less on our military, we will leapfrog them in capability”. As a startup person, I think this is the core Innovator’s Dilemma, the core disruptive thesis: the CCP had basically a disruptive thesis on war powered by artificial intelligence.

This is basically the idea that you’re going to have these autonomous vehicles, drones, whatever, of all types controlled by AI, versus the US having these very sophisticated but human-operated systems, and the US will fall into the trap of seeking to augment those systems instead of starting from scratch with the assumption of fully disposable hardware.

AW: Yeah, I think there are at its core two main theses. One is perfect surveillance and intelligence, in the sort of CIA sense of intelligence, and this I think is not that hard to believe. Obviously, in China, they implemented cross-country facial recognition software as their first killer AI app, and it doesn’t take that much to think, “Okay, if you have that, then just extend the line and you have more or less full information about what’s happening in the world”, and so that I think is not too hard to imagine.

Then the hot war scenario is, to your point, autonomous drone swarms on land, air, or sea that are able to coordinate perfectly and outperform any human.

I think when people hear AI, they think about the generative AI, LLMs, OpenAI, whatever it might be, and assume that’s a US company, Google’s a US company, et cetera, and so the US is ahead. This is obviously thinking about AI more broadly as an autonomous operator. Is the US ahead or what’s your perception there?

AW: I think that on a pure technology basis, yes, the US is ahead. China’s caught up very quickly. There are two very good open source models from China. One is Yi-Large, which is the model from Kai-Fu Lee’s company, 01.ai. And then the other one is Qwen 2, which is out of Alibaba, and these are two of the best open source models in the world and they’re actually pretty good.

Do they use Scale AI data?

AW: No, we don’t serve any Chinese companies, for basically the same reasons that we’re working with the US military. Yi-Large is basically a GPT-4-level model that they open-sourced and it actually performs pretty well, so I think that on the technology plane the US is ahead, and by default I think the US will maintain its lead.

There’s an issue which Leopold Aschenbrenner recently called a lot of attention to, which is lab security. So we have a lead, but it doesn’t matter if it can all be espionaged away, basically. And there’s this case recently of this engineer from Google, Linwei Ding, who stole the secrets of TPU v6 and all these other secrets.

And wasn’t discovered for six months.

AW: Yeah, it wasn’t discovered for six months and also the way he did it was that he copy-pasted the code into Apple Notes and then exported to a PDF, and that was able to circumvent all the security controls.

So how does this tie into this middle stage for you of starting to sign government contracts? What were those about?

AW: Yeah, so I basically realized, and the punchline of what I was going through was, that the United States was, by default, going to be bad at integrating AI into national security and into the military. A lot of this is driven by the fact that, for a while — this is less true now, but for a while — tech companies actively did not want to help the DOD and did not want to help US military capabilities, based on ideology and whatnot, and even now the DOD and the US government are not really that great at being innovative and have a lot of bureaucracy that prevents this. So I decided basically like, “Hey, Scale, we’re an AI company, we should help the US government”.

We started helping them, working with them on all the data problems they had in training specialized image detection algorithms for their various use cases, and this was the first foray into an area that required a lot of expertise to do effectively, because at its core, the US government has a lot of data types and a lot of data that are very, very specialized. These are specialized sensors that they pay for, they’re looking at things that generally speaking the general population doesn’t care about, but they care a lot about — movement of foreign troops and the kinds of things that you might imagine a military cares about — and so it required data that was reflective of all of the tradecraft and nuance and capabilities that were necessary, so this was one of the first areas.

We actually have a facility in St. Louis, which has people who are by and large trained to understand all this military data to do this labeling.

So this was a clear separation then from your worldwide workforce?

AW: Yeah, exactly. It was a clear break in the sense that we went from doing problems that almost anyone in the world could, with enough effort, do effectively and do well (almost like the Uber driver, a very broad marketplace view), to something that required niche expertise and niche capability to do extremely well.

This sort of phase transition of data — there’s sort of a realization for us that, “Oh, actually in the limit almost all of the data labeling, almost all the data annotation is going to be in the specialized form”, because the arc of the technology is, first we’re going to build up all this generalized capability, and this will be the initial phase building of all these general capability, but then all the economic value is going to come from specializing it into all these individual specific use cases and industries and capabilities and it flowing into all the niches of the economy…

So where does synthetic data come into this?

AW: Yeah, synthetic data is super fascinating. So I think that this has become super popular because we’re hitting a data wall, and in some ways the most seductive answer to the data wall is, “Oh, we’ll just generate data to blow past the data wall”, generate data synthetically using models themselves. I think the basic result is that, at a very high level, synthetic data is useful, but it has a pretty clear ceiling, because at its core you’re using one model to produce data for another model, so it’s hard to blow past the ceiling of your original model at a very fundamental level.

It’s a compressed version of what went into the original model.

AW: Yeah, exactly. It’s a very good way to compress insight from one model to get to another model, but it’s not a way to push the frontier of AI, so to speak…

So basically this is a huge problem everyone is running into, it’s incredibly hard to solve, and so someone is going to need to solve it, and you’ve been working on it for eight to ten years or however long it’s been. The thesis seems fairly straightforward, even if the margins are not necessarily going to be Nvidia-style margins, given that you have to use hundreds of thousands of humans to do that.

AW: Yeah, and I think the other key nuance here, the other interesting thing, is that today our revenue is 1% of Nvidia’s because, by and large, the budgets are mostly allocated towards compute. I think, as with any portfolio optimization problem, if data is actually the biggest problem, the percent of budgets that are allocated to data versus compute will slowly shift over time. So we don’t have to be half the budgets; even if we get to 5% of the budgets or 10% of the budgets versus 1% of the budgets, then there’s a pretty incredible growth story for data.

2. My Stock Valuation Manifesto – Vishal Khandelwal

1. I must remember that all valuation is biased. I will reach the valuation stage after analyzing a company for a few days or weeks, and by that time I’ll already be in love with my idea. Plus, I wouldn’t want my research effort to go to waste (commitment and consistency). So, I will start justifying valuation numbers.

2. I must remember that no valuation is dependable because all valuation is wrong, especially when it is precise (like a target price of Rs 1001 or Rs 857). In fact, precision is the last thing I must look for in valuation. It must be an approximate number, though based on facts and analysis.

3. I must know that any valuation method that goes beyond simple arithmetic can be safely avoided. If I need more than four or five variables or calculations, I must avoid that valuation method…

…10. I must remember that good quality businesses often don’t stay at good value for long, especially when I don’t already own them. I must prepare in advance to identify such businesses (by maintaining a watchlist) and buy them when I see them priced at or near fair value, without worrying about whether the value will become fairer (often, it does).

11. I must remember that good quality businesses sometimes stay priced at or near fair value after I’ve already bought them, sometimes for an extended period of time. In such times, it’s important for me to remain focused on the underlying business value rather than the stock price. If the value keeps rising, I must be patient with the price even if I need to wait a few years (yes, years!)…

…13. Ultimately, it’s not how sophisticated I am in my valuation model, but how well I know the business and how well I can assess its competitive advantage. If I wish to be sensible in my investing, I must know that most things cannot be modeled mathematically but have more to do with my own experience in understanding businesses.

14. When it comes to bad businesses, I must know that it is a bad investment, however attractive the valuation may seem. I love how Charlie Munger explains that – “a piece of turd in a bowl of raisins is still a piece of turd”…and…“there is no greater fool than yourself, and you are the easiest person to fool.”

3. I Will F****** Piledrive You If You Mention AI Again – Nikhil Suresh

I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders…

…Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your business’s software supply chain. Your managed security provider is probably using some algorithms baked up in a lab to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtle neck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught…

…Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your IT department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do all of that right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f****** catastrophe…

…A friend of mine was invited by a FAANG organization to visit the U.S. a few years ago. Many of the talks were technical demos of impressive artificial intelligence products. Being a software engineer, he got to spend a little bit of time backstage with the developers, whereupon they revealed that most of the demos were faked. The products didn’t work. They just hadn’t solved some minor issues, such as actually predicting the thing that they’re supposed to predict. Didn’t stop them spouting absolute gibberish to a breathless audience for an hour though! I blame not the engineers, who probably tried to actually get the damn thing to work, but the lying blowhards who insisted that they must make the presentation or presumably be terminated.

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India…

…I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.

However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.

The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms because a market algorithm works out that humans can be converted into gloobnar, a novel epoxy which is in great demand amongst the aliens the next galaxy over for fixing their equivalent of coffee machines. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound. However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company’s bottom line in this scenario, so we won’t pay it any more attention.

A second outcome is that it turns out that the current approach does not scale in the way that we would hope, for myriad reasons. There isn’t enough data on the planet, the architecture doesn’t work the way we’d expect, the thing just stops getting smarter, context windows are a limiting factor forever, etc. In this universe, some industries will be heavily disrupted, such as customer support.

In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I’ll use once every five years. I was actually happy about this, and it’s a real edge over other applications. But if you don’t have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don’t actually have any internal documentation worth retrieving. Fix. Your. Shit.

The final outcome is that these fundamental issues are addressed, and we end up with something that actually can do things like replace programming as we know it today, or be broadly identifiable as general intelligence.

In the case that generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future. Is that clear now? Having your team type in import openai does not mean that you are at the cutting-edge of artificial intelligence no matter how desperately you embarrass yourself on LinkedIn and at pathetic borderline-bribe award ceremonies from the malign Warp entities that sell you enterprise software. Your business will be disrupted exactly as hard as it would have been if you had done nothing, and much worse than it would have been if you just got your fundamentals right. Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this. If we thread the needle between moderate impact and asteroid-wiping-out-the-dinosaurs impact, everything will be changed forever and your tepid preparations will have all the impact of an ant bracing itself very hard in the shadow of a towering tsunami.

4. Palmer Luckey and Anduril want to shake up armsmaking – Schumpeter (The Economist)

The war in Ukraine has been a proving ground for these sorts of weapons—and for Mr Luckey’s company. He visited Kyiv two weeks into the war. “What we’ve been doing was tailored for exactly the type of fight that’s going on and exactly what we predicted was going to happen,” he argues, pointing to three lessons.

One is the importance of drones that can navigate and strike autonomously, even in the face of heavy jamming of their signals and obscurants like metal-filled smoke clouds. Many existing drones have struggled with this, says Mr Luckey, because they lack “multi-modal” sensors, such as optical and infrared cameras, to substitute for GPS, and do not have enough built-in computing power to use the latest object-recognition algorithms.

Second is the observation that software is eating the battlefield. Imagine that Russia begins using a new type of jammer. Mr Luckey says that the data can be sent back immediately to generate countermeasures, which are then remotely installed on weapons at the front line without having to change any hardware. A recent study by the Royal United Services Institute, a think-tank in London, noted that drones in Ukraine needed to have their software, sensors and radios updated every six to 12 weeks to remain viable. Anduril, claims Mr Luckey, is “literally pushing new updates…every single night”.

His third lesson from Ukraine is that weapons must be built in vast quantities—and therefore cheaply. He laments that Russia produces shells and missiles far more cheaply than America does: “The US is now on the wrong side of an issue that we were on the right side of during the Cold War.” Anduril makes much of the fact that its production processes are modelled not on big aerospace firms, but automotive ones.

5. What It Really Takes to Build an AI Datacenter – Joe Weisenthal, Tracy Alloway, and Brian Venturo

Tracy (19:48):

Can I ask a really basic question? And we’ve done episodes on this, but I would be very interested in your opinion, but why does it feel like customers and AI customers in particular are so, I don’t know if addicted is the right word, but so devoted to Nvidia chips, what is it about them specifically that is so attractive? How much of it is due to the technology versus say maybe the interoperability?

Brian (20:18):

So you have to understand that when you’re an AI lab that has just started and it is an arms race in the industry to deliver product and models as fast as possible, that it’s an existential risk to you that you don’t have your infrastructure be your Achilles heel. Nvidia has proven to be a number of things. One is they’re the engineers of the best products. They are an engineering organization first in that they identify and solve problems, they push the limits, they’re willing to listen to customers and help you solve problems and design things around new use cases. But it’s not just creating good hardware, it’s creating good hardware that scales and they can support at scale.

And when you’re building these installations that are hundreds of thousands of components on the accelerator side and the InfiniBand link side, it all has to work together well. And when you go to somebody like NVIDIA that has done this for so long at scale with such engineering expertise, they eliminate so much of that existential risk for these startups. So when I look at it and I see some of these smaller startups saying, we’re going to go a different route, I’m like, what are you doing? You’re taking so much risk for no reason here. This is a proven solution, it’s the best solution and it has the most community support go the easy path because the venture you’re embarking on is hard enough.

Tracy (21:41):

Is it like the old, what was that old adage? No one ever got fired for buying Microsoft. Or IBM, something like that.

Brian (21:50):

The thing here is that it’s not even, nobody’s getting fired for buying the tried and true and slower moving thing. It’s getting fired for buying the tried and true and best performing and bleeding edge thing. So I look at the folks that are buying other products and investing in other products almost as like they’re trying, they almost have a chip on their shoulder and they’re going against the mold just to do it.

Joe (22:14):

There are competitors to Nvidia that claim cheaper or more application-specific chips. I think Intel came out with something like that. First of all, from the CoreWeave perspective, are you all in on Nvidia hardware?

Brian (22:31):

We are.

Joe (22:32):

Could that change?

Brian (22:33):

The party line is that we’re always going to be driven by customers, right? And we’re going to be driven by customers to the chip that is most performant, provides the best TCO, and is best supported. Right now, and in what I think is the foreseeable future, I believe that is strongly Nvidia…

…Joe (23:30):

What about Meta with PyTorch and all their chips?

Brian (23:33):

So their in-house chips, I think that they have those for very, very specific production applications, but they’re not really general purpose chips. And I think that when you’re building something for general purpose and there has to be flexibility in the use case while you can go build a custom ASIC to solve very specific problems, I don’t think it makes sense to invest in those to be a five-year asset if you don’t necessarily know what you’re going to do with it…

…Joe (25:31):

Let’s talk about electricity. This has become this huge talking point that this is the major constraint and now that you’re becoming more vertically integrated and having to stand up more of your operations, we talked to one guy formerly at Microsoft who said one of the issues is that there may be a backlash in some communities who don’t want their scarce electricity to go to data centers when they could go to household air conditioning. What are you running into right now or what are you seeing?

Brian (25:58):

So we’ve been very, very selective on where we put data centers. We don’t have anything in Ashburn, Virginia and the Northern Virginia market I think is incredibly saturated. There’s a lot of growing backlash in that market around power usage and just thinking about how do you get enough diesel trucks in there to refill generators that they have a prolonged outage. So I think that there’s some markets where it’s just like, okay, stay away from that. And when the grids have issues and that market hasn’t really had an issue yet, it becomes an acute problem immediately.

Just think about the Texas power market crisis back in, I think it was 2021, 2020 where the grid wasn’t really set up to be able to handle the frigid temperatures and they had natural gas valves that were freezing off at the natural gas generation plants that didn’t allow them to actually come online and produce electricity no matter how high the price was, right?

So there’s going to be these acute issues that people are going to learn from and the regulators are going to learn from to make sure they don’t happen again. And we’re kind of siting our plants and markets where our data centers and markets where we think the grid infrastructure is capable of handling it. And it’s not just is there enough power? It’s also on things.

AI workloads are pretty volatile in how much power they use and they’re volatile because every 15 minutes or every 30 minutes, you effectively stop the job to save the progress you’ve made. And it’s so expensive to run these clusters that you don’t want to lose hundreds of thousands of dollars of progress. So they take a minute, they do what’s called checkpointing where they write the current state of the job back to storage and at that checkpointing time, your power usage basically goes from a hundred percent to like 10% and then it goes right back up again when it’s done saving it.

So that load volatility on a local market will create either voltage spikes or voltage sags. A voltage sag is what causes a brownout, which we used to see a lot of times when people would turn their air conditioners on. It’s thinking through, okay, how do I ensure that my AI installation doesn’t cause a brownout during checkpointing, the same way air conditioners all turning on at once used to?
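The checkpointing swing described here is easy to put rough numbers on. A toy calculation (the 100% to roughly 10% load drop is from the interview; the 100 MW cluster size is a made-up figure for illustration):

```python
# Hypothetical training cluster; the size is illustrative, not from the interview.
cluster_mw = 100.0
training_load = 1.00    # fraction of peak power while training
checkpoint_load = 0.10  # fraction of peak power while writing a checkpoint

# The step change the local grid sees each time a checkpoint starts or ends.
swing_mw = cluster_mw * (training_load - checkpoint_load)
print(f"Step change seen by the grid every checkpoint: {swing_mw:.0f} MW")
```

A swing of that size repeating every 15 to 30 minutes is exactly the kind of load volatility that produces the voltage sags and spikes described.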

That’s the type of stuff that we’re thoughtful around, how do we make sure we don’t do this right? And talking to engineer NVIDIA’s engineering expertise, they’re working on this problem as well, and they’ve solved this for the next generation. So it’s everything from is there enough power there? What’s the source of that power? How clean is it? How do we make sure that we’re investing in solar and stuff in the area to make sure that we’re not just taking power from the grid to also when we’re using that power, how is it going to impact the consumers around us?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Microsoft, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 June 2024)

Here are the articles for the week ending 23 June 2024:

1. The C Word – Jonathan Clements

ON SUNDAY MORNING, May 19, I was enjoying croissants and coffee with Elaine at the kitchen table, while watching the neighborhood sparrows, finches, cardinals and squirrels have their way with the bird feeder. All was right in our little world, except I was a little wobbly when walking—the result, I suspected, of balance issues caused by an ear infection.

It was going to be a busy week, and I figured that it would be smart to get some antibiotics inside me, even if visiting the urgent care clinic on Sunday might be more expensive than contacting my primary care physician on Monday and perhaps having to go in for an appointment.

Long story short, I ended the day in the intensive care unit of a local hospital, where the staff discovered lung cancer that’s metastasized to my brain and a few other spots. This, as you might imagine, has meant a few changes in my life, and there will be more to come.

I have no desire for HumbleDollar to become HumbleDeathWatch. But my prognosis is not good. I’ve had three brain radiation treatments and I started chemotherapy yesterday, but these steps are merely deferring death and perhaps not for very long. I’ll spare you the gory medical details. But as best I can gather, I may have just a dozen okay months ahead of me…

The cliché is true: Something like this makes you truly appreciate life. Despite those bucket-list items, I find my greatest joy comes from small, inexpensive daily pleasures: that first cup of coffee, exercise, friends and family, a good meal, writing and editing, smiles from strangers, the sunshine on my face. If we can keep life’s less admirable emotions at bay, the world is a wonderful place.

We can control risk, but we can’t eliminate it. I’ve spent decades managing both financial risk and potential threats to my health. But despite such precautions, sometimes we get blindsided. There have been few cancer occurrences in my family, and it’s never been something I had reason to fear. Chance is a cruel mistress.

It’s toughest on those left behind. I’ll be gone, but Elaine and my family will remain, and they’ll have to navigate the world without me. I so want them to be okay, financially and emotionally, and that’s driving many of the steps I’m now taking…

Life’s priorities become crystal clear. Even at this late stage, I believe it’s important to have a sense of purpose, both professionally and personally. I can’t do much about the fewer years, and I have no anger about their loss. But I do want the time ahead to be happy, productive and meaningful.

2. Central Banking from the Bottom Up – Marc Rubinstein

From his office a few blocks from the River Rhine in Dusseldorf, Theo Siegert had been scouring the world for investment opportunities. His research process had thrown up an under-appreciated banking stock headquartered across the border in Switzerland, and he started building a stake. Siegert knew a bit about the banking business – he was already a non-executive director of Deutsche Bank – but this stock was different. In his home country, as in many others, central banks tend not to trade freely on the stock exchange. Not so in Switzerland. Before long, Siegert had become the largest shareholder of the Schweizerische Nationalbank, the Swiss National Bank…

…It would be difficult for the Swiss National Bank to pursue its mandate – ensuring that money preserves its value and the economy develops favorably – if it also had to pander to the demands of private shareholders. So it limits private shareholders to voting just 100 of their shares – equivalent to a 0.1% position – leaving Siegert with 4,910 shares on which he is ineligible to vote. And it caps the dividend at 15 Swiss Francs a share, equivalent to a 0.4% yield at today’s price of 3,850 Swiss Francs. Of the remaining distributable net profit, a third accrues to the central government and two-thirds to regional cantonal governments.
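The shareholder economics quoted above check out with simple arithmetic (all inputs are figures from the passage):

```python
# Dividend cap and share price from the article (Swiss Francs).
dividend_cap = 15.0
share_price = 3850.0
capped_yield_pct = dividend_cap / share_price * 100
print(f"Capped dividend yield: {capped_yield_pct:.2f}%")  # ~0.39%, the ~0.4% quoted

# 100 votable shares representing a 0.1% position implies the share count:
implied_shares_outstanding = 100 * 1000  # 100 shares / 0.001
print(f"Implied shares outstanding: {implied_shares_outstanding:,}")
```

The voting cap is what makes Siegert's remaining 4,910 shares ineligible: only the first 100 of any private holding carry votes.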

As a result, the 10.4 kilograms of gold per share the bank carries and its 1.2 million Swiss Francs of overall net assets per share (at March valuations) remain out of grasp for private shareholders. At best, the stock is a safe haven, providing a preferred return in a strong currency, with no counterparty risk…

…The trouble was, 2022 wasn’t a good year for asset prices, leaving the Swiss National Bank highly exposed…

…Having earned 174 billion Swiss Francs cumulatively over the prior thirteen years, the Swiss National Bank lost 133 billion Swiss Francs in a single year in 2022, equivalent to 17% of GDP. It canceled its dividend for only the second time in over 30 years, signaling that there is risk in a 0.40% dividend after all.

And although asset markets recovered in 2023, strength in the Swiss Franc during the year – partly driven by the bank selling down some of its foreign assets – led to a record foreign exchange hit, triggering another overall loss (of 3 billion Swiss Francs) and another canceled dividend. Fortunately, 2024 has so far been better and, as of the first quarter, over 40% of the two-year loss has been recovered…

…In some cases, such large losses have eaten into capital, leaving many central banks operating on negative equity. As a private sector analyst, this looks frightening, but explicit government support makes it moot. Even before the current spate of losses, some central banks, including those in Chile, the Czech Republic, Israel and Mexico, carried on their business for years with negative capital. A study from the Bank for International Settlements concludes that none of them compromised on their ability to fulfill their mandate.

Because it maintains both a distribution reserve to carry forward some profit and a currency reserve that is not distributable, the Swiss National Bank did not slip into negative equity despite its large loss. At the end of 2023, its equity to asset ratio stood at 7.9% and by the end of March, it was up to 14.3%. That contrasts with the Federal Reserve, which has $43 billion of capital supporting $7.3 trillion of assets, not including almost a trillion dollars of unrealized losses.

But going forward, the business of central banking will grow more challenging. Not only do higher rates expose central banks to losses related to assets purchased in the past, they also make it difficult to generate net interest income on the current balance sheet. Seigniorage income still persists but the falling use of cash may erode it in future years. Meanwhile, commercial bank deposits – which form the bulk of a central bank’s liabilities (449 billion Swiss Francs in the case of the Swiss National Bank, compared with 76.3 billion Swiss Francs of banknotes) – are typically remunerated at market rates, which are higher than yields on legacy securities. Central banks are paying a floating rate while locked into a (lower) fixed rate on their assets.

The challenge is evident in a closer look at the Swiss National Bank. In the era of negative interest rates, it earned income on sight deposits it held on behalf of commercial banks. In 2021, the last full year of negative rates, that income was 1.2 billion Swiss Francs. Once the bank raised rates to 1.50%, the relationship flipped and the central bank began paying interest to commercial banks, which in 2023 amounted to 10.2 billion Swiss Francs. With the yield on Swiss Franc-denominated securities still low, net interest income on the book came to a negative 8.7 billion Swiss Francs…
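The net-interest arithmetic in this paragraph can be reconstructed from the two figures given (CHF billions, 2023); the implied income on the securities book is a derived number, not one stated in the article:

```python
interest_paid_on_sight_deposits = 10.2  # from the article
net_interest_income = -8.7              # from the article

# net = income on assets - interest paid  =>  income = net + paid
implied_income_on_assets = net_interest_income + interest_paid_on_sight_deposits
print(f"Implied interest income on the book: ~{implied_income_on_assets:.1f}bn CHF")
```

Roughly 1.5 billion Swiss Francs earned against 10.2 billion paid out is the fixed-receive, floating-pay squeeze described: legacy low-yield assets funded by deposits remunerated at current market rates.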

…From its most recent high of 7,900 Swiss Francs at the beginning of 2022, the Swiss National Bank stock price has halved. Against its muted profit outlook, this is no surprise: The golden era of central bank profitability is likely over…

…For others, though, it’s fine. As the general manager of the Bank for International Settlements noted last year, “Unlike businesses, central banks are designed to make money only in the most literal sense.” Viewing central banks as stocks is instructive, but fortunately for the economy at large, there is more to them than that.

3. Reports of the petrodollar system’s demise are ‘fake news’ – here’s why – Joseph Adinolfi

Earlier this week, reports circulating widely on social-media platforms like X offered up a shocking proclamation: A 50-year-old agreement between the U.S. and Saudi Arabia requiring that the latter price its crude-oil exports in U.S. dollars had expired on Sunday.

The collapse of the accord would inevitably deal a fatal blow to the U.S. dollar’s status as the de facto global reserve currency, various commentators on X opined. Surely, financial upheaval lay ahead…

…But as speculation about an imminent end to the U.S. dollar’s global dominance intensified, several Wall Street and foreign-policy experts emerged to point out a fatal flaw in this logic: The agreement itself never existed…

…The agreement referred to by Donovan is the United States-Saudi Arabian Joint Commission on Economic Cooperation. It was formally established on June 8, 1974, by a joint statement issued and signed by Henry Kissinger, the U.S. secretary of state at the time, and Prince Fahd, the second deputy prime minister (and later king and prime minister) of Saudi Arabia, according to a report found on the Government Accountability Office’s website.

The agreement, as initially envisioned, was intended to last five years, although it was repeatedly extended. The rationale for such a deal was pretty straightforward: Coming on the heels of the 1973 OPEC oil embargo, both the U.S. and Saudi Arabia were eager to flesh out a more formal arrangement that would ensure each side got more of what it wanted from the other.

The surge in oil prices following the OPEC embargo was leaving Saudi Arabia with a surplus of dollars, and the Kingdom’s leadership was eager to harness this wealth to further industrialize its economy beyond the oil sector. At the same time, the U.S. wanted to strengthen its then-nascent diplomatic relationship with Saudi Arabia, while encouraging the country to recycle its dollars back into the U.S. economy…

…According to Donovan and others who emerged on social-media to debunk the conspiracy theories, a formal agreement demanding that Saudi Arabia price its crude oil in dollars never existed. Rather, Saudi Arabia continued accepting other currencies – most notably the British pound (GBPUSD) – for its oil even after the 1974 agreement on joint economic cooperation was struck. It wasn’t until later that year that the Kingdom stopped accepting the pound as payment.

Perhaps the closest thing to a petrodollar deal was a secret agreement between the U.S. and Saudi Arabia reached in late 1974, which promised military aid and equipment in exchange for the Kingdom investing billions of dollars of its oil-sales proceeds in U.S. Treasurys, Donovan said. The existence of this agreement wasn’t revealed until 2016, when Bloomberg News filed a Freedom of Information Act request with the National Archives…

…Still, the notion that the petrodollar system largely grew organically from a place of mutual benefit – rather than some shadowy agreement established by a secret cabal of diplomats – remains a matter of indisputable fact, according to Gregory Brew, an analyst at Eurasia Group…

…Even more importantly as far as the dollar’s reserve status is concerned, the currency or currencies used to make payments for oil (BRN00) (CL00) are of secondary importance. What matters most when it comes to the dollar maintaining its role as the world’s main reserve currency is where oil exporters like Saudi Arabia decide to park their reserves, Donovan said.

4. On the Special Relativity of Investment Horizons – Discerene Group

We believe that it is hard for corporate executives to think long-term if they are overwhelmingly rewarded for short-term results. In their paper, “Duration of Executive Compensation,”2 Radhakrishnan Gopalan, Todd Milbourn, Fenghua Song, and Anjan Thakor developed a metric for “pay duration.” It quantifies the average duration of compensation plans of all the executives covered by an executive intelligence firm’s survey of 2006-2009 proxy statements. The average pay duration for all executives across the 48 industries in their sample was just 1.22 years. We think that such performance-based compensation duration borders on the absurd for leaders of ostensibly multi-decade institutions buffeted by so many factors beyond their short-term control.

Perhaps unsurprisingly, incentives drive behavior.3 Executive-pay duration was longer in firms that spent more on R&D, firms with a higher proportion of independent board directors, and firms with better stock-price performance. Conversely, firms that offered shorter pay duration to their CEOs were more likely to boost short-term earnings with abnormal accruals of operating expenses.

In a survey4 of 401 US CFOs conducted by John Graham, Campbell Harvey, and Shiva Rajgopal, 80% of survey participants reported that they would decrease discretionary spending on R&D, advertising, and maintenance to meet earnings targets. 55.3% said that they would delay starting a new project to meet an earnings target, even if such a delay entailed a sacrifice of value. 96.7% prefer smooth to bumpy earnings paths, keeping total cash flows constant. One CFO said that “businesses are much more volatile than what their earnings numbers would suggest.” 78% of survey participants would sacrifice real economic value to meet an earnings target.

Likewise, Daniel Bergstresser and Thomas Philippon have found5 that the more a CEO’s overall compensation is tied to the value of his/her stock, the more aggressively he/she tends to use discretionary “accruals” to affect his/her firm’s reported performance…

…According to the World Economic Forum and International Monetary Fund, the average holding period of public equities in the US has fallen from >5 years in 1975 to ~10 months in 2022…

…Another effect of short-termism has been to encourage firms to shed or outsource functions formerly considered to be critical to businesses, including R&D, manufacturing, sales, and distribution, thus creating atomized and fragile slivers of businesses that nevertheless often command illogically lofty valuations. For example, in recent times, aerospace, pharmaceuticals, and software companies that do not attempt to sustain going-concern investments and instead seek to continually acquire other companies in order to hollow out such companies’ engineering, R&D, and/or sales/distribution teams — thereby eliminating all possible sources of competitive advantage — have been feted as “asset-light” and “high-ROIC” poster children of their respective industries.

5. An Interview with Terraform Industries CEO Casey Handmer About the Solar Energy Revolution – Ben Thompson and Casey Handmer

But let’s dig into this solar thing. What is driving the cost curve decrease that was forecasted in 2011 and attracted you? And that has absolutely manifested over the last 10 years, famously exceeding every official projection of future costs. It always ends up being cheaper, faster than people realize. What is the driver of that?

CH: Well, so actually even Ramez Naam’s predictions were too conservative. No one, back then, predicted that solar would get as cheap as it has now. If you look at the DOE’s predictions in 2012 for how long it would take for us to get to current solar costs, their best guesses were 2150, and I don’t know if I’ll live that long.

So of course their entire roadmap for decarbonization didn’t include this, but now we have it. Can we use it? Yes, we sure as hell can and we sure as hell should, because it’s a massive gift that enables us to — we don’t have to de-growth in order to stop emitting pollution into the atmosphere. We can build our way out of the climate crisis by just increasing energy consumption and making energy cheaper for everyone.

In terms of how it gets cheaper, well, essentially, as I say, once the technology is inside the tent of capitalism, it’s generating value for people. It tends to attract wealth, it tends to attract capital, and that capital can be used to do things like hire manufacturing process engineers, and they’re very, very clever and they work very hard, particularly probably hundreds of thousands of engineers working at various solar factories in China right now. And sooner or later, they will find every possible configuration of matter necessary to force the price down. So same as with Moore’s law, essentially, we’ve just seen steady improvements.

Yeah, I was going to ask, is this an analogy to Moore’s law or is it actually the same sort of thing? Moore’s law is not a physical law, it is a choice by companies and individuals to keep pushing down that curve. Number one, what I get from you is that’s the same sort of concept here, but number two, are the actual discoveries actually similar to what’s going on?

CH: Yeah, actually to a large extent because it’s a silicon-based technology.

Right, exactly.

CH: There’s a lot of commonality there, but I think Moore’s law is not a law of nature, it’s what we call a phenomenological law, an emergent law. But basically all it says is there’s a positive feedback loop between cost reductions, increases in demand, increase in production, and cost reductions. So provided that the increase in demand, the induced demand as a result of the cost reduction, exceeds the cost reduction for the next generation of technology, you have a positive feedback loop. Otherwise, it’ll converge at some point, right? You’ll achieve maybe a 10x cost reduction and then it’ll stop, and we start to hit diminishing returns on all these technologies. But if you look at Moore’s law, it’s actually a series of maybe 20 or 30 different overlapping technology curves that kind of form this boundary of technology throughout time, and you see the same thing in solar technology if you really look under the hood and see what’s going on.
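The feedback loop Handmer describes (cost reduction → induced demand → more production → further cost reduction) can be sketched as a toy learning-curve model. All parameters below are illustrative, not taken from the interview:

```python
import math

def simulate(generations=10, cost=1.0, cum_output=1.0,
             learning_rate=0.20, elasticity=1.5):
    """Toy Wright's-law loop: each 'generation', demand responds to the
    current cost, and the added cumulative output drives the cost down
    the learning curve. learning_rate=0.20 means a 20% cost drop per
    doubling of cumulative output."""
    initial_cost = cost
    history = [cost]
    for _ in range(generations):
        demand = cost ** (-elasticity)   # cheaper -> disproportionately more demand
        cum_output += demand             # production accumulates
        doublings = math.log2(cum_output)
        cost = initial_cost * (1 - learning_rate) ** doublings
        history.append(cost)
    return history

costs = simulate()
# Costs fall every generation while cumulative output compounds, as long as
# induced demand keeps outpacing the shrinking cost base.
```

The convergence case Handmer mentions falls out of the same model: with a low enough elasticity, induced demand stops compensating and the cost curve flattens.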

But yeah, the fundamental thing is there’s just enormous demand for solar at lower and lower prices and so manufacturers are justified in investing the capital they need in order to hit those prices and then the feedback mechanism keeps going. Solar manufacturing itself is a brutally competitive business which is both good and bad, it means like if you decide that you want to compete in solar, you don’t have to be at it for 50 years in order to compete. If you can capitalize, you can build a solar factory and if you’re smart enough and you work hard enough, in five years you can be in the top 20 manufacturers globally which is huge. Talking about billions of dollars of revenue every year just because everyone’s existing capital stock gets depreciated really quickly.

Right. But to your point, it’s also a commodity then, right? So how do you actually build a sustainable business?

CH: Well, picks and shovels essentially. So actually one of the things that we like to say at Terraform, and I’m jumping the gun slightly here, but Terraform’s product essentially is a machine that converts solar power into oil and gas, so it bridges these two technology spans. It allows you to arbitrage essentially economically unproductive land that would otherwise just be getting hot in the sun. You throw some solar panels on there, that’s your computing hardware, but that’s not very useful, right? I could hand you an H100 but doesn’t do anything for you until you’ve got software to run on it and the software allows the raw computing power of that H100 to become useful for an end consumer…

Actually let’s run through some of the objections to solar power and then I think that will inherently get to some of these things. So we talked about the nuclear bit, what happens when the sun doesn’t shine?

CH: Yeah, so we’re actually seeing this in California right now. It creates a time arbitrage, right? If you have the ability to store power during the day and then release it during the night, you can make an incredible amount of money and that’s why we’ve seen battery deployments in California, for example, increased by I think a factor of 10x in the last four years, and the effect of that is it’s basically allowing people to transport power, or transport energy, through time in much the same way that power lines, transmission lines, allow people to transport electricity through space.

So what is happening with the battery cost curve? Because if that’s sort of an essential component to make this happen-

CH: Same thing, same story.

For the same reasons?

CH: Exactly the same reasons, same story. Battery manufacturing is probably a little bit more complex and not quite as well-developed as silicon solar panel manufacturing, but we’re seeing year-on-year growth of battery manufacturing. It’s like well over 100%, so it’s actually growing faster than solar, and then the cost improvement’s not quite as steep, but it’s easily like 5% or 10% per year depending on which technology you’re looking at.

In 2021, for example, it was extremely confidently predicted that lithium ion batteries would never get under $100 per kilowatt hour at the cell level and the pack level, and of course Tesla was widely mocked for claiming that they would ultimately be able to get below $100 per kilowatt hour at the pack level. But then again, I think January this year or December last year, a Chinese manufacturer came out with a sodium ion battery cell, which is at $56 per kilowatt hour, so it’s like a 2x reduction in cost on top of what is already considered cutting edge, and we just go down from there.

Now, sodium ion batteries might not be perfectly suited for all kinds of applications, but they’re probably cheaper to produce than lithium ion batteries. We know they’re cheaper to produce than lithium batteries, and they’re more than capable of doing the sort of load shifting required to essentially store power during the day and then use it in the evening.

Are we in a situation already, or do we still have a bit to go, where the sort of combined weighted cost of solar, which is much cheaper than nuclear as you talked about, plus batteries, which sounds like it’s still more expensive now, but when you combine the two is it already lower?

CH: Yeah, so again just look at the data, right — the market reveals its preference. CleanTechnica ran an article almost five years ago now showing that in Texas they were developing battery plants 10:1 compared to gas peaker plants. Texas runs its own grid under slightly different rules where you can basically just build and connect, and then the grid can force you to curtail if they’ve got overproduction, but that typically means it’s a more liquid market. And even Texas, which is certainly not ideologically committed to solar, incidentally deployed more solar than California did this year.

Yeah, I was going to say.

CH: Also Texas has the cheapest natural gas in the history of the universe, but they’re deploying more battery packs than they are gas peaker plants 10:1…

…CH: But I just want to say there’s a conception that, oh, solar and batteries only are on the grid because they’re massively subsidized and they’re actually screwing everything up. That’s actually, that’s not true. Solar and batteries is what’s keeping the grid working right now, it’s the only thing that’s providing expanded capacity.

The major challenge with additional solar development, particularly here in the States, is we now have this ten-year backlog or kind of development queue before you can connect your solar array to the grid, and the reason for that is the grid is old and it’s kind of overwhelmed, and it’s not able to transport all that power effectively to market.

Of course, one solution to this is just to build more grid. Another solution is to put some batteries on the grid. And, you know, the third solution is basically just build batteries and solar wherever you can, it’s actually working really well.

Then obviously what Terraform is doing is taking this otherwise un-utilized capacity for solar development and then pouring it into another aspect of our civilization’s absolutely unquenchable thirst for energy. Just to give you some hard numbers here, roughly a third of U.S. energy is consumed in the form of electricity and about two-thirds in the form of oil and gas. So even if we successfully electrified huge amounts of ground transportation and also moved all of the electricity grid to say wind, solar and a bit of nuclear and some batteries and maybe some geothermal or something like that, so completely decarbonize the grid, that would only deal with about a third of the economy. Two-thirds of the economy still runs on oil and gas and so that’s what Terraform is here to try and deal with.

One more question on the batteries.

CH: Yeah.

There’s always been, or the common refrain has been, we need a battery breakthrough, we need something completely new. You mentioned sodium ion, but even in terms of lithium ion, is the expectation, or the realization, going forward that the technology we have — sure, it’d be great to get a breakthrough — actually has way more improvement left in it, and that will carry us a long way?

CH: Lithium ion batteries are already amazing. I mean, they’ve been around for about 35 years now, I think they were first commercialized for Panasonic camcorders or something, and even then they were extremely compelling. They pushed NiCad [nickel-cadmium], the previous battery chemistry, out of the market almost instantaneously in numerous applications. They’re more than good enough.

You say, “Well, I’d like a battery breakthrough”. Why? “Because I want to run my supersonic electric jet off batteries.” Well, good luck with that. But for all ground transportation purposes, for static backups, for all these kinds of applications, not only is the technology already great, it’s got a 30 year history of manufacturing at scale. We know how to make it safe, we know how to make it cheap, it’s extremely compelling and the numbers speak for themselves.

Battery manufacturing capacity expansion is not just happening for no reason, there’s enormous untapped demand for batteries. The way I like to think of it is what’s your per capita lithium ion allocation? Maybe in 1995, you might have a Nokia 3210 with — actually that would be after 1995 — but with a small lithium ion battery in it. So you’ve got 10 grams per person of lithium ion battery and nowadays my family has two electric cars, and that’s probably most of our batteries.

Yeah, now we have laptops, we have computers.

CH: But in terms of the bulk mass, like 400 kilograms per person or something for people to have electric cars, and then if you have a static backup battery in your house and then maybe a share of your per capita part of the grid scale batteries and so on. I think it could easily scale to a couple of tons of lithium ion battery per person, particularly in like the more energy intensive parts of the United States.

Is that a large number? No, not really. I easily have a couple of tons per person in terms of steel just in my cars. I easily have probably 50 tons of concrete per person in terms of my built environment. I don’t actually think this is a particularly large number, I just think it’s unusual to see in such a short span of time some product go from the size of your thumb to the size of a large swimming pool, a large hot tub or something like that, in terms of your per capita allocation.

Where are we at as far as the availability of, say, lithium, or of the various rare minerals and rare earths that go into both solar and batteries?

CH: Yeah, I mean, again, I’m not a super expert on batteries, but the cure for high prices is high prices. Lithium is the third most common element in the universe, there’s no shortage of it. You could argue there’s a shortage of lithium refining capacity in the United States, particularly if you’re concerned about strategic vulnerability.

It’s like the rare earth thing, right? Rare earths are not actually rare. It’s just the actual ability to refine them.

CH: They’re super common, and actually solar solves that. It turns out that you can electrically catalytically separate rare earth elements using cheap solar power, with significantly lower environmental impact and much lower cost than traditional refining, and I have some friends working on that.

It is certainly true that batteries, people are concerned about cobalt. Actually, I have some cobalt here, here’s a cube of cobalt on my desk. Cobalt is a fabulous metal, but there’s not a huge amount of it necessarily. It’s not scarce like gold, but the mining situation is not quite sorted out. But at the same time, like almost all the major battery manufacturers use almost no cobalt right now because they’re able to adapt their processes to basically optimize their costs towards the cheaper materials.

Capitalism solves this; we don’t have to worry too much about it. There are literally hundreds of thousands of chemists out there solving this problem right now, so you don’t have to lose sleep over it. It is a completely commoditized production system…

What happens with old solar panels and old batteries? Obviously this is an objection to nuclear which is nuclear waste, and the good thing with nuclear waste is it’s really not that much. We’re talking about this deployment of massive amounts of solar panels, all these batteries. Where are we at in 10, 20 years if this build out happens? Is that a potential issue?

CH: I’m not too worried about it. And again, you need to look at your waste stream on a per capita basis. If we deployed as many solar panels as I want to, how many solar panels will you end up disposing of? I think if you ground them up it’d be one garbage bag per year. For a suburban family, we probably have 1,000 garbage bags of trash every year that gets landfilled.

But to talk about specifics, batteries I think are prime targets for recycling because the materials in them are essentially, as Elon Musk once said, super concentrated for the raw materials you need to make batteries. There’s multiple companies out there, including Redwood Materials, that are doing exclusively battery recycling, or battery component recycling, which is super obvious. That said, as battery production increases, even if you recycle all the old batteries, it will only be 1% of the input stream or something, but I just don’t see a future where we have giant piles of batteries lying around.

Then as far as solar panels go, they’re like a layer of silicon dioxide, which is glass, a layer of silicon, which used to be glass, and then a layer of silicon dioxide and maybe some aluminum around the edges. Well, you can strip off the aluminum and recycle that trivially, we’ve been recycling aluminum for 100 years, and the glass is glass. You can grind it up and landfill it, it’s basically sand.

People will say, “Oh, what about cadmium or something?” — well, First Solar uses a cadmium telluride process to make their solar panels. But again, the amounts involved are trivial, they’re inert, they’re solid, they can’t run or leach or anything like that, I’m not too worried about it. As far as the sort of trash that humans routinely landfill, solar panels would actually significantly increase the purity of our dumps because they’re so inert compared to everything else…

…CH: One of the things I like to say is that oil and gas is so common in our civilization, it’s invisible, because every single thing that you see with your eyes is a surface that’s reflecting light, it’s usually pigmented or made of plastic, and that pigment or plastic is made of oil or it’s made of natural gas. So unless you go outside and look at a tree, which is ultimately made of a kind of plastic also derived from sunlight and air, it’s extremely difficult to lay your eyes on anything that’s not made of hydrocarbons. So, obviously, we’re extremely bullish about growth.

Now it could be the case that there’s zero growth. It could be the case that the oil and gas industry just motors along at about $8 trillion of revenue per year, which is about $1 billion per hour. So just in the time we’ve been talking, it’s $1 billion, which is just insane. But I actually think that once we unlock these cheaper forms of hydrocarbons that it will promote substantial growth, particularly in the energy-intensive industries.
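The revenue arithmetic here checks out: $8 trillion a year spread over the 8,760 hours in a year is on the order of $1 billion per hour.

```python
# Sanity check on the interview's revenue arithmetic:
# $8 trillion per year, expressed per hour.
annual_revenue = 8e12                      # dollars per year
hours_per_year = 365 * 24                  # 8,760 hours
revenue_per_hour = annual_revenue / hours_per_year
print(f"${revenue_per_hour / 1e9:.2f}B per hour")  # ≈ $0.91B, i.e. roughly $1B/hour
```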

So just to underscore the vision here, I get really, really fired up about this, because when I think of aviation and how amazing it is, and how we’ve only had it as a species for about a hundred years, and it’s only really been something that we can enjoy in jet transport for maybe 50 years. But the people who routinely fly on aircraft, and I know that you’re one of them because you’re obviously an expert, and myself, it’s probably only 50 million people on earth who’ve ever had that experience of flying in a jet, I don’t know, more than 10 times in their life. Wouldn’t it be incredible if that number was 500 million or 5 billion? But getting there from here on fossil fuels would emit a lot of CO₂ and require a huge amount of fuel. Aviation currently consumes about 2% of the world’s oil and gas just to fly less than 1% of the world’s population around, and so obviously we need to bring on a new source of fuel.

So when you think, well, what is a nice climate-positive version of aviation? Is it like the European model where we force airlines to make customers pay for carbon sequestration or carbon credits or something like that, which is either extremely expensive or extremely fraudulent or both, but in any case makes aviation more expensive and less accessible to people, just makes it more exclusive? Or do we say, “Why don’t we solve both these problems at once, and just bring online enormous new supply of high quality, cheap gas and natural gas for the future liquefied natural gas powered supersonic aircraft?”

At the same time it just happens to be carbon-neutral, so you don’t have to worry about CO₂ emissions, it’s not polluting the atmosphere with new CO₂ from the crust, and at the same time, instead of Boeing producing 500 aircraft a year, Boeing and maybe a few more startups can be producing 10,000 aircraft per year to service this kind of massive explosion in demand driven by economic expansion. That is a sick vision, that is so cool, we should absolutely do this as quickly as we can.

I think whether or not Terraform plays a huge role in this process or not, and I’m certainly intending for it to be — currently we’re leading this process — the economics is inevitable that we’re going to switch over to synthetic fuel sooner or later, and when we do, it’s going to get really, really cheap because we’re running it off solar power and when it gets really, really cheap, we’re going to do amazing aviation and other energy applications, and increase manufacturing and maybe some little bit of geo-engineering on the side to keep things in check, increase water supply in dry areas and so on. Why wait until 2060? We could have this done in 2040 if we just apply ourselves the right way and find the right business model…

How does it work? Give the non-physicist overview of how Terraform works.

CH: Yeah, sure. So from a customer’s perspective on the outside, essentially what a Terraformer does is it allows you to build your own oil and gas well in your backyard, regardless of the fact that you don’t own a drill rig, and in fact you don’t live anywhere near where oil and gas occurs naturally, which is again pretty cool. But how does it work under the hood? Well, it consumes electricity and most of that electricity gets used locally.

Actually I should state the Terraformer itself sits in the solar array, and that’s to reduce the cost of transmission of electricity, which would be absolutely prohibitive in this case, and the electricity gets used to capture CO₂ from the air and to split water into hydrogen and oxygen. We throw the oxygen away like trees do, we take the hydrogen and we react that in a classical old school chemical reactor with the CO₂ to produce methane and water. Then we can separate the water out because it condenses at a much higher temperature than the methane, and we’re just left with methane plus a little bit of leftover CO₂ and hydrogen and a tiny bit of water vapor. That’s natural gas, right?
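The hydrogen-plus-CO₂ step Handmer describes is the classic Sabatier methanation reaction, CO₂ + 4H₂ → CH₄ + 2H₂O. A quick mass-balance sketch using approximate molar masses:

```python
# Mass balance for the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# Molar masses are approximate values in g/mol.
M = {"CO2": 44.01, "H2": 2.016, "CH4": 16.04, "H2O": 18.015}

reactants = M["CO2"] + 4 * M["H2"]       # 44.01 + 8.064 ≈ 52.07 g/mol
products = M["CH4"] + 2 * M["H2O"]       # 16.04 + 36.03 ≈ 52.07 g/mol
assert abs(reactants - products) < 0.01  # mass is conserved

# Per kilogram of methane produced, the plant must capture
# roughly 2.74 kg of CO2 from the air.
print(M["CO2"] / M["CH4"])  # ≈ 2.74
```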

Actually, when you get natural gas out of the ground, if you did have a drill rig and you did live in a place where natural gas occurs and you drill a hole in the ground, gas comes out. Well, now you’ve got to build a wellhead and a bunch of other stuff that’s actually really complicated, and you might have a blowout, and then what comes out of the ground is between 10 and 80% natural gas plus a bunch of other contaminants on top of that, which have to be removed before you can sell it.

We don’t have that problem. What we produce is the pure product. It’s really compellingly elegant, the way we do this. There’s no geology risk, and it’s plug-and-play: once you plug it in, it just generates a predictable amount of gas every day for however long the system lasts, which is most likely measured in decades.

In this case, you don’t have a battery capital cost; I presume it only runs when the sun’s out, right?

CH: Yeah, that’s absolutely correct. And I’ll say for anyone who’s considering doing a hardware tech startup, well, there is basically a recipe that we’ve stumbled upon for taking any existing industry and then applying it to solar power and getting the benefit of that extremely cheap power.

The first is you have to get the CapEx way, way down because your utilization is low, you’re only using your plant maybe 25% of the time, so you have to get the cost down by at least a factor of four. Then on top of that, you also have to make it compatible with the sun coming up and going down. So time variability, which is difficult, but not impossible. We have many processes that we can routinely throttle up and down in our everyday lives so you understand this intuitively, but if you can do that, and it sounds impossible, of course, “I just want a chemical reactor that’s 1/10 the size and 1/4 the cost and I can ramp it up and down”.

Well, the way you make this work is you just use more power. So you say, “Well, I don’t care about efficiency quite as much because my power is so cheap”, and that’s what makes it easy. But if you can do this, then you have —

You have to change that core assumption. Whereas almost every invention today is all about increasing the efficient use of power, and the whole point of solar is, “What if we assume power is basically infinite, but it’s bounded by time, then what would we do?”.

CH: It’s like cycles in your computer are basically free or on your cell phone or something…
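The utilization arithmetic in this exchange can be checked directly: spreading the same CapEx over a plant that runs only ~25% of the time (daylight hours) quadruples the capital cost per unit of output, hence Handmer’s “factor of four” target. A minimal sketch with illustrative numbers:

```python
# Capital cost per unit of lifetime output, for a plant that runs
# continuously vs. one that only runs during solar hours.
# All figures (CapEx, rate, lifetime) are illustrative.

def capex_per_unit_output(capex, nameplate_rate, utilization, years=20):
    """Spread the capital cost over everything the plant produces
    in its lifetime (nameplate rate x utilization x hours)."""
    lifetime_output = nameplate_rate * utilization * 8760 * years
    return capex / lifetime_output

baseline = capex_per_unit_output(capex=100e6, nameplate_rate=10.0, utilization=1.00)
solar = capex_per_unit_output(capex=100e6, nameplate_rate=10.0, utilization=0.25)

# Ratio ≈ 4: at 25% utilization the per-unit capital cost quadruples,
# so the plant itself must be roughly 4x cheaper to break even.
print(solar / baseline)
```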

Desalination seems like a potentially massive win here and very pertinent to the American West, for example. The idea is that if you assume energy is infinite, we’re not short of water on Earth, we’re short of water without salt.

CH: That’s right, yeah. I mean there are some places where it’d be relatively difficult to transport even fresh water from the ocean, but in California that’s not the case. California is at the end of the Colorado River, which is declining, and California of course has senior water rights, we take about 5 million acre feet of water per year.

So unlike Terraform, which is definitely developing new proprietary technology in-house, which is quite exciting, with solar desalination you don’t need any new technology. You just go and build a plant, essentially with stuff you can buy off the shelf. How much would it cost to build a plant able to substitute 100% of California’s water extraction from the Colorado River, essentially doubling Southern California’s water supply, while at the same time allowing you to fix the Salton Sea and also set up a massive light metals industry and a bunch of other things?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Tencent, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 June 2024)

Here are the articles for the week ending 16 June 2024:

1. Saying Goodbye: 30 Investing Lessons After 19% CAGR Over 7 Years – Eugene Ng

I had a near-death/paralysis accident over 10 years ago where I broke my neck. Thankfully, I survived it, but my neck still remains broken to this very day. Life is extremely precious, and I want to live my remaining life to the fullest, and positively impact as many people as I can…

…With a degree in economics and finance, and despite working in banking for over 11 years, I was ill-equipped from the outset to invest well. I decided to start from first principles, asking basic questions: What are stocks? They are part ownership stakes in businesses. Why do stock prices rise, and eventually by how much?

Eventually I came to realise that growth of revenues, profits and free cash flows matter the most over 5-10 years and beyond, not changes in valuation multiples. That’s why my favourite investing saying is where revenues, profits and free cash flows flow, the stock price eventually goes.

Could investing in this stock generate sufficient returns? Once you take the red pill, once the eyes see what truly matters, you can no longer un-see…

…Most investors are focused on not making errors of commission, or Type I errors: making a bad investment when you think it is a good one.

Instead, I am focused on making fewer errors of omission, or Type II errors: rejecting a good investment when I think it is a bad one. Because the maximum a loser can lose is theoretically capped at 100%, but the upside a missed winner can deliver is theoretically infinite…

…Ultimately, your investing strategy and style is unique to you. It must be comfortable to you, it must suit your personality and your strengths. Everyone’s investment portfolio is going to look different.

Most importantly, you must be able to sleep well at night. After some time, you will come to realise if your strategy is truly repeatable and scalable over the long-term…

…Investing in stocks is investing in businesses, and having some of the best CEOs running some of the best companies in the world with their employees working for you 24/7. When you view it that way, it changes your perspective in life…

…Wanted to share a personal story where we recently had a pair of olive-backed sunbirds building their hanging nest on our olive tree, at our balcony in our home in Singapore. We were delighted to welcome them to our home. It was an untidy nest, and our balcony floor was littered with fallen nest materials, but we didn’t mind.

Eggs have been laid, and the female sunbird has been incubating on and off during the day and full time at night over the last week. We are looking forward to seeing the eggs hatch in the coming week, hearing the chicks chirp for the first time, watching them get older and fledge, and then get ready to take flight and leave the nest.

It was amazing to see how timely and beautiful this was, as it reminded me deeply of the journey that I am going to embark on with a new beginning. 

2. A Revolution in Biology – Kasra

Our conventional picture of biology is that everything happens in a bottom-up manner: molecular mechanisms dictate the functions of cells, which dictate the functions of your organs and which ultimately control your body. What is the thing at the very bottom of this hierarchy—the foundation for everything else in life? The genome. Genes are considered the fundamental code of life, so when it comes to figuring out questions of how the body develops, or how to cure diseases or change specific biological traits, we tend to look there…

…That is, until Michael Levin (and many others) entered the scene. They came in and said: genes are great, and they do contain much of the necessary information for building our bodies. But they don’t contain all of it, and they are not always a useful level of abstraction for understanding how the body develops, and consequently they are not always the best way to intervene in biology (e.g. to regenerate damaged organs, or to cure diseases like cancer). If you’ve ever done any programming, you know that there are many levels of abstraction—higher-level and lower-level programming languages, higher-level and lower-level APIs—at which you can try to understand or manipulate the software that runs on your computer. Levin’s point is that genes are like machine code, and modern-day programmers never think about machine code—they think about higher-level software constructs like objects, modules, and applications. The bold claim embedded in his work—the real revolution here—is that higher levels of abstraction and control meaningfully exist in biology. And one of the ways in which this higher level of abstraction manifests is in something called the bioelectric network of the organism.

We usually think of neurons as the only cells in our body that produce intelligent behavior by communicating in large networks. Neurons are constantly communicating with each other in the form of electrical patterns on their membrane and neurotransmitters, which are chemicals that transfer messages between cells. But it turns out that cells throughout the body have the exact same building blocks for such communication. They do the same communication, but slower. Levin and company call this the bioelectric network, as distinguished from a neural network.

In the past few decades we’ve discovered all the ways in which bioelectric networks distributed through the body do the same kinds of things that brains do: store memories, solve problems, and guide development. To get a sense of the bioelectric network in action, we have to talk about a mind-blowing creature called the planarian. This little critter (about 2cm in length) is a developmental “genius” of sorts: it doesn’t age, it doesn’t get cancer, and it is extremely regenerative, capable of regenerating any part of its body that gets cut off, even if it’s cut up into more than 250 pieces…

…Imagine taking one of these worms and splitting it into two. You now have two half-worms, and each of those half-worms is tasked with rebuilding the rest of its body. There’s a crucial decision here that the cells have to make: what part of the body do we already have, and what part do we need to build? One of the half-worms needs to produce a tail, and the other half-worm needs to produce a head. But the cells are at the very middle of the body, extremely far (from a cell’s perspective) from both the head and the tail. How do the cells have any idea what they should generate?

The answer, at least in part, is that all along the body the cells of the worm have a gradient of “resting membrane potentials”, which is effectively a stable electrical state. The cells keep track of their “position” in the body in this way, and experiments have demonstrated that the cell’s electrical state relative to the rest of the body is what determines whether it will proliferate into a head or a tail…

…Levin’s team was able to induce the worm to generate two heads instead of one head, by putting it into a solution of drugs that blocked specific ion channels (which in turn altered the electrical state of the cells). They’ve also induced the worm to generate no heads at all, or to generate the head of a different worm species. All of these are living, functional worms, just with a very different body structure…

…Keep in mind a crucial point: in all these experiments, the genes of the worms are never edited. You get a wildly different functional worm with the same genes. And what’s even wilder is that some of these changes are enduring: without any further drugs or modifications, the two-headed worm produces offspring that are also two-headed, indefinitely…

…Levin’s lab and others have already demonstrated an astonishing level of control over development by modulating bioelectric networks. They’ve done things like getting frogs to develop extra limbs, and getting them to develop an eye in their gut, or an eye in their tail that they can actually see out of. The end goal that Levin dreams of is an “anatomical compiler” – a program which takes as input a specification for an arbitrary organ or body plan, and outputs the specific set of chemical and electrical signals needed to generate that organ. Imagine 3-d printing entire synthetic organs and organisms, except instead of having to specify all the micro-level details, you can just give a high-level description like “an extra eye at the tail.” This is Dall-E but for biology. And in the very long run, it could be the answer to virtually all of biomedicine, including traumatic injury, birth defects, degenerative disease, cancer, and aging.

3. The Investing Boom That’s Squeezing Some People Dry – Jason Zweig

The idea is that when you lock your money up for months or years, you’re less likely to panic in a downturn, enabling the managers to amass a portfolio that will pay off in the long run…

…That bumps up against a basic law of financial physics: Eliminating one risk creates another.

An investment that doesn’t trade may have some advantages, but once you buy it, how do you sell it? How deep a haircut, or discount from the reported price, will you take?

Many funds have so far been able to cash out investors at what seems like a fair price. Many haven’t…

…Highlands REIT, a private Chicago-based real-estate fund, is a more-extreme case. The company bought back about 19% of its stock in December at 14 cents a share. For the sellers, that was like getting a haircut with a lawn mower: Highlands’ annual report estimates net asset value at 32 cents per share as of Dec. 15, 2023.

Outsiders are offering an even harsher haircut. On May 20, MacKenzie Capital Management, an investment firm in Orinda, Calif., opened a mini-tender for Highlands’ stock at 4 cents a share, minus a $25 transfer fee. On Lodas, the latest sale was at 10 cents…
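To make the size of these haircuts concrete, here is a minimal sketch of the arithmetic. The per-share figures are from the excerpt above; the calculation itself is generic:

```python
# Haircut = the fraction of reported net asset value (NAV) you give up
# by selling below NAV.
nav_per_share = 0.32    # Highlands REIT's estimated NAV per share (Dec. 15, 2023)
buyback_price = 0.14    # December buyback price per share
tender_price = 0.04     # MacKenzie mini-tender price per share (before the $25 fee)

def haircut(sale_price: float, nav: float) -> float:
    """Fraction of reported net asset value lost by selling below NAV."""
    return 1 - sale_price / nav

print(f"Buyback haircut:     {haircut(buyback_price, nav_per_share):.0%}")  # 56%
print(f"Mini-tender haircut: {haircut(tender_price, nav_per_share):.0%}")   # 88%
```

A 56% haircut on the buyback, and an 88% haircut on the mini-tender, before the flat transfer fee makes small positions even worse.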

…Institutions can sell big blocks of their alternatives, like hedge funds or private equity, to what are called secondary funds at discounts that might run 10% to 30% below net asset value.

In many cases, you should be so lucky.

Often, if you can find a broker willing to buy your alternative investment, the commission can run up to or even exceed 5%. Your haircut could be as deep as 30% to 50%. Depending on the buyer, weeks may go by before you get paid.

Other electronic marketplaces besides Lodas, including Central Trade & Transfer and 1st Trade, also match buyers and sellers of alternatives—typically at gaping discounts to net asset value.

4. Book Summary Part 2: “Our Investing Strategy, who does the market smile upon” – Made In Japan

Right before he launched his fund, Hokkaido Takushoku Bank went bankrupt and was undergoing liquidation. He immediately decided to use that opportunity. He went to Sapporo to buy from the bank the shares of a specific company: Nitori, a company with almost zero liquidity at the time. Some readers may recognize the name today as the largest furniture retail chain in Japan, often compared to Ikea. The company is known for its value-for-money proposition, providing quality products at an affordable price point, and has been a huge success.

You might not believe this if you look at Nitori’s stock price today, but it was an unpopular company back then. According to Kiyohara-san, it was trading at 750 Yen per share at the time. One of the main reasons, it seems, was that the furniture market was in decline, making it an unattractive industry to invest in. His thesis was that the market was extremely fragmented: the largest furniture retailer, Ootsuka, only had a 5% market share. Nitori was the only vertically integrated manufacturer (the others were distributors), and he believed this could help it gain share as a cost-effective producer of home furnishings. Nitori was listed on the Sapporo Exchange, so no institutional investor would touch it (since it would be impossible to sell). However, when he spoke to IR, he picked up on a key insight. While the Hokkaido economy, which was their main market, was not doing well and they saw a decline in same-store sales in the region, the three stores open in Kanto were doing very well, providing a hint of Nitori’s true competitiveness.

And it’s funny because you can immediately tell he was built differently. After the research was done and the fund launched, he bought as much as he could from the failing bank, and at launch it became 25% of his NAV. The stock tripled in a year, and in five years it was a six-bagger. A year later it was a ten-bagger, at which point he sold out. If he had held it till now, the stock would have been a hundred-bagger. But by 2003 Nitori was starting to get more institutional coverage and attention, and he believed it was time to exit. He says, “When the party starts, that’s when we go home.”
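For a sense of the compounding those "bagger" figures imply, here is a generic sketch. The holding periods are taken from the excerpt (the roughly six years from purchase to the ten-bagger exit is inferred from "in 5 years... a year later"):

```python
# Implied annualized return (CAGR) of an N-bagger held for Y years:
# multiple ** (1 / years) - 1
def implied_cagr(multiple: float, years: float) -> float:
    """Annualized return implied by turning 1 unit into `multiple` units over `years`."""
    return multiple ** (1 / years) - 1

print(f"Six-bagger over 5 years: {implied_cagr(6, 5):.1%} a year")   # 43.1%
print(f"Ten-bagger over 6 years: {implied_cagr(10, 6):.1%} a year")  # 46.8%
```

Compounding at well over 40% a year, in an industry whose end market was shrinking the whole time.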

So here was the first lesson, which is that investing in an unpopular, shrinking market can still make you a lot of money. In fact, during the time he owned Nitori the market size halved. He also understood the opportunity to buy shares from distressed sellers, especially for stocks that are listed on some regional exchange that no one looks at…

…2007 Dec – 2009 Feb: “A sick patient getting hit by a truck, 3 times”

Just as the fund narrowly escaped its “matasaki”, it was followed by the 2008 crisis.

Whilst the K1J Fund generated incredible returns from its bet on REITs and real estate and successfully exited from these, he still owned a lot of cheap real-estate stocks in the fund. Three holdings filed for bankruptcy and one went through an Alternative Dispute Resolution (ADR). The worst part? He owned 45%, 35%, 10%, and 20% of their shares outstanding.

Needless to say, it was distressing and he lost weight.

The goal was no longer for him to generate returns in this period. It was simply to survive.

He never said this himself, but what follows is what you call an absolute shitshow. Or as he would put it, “like a sick patient getting hit by a truck 3 times”.

The fund’s top priority was to reduce its leveraged long and short positions to avoid a margin call.

But to add insult to injury, their prime broker Goldman decided to change its margin policy (from 50% to 30%) to save itself, which could have been fatal for the fund. Fortunately, Goldman eventually agreed to implement this only in steps, which helped the fund buy some time.

The issue is that in a crisis like this it’s not just one kind of risk that materializes; there are second-order and third-order effects which, in isolation, might have a low probability. I believe, however, that the odds of secondary and tertiary events, no matter how unlikely, will increase once the first ‘highly improbable event’ occurs. (You can also apply this to the Livedoor example.)

Although not a surprise, the clients (mainly pensions) that had entered in excitement when the fund was killing it in 2005 started redeeming, and the fund lost half of its clients.

This created a new risk, which forced him to reduce his longs, which were mainly in small, illiquid companies. Forced selling driven by client redemptions would, in effect, make you dig your own grave.

So how does he try to solve this problem? He asks these companies to buy back their shares.

From its peak in October 2005 to its trough in February 2009 the fund’s NAV was -72% and its AUM -89%.

This is when you realize most people won’t be able to replicate what he did. I wrote this in part 1. He decides to put almost all of his net worth in the fund to try and save it. He adds “Because that is the responsibility of the manager”. Like a captain being the last to leave a sinking ship, an honorable and brave decision.

I want to reflect here because this is not something most of us could do. It’s really easy to read this as a brave story and just say “wow, awesome”, but never really understand the extent of how hard it was. (This is called the empathy gap in psychology, where we underestimate how our state of mind affects our decisions in a given situation.) If your fund is already down heavily, you have clients threatening to leave or who have already left, your prime broker is changing the rules, and you’re being forced to exit your positions at ridiculous valuations, are you ready to risk going broke to save it? Remember, your morale at this point is probably at an all-time low. In a world where limited liability corporations are the norm (i.e. the damage to your personal wealth can be legally limited, an escape hatch most of us would use at the very worst moment), he decided to go all in.

Also don’t forget, he’s had to tell his wife he did just that! (which might’ve been the scariest part!). Apparently, her response to him telling her was “Didn’t you also say that last week?” lol.

But this begs the question: why did he do that? His confidence was far from crushed, and he was convinced that if he closed his shorts and got as long as possible, he would make a lot of money. Why? Because he knew a sudden decline will almost always result in a V-shaped recovery. His game was to just survive until then. That is SOME confidence he had.

What’s amazing is that he went to clients telling them it would be foolish to leave now, “the fund can probably double from here”.

In the end, from its trough through Feb 2018, his fund 12x-ed…
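A back-of-the-envelope check of these numbers (a generic sketch, using only the figures in the excerpt) shows why survival was the whole game:

```python
drawdown = 0.72                  # NAV fell 72% from the Oct 2005 peak to the Feb 2009 trough
trough_fraction = 1 - drawdown   # NAV at the trough, as a fraction of the peak

# The gain needed just to climb back to the old peak grows non-linearly
# with the drawdown: after -72%, you need roughly a 3.6x.
breakeven_multiple = 1 / trough_fraction
print(f"Recovery needed to break even: {breakeven_multiple:.2f}x")  # 3.57x

# The fund actually did 12x from the trough through Feb 2018
recovery_multiple = 12
vs_peak = trough_fraction * recovery_multiple
print(f"NAV vs the old peak after a 12x: {vs_peak:.2f}x")  # 3.36x
```

So a client who held on through the entire -72% drawdown still ended up with roughly 3.4x their peak-level NAV, which is exactly why he told them it would be foolish to leave at the bottom.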

…Shorts are the most dangerous in a bear market, so in this scenario his game was to maximize his long positions. Maintaining a short book means your prime broker will usually give you a hard time in these moments and possibly reduce your margin, which also limits your long exposure. The other issue is that when the market turns and your shorts also move up, this might force you to reduce your long positions (to cover). Understanding this helped him avoid a forced error of omission. Imagine having no choice but to sell your longs, which could have multiplied, but being forced to sell them after a small move up to cover your shorts…

…Lasertec (Circa 2020)

  • This was not a fundamental idea, though it did fit the typical target for his shorts: expensive-looking large-cap.
  • He simply saw an opportunity through the lens of Japan’s inherent tax rules.
  • The fourth largest shareholder was the widow of the founder who owned 4.24%.
  • So he thought, what happens if she passes away too?
  • Japan’s inheritance tax is the highest in the world, and her children will have to pay for it by selling shares.
  • In the end, this is really what happened.
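The mechanic behind the short can be sketched with simple arithmetic. The 4.24% stake is from the excerpt; the flat 55% rate (Japan's top marginal inheritance tax rate) and the assumption that the tax is covered entirely by selling shares are simplifications, since real liability depends on brackets, deductions, and the heirs' other assets:

```python
# Rough estimate of forced selling pressure when heirs must sell shares
# to cover inheritance tax. Simplification: the whole stake is taxed at a
# flat top rate and sold at the prevailing price; actual Japanese
# inheritance tax is progressive, with deductions.
stake = 0.0424     # the widow's stake, as a fraction of shares outstanding
tax_rate = 0.55    # Japan's top marginal inheritance tax rate (assumed flat here)

selling_pressure = stake * tax_rate
print(f"Potential supply hitting the market: {selling_pressure:.2%} of shares outstanding")  # 2.33%
```

For an expensive, crowded large-cap, a predictable overhang of even a couple of percent of shares outstanding can be a meaningful catalyst on the short side.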

This is an important theme for owner-operated businesses, in which inheritance can have an outsized impact on the stock price.

5. An Interview with AMD CEO Lisa Su About Solving Hard Problems – Ben Thompson and Lisa Su

What was your response in November 2022 when ChatGPT shows up?

LS: Well, it was really the crystallization of what AI is all about.

Obviously you’ve been in the graphics game for a long time, you’ve been thinking about high-performance computing, so the idea that GPUs would be important was not foreign to you. But were you surprised the extent to which it changed the perception of everyone else around you and what happened after that?

LS: We were very much on this path of GPUs for high-performance computing and AI. Actually, it was probably a very significant arc that we started, let’s call it back in the 2017 plus timeframe. We’ve always been in GPUs, but really focusing on-

What was it in 2017 that made you realize that, “Wait, we have these, we thought we bought ATI for gaming, suddenly, there’s this completely different application”?

LS: It was the next big opportunity, we knew it was the next big opportunity. It was something that Mark and I discussed, which was, by putting CPUs and GPUs together in systems and designing them together, we’re going to get a better answer and the first near-term applications were around super-computing. We were very focused on these large machines that would reside at national laboratories and deep research facilities and we knew that we could build these massively parallel GPU machines to do that. The AI portion, we always also thought about it as clearly a HPC plus AI play.

You said before that AI is the killer application for HPC.

LS: Yes.

But you will talk to people in HPC, they’re like, “Well, it’s a little bit different”, to what extent is that the same category versus adjacent categories?

LS: It’s adjacent but highly-related categories, and it all depends on the accuracy that you want in your calculations, whether you’re using the full accuracy or you want to use some of these other data formats. But I think the real key though, and the thing that really we had good foresight on is, because of our chiplet strategy, we could build a highly modular system that could be, let’s call it, an integrated CPU and GPU, or it could be just incredible GPU capability that people needed.

And so, the ChatGPT moment for me was the clarity around, now everybody knew what AI was for. Before, it was only the scientists and the engineers who thought about AI, now everybody could use AI. These models are not perfect, but they’re amazingly good, and with that, I think the clarity around how do we get more AI compute in people’s hands as soon as possible was clear. Because of the way we had built our design system, we could really have two flavors. We had HPC-only flavor, which is what we would call our MI300A and we had AI only flavor, which was the MI300X…

One of the things that does strike me about the contrast is, and one of Nvidia’s really brilliant moves was the acquisition of Mellanox and their portfolio in networking, and to the extent it matters to tie all these chips together, particularly for training.

In your Computex keynote, you talked about the new Ultra Accelerator Link and Ultra Ethernet Link standards, and this idea of bringing lots of companies together, kind of calling back to the Open Compute Project back in the day as far as data centers. Makes perfect sense, particularly given Nvidia’s proprietary solutions have the same high margins, we all know and love, as the rest of their products.

But I guess this is my question about your long-term run — do you think it’s fair to say that, from a theoretical Clayton Christensen perspective, because we’re early in AI, maybe it’s not a surprise, the more proprietary integrated solution is the belle of the ball in many respects? There’s a bit where, yes, being open and modular all makes sense, but maybe that’s not going to be good enough for a while.

LS: I would say it this way. When you look at what the market will look like five years from now, what I see is a world where you have multiple solutions. I’m not a believer in one-size-fits-all, and from that standpoint, the beauty of open and modular is that you are able to, I don’t want to use the word customize here because they may not all be custom, but you are able to tailor.

Customize in the broad sense.

LS: That’s right.

Tailor is a good word.

LS: Tailor is the right word — you are able to tailor the solutions for different workloads, and my belief is that there’s no one company who’s going to come up with every possible solution for every possible workload. So, I think we’re going to get there in different ways.

By the way, I am a big believer that these big GPUs that we’re going to build are going to continue to be the center of the universe for a while, and yes, you’re going to need the entire network system and reference system together. The point of what we’re doing is, all of those pieces are going to be in reference architectures going forward, so I think architecturally that’s going to be very important.

My only point is, there is no one size that’s going to fit all and so the modularity and the openness will allow the ecosystem to innovate in the places that they want to innovate. The solution that you want for hyperscaler 1 may not be the same as a solution you want for hyperscaler 2, or 3.

Where do you think the balance is going to be then, between there being a standard approach versus, “This is the Microsoft approach”, “This is the Meta approach”? There’s some commonality there, but it is actually fairly customized to their use cases and needs. Again, not next year, but in the long run.

LS: I think as you get out three, four or five years, I think you’re going to see more tailoring for different workloads, and what happens is, the algorithms are going to — right now, we’re going through a period of time where the algorithms are just changing so, so quickly. At some point, you’re going to get to the place where, “Hey, it’s a bit more stable, it’s a little bit more clear”, and at the types of volumes that we’re talking about, there is significant benefit you can get not just from a cost standpoint, but from a power standpoint. People talk about chip efficiency, system efficiency now being as important if not more important than performance, and for all of those reasons, I think you’re going to see multiple solutions…

How much inference do you see actually going back to the CPU?

LS: I think a good amount of inference will be done on the CPU, and even as you think about what we’re talking about is the very large models obviously need to be on GPUs, but how many companies can really afford to be on the largest of models? And so, you can see now already that for smaller models, they’re more fine-tuning for those kinds of things, the CPU is quite capable of it, and especially if you go to the edge.

Right. You noted on the last earnings call that the MI300, it’s been supply-constrained, your fastest ramp ever, but is maybe from the expectations of some investors, a little disappointing in the projections for the end of the year. How much do you feel that shift to being demand-constrained is about the 325 coming along, which you talked about this week, versus the fact that just generally Nvidia supply has gone up, as everyone’s trying to figure this stuff out? Yes, your long-term opportunity is being this sort of customized supplier — tailored supplier, sorry, is the word that we’re going for — versus, “Look, I don’t want to say picking up but just we need GPUs, we’ll buy them from anyone”. Where do you feel your demand curves are relative to the competition and the rapid progression of the space?

LS: Again, let me take a step back and make sure we frame the conversation. The demand for AI compute has been off the charts, I think nobody would have predicted this type of demand, and so when I say that there is tightness in the supply chain, that’s to be expected, because nobody expected that you would need this many GPUs in this timeframe. The fact is the semiconductor industry is really good at building capacity, and so that is really what we’ve seen. As we’ve started to forecast-

And so you feel it’s more a function of there’s just so much supply coming online?

LS: Absolutely, and that’s our job. Our job is to make it to a place where you’re not constrained by manufacturing capacity.

Really, for us, it is about ensuring that customers are really ramping their workloads and that is a lot of deep work, deep partnerships that we’re doing with our customers. So honestly, I feel really good about the opportunities here. We’ve been through this before where it’s very similar to what we saw when we did the initial data center server CPU ramps, which is our customers work very closely with us, they get their software optimized, and then they add new workloads, and add more volumes, and that’s what I would expect to happen here, too.

The difference in AI is that I think customers are willing to take more risk, because there’s a desire to get as much, as fast as possible.

Is there a challenge for you, because that desire to take more risks means they’re more accepting of say, high margins to get the leading GPUs or whatever it might be, or the GPU with the largest ecosystem, developer ecosystem?

LS: What I will say is I’m super happy with the progress we’ve made on software.

Fair enough.

LS: What we’re seeing is excellent out-of-box performance. The fact is things just run, the fact is that much of the developer ecosystem wants to move up the abstraction layer, because everybody wants choice.

And you feel you’re going to get to a stage where that move up the abstraction layer is a common layer across companies, as opposed to one company internally moving up the abstraction layer, so they can buy any CPU, but that doesn’t necessarily benefit you going into another company, or do you feel that’s going to be-

LS: I absolutely believe that it’ll be across the industry. Things like PyTorch, I think PyTorch is extremely widely adopted, OpenAI Triton, similar. These are larger industry things where frankly, part of the desire is it takes a long time to program down to the hardware. Everyone wants to innovate quickly, and so the abstraction layer is good from the standpoint of just rapid innovation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Meta Platforms, Microsoft, and Tencent. Holdings are subject to change at any time.