What We’re Reading (Week Ending 07 July 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 07 July 2024:

1. Etched is Making the Biggest Bet in AI – Etched

In 2022, we made a bet that transformers would take over the world.

We’ve spent the past two years building Sohu, the world’s first specialized chip (ASIC) for transformers (the “T” in ChatGPT).

By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either.

But for transformers, Sohu is the fastest chip of all time. It’s not even close.

With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs…

…By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling. We are living in the largest infrastructure buildout of all time.

Scaling the next 1,000x will be very expensive. The next-generation data centers will cost more than the GDP of a small nation. At the current pace, our hardware, our power grids, and pocketbooks can’t keep up…

…Santa Clara’s dirty little secret is that GPUs haven’t gotten better, they’ve gotten bigger. The compute (TFLOPS) per area of the chip has been nearly flat for four years…

…No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.

Suddenly, that’s changed:

  • Unprecedented Demand: Before ChatGPT, the market for transformer inference was ~$50M, and now it’s billions. All big tech companies use transformer models (OpenAI, Google, Amazon, Microsoft, Facebook, etc.).
  • Convergence on Architecture: AI models used to change a lot. But since GPT-2, state-of-the-art model architectures have remained nearly identical! OpenAI’s GPT-family, Google’s PaLM, Facebook’s LLaMa, and even Tesla FSD are all transformers…

…We believe in the hardware lottery: the models that win are the ones that can run the fastest and cheapest on hardware. Transformers are powerful, useful, and profitable enough to dominate every major AI compute market before alternatives are ready…

  • …As models scale from $1B to $10B to $100B training runs in the next few years, the risk of testing new architectures skyrockets. Instead of re-testing scaling laws and performance, time is better spent building features on top of transformers, such as multi-token prediction.
  • Today’s software stack is optimized for transformers. Every popular library (TensorRT-LLM, vLLM, Huggingface TGI, etc.) has special kernels for running transformer models on GPUs. Many features built on top of transformers aren’t easily supported in alternatives (ex. speculative decoding, tree search).
  • Tomorrow’s hardware stack will be optimized for transformers. NVIDIA’s GB200s have special support for transformers (TransformerEngine). ASICs like Sohu entering the market mark the point of no return. Transformer killers will need to run on GPUs faster than transformers run on Sohu. If that happens, we’ll build an ASIC for that too!…

…On GPUs and TPUs, software is a nightmare. Handling arbitrary CUDA and PyTorch code requires an incredibly complicated compiler. Third-party AI chips (AMD, Intel, AWS, etc.) have together spent billions on software to little avail.

But since Sohu only runs transformers, we only need to write software for transformers!

Most companies running open-source or internal models use a transformer-specific inference library like TensorRT-LLM, vLLM, or HuggingFace’s TGI. These frameworks are very rigid – while you can tweak model hyperparameters, changing the underlying model code is not really supported. But this is fine – since all transformer models are so similar (even text/image/video ones), tweaking the hyperparameters is all you really need.
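As an illustration of what that rigidity looks like in practice, here is a minimal sketch of offline inference with vLLM; the model name is just an example and the exact API details may differ between versions, but the point stands: you get to tweak sampling hyperparameters, not the transformer code underneath.

```python
# A minimal sketch of transformer inference with vLLM; the model name is
# illustrative and the API may differ slightly between versions.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-70b-hf")  # any supported transformer checkpoint

# The knobs you get to turn are sampling hyperparameters, not the model code.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Explain what an ASIC is in one sentence."], params)
print(outputs[0].outputs[0].text)
```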

2. Evolution of Databases in the World of AI Apps – Chips Ahoy Capital

Transactional Database vendors like MDB focus on storing and managing large volumes of transactional data. MDB also offers Keyword Search & rolled out Vector Search (albeit late vs competitors). Historically MDB Keyword Search has not been as performant as ESTC in use cases utilizing large data sets or complex search queries, & has less comprehensive Search features compared to ESTC…

…A vector database stores data as high-dimensional vectors rather than traditional rows and columns. These vectors represent items in a way that captures their semantic meaning, making it possible to find similar items based on proximity in vector space.

Real-World Example:

Imagine you have an online store with thousands of products. Each product can be converted into a vector that captures its attributes, like color, size, and category. When a customer views a product, the vector database can quickly find and recommend similar products by calculating the nearest vectors. This enables highly accurate and personalized recommendations.
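To make the "nearest vectors" idea concrete, here is a toy brute-force similarity search in Python; the embeddings are random stand-ins for real product vectors, and a real vector database would replace the linear scan with an approximate nearest-neighbour index.

```python
# Toy brute-force similarity search over product embeddings (random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
product_vectors = rng.normal(size=(1000, 64))   # 1,000 products, 64-dim embeddings
query = product_vectors[42]                     # the product the customer is viewing

# Cosine similarity between the viewed product and every product in the store
similarity = product_vectors @ query / (
    np.linalg.norm(product_vectors, axis=1) * np.linalg.norm(query)
)

# Five most similar products, skipping the viewed product itself (rank 0)
recommendations = np.argsort(-similarity)[1:6]
print(recommendations)
```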

In essence, a vector database helps in efficiently retrieving similar items, which is particularly useful in applications like recommendation systems & image recognition…

…RAG combines the strengths of Vector Search and generative AI models to provide more accurate and contextually relevant responses. Here’s how it works: 1) A user submits a query 2) the system converts the query into a vector and retrieves relevant documents or data from the vector database based on similarity 3) the retrieved documents are fed into a generative AI model (LLM), which generates a coherent and contextually enriched response using the provided data.
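Those three steps map almost one-to-one onto code. Below is a minimal sketch of the flow; `embed`, `vector_db.search`, and `llm.complete` are hypothetical stand-ins for whatever embedding model, vector database, and LLM a real system would plug in.

```python
# Hypothetical sketch of the three RAG steps described above.
def answer_with_rag(query: str, vector_db, llm, embed, top_k: int = 5) -> str:
    # 1) Convert the user's query into a vector.
    query_vector = embed(query)

    # 2) Retrieve the most similar documents from the vector database.
    documents = vector_db.search(query_vector, top_k=top_k)

    # 3) Feed the retrieved documents to the generative model as context.
    context = "\n\n".join(doc.text for doc in documents)
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
    return llm.complete(prompt)
```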

Multimodal models integrate multiple data types (text, images, audio) for comprehensive understanding and generation. It is crucial for vector databases to support multimodal data to enable more complex and nuanced AI applications. Postgres is a dominant open source vendor in the database market (scored #1 as most used Vector DB in a recent Retool AI survey) but on its own it does NOT seem to include native support for multi-modality in its Vector Search. This limits the use cases it can be applied to without using an extension or integration with other solutions…

…Simple AI Use Cases:

Similarity Search has been one of the first and most prominent use cases of GenAI. When a query is made, the database quickly retrieves items that are close in vector space to the query vector. This is especially useful in applications like recommendation engines & image recognition where finding similar items is crucial. These use cases have been in POC since last year, and are starting to move into production later this year.

Complex AI Use Cases:

Enter the Generative Feedback Loop! In a Generative Feedback Loop, the database is not only used for Retrieval of data (the main use case in Similarity Search), but it also provides Storage of Generated Data. The database in this case stores new data generated by the AI model if deemed valuable for future queries. This in my view changes the relationship that the AI Application has with a database, as it then has to store data back in. A key example of a Generative Feedback Loop is an Autonomous Agent…

…An AI autonomous agent and a database work together to perform complex tasks efficiently. The relationship between a database and an AI Agent at first seems similar to other use cases, where the database holds all necessary data and the AI Agent queries the database to retrieve relevant information needed to perform its tasks.

The key difference here is the Learning and Improvement aspect of AI Agents. Instead of just containing historical data, the database has been updated with new data from user interactions and agent activities. The AI Agent then uses this new data to refine its algorithms, improving its performance over time…

…A real life example could be an E-commerce Chatbot. The customer buys a product and leaves a review for that product. The database then updates the new purchase and feedback data, and the AI Agent learns from this feedback to improve future recommendations. In this scenario, the database is not just being queried for data, but it is storing data back from the interaction, the AI Agent is learning from this, creating what is referred to as a Generative Feedback Loop.

3. The Big Bad BREIT Post – Phil Bak

So here it is, our analysis of Blackstone’s Real Estate Income Trust. The data presented is as-of the original publication of June 2023. It should be noted that over the past year everything has played out as we warned, including the gating of Starwood’s SREIT. Last thing I’ll say: I’d have much preferred to be wrong…

…Given the vital role that “NAV” plays in fundraising and performance reporting, it’s surprising that a greater amount of transparency is not provided by sponsors into their valuation methodology. Remind me again why they don’t provide a comprehensive explanation for each input in the DCF model?  Contrary to popular assumption, NAV is not based on appraisals that utilize sales comparisons. Instead, it’s based on an opaque discounted cash flow (DCF) methodology that is based on assumptions that are at the discretion of the sponsor who realizes fee streams pegged to the asset values they assign.
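To see why that discretion matters, here is a toy DCF with made-up numbers (not BREIT's): nudging the assumed growth rate by 1.5 percentage points, while holding the discount rate fixed, lifts the computed value by roughly 60%.

```python
# Toy DCF with made-up inputs, to show how sensitive the output is to assumptions.
def dcf_value(cash_flow: float, growth: float, discount: float, years: int = 10) -> float:
    pv_cash_flows = sum(
        cash_flow * (1 + growth) ** t / (1 + discount) ** t for t in range(1, years + 1)
    )
    # Gordon-growth terminal value at the end of the explicit forecast period
    terminal = cash_flow * (1 + growth) ** (years + 1) / (discount - growth)
    return pv_cash_flows + terminal / (1 + discount) ** years

low = dcf_value(100, growth=0.030, discount=0.07)
high = dcf_value(100, growth=0.045, discount=0.07)
print(f"{high / low:.2f}x")  # ≈ 1.6x from a 1.5-point change in assumed growth
```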

BREIT’s self-reported performance is – by their own admission – “not reliable.” Why we didn’t take a closer look at it before is as much a mystery as how they compute it. Management can’t just pull numbers out of thin air, and they’ve done nothing illegal, but they have a lot of discretion on where they estimate share values to be.

According to their prospectus, Blackstone values the fund itself once a month; then once a year it brings in an outsider who prepares a valuation based on their direction. But in its March 28, 2023 prospectus amendment, BREIT removed the steps in bold.  (1) a third-party appraisal firm conducts appraisals and renders appraisal reports annually; (2) an independent valuation advisor reviews the appraisal reports for reasonableness; (3) the advisor (Blackstone) receives the appraisal reports and based in part on the most recent appraisals, renders an internal valuation to calculate NAV monthly; (4) the independent valuation advisor reviews and confirms the internal valuations prepared by the advisor. (5) BREIT will promptly disclose any changes to the identity or role of the independent valuation advisor in its reports publicly filed with the SEC.

The verbiage in their disclosures doesn’t suggest that their calculation will be better than relying on market prices. The highlighted portions seem to be saying that Blackstone uses baseless returns in their SEC filings. They are not using a methodology prescribed by the SEC or any regulatory body. They do not adhere to any accounting rules or standards. Nor is their monthly NAV calculation audited by an independent public accounting firm. Blackstone uses it solely to determine the price at which the fund will redeem and sell shares. The NAV also happens to dictate the fees they can earn…

…One of BREIT’s big selling points was the ability to get a dividend of around 4% when interest rates were near zero, but the fund cannot – and has never been able to – cover the dividend payment. The current Class S distribution of 3.74% and Class I yield of 4.6% aren’t fully earned based on a key REIT cash-flow measure: Available Funds from Operations (AFFO). AFFO is used to approximate the recurring free cash flow from an income producing real estate vehicle and calculate the dividend coverage.

Blackstone reports AFFO, but their reported number is janky. It omits the management fees they charge.  Their rationale is that they have not taken their fees in cash but instead converted their $4.6 billion in fees into I-Shares, which is a class of BREIT shares that has no sales cost load.  But their election to accept shares is optional, the shares they receive are fully earned and they can redeem their shares at stated NAV.  What’s more, they have redemption priority over other BREIT investors; there is no monthly or quarterly redemption limitation.  Blackstone has already redeemed $658 million in shares.

BREIT’s AFFO also omits recurring real estate maintenance capital expenditures and stockholder servicing fees which are part of the sales load. Computing an AFFO more consistent with public company peers would result in a payout ratio for the first half of 2023 of more than 250%.

BREIT, unlike most big public REITs, has only covered about 13% of their promised dividend distribution. There’s not a single year in which they could cover their payment if everybody elected to receive it. Since inception, the company has delivered $950 million in AFFO and declared $7.3 billion in distributions.  That’s a stunning 768% dividend payout ratio…
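That figure follows directly from the two numbers quoted (dollars in billions):

```python
# Payout ratio implied by the figures above (dollars in billions).
affo_since_inception = 0.95
distributions_declared = 7.3
print(f"{distributions_declared / affo_since_inception:.0%}")  # ≈ 768%
```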

…BREIT is levered approximately 49% against NAV and closer to 60% as measured against cost – the average cost of BREIT’s secured borrowings stands at approximately 5.5% before hedges, so the cost of their debt exceeds the yield. There are few ways you can turn these numbers into a double-digit return. Rents would have to go to the moon. The only way there can be positive leverage over a holding period (IRR) is if there is a shedload of positive income growth. And that’s exactly what BREIT has baked into the valuation cake. Interest rates went up so the NPV should be way down but – in a fabulous coincidence – future cash flow expectations went up by just enough to offset it. The numerator, where revenue growth shows up, made up for the rise in rates in the denominator…

…Here’s the BREIT Story in a nutshell: They’ve reported an annual return since inception for its Class S investors north of 10% with real estate investments that have a gross current rate of return of less than 5% on their cost.  They’ve been buying assets at a 4% cap rate, paying a 4.5% dividend and reporting 10+% returns. And nobody has called bullshit…

…By taking BREIT’s current NOI and dividing it by the NAV, investors can compute the implied cap rate on BREIT’s portfolio as they are valuing it – and compare it with public REITs. Interest rates have moved 200-300 basis points in recent months, and in public markets elevated cap rates have driven a 25% decline in values. A recent analysis of two vehicles in the non-traded REIT space concluded that both funds are being valued at implied cap rates of approximately 4.0%, when publicly traded REITs with a similar property sector and geographic mix are trading at an implied cap rate closer to 5.75%. Applying that 5.75% cap rate to BREIT would result in a reduction in shareholder NAV of more than 50%. The current valuation of roughly $14.68/share should be closer to $7-8/share.
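A rough sketch of that repricing math: implied asset value moves inversely with the cap rate, and leverage concentrates the decline in the equity (NAV). The 4.0% and 5.75% cap rates come from the passage; treating the ~49% leverage as debt over gross asset value is our own simplifying assumption, so the output is illustrative rather than a restatement of the author's model.

```python
# Illustrative cap-rate repricing with leverage; not BREIT's actual balance sheet.
noi = 4.0                              # arbitrary scale; only the ratios matter
assets_at_4_cap = noi / 0.040          # = 100.0, value implied by a 4.0% cap rate
assets_at_575_cap = noi / 0.0575       # ≈ 69.6, a ~30% drop in gross asset value

debt = 0.49 * assets_at_4_cap          # assumption: ~49% leverage on gross assets
nav_before = assets_at_4_cap - debt    # = 51.0
nav_after = assets_at_575_cap - debt   # ≈ 20.6

print(f"NAV reduction ≈ {1 - nav_after / nav_before:.0%}")  # ≈ 60%, i.e. "more than 50%"
```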

4. Grant Mitchell — The Potential of AI Drug Repurposing – Jim O’Shaughnessy and Grant Mitchell

[Grant:] I was leading teams that were really pioneering the use of large medical record databases to identify subpopulations where a drug might perform better, might be higher in efficacy or better in safety. And we realized that that’s really, in a way, it’s kind of drug repurposing. It’s taking a drug that already exists and finding a population where it works a little bit better.

And as David was working in the lab and I was working in the data, we kind of came together and we said, “Can we automate what we’ve done? Can we scale what we’ve done in just one disease?” And given the explosion in the amount of data that exists out there and the improvements in the way that we can harmonize and integrate the data into one place, and then the models that have been built to analyze that data, we thought that maybe it would be possible. And we would check in every few years. 2016, 2017, it wasn’t really possible. We had this dream for a long time. 2018, 2019 is probably when I was talking to you and I was thinking about can we do this?

And really, lately it’s become possible, especially with, like I said before, more data, structured better. You have models like these large language models that are able to digest all of medical literature, output it in a structured fashion, compile it into a biomedical knowledge graph, these really interesting ways to display and analyze this kind of data. And ultimately, that’s how Every Cure was formed, was the concept that the drugs that we have are not fully utilized to treat every disease that they possibly can, and we can utilize artificial intelligence to unlock their life-saving potential.

Jim: Just so incredibly impressive. And a million questions spring to mind. As you know, my oldest sister, Lail, died of lupus. And when you said the cytokine storm, she had a kind of similar thing where she would go into remission, and then there’d be a massive attack, and it wasn’t like clockwork like your colleague’s, but when she died in 1971, it was like nobody knew very much at all about the disease. And in this case, did you find that the cure that worked for your colleague, was that transferable to other people with this similar disease?

Grant: Yeah, so the cure that worked for him, we studied his blood, we sampled his lymph nodes, we did immunohistochemistry and flow cytometry and basically found that their cytokines were elevated, another molecule called VEGF was elevated, there’s T cell activation. This all pointed towards something called the mTOR pathway. And started looking at different drugs that would hit that pathway, settled on a drug called Sirolimus. Sirolimus has been around for decades. It’s actually isolated from a fungus found in the soil on Easter Island. It’s amazing, right? And it shuts down the overactivation of this pathway that leads to this cascade that causes this whole cytokine storm.

For David it works perfectly, and it also works for about a third of the other patients that have a disease like David’s. And so that’s resulted in the benefit to countless thousands and thousands of patients’ lives. It’s a pretty thrilling and satisfying and motivating thing to be able to figure something like that out and to be able to do it, and having the opportunity to do it more and at scale and to save potentially millions of lives is a huge motivation for my team…

…[Grant:] So we couldn’t quite piece it together, and it was really an aha moment that this should be designed as a nonprofit, and it should be an AI company, because if you want to build the world’s best AI platform for drug repurposing, you’re going to need the world’s best dataset to train it, and you’re not going to get your hands on all the data that you want to get your hands on if you’re a competitor to all these people that are trying to use this data.

So we’re collaborative. We’re non-competitive. We are not profit-seeking. Our primary goal is to relieve patient suffering and save patient lives. So I’ll get to your question about how we’re utilizing that kind of resiliency data that I mentioned before. But first I’m going to help you understand how we use it. I’m going to describe the kind of data set that we’re constructing, and it’s something called a biomedical knowledge graph. It’s well known in the areas and the fields that we’re in, but maybe not a commonly known term to the layman, but it’s effectively a representation in 3D vector space of all of the biomedical knowledge we have as humanity, every drug, every target, every protein, every gene, every pathway, cell type, organ system, et cetera, and how they relate to different phenotypes, symptoms, and diseases.

And so every one of those biomedical concepts that I just described would be represented as a node, and then for every relationship that that concept has with another concept, like a drug treats a disease, there would be an edge. They call it a semantic triple. Drug, treats, disease. So you’ve got a node, an edge, and a node. And imagine a graph of every known signaling molecule and protein and concept you can imagine, tens of millions of nodes, even more edges, representing all of human knowledge in biology. And that’s what multiple people have constructed. Actually, NIH funded a program called the NCATS Translator Program where a number of these knowledge graphs have been constructed. Other groups are doing it. A lot of private companies have their own. We are compiling them and integrating them with an integration layer that kind of takes the best from the top public ones, and then layers in additional proprietary data that we get from other organizations or data that we generate on our own.

And the example that you just mentioned, a company that is working on tracking genetic diseases and groups of people with the same genetic disease and looking at subpopulations within that group where there might be some resilience to the mutation, and then studying their genome to say, “Okay, what other proteins are being transcribed that might be protective against this mutation?”, and then going out and designing drugs that might mimic that protection. Well, how’s that data going to fit into my knowledge graph? Well, you can imagine that now if I have the data set that they’re working with, I know that there’s a mutation that results in a disease. So a gene associated with disease, that’s a node, an edge, and a node. And I also know that this other protein is protective of that disease.

So that’s just information that goes into the graph. And the more truth that I put into that graph, the more I can train that graph to identify patterns of successful examples of a drug working for a disease, and then it can try and find that pattern elsewhere, where it either identifies nodes and edges that should already be connected or are connected in our knowledge base but no one has actually acted on, or it can maybe even generate a hypothesis on a totally new edge that is novel and has never been considered by experts before. So to answer your question again: we’re not doing that work ourselves, but we integrate the knowledge from that work so it can train our models and so we can pursue drug repurposing ideas…
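To make the node-edge-node structure concrete, here is a minimal sketch in Python using networkx; the triples are illustrative, loosely inspired by the Sirolimus example earlier in the interview, and “disease X” is a placeholder rather than a diagnosis named in the text.

```python
# Minimal knowledge-graph sketch: nodes are biomedical concepts, edges are relations.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("Sirolimus", "mTOR pathway", relation="inhibits")
kg.add_edge("mTOR pathway", "cytokine storm", relation="drives")
kg.add_edge("VEGF", "disease X", relation="elevated_in")
kg.add_edge("Sirolimus", "disease X", relation="treats")  # the kind of edge models try to predict

# Every (node, edge, node) semantic triple in the graph
for head, tail, data in kg.edges(data=True):
    print(head, data["relation"], tail)
```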

…[Grant:] We’re not designing novel compounds. We think that there’s so much low-hanging fruit with the 3000 drugs that already exist that we are going to spend years and years unlocking the life-saving potential of those. And the reason why we’re focused there is because that is the fastest way to save human lives. If you develop a novel compound, you have to go all the way through the entire clinical development and approval process: IND, phase one, phase two, phase three trials. This takes years and years and hundreds of millions of dollars, whereas in certain scenarios in drug repurposing, just like with my co-founder David, within weeks of us coming up with the hypothesis that this drug might work for him, as long as we could find a physician that would prescribe it to him, it went directly into his human body just weeks later.

So that brings me to this issue that I think we’re going to see, and you as an investor might make yourself aware of, is that there’s going to be lots and lots of failures in the world of AI-driven drug discovery. And that’s because not only are you an AI company that’s generating hypotheses, you’re also a biotech company that has to validate a novel compound and bring it all the way through the clinic through clinical trials and through regulatory approvals and into patients. So here you are an AI company, you’ve hired up your team of 50 data scientists and experts, and you come up with your hypothesis and you say, “Okay, great.”

You’re not Amazon that gets to A/B test where they’re going to put a button on the user interface and then they get feedback by the end of the day and okay, move the button here instead of here. When you come up with your hypothesis, after your AI team says, “Okay, this is the drug we’re going to move forward with,” you now have to go through potentially 10 years and hundreds of millions of dollars of additional development. So you don’t know if your AI team built anything of value. You don’t have that validation feedback loop that you do in other AI consumer-based organizations. So now you’re juggling sustaining an AI corporation that doesn’t have a feedback loop while you have to also pay for the clinical development of a drug. And so it’s a tension that’s hard, hard to manage.

And drug repurposing solves that tension. It allows us to go from hypothesis to validation in a much tighter feedback loop. So what we’re doing is something that both helps patients in the fastest and cheapest way possible, but also, the happy accident is that we push forward the field of data-driven drug discovery because we can inform our models in a faster feedback loop…

…[Grant:] One thing I learned when I was at Quantum Black and at McKinsey – where we would go up against other machine learning organizations – is this: I remember one time they put us head to head with another group and they said, “Okay, whoever comes up with the best insights in the next three months, we’re going to pick to go with a longer contract going forward.” Two seemingly similar teams were working on the same dataset. We came up with totally different recommendations than the other team did, and the actual differentiator between the teams was that we had five medical degrees on our team, not just a bunch of data scientists, but data scientists plus medical experts. And at every step of the way that you’re building these knowledge graphs and designing these algorithms, you’re interfacing with medical expertise to make sure you imbue it with clinical understanding, with biological rationale of how this is actually going to work and how to interpret the typically really messy medical data.

And so if you think about the matrix that we’re producing, this heat map of 3000 drugs cross-referenced with 22,000 diseases creates 66 million possibilities, and we then score those possibilities from zero to one, and normalize them across the whole landscape. That’s a tricky thing to do – drug A for disease X compared to drug B for disease Y, how do you compare the possibilities of each of those from zero to one? So we create that normalized score, and then we start looking at the highest scores and then filter down from there to say, “Okay, of all the highest probability of success opportunities here, which ones are going to impact patients the most, and which ones can we prove out quickly and efficiently in a low-cost trial with a few patients and high signal, so we can do this in three to six to 12 months instead of five-year trial times?”

And the thing to think about, back to the comment about how we need medical expertise highly integrated with what we’re doing, is that even if you take the top thousand scores there, you’re still in the 0.001% of the highest-ranking scores, and now you’ve got to pick amongst your thousand to get down to the top five. To get down to the top one: what is my first shot on goal going to be? That better be successful for all the things that I’m working on here, and it better help patients and really better work. So the AI can’t do that. You need a really smart head of translational science to make that last sort of decision of what’s going to go into patients and how it’s all going to work…
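A toy version of the scoring-and-filtering workflow Grant describes, with random numbers standing in for the model’s actual predictions, could look like this:

```python
# Toy drug-by-disease score matrix: score, normalize to [0, 1], keep the top pairs.
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_diseases = 3_000, 22_000
scores = rng.random((n_drugs, n_diseases), dtype=np.float32)   # 66 million pairs

# Normalize across the whole landscape so every pair is on the same 0-to-1 scale
scores = (scores - scores.min()) / (scores.max() - scores.min())

# Keep the top 1,000 pairs (a tiny sliver of the 66 million possibilities)
flat = scores.ravel()
top = np.argpartition(flat, -1_000)[-1_000:]
top = top[np.argsort(-flat[top])]                              # best-first
drug_idx, disease_idx = np.unravel_index(top, scores.shape)

for d, dz in zip(drug_idx[:5], disease_idx[:5]):
    print(f"drug {d} -> disease {dz}: score {scores[d, dz]:.4f}")
```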

… [Grant:] we’re a nonprofit because we want to build the world’s best AI platform and we need the best data set to do it, to save as many lives as we possibly can with drugs that already exist. So since the drugs already exist, it’s kind of a funny thing. I say we’re the smallest and the biggest pharma company in the world. We’re the biggest because every single drug that already exists is in our pipeline. We’re the smallest because we don’t own any of them. And then we take those drugs and we go after diseases that are totally neglected by the pharmaceutical industry. So by design it has to be a nonprofit.

5. How Bull Markets Work – Ben Carlson

Halfway through the year, the S&P 500 was up 15.3%, including dividends.

Despite these impressive gains the bull market has been relatively boring this year.

There have been just 14 trading days with gains of 1% or more. There has been just a single 2% up day in 2024. And there have been only 7 days down 1% or worse.

Small moves in both directions.
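Counts like those are simple to reproduce from a series of daily closes. A quick sketch with pandas, using a synthetic price series rather than actual S&P 500 data:

```python
# Counting big up/down days from daily closes (synthetic data, not the S&P 500).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
closes = pd.Series(100 * np.cumprod(1 + rng.normal(0.0005, 0.008, size=125)))
returns = closes.pct_change().dropna()

print("Days up 1% or more:  ", (returns >= 0.01).sum())
print("Days up 2% or more:  ", (returns >= 0.02).sum())
print("Days down 1% or more:", (returns <= -0.01).sum())
```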

Bull markets are typically boring like this. Uptrends tend to be these slow, methodical moves higher. Bull markets don’t make for good headlines because they’re made up of gradual improvements.

Bear markets, on the other hand, are where the excitement happens. Downtrends are full of both big down days and big up days…

…The best and worst days happen at the same time because volatility clusters. Volatility clusters because investors overreact to the upside and the downside when emotions are high…

…It’s also interesting to note that even though the S&P 500 is having a boring year, it doesn’t mean every stock in the index is having a similar experience.

While the S&P is up more than 15% there are 134 stocks down 5% or worse while 85 stocks are down 10% or more so far this year.

Stock market returns are concentrated in the big names this year, but it’s normal for many stocks to go down in a given year.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 30 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 30 June 2024:

1. An Interview with Scale AI CEO Alex Wang About the Data Pillar for AI – Ben Thompson and Alex Wang

When you saw that there was going to be a third pillar, yet no one was there, did you have any particular insights on how that would work, or was it just a matter of, “There’s a problem space that needs solving, and we’ll figure out how to solve it in the future”?

AW: Yeah. Probably the most formative, immediate experience was that I was training a neural network at this time on a single GPU in Google Cloud and using TensorFlow, and it was a neural network that detected emotion based on a photo of someone’s face, and all I did basically was I took the tutorial for ImageNet, so basically literally the tutorial code for a very different image recognition algorithm, and then I just swapped out the data set and then pressed “Enter”. Then 12 hours later, I had a neural network that smashed any of the other methods on this problem of recognizing emotion from images.

So the takeaway there is actually, data is what matters most.

AW: Yeah. From problem to problem, data is the only thing that varies, is maybe the better way to put it, and as a programmer, you kind of realize, “Oh, actually data is what’s doing all the actual programming and my insight into the problem doesn’t actually matter, it’s just all embedded in the data set that the model ends up getting trained on”.

So I think, A) I knew that data was very important. I remember this realization, the model ended at some performance, and I was like, “Okay, I’ve got to make this model better,” and so then I was like, “Okay, how am I going to improve on this data set?”, and then there was the second light bulb, which is that this is an incredibly painful process. You open up all the images and then you go through and you just look at, “Okay, are the labels for all the images correct?”, and then you’re like, “Okay, what new images should I get to pull into this?”, and then, “How am I going to get those labeled?”, and so all of the core operations, so to speak, of updating or changing or improving the data set were incredibly painful.

So I started the company in 2016, and this was an era where there was a broad-based recognition that platforms, particularly developer platforms that made very ugly things very easy, were good businesses. It was already clear that AWS was ridiculously successful as a business, the most successful enterprise business that had ever existed, and then Stripe, it was also clearly recognized that Stripe was very successful, and so as a student of those companies I realized that, “Hey, we should take this incredibly messy and complicated thing that exists today, and then figure out how to turn that into a beautiful developer UX and if we can accomplish that, then there’s a lot of value to be had here”.

There’s a lot to unpack there. Just as a broader philosophical point, do you think that insight about data still holds? So it’s not just that there are three pillars, compute, algorithm, and data, but actually data is the most important, and just like you saw before, is it more complicated now, or is it even more the case?

AW: Yeah, I think it’s proving to be more and more the case. I was at an event with a lot of other AI CEOs recently, and one of the dinner conversations was, “Okay, compute, power, data: which do you run out of first?”, and the consensus answer around the room was data, and I think the data wall has become, over the past few months, a pretty commonly debated topic. “Are we hitting a data wall in LLM development, or are we just fundamentally coming up against the limits of data?” Even under the most liberal assumptions – let’s assume that you really did train on all human-generated text, which no sensible person does because you filter out all the bullshit – even then we will run out by 2027, 2028.

So just overall in terms of the sheer amount of data that’s necessary to keep up with scaling, we’re very clearly hitting some meaningful wall, and then if you look at a lot of the model performance improvements as of late, or sort of the big gains in models, I personally think a lot of that actually boils down to data, and innovations on how to use data, and innovations on basically the data-intensive parts of the AI stack…

How have the needs of the market shifted then? You mentioned that you were getting at this before and I interrupted. You start out with images for self-driving cars, today it’s all about these text-based models. What is entailed in going from images to text?

AW: We had an interesting mid-step here, which is, broadly speaking, I think the shift as the models have increased in intelligence is towards greater levels of expertise. But basically, we started with autonomous vehicles and then starting about 2020 we actually started working with the government, the US government, and this was driven because I grew up in Los Alamos and realized that AI is likely a very important technology for our security.

We can do a side bit here, you wrote a very interesting piece on Substack in 2022, The AI War and How to Win It. Give me your thesis here and why you think it’s a big deal.

AW: Yeah, I think that the basic gist is first, if you look at the long arc of human history, it is punctuated by war. In some sense, human history is all about war, and then if you look at the history of war, then the history of war in some sense is all about technology. If you look at particularly the transitions from World War I to World War II to future wars, the Gulf War for example, the most significant bit so to speak, or the largest factor in how these wars end up playing out really, is access to technology. Obviously this is deeply tied to my upbringing – I grew up in Los Alamos, and basically every year you have a multi-day history lesson on Los Alamos National Lab and the origins thereof.

So then you think about, “Okay, what are the relevant technologies today that are being built?”, and there’s a host of technologies I think are important, hypersonic missiles, space technology, et cetera. But AI is, you could very easily make the case, that it is the most important. If you could solve problem solving, then all of a sudden you have this incredibly powerful advantage.

If you believe that AI is really important for hard power, for American hard power, which is very important for I think ensuring that our way of life continues, then the most shocking thing for me was going through and looking at the things that the CCP [Chinese Communist Party] were saying about AI, and there are CCP officials who have very literally said, “We believe that AI is our opportunity to become the military superpower of the world”. Roughly speaking, they said, “Hey, the Americans are not going to invest enough into AI, and so we’ll disrupt them by investing more into AI proportionally, and if we do so, even though we spend a lot less on our military, we will leapfrog them in capability”. As a startup person, I think this is the core Innovator’s Dilemma or the core disruptive thesis: the CCP had basically a disruptive thesis on war powered by artificial intelligence.

This is basically the idea that you’re going to have these autonomous vehicles, drones, whatever, of all types controlled by AI, versus the US having these very sophisticated but operated by humans sort of systems, and the US will fall into the trap of seeking to augment those systems instead of starting from scratch with the assumption of fully disposable hardware.

AW: Yeah, I think there is at its core two main theses. One is perfect surveillance and intelligence in the sort of CIA form of intelligence, and this I think is not that hard to believe. Obviously, in China, they implemented cross-country facial recognition software as their first killer AI app, and it doesn’t take that much to think, “Okay, if you have that, then just extend the line and you have more or less full information about what’s happening in the world” and so that I think is not too hard to imagine.

Then the hot war scenario is, to your point, yeah, autonomous drone swarms on land, air, or sea that are able to coordinate perfectly and outperform any human.

I think when people hear AI, they think about the generative AI, LLMs, OpenAI, whatever it might be, and assume that’s a US company, Google’s a US company, et cetera, and so the US is ahead. This is obviously thinking about AI more broadly as an autonomous operator. Is the US ahead or what’s your perception there?

AW: I think that on a pure technology basis, yes, the US is ahead. China’s caught up very quickly. There’s two very good open source models from China. One is YiLarge, which is the model from Kai-Fu Lee‘s company, 01.ai. And then the other one is Qwen 2, which is out of Alibaba and these are two of the best open source models in the world and they’re actually pretty good.

Do they use Scale AI data?

AW: No, we don’t serve any Chinese companies for basically the same reasons that we’re working with the US military. YiLarge is basically a GPT-4 level model that they open-sourced and actually performs pretty well, so I think that on the technology plane, I think the US is ahead and by default I think the US will be maintaining a lead.

There’s an issue which Leopold Aschenbrenner recently called a lot of attention to, which is lab security. So we have a lead, but it doesn’t matter if, it can all be espionaged away basically and there’s this case recently of this engineer from Google, Linwei Ding who stole the secrets of TPU v6 and all these other secrets.

And wasn’t discovered for six months.

AW: Yeah, it wasn’t discovered for six months and also the way he did it was that he copy-pasted the code into Apple Notes and then exported to a PDF, and that was able to circumvent all the security controls.

So how does this tie into this middle stage for you of starting to sign government contracts? What were those about?

AW: Yeah, so I basically realized, and the punchline of what I was going through was that the United States was, by default, going to be bad at integrating AI into national security and into the military and a lot of this is driven by, for a while — this is less true now, but for a while — tech companies actively did not want to help the DOD and did not actively want to help US military capabilities based on ideology and whatnot, and even now the DOD and the US government are not really that great at being innovative and have a lot of bureaucracy that prevent this. So I decided basically like, “Hey, Scale, we’re an AI company, we should help the US government”.

We started helping them and we started working with them on all of their data problems that they needed to train specialized image detectors or specialized image detection algorithms for their various use cases, and this was the first foray into an area that required a lot of expertise to be able to do effectively, because at its core, the US government has a lot of data types and a lot of data that are very, very specialized. These are specialized sensors that they pay for, they’re looking at things that generally speaking the general population doesn’t care about, but they care a lot about — movement of foreign troops and the kinds of things that you might imagine military cares about — and so required data that was reflective of all of the tradecraft and nuance and capabilities that were necessary, so this was one of the first areas.

We actually have a facility in St. Louis, which has people who are by and large trained to understand all this military data to do this labeling.

So this was a clear separation then from your worldwide workforce?

AW: Yeah, exactly. It was a clear break in the sense that we went from doing problems that almost anyone in the world could, with enough effort, do effectively and do well – almost like the Uber driver, a very broad marketplace view – to something that required niche expertise and niche capability to do extremely well.

This sort of phase transition of data — there’s sort of a realization for us that, “Oh, actually in the limit almost all of the data labeling, almost all the data annotation is going to be in the specialized form”, because the arc of the technology is, first we’re going to build up all this generalized capability, and this will be the initial phase building of all these general capability, but then all the economic value is going to come from specializing it into all these individual specific use cases and industries and capabilities and it flowing into all the niches of the economy…

So where does synthetic data come into this?

AW: Yeah, synthetic is super fascinating. So I think that this has become super popular because we’re hitting a data wall; in some ways the most seductive answer to the data wall is, “Oh, we’ll just generate data to blow past the data wall”, generate data synthetically using models themselves. I think the basic results are that, at a very high level, synthetic data is useful, but it has a pretty clear ceiling because at its core you’re using one model to produce data for another model, so it’s hard to blow past the ceiling of your original model at a very fundamental level.

It’s a compressed version of what went into the original model.

AW: Yeah, exactly. It’s a very good way to compress insight from one model to get to another model, but it’s not a way to push the frontier of AI, so to speak…

So basically this is huge problem everyone is running into, it’s incredibly hard to solve and so someone is going to need to solve it and you’ve been working on it for eight to ten years or however long it’s been. The thesis seems pretty fairly straightforward, even if the margins are not necessarily going to be Nvidia-style margins, given that you have to use hundreds of thousands of humans to do that.

AW: Yeah and I think the other key nuance here, the other interesting thing, is today our revenue is 1% of Nvidia’s because, by and large, the budgets are mostly allocated towards compute. I think as with any portfolio optimization problem, in time, if data is actually the biggest problem, the percent of budgets that are allocated to data versus compute will slowly shift over time. So we don’t have to be half the budgets, even if we get to 5% of the budgets or 10% of the budgets versus 1% of the budgets, then there’s a pretty incredible growth story for data.

2. My Stock Valuation Manifesto – Vishal Khandelwal

1. I must remember that all valuation is biased. I will reach the valuation stage after analyzing a company for a few days or weeks, and by that time I’ll already be in love with my idea. Plus, I wouldn’t want my research effort to go to waste (commitment and consistency). So, I will start justifying valuation numbers.

2. I must remember that no valuation is dependable because all valuation is wrong, especially when it is precise (like target price of Rs 1001 or Rs 857). In fact, precision is the last thing I must look at in valuation. It must be an approximate number, though based on facts and analysis.

3. I must know that any valuation method that goes beyond simple arithmetic can be safely avoided. If I need more than four or five variables or calculations, I must avoid that valuation method…

…10. I must remember that good quality businesses often don’t stay at good value for a long time, especially when I don’t already own them. I must prepare in advance to identify such businesses (by maintaining a watchlist) and buy them when I see them priced at or near fair values without bothering whether the value will become fairer (often, they do).

11. I must remember that good quality businesses sometimes stay priced at or near fair value after I’ve already bought them, and sometimes for an extended period of time. In such times, it’s important for me to remain focused on the underlying business value than the stock price. If the value keeps rising, I must be patient with the price even if I need to wait for a few years (yes, years!)…

…13. Ultimately, it’s not how sophisticated I am in my valuation model, but how well I know the business and how well I can assess its competitive advantage. If I wish to be sensible in my investing, I must know that most things cannot be modeled mathematically but have more to do with my own experience in understanding businesses.

14. When it comes to bad businesses, I must know that it is a bad investment however attractive the valuation may seem. I love how Charlie Munger explains that – “a piece of turd in a bowl of raisins is still a piece of turd”…and…“there is no greater fool than yourself, and you are the easiest person to fool.”

3. I Will F****** Piledrive You If You Mention AI Again – Nikhil Suresh

I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders…

…Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your business’s software supply chain. Your managed security provider is probably using some algorithms baked up in a lab to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtle neck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught…

…Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f****** catastrophe…

…A friend of mine was invited by a FAANG organization to visit the U.S a few years ago. Many of the talks were technical demos of impressive artificial intelligence products. Being a software engineer, he got to spend a little bit of time backstage with the developers, whereupon they revealed that most of the demos were faked. The products didn’t work. They just hadn’t solved some minor issues, such as actually predicting the thing that they’re supposed to predict. Didn’t stop them spouting absolute gibberish to a breathless audience for an hour though! I blame not the engineers, who probably tried to actually get the damn thing to work, but the lying blowhards who insisted that they must make the presentation or presumably be terminated.

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India…

…I am not in the equally unserious camp that believes generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.

However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.

The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms because a market algorithm works out that humans can be converted into gloobnar, a novel epoxy which is in great demand amongst the aliens the next galaxy over for fixing their equivalent of coffee machines. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound. However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company’s bottom line in this scenario, so we won’t pay it any more attention.

A second outcome is that it turns out that the current approach does not scale in the way that we would hope, for myriad reasons. There isn’t enough data on the planet, the architecture doesn’t work the way we’d expect, the thing just stops getting smarter, context windows are a limiting factor forever, etc. In this universe, some industries will be heavily disrupted, such as customer support.

In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I’ll use once every five years. I was actually happy about this, and it’s a real edge over other applications. But if you don’t have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don’t actually have any internal documentation worth retrieving. Fix. Your. Shit.

The final outcome is that these fundamental issues are addressed, and we end up with something that actually can do things like replace programming as we know it today, or be broadly identifiable as general intelligence.

In the case that generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future. Is that clear now? Having your team type in import openai does not mean that you are at the cutting-edge of artificial intelligence no matter how desperately you embarrass yourself on LinkedIn and at pathetic borderline-bribe award ceremonies from the malign Warp entities that sell you enterprise software. Your business will be disrupted exactly as hard as it would have been if you had done nothing, and much worse than it would have been if you just got your fundamentals right. Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this. If we thread the needle between moderate impact and asteroid-wiping-out-the-dinosaurs impact, everything will be changed forever and your tepid preparations will have all the impact of an ant bracing itself very hard in the shadow of a towering tsunami.

4. Palmer Luckey and Anduril want to shake up armsmaking – Schumpeter (The Economist)

The war in Ukraine has been a proving ground for these sorts of weapons—and for Mr Luckey’s company. He visited Kyiv two weeks into the war. “What we’ve been doing was tailored for exactly the type of fight that’s going on and exactly what we predicted was going to happen,” he argues, pointing to three lessons.

One is the importance of drones that can navigate and strike autonomously, even in the face of heavy jamming of their signals and obscurants like metal-filled smoke clouds. Many existing drones have struggled with this, says Mr Luckey, because they lack “multi-modal” sensors, such as optical and infrared cameras, to substitute for GPS, and do not have enough built-in computing power to use the latest object-recognition algorithms.

Second is the observation that software is eating the battlefield. Imagine that Russia begins using a new type of jammer. Mr Luckey says that the data can be sent back immediately to generate countermeasures, which are then remotely installed on weapons at the front line without having to change any hardware. A recent study by the Royal United Services Institute, a think-tank in London, noted that drones in Ukraine needed to have their software, sensors and radios updated every six to 12 weeks to remain viable. Anduril, claims Mr Luckey, is “literally pushing new updates…every single night”.

His third lesson from Ukraine is that weapons must be built in vast quantities—and therefore cheaply. He laments that Russia produces shells and missiles far more cheaply than America does: “The US is now on the wrong side of an issue that we were on the right side of during the Cold War.” Anduril makes much of the fact that its production processes are modelled not on big aerospace firms, but automotive ones.

5. What It Really Takes to Build an AI Datacenter – Joe Weisenthal, Tracy Alloway, and Brian Venturo

Tracy (19:48):

Can I ask a really basic question? And we’ve done episodes on this, but I would be very interested in your opinion, but why does it feel like customers and AI customers in particular are so, I don’t know if addicted is the right word, but so devoted to Nvidia chips, what is it about them specifically that is so attractive? How much of it is due to the technology versus say maybe the interoperability?

Brian (20:18):

So you have to understand that when you’re an AI lab that has just started and it is an arms race in the industry to deliver product and models as fast as possible, that it’s an existential risk to you that you don’t have your infrastructure be your Achilles heel. Nvidia has proven to be a number of things. One is they’re the engineers of the best products. They are an engineering organization first in that they identify and solve problems, they push the limits, they’re willing to listen to the customers and help you solve problems and design things around new use cases. But it’s not just creating good hardware, it’s creating good hardware that scales and that they can support at scale.

And when you’re building these installations that are hundreds of thousands of components on the accelerator side and the InfiniBand link side, it all has to work together well. And when you go to somebody like NVIDIA that has done this for so long at scale with such engineering expertise, they eliminate so much of that existential risk for these startups. So when I look at it and I see some of these smaller startups saying, we’re going to go a different route, I’m like, what are you doing? You’re taking so much risk for no reason here. This is a proven solution, it’s the best solution and it has the most community support. Go the easy path, because the venture you’re embarking on is hard enough.

Tracy (21:41):

Is it like the old, what was that old adage? No one ever got fired for buying Microsoft. Or was it IBM? Yeah, something like that.

Brian (21:50):

The thing here is that it’s not even that nobody’s getting fired for buying the tried and true and slower moving thing; it’s that nobody’s getting fired for buying the tried and true and best performing and bleeding edge thing. So I look at the folks that are buying other products and investing in other products almost as if they have a chip on their shoulder and they’re going against the mold just to do it.

Joe (22:14):

There are competitors to NVIDIA that claim cheaper or more application-specific chips. I think Intel came out with something like that. First of all, from the CoreWeave perspective, are you all in on Nvidia hardware?

Brian (22:31):

We are.

Joe (22:32):

Could that change?

Brian (22:33):

The party line is that we’re always going to be driven by customers, right? And we’re going to be driven by customers to the chip that is most performant, provides the best TCO, and is best supported. Right now, and in what I think is the foreseeable future, I believe that is strongly Nvidia…

…Joe (23:30):

What about Meta with PyTorch and all their chips?

Brian (23:33):

So their in-house chips, I think that they have those for very, very specific production applications, but they’re not really general purpose chips. And I think that when you’re building something for general purpose and there has to be flexibility in the use case while you can go build a custom ASIC to solve very specific problems, I don’t think it makes sense to invest in those to be a five-year asset if you don’t necessarily know what you’re going to do with it…

…Joe (25:31):

Let’s talk about electricity. This has become this huge talking point, that this is the major constraint, now that you’re becoming more vertically integrated and having to stand up more of your operations. We talked to one guy formerly at Microsoft who said one of the issues is that there may be a backlash in some communities who don’t want their scarce electricity to go to data centers when it could go to household air conditioning. What are you running into right now or what are you seeing?

Brian (25:58):

So we’ve been very, very selective on where we put data centers. We don’t have anything in Ashburn, Virginia, and the Northern Virginia market I think is incredibly saturated. There’s a lot of growing backlash in that market around power usage, and just thinking about how you get enough diesel trucks in there to refill generators if there’s a prolonged outage. So I think there are some markets where it’s just like, okay, stay away from that. And when grids have issues, and that market hasn’t really had an issue yet, it becomes an acute problem immediately.

Just think about the Texas power market crisis back in, I think it was 2021, 2020 where the grid wasn’t really set up to be able to handle the frigid temperatures and they had natural gas valves that were freezing off at the natural gas generation plants that didn’t allow them to actually come online and produce electricity no matter how high the price was, right?

So there are going to be these acute issues that people are going to learn from, and the regulators are going to learn from, to make sure they don’t happen again. And we’re siting our data centers in markets where we think the grid infrastructure is capable of handling it. And it’s not just a question of whether there’s enough power; it’s also about other things.

AI workloads are pretty volatile in how much power they use and they’re volatile because every 15 minutes or every 30 minutes, you effectively stop the job to save the progress you’ve made. And it’s so expensive to run these clusters that you don’t want to lose hundreds of thousands of dollars of progress. So they take a minute, they do what’s called checkpointing where they write the current state of the job back to storage and at that checkpointing time, your power usage basically goes from a hundred percent to like 10% and then it goes right back up again when it’s done saving it.

So that load volatility on a local market will create either voltage spikes or voltage sags. A voltage sag is what causes a brownout, the kind we used to see a lot when people would turn their air conditioners on. It’s thinking through, okay, how do I ensure that my AI installation doesn’t cause a brownout during checkpointing, when people are turning their air conditioners on?

That’s the type of stuff that we’re thoughtful around: how do we make sure we don’t do this, right? And drawing on NVIDIA’s engineering expertise, they’re working on this problem as well, and they’ve solved this for the next generation. So it’s everything from, is there enough power there? What’s the source of that power? How clean is it? How do we make sure that we’re investing in solar and the like in the area, so that we’re not just taking power from the grid? To also, when we’re using that power, how is it going to impact the consumers around us?
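
To make the checkpointing pattern described above concrete, here is a minimal sketch of the idea, written by us in Python purely for illustration (it is not CoreWeave’s or any lab’s actual training code): the job computes at full power, then periodically pauses to write its state to storage, which is when the power draw dips.

```python
import pickle
import time

def train_step(state):
    # Placeholder for one step of model training on the accelerators;
    # this is the phase where the cluster draws close to full power.
    state["step"] += 1
    return state

def save_checkpoint(state, path="checkpoint.pkl"):
    # Writing the job's state back to storage; compute (and hence power draw)
    # drops sharply while this happens, then ramps back up afterwards.
    with open(path, "wb") as f:
        pickle.dump(state, f)

CHECKPOINT_EVERY_STEPS = 100  # in practice this is time-based, e.g. every 15-30 minutes

state = {"step": 0, "started": time.time()}
while state["step"] < 1_000:
    state = train_step(state)
    if state["step"] % CHECKPOINT_EVERY_STEPS == 0:
        save_checkpoint(state)  # progress up to this point survives a failure
```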


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Microsoft, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 June 2024)

Here are the articles for the week ending 23 June 2024:

1. The C Word – Jonathan Clements

ON SUNDAY MORNING, May 19, I was enjoying croissants and coffee with Elaine at the kitchen table, while watching the neighborhood sparrows, finches, cardinals and squirrels have their way with the bird feeder. All was right in our little world, except I was a little wobbly when walking—the result, I suspected, of balance issues caused by an ear infection.

It was going to be a busy week, and I figured that it would be smart to get some antibiotics inside me, even if visiting the urgent care clinic on Sunday might be more expensive than contacting my primary care physician on Monday and perhaps having to go in for an appointment.

Long story short, I ended the day in the intensive care unit of a local hospital, where the staff discovered lung cancer that’s metastasized to my brain and a few other spots. This, as you might imagine, has meant a few changes in my life, and there will be more to come.

I have no desire for HumbleDollar to become HumbleDeathWatch. But my prognosis is not good. I’ve had three brain radiation treatments and I started chemotherapy yesterday, but these steps are merely deferring death and perhaps not for very long. I’ll spare you the gory medical details. But as best I can gather, I may have just a dozen okay months ahead of me…

The cliché is true: Something like this makes you truly appreciate life. Despite those bucket-list items, I find my greatest joy comes from small, inexpensive daily pleasures: that first cup of coffee, exercise, friends and family, a good meal, writing and editing, smiles from strangers, the sunshine on my face. If we can keep life’s less admirable emotions at bay, the world is a wonderful place.

We can control risk, but we can’t eliminate it. I’ve spent decades managing both financial risk and potential threats to my health. But despite such precautions, sometimes we get blindsided. There have been few cancer occurrences in my family, and it’s never been something I had reason to fear. Chance is a cruel mistress.

It’s toughest on those left behind. I’ll be gone, but Elaine and my family will remain, and they’ll have to navigate the world without me. I so want them to be okay, financially and emotionally, and that’s driving many of the steps I’m now taking…

Life’s priorities become crystal clear. Even at this late stage, I believe it’s important to have a sense of purpose, both professionally and personally. I can’t do much about the fewer years, and I have no anger about their loss. But I do want the time ahead to be happy, productive and meaningful.

2. Central Banking from the Bottom Up – Marc Rubinstein

From his office a few blocks from the River Rhine in Dusseldorf, Theo Siegert had been scouring the world for investment opportunities. His research process had thrown up an under-appreciated banking stock headquartered across the border in Switzerland, and he started building a stake. Siegert knew a bit about the banking business – he was already a non-executive director of Deutsche Bank – but this stock was different. In his home country, as in many others, central banks tend not to trade freely on the stock exchange. Not so in Switzerland. Before long, Siegert had become the largest shareholder of the Schweizerische Nationalbank, the Swiss National Bank…

…It would be difficult for the Swiss National Bank to pursue its mandate – ensuring that money preserves its value and the economy develops favorably – if it also had to pander to the demands of private shareholders. So it limits private shareholders to voting just 100 of their shares – equivalent to a 0.1% position – leaving Siegert with 4,910 shares on which he is ineligible to vote. And it caps the dividend at 15 Swiss Francs a share, equivalent to a 0.4% yield at today’s price of 3,850 Swiss Francs. Of the remaining distributable net profit, a third accrues to the central government and two-thirds to regional cantonal governments.

As a result, the 10.4 kilograms of gold per share the bank carries and its 1.2 million Swiss Francs of overall net assets per share (at March valuations) remain out of grasp for private shareholders. At best, the stock is a safe haven, providing a preferred return in a strong currency, with no counterparty risk…
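
A quick check of the arithmetic above, using only the figures quoted in the passage (the calculation is ours, for illustration):

```python
# Figures quoted in the passage above.
dividend_cap_chf = 15                 # capped dividend per share
share_price_chf = 3_850               # share price cited in the passage
net_assets_per_share_chf = 1_200_000  # overall net assets per share (March valuations)

dividend_yield = dividend_cap_chf / share_price_chf
price_to_net_assets = share_price_chf / net_assets_per_share_chf

print(f"Dividend yield: {dividend_yield:.2%}")                          # ~0.39%, the ~0.4% cited
print(f"Price as a fraction of net assets: {price_to_net_assets:.2%}")  # ~0.32%
```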

…The trouble was, 2022 wasn’t a good year for asset prices, leaving the Swiss National Bank highly exposed…

…Having earned 174 billion Swiss Francs cumulatively over the prior thirteen years, the Swiss National Bank lost 133 billion Swiss Francs in a single year in 2022, equivalent to 17% of GDP. It canceled its dividend for only the second time in over 30 years, signaling that there is risk in a 0.40% dividend after all.

And although asset markets recovered in 2023, strength in the Swiss Franc during the year – partly driven by the bank selling down some of its foreign assets – led to a record foreign exchange hit, triggering another overall loss (of 3 billion Swiss Francs) and another canceled dividend. Fortunately, 2024 has so far been better and, as of the first quarter, over 40% of the two-year loss has been recovered…

…In some cases, such large losses have eaten into capital, leaving many central banks operating on negative equity. As a private sector analyst, this looks frightening, but explicit government support makes it moot. Even before the current spate of losses, some central banks, including those in Chile, the Czech Republic, Israel and Mexico, carried on their business for years with negative capital. A study from the Bank for International Settlements concludes that none of them compromised on their ability to fulfill their mandate.

Because it maintains both a distribution reserve to carry forward some profit and a currency reserve that is not distributable, the Swiss National Bank did not slip into negative equity despite its large loss. At the end of 2023, its equity to asset ratio stood at 7.9% and by the end of March, it was up to 14.3%. That contrasts with the Federal Reserve, which has $43 billion of capital supporting $7.3 trillion of assets, not including almost a trillion dollars of unrealized losses.

But going forward, the business of central banking will grow more challenging. Not only do higher rates expose central banks to losses related to assets purchased in the past, they also make it difficult to generate net interest income on the current balance sheet. Seigniorage income still persists but the falling use of cash may erode it in future years. Meanwhile, commercial bank deposits – which form the bulk of a central bank’s liabilities (449 billion Swiss Francs in the case of the Swiss National Bank, compared with 76.3 billion Swiss Francs of banknotes) – are typically remunerated at market rates, which are higher than yields on legacy securities. Central banks are paying a floating rate while locked into a (lower) fixed rate on their assets.

The challenge is evident in a closer look at the Swiss National Bank. In the era of negative interest rates, it earned income on sight deposits it held on behalf of commercial banks. In 2021, the last full year of negative rates, that income was 1.2 billion Swiss Francs. Having raised rates to 1.50%, the relationship flipped and the central bank began paying interest to commercial banks, which in 2023 amounted to 10.2 billion Swiss Francs. With the yield on Swiss Franc-denominated securities still low, net interest income on the book came to a negative 8.7 billion Swiss Francs…
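
The implied arithmetic, assuming (our reading, not stated explicitly in the passage) that the negative 8.7 billion figure nets interest earned on the Swiss Franc securities book against the 10.2 billion paid on sight deposits:

```python
# All figures in billions of Swiss Francs, taken from the passage above.
interest_paid_on_sight_deposits = 10.2   # paid to commercial banks in 2023
net_interest_income = -8.7               # net result on the Swiss Franc book

# Assumption: net = interest earned on securities - interest paid on deposits.
implied_income_on_securities = net_interest_income + interest_paid_on_sight_deposits
print(f"Implied interest earned on the securities book: ~{implied_income_on_securities:.1f}bn CHF")
```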

…From its most recent high of 7,900 Swiss Francs at the beginning of 2022, the Swiss National Bank stock price has halved. Against its muted profit outlook, this is no surprise: The golden era of central bank profitability is likely over…

…For others, though, it’s fine. As the general manager of the Bank for International Settlements noted last year, “Unlike businesses, central banks are designed to make money only in the most literal sense.” Viewing central banks as stocks is instructive, but fortunately for the economy at large, there is more to them than that.

3. Reports of the petrodollar system’s demise are ‘fake news’ – here’s why – Joseph Adinolfi

Earlier this week, reports circulating widely on social-media platforms like X offered up a shocking proclamation: A 50-year-old agreement between the U.S. and Saudi Arabia requiring that the latter price its crude-oil exports in U.S. dollars had expired on Sunday.

The collapse of the accord would inevitably deal a fatal blow to the U.S. dollar’s status as the de facto global reserve currency, various commentators on X opined. Surely, financial upheaval lay ahead…

…But as speculation about an imminent end to the U.S. dollar’s global dominance intensified, several Wall Street and foreign-policy experts emerged to point out a fatal flaw in this logic: The agreement itself never existed…

…The agreement referred to by Donovan is the United States-Saudi Arabian Joint Commission on Economic Cooperation. It was formally established on June 8, 1974, by a joint statement issued and signed by Henry Kissinger, the U.S. secretary of state at the time, and Prince Fahd, the second deputy prime minister (and later king and prime minister) of Saudi Arabia, according to a report found on the Government Accountability Office’s website.

The agreement, as initially envisioned, was intended to last five years, although it was repeatedly extended. The rationale for such a deal was pretty straightforward: Coming on the heels of the 1973 OPEC oil embargo, both the U.S. and Saudi Arabia were eager to flesh out a more formal arrangement that would ensure each side got more of what it wanted from the other.

The surge in oil prices following the OPEC embargo was leaving Saudi Arabia with a surplus of dollars, and the Kingdom’s leadership was eager to harness this wealth to further industrialize its economy beyond the oil sector. At the same time, the U.S. wanted to strengthen its then-nascent diplomatic relationship with Saudi Arabia, while encouraging the country to recycle its dollars back into the U.S. economy…

…According to Donovan and others who emerged on social-media to debunk the conspiracy theories, a formal agreement demanding that Saudi Arabia price its crude oil in dollars never existed. Rather, Saudi Arabia continued accepting other currencies – most notably the British pound (GBPUSD) – for its oil even after the 1974 agreement on joint economic cooperation was struck. It wasn’t until later that year that the Kingdom stopped accepting the pound as payment.

Perhaps the closest thing to a petrodollar deal was a secret agreement between the U.S. and Saudi Arabia reached in late 1974, which promised military aid and equipment in exchange for the Kingdom investing billions of dollars of its oil-sales proceeds in U.S. Treasurys, Donovan said. The existence of this agreement wasn’t revealed until 2016, when Bloomberg News filed a Freedom of Information Act request with the National Archives…

…Still, the notion that the petrodollar system largely grew organically from a place of mutual benefit – rather than some shadowy agreement established by a secret cabal of diplomats – remains a matter of indisputable fact, according to Gregory Brew, an analyst at Eurasia Group…

…Even more importantly as far as the dollar’s reserve status is concerned, the currency or currencies used to make payments for oil (BRN00) (CL00) are of secondary importance. What matters most when it comes to the dollar maintaining its role as the world’s main reserve currency is where oil exporters like Saudi Arabia decide to park their reserves, Donovan said.

4. On the Special Relativity of Investment Horizons – Discerene Group

We believe that it is hard for corporate executives to think long-term if they are overwhelmingly rewarded for short-term results. In their paper, “Duration of Executive Compensation,”2 Radhakrishnan Gopalan, Todd Milbourn, Fenghua Song, and Anjan Thakor developed a metric for “pay duration.” It quantifies the average duration of compensation plans of all the executives covered by an executive intelligence firm’s survey of 2006-2009 proxy statements. The average pay duration for all executives across the 48 industries in their sample was just 1.22 years. We think that such performance-based compensation duration borders on the absurd for leaders of ostensibly multi-decade institutions buffeted by so many factors beyond their short-term control.
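
As a rough illustration of how a pay-duration metric of this kind can be built (a simplified sketch of ours, not the authors’ exact formula), weight each component of an executive’s pay by its vesting period:

```python
# Simplified sketch of a pay-duration calculation: weight each component of a
# hypothetical pay package by its vesting period. This is not the exact formula
# from Gopalan, Milbourn, Song, and Thakor.
pay_components = [
    # (amount in dollars, vesting period in years)
    (1_000_000, 0),  # salary and annual bonus: no deferral
    (2_000_000, 3),  # restricted stock vesting over 3 years
    (1_000_000, 4),  # option grant vesting over 4 years
]

total_pay = sum(amount for amount, _ in pay_components)
pay_duration = sum(amount * years for amount, years in pay_components) / total_pay
print(f"Pay duration: {pay_duration:.2f} years")  # 2.50 years for this hypothetical package
```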

Perhaps unsurprisingly, incentives drive behavior.3 Executive-pay duration was longer in firms that spent more on R&D, firms with a higher proportion of independent board directors, and firms with better stock-price performance. Conversely, firms that offered shorter pay duration to their CEOs were more likely to boost short-term earnings with abnormal accruals of operating expenses.

In a survey4 of 401 US CFOs conducted by John Graham, Campbell Harvey, and Shiva Rajgopal,   80% of survey participants reported that they would decrease discretionary spending on R&D, advertising, and maintenance to meet earnings targets. 55.3% said that they would delay starting a new project to meet an earnings target, even if such a delay entailed a sacrifice of value. 96.7% prefer smooth to bumpy earnings paths, keeping total cash flows constant. One CFO said that “businesses are much more volatile than what their earnings numbers would suggest.” 78% of survey participants would sacrifice real economic value to meet an earnings target.

Likewise, Daniel Bergstresser and Thomas Philippon have found5 that the more a CEO’s overall compensation is tied to the value of his/her stock, the more aggressively he/she tends to use discretionary “accruals” to affect his/her firm’s reported performance…

…According to the World Economic Forum and International Monetary Fund, the average holding period of public equities in the US has fallen from >5 years in 1975 to ~10 months in 2022…

…Another effect of short-termism has been to encourage firms to shed or outsource functions formerly considered to be critical to businesses, including R&D, manufacturing, sales, and distribution, thus creating atomized and fragile slivers of businesses that nevertheless often command illogically lofty valuations. For example, in recent times, aerospace, pharmaceuticals, and software companies that do not attempt to sustain going-concern investments and instead seek to continually acquire other companies in order to hollow out such companies’ engineering, R&D, and/or sales/distribution teams — thereby eliminating all possible sources of competitive advantage — have been feted as “asset-light” and “high-ROIC” poster children of their respective industries.

5. An Interview with Terraform Industries CEO Casey Handmer About the Solar Energy Revolution – Ben Thompson and Casey Handmer

But let’s dig into this solar thing. What is driving the cost curve decrease that was forecast in 2011 and that attracted you? And that has absolutely manifested over the last 10 years, famously exceeding every official projection for future costs. It always ends up being cheaper, faster than people realize. What is the driver of that?

CH: Well, so actually even Ramez Naam’s predictions were too conservative. No one, back then, predicted that solar would get as cheap as it has now. If you look at the DOE’s predictions in 2012 for how long it would take for us to get to current solar costs, their best guesses were 2150, and I don’t know if I’ll live that long.

So of course their entire roadmap for decarbonization didn’t include this, but now we have it. Can we use it? Yes, we sure as hell can and we sure as hell should, because it’s a massive gift that enables us to — we don’t have to de-growth in order to stop emitting pollution into the atmosphere. We can build our way out of the climate crisis by just increasing energy consumption and making energy cheaper for everyone.

In terms of how it gets cheaper, well, essentially, as I say, once the technology is inside the tent of capitalism, it’s generating value for people. It tends to attract wealth, it tends to attract capital, and that capital can be used to do things like hire manufacturing process engineers, and they’re very, very clever and they work very hard, particularly probably hundreds of thousands of engineers working at various solar factories in China right now. And sooner or later, they will find every possible configuration of matter necessary to force the price down. So same as with Moore’s law, essentially, we’ve just seen steady improvements.

Yeah, I was going to ask, is this an analogy to Moore’s law or is it actually the same sort of thing? Moore’s law is not a physical law, it is a choice by companies and individuals to keep pushing down that curve. Number one, what I get from you is that’s the same sort of concept here, but number two, are the actual discoveries actually similar to what’s going on?

CH: Yeah, actually to a large extent because it’s a silicon-based technology.

Right, exactly.

CH: There’s a lot of commonality there, but I think Moore’s law is not a law of nature, it’s what we call a phenomenological law, an emergent law. But basically all it says is there’s a positive feedback loop between cost reductions, increases in demand, increase in production, and cost reductions. So provided that the increase in demand, the induced demand as a result of the cost reduction, exceeds the cost reduction for the next generation of technology, you have a positive feedback loop. Otherwise, it’ll converge at some point, right? You’ll achieve maybe a 10x cost reduction and then it’ll stop, and we start to hit diminishing returns on all these technologies. But if you look at Moore’s law, it’s actually a series of maybe 20 or 30 different overlapping technology curves that kind of form this boundary of technology throughout time, and you see the same thing in solar technology if you really look under the hood and see what’s going on.

But yeah, the fundamental thing is there’s just enormous demand for solar at lower and lower prices and so manufacturers are justified in investing the capital they need in order to hit those prices and then the feedback mechanism keeps going. Solar manufacturing itself is a brutally competitive business which is both good and bad, it means like if you decide that you want to compete in solar, you don’t have to be at it for 50 years in order to compete. If you can capitalize, you can build a solar factory and if you’re smart enough and you work hard enough, in five years you can be in the top 20 manufacturers globally which is huge. Talking about billions of dollars of revenue every year just because everyone’s existing capital stock gets depreciated really quickly.
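
As an editorial aside, here is a minimal sketch of the feedback loop described above, using Wright’s law (cost falls by a fixed fraction with every doubling of cumulative production). The 20% learning rate and the demand response are our illustrative assumptions, not figures from the interview:

```python
import math

# Wright's law: unit cost falls by a fixed fraction with each doubling of
# cumulative production. Learning rate and demand elasticity are assumptions.
learning_rate = 0.20
b = -math.log2(1 - learning_rate)   # Wright's-law exponent, ~0.32
elasticity = 1.5                    # assumed demand response to cheaper panels

cumulative = 1.0  # normalized cumulative production to date
demand = 1.0      # normalized production this year

for year in range(1, 11):
    cumulative += demand                  # this year's output adds to cumulative production
    cost = cumulative ** (-b)             # cheaper with every doubling of cumulative output
    demand = (1.0 / cost) ** elasticity   # lower cost induces more demand next year
    print(f"year {year:2d}: cost {cost:.2f}, annual demand {demand:.1f}")
```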

Right. But to your point, it’s also commodity then, right? So how do you actually build a sustainable business?

CH: Well, picks and shovels essentially. So actually one of the things that we like to say at Terraform, and I’m jumping the gun slightly here, but Terraform’s product essentially is a machine that converts solar power into oil and gas, so it bridges these two technology spans. It allows you to arbitrage essentially economically unproductive land that would otherwise just be getting hot in the sun. You throw some solar panels on there, that’s your computing hardware, but that’s not very useful, right? I could hand you an H100 but it doesn’t do anything for you until you’ve got software to run on it, and the software allows the raw computing power of that H100 to become useful for an end consumer…

Actually let’s run through some of the objections to solar power and then I think that will inherently get to some of these things. So we talked about the nuclear bit, what happens when the sun doesn’t shine?

CH: Yeah, so we’re actually seeing this in California right now. It creates a time arbitrage, right? If you have the ability to store power during the day and then release it during the night, you can make an incredible amount of money and that’s why we’ve seen battery deployments in California, for example, increased by I think a factor of 10x in the last four years, and the effect of that is it’s basically allowing people to transport power, or transport energy, through time in much the same way that power lines, transmission lines, allow people to transport electricity through space.

So what is happening with the battery cost curve? Because if that’s sort of an essential component to make this happen-

CH: Same thing, same story.

For the same reasons?

CH: Exactly the same reasons, same story. Battery manufacturing is probably a little bit more complex and not quite as well-developed as silicon solar panel manufacturing, but we’re seeing year-on-year growth of battery manufacturing. It’s like well over 100%, so it’s actually growing faster than solar, and then the cost improvement’s not quite as steep, but it’s easily like 5% or 10% per year depending on which technology you’re looking at.

In 2021, for example, it was extremely confidently predicted that lithium ion batteries would never get under $100 per kilowatt hour at the cell level and the pack level, and of course Tesla was widely mocked for claiming that they would be able to get ultimately below $100 bucks per kilowatt hour at the pack level. But then again, I think January this year or December last year, a Chinese manufacturer came out with a sodium ion battery cell, which is at $56 per kilowatt hour, so it’s like a 2x reduction in cost on top of what is already considered cutting edge, and we just go down from there.

Now, sodium ion batteries might not be perfectly suited for all kinds of applications, but they’re probably cheaper to produce than the lithium ion batteries. We know they’re cheaper to produce than lithium batteries and they’re more than capable of doing the sort of load shifting required to essentially store power during the day and then use it in the evening.

Are we in a situation already, or do we still have a bit to go, where the sort of combined weighted cost of solar, which is much cheaper than nuclear as you talked about, plus batteries, which sounds like it’s still more expensive now, but when you combine the two is it already lower?

CH: Yeah, so again just look at the data, right — the market reveals its preference. CleanTechnica ran an article almost five years ago now showing that in Texas they were developing battery plants 10:1 compared to gas peaker plants. Texas runs its own grid under slightly different rules where you can basically just build and connect and then the grid can force you to curtail if they’ve got overproduction, but that typically means it’s a more liquid market. And even in Texas, which is certainly not ideologically committed to solar, and actually incidentally this year deployed more solar than California did.

Yeah, I was going to say.

CH: Also Texas has the cheapest natural gas in the history of the universe, but they’re deploying more battery packs than they are gas peaker plants 10:1…

…CH: But I just want to say there’s a conception that, oh, solar and batteries only are on the grid because they’re massively subsidized and they’re actually screwing everything up. That’s actually, that’s not true. Solar and batteries is what’s keeping the grid working right now, it’s the only thing that’s providing expanded capacity.

The major challenge with additional solar development, particularly here in the States, is we now have this ten-year backlog or kind of development queue before you can connect your solar array to the grid, and the reason for that is the grid is old and it’s kind of overwhelmed, and it’s not able to transport all that power effectively to market.

Of course, one solution to this is just to build more grid. Another solution is to put some batteries on the grid. And, you know, the third solution is basically just build batteries and solar wherever you can, it’s actually working really well.

Then obviously what Terraform is doing is taking this otherwise un-utilized capacity for solar development and then pouring it into another aspect of our civilization’s absolutely unquenchable thirst for energy. Just to give you some hard numbers here, roughly a third of U.S. energy is consumed in the form of electricity and about two-thirds in the form of oil and gas. So even if we successfully electrified huge amounts of ground transportation and also moved all of the electricity grid to say wind, solar and a bit of nuclear and some batteries and maybe some geothermal or something like that, so completely decarbonize the grid, that would only deal with about a third of the economy. Two-thirds of the economy still runs on oil and gas and so that’s what Terraform is here to try and deal with.

One more question on the batteries.

CH: Yeah.

There’s always been, or the common refrain has been, we need a battery breakthrough, we need something completely new. Is the take, and you mentioned the sort of sodium ion, but even with terms of lithium ion, is the actual expectation or is the actual realization in your expectation going forward that actually the technology we have — sure, it’d be great to get a breakthrough, but there’s actually way more improvements and in what we have that will carry us a long way?

CH: Lithium ion batteries are already amazing. I mean, they’ve been around for about 35 years now, I think they were first commercialized for Panasonic camcorders or something and even then they were extremely compelling. They pushed NiCAD [nickel-cadmium], the previous battery chemistry, out of the market almost instantaneously in numerous applications. They’re more than good enough.

You say, “Well, I’d like a battery breakthrough”. Why? “Because I want to run my supersonic electric jet off batteries.” Well, good luck with that. But for all ground transportation purposes, for static backups, for all these kinds of applications, not only is the technology already great, it’s got a 30 year history of manufacturing at scale. We know how to make it safe, we know how to make it cheap, it’s extremely compelling and the numbers speak for themselves.

Battery manufacturing capacity expansion is not just happening for no reason, there’s enormous untapped demand for batteries. The way I like to think of it is what’s your per capita lithium ion allocation? Maybe in 1995, you might have a Nokia 3210 with — actually that would be after 1995 — but with a small lithium ion battery in it. So you’ve got 10 grams per person of lithium ion battery and nowadays my family has two electric cars, and that’s probably most of our batteries.

Yeah, now we have laptops, we have computers.

CH: But in terms of the bulk mass, like 400 kilograms per person or something for people to have electric cars, and then if you have a static backup battery in your house and then maybe a share of your per capita part of the grid scale batteries and so on. I think it could easily scale to a couple of tons of lithium ion battery per person, particularly in the more energy intensive parts of the United States.

Is that a large number? No, not really. I easily have a couple of tons per person in terms of steel just in my cars. I easily have probably 50 tons of concrete per person in terms of my built environment. I don’t actually think this is a particularly large number, I just think it’s unusual to see in such a short span of time some product go from the size of your thumb to the size of a large swimming pool, a large hot tub or something like that, in terms of your per capita allocation.

Where are we at as far as availability of, say, lithium or of all the various rare minerals or rare earths that go into both solar and batteries?

CH: Yeah, I mean, again, I’m not a super expert on batteries, but the cure for high prices is high prices. Lithium is the third most common element in the universe, there’s no shortage of it. You could argue there’s a shortage of lithium refining capacity in the United States, particularly if you’re concerned about strategic vulnerability.

It’s like the rare earth thing, right? Rare earths are not actually rare. It’s just the actual ability to refine them.

CH: They’re super common, and actually solar solves that. It turns out that you can electrocatalytically separate rare earth elements using cheap solar power, with significantly lower environmental impact and much lower cost than traditional refining, and I have some friends working on that.

It is certainly true that batteries, people are concerned about cobalt. Actually, I have some cobalt here, here’s a cube of cobalt on my desk. Cobalt is a fabulous metal, but there’s not a huge amount of it necessarily. It’s not scarce like gold, but the mining situation is not quite sorted out. But at the same time, like almost all the major battery manufacturers use almost no cobalt right now because they’re able to adapt their processes to basically optimize their costs towards the cheaper materials.

Capitalism solves this, we don’t have to worry too much about it, there’s literally hundreds of thousands of chemists out there right now who are solving this problem right now, you don’t have to lose sleep over it, it is a completely commoditized production system…

What happens with old solar panels and old batteries? Obviously this is an objection to nuclear which is nuclear waste, and the good thing with nuclear waste is it’s really not that much. We’re talking about this deployment of massive amounts of solar panels, all these batteries. Where are we at in 10, 20 years if this build out happens? Is that a potential issue?

CH: I’m not too worried about it. And again, you need to look at your waste stream on a per capita basis. If we deployed as many solar panels as I want to, how many solar panels will you end up disposing of? I think if you ground them up it’d be one garbage bag per year. For a suburban family, we probably have 1,000 garbage bags of trash every year that gets landfilled.

But to talk about specifics, batteries I think are prime targets for recycling because the materials in them are essentially, as Elon Musk once said, super concentrated for the raw materials you need to make batteries. There’s multiple companies out there, including Redwood Materials, that are doing exclusively battery recycling, or battery component recycling, which is super obvious. That said, as battery production increases, even if you recycle all the old batteries, it will only be 1% of the input stream or something, but I just don’t see a future where we have giant piles of batteries lying around.

Then as far as solar panels go, they’re like a layer of silicon dioxide, which is glass, a layer of silicon, which used to be glass, and then a layer of silicon dioxide and maybe some aluminum around the edges. Well, you can strip off the aluminum and recycle that trivially, we’ve been recycling aluminum for 100 years, and the glass is glass. You can grind it up and landfill it, it’s basically sand.

People will say, “Oh, what about cadmium or something?” — well, First Solar uses a cadmium telluride process to make their solar panels. But again, the amounts involved are trivial, they’re inert, they’re solid, they can’t run or leach or anything like that, I’m not too worried about it. As far as the sort of trash that humans routinely landfill, solar panels would actually significantly increase the purity of our dumps because they’re so inert compared to everything else…

…CH: One of the things I like to say is that oil and gas is so common in our civilization, it’s invisible, because every single thing that you see with your eyes is a surface that’s reflecting light, it’s usually pigmented or made of plastic, and that pigment or plastic is made of oil or it’s made of natural gas. So unless you go outside and look at a tree, which is ultimately made of a kind of plastic also derived from sunlight and air, it’s extremely difficult to lay your eyes on anything that’s not made of hydrocarbons. So, obviously, we’re extremely bullish about growth.

Now it could be the case that there’s zero growth. It could be the case that the oil and gas industry just motors along at about $8 trillion of revenue per year, which is about $1 billion per hour. So just in the time we’ve been talking, it’s $1 billion, which is just insane. But I actually think that once we unlock these cheaper forms of hydrocarbons that it will promote substantial growth, particularly in the energy-intensive industries.

So just to underscore the vision here, I get really, really fired up about this, because when I think of aviation and how amazing it is, and how we’ve only had it as a species for about a hundred years, and it’s only really been something that we can enjoy in jet transport for maybe 50 years. But actually the people who routinely fly on aircraft, and I know that you’re one of them because you’re an expert obviously, and myself, it’s probably only 50 million people on earth who’ve ever had the experience of flying in a jet more than, I don’t know, 10 times in their life. Wouldn’t it be incredible if that number was 500 million or 5 billion? But to get there from here in terms of fossil fuel consumption emits a lot of CO₂, and it also requires a huge amount of fuel. Aviation currently consumes about 2% of the world’s oil and gas just to fly less than 1% of the world’s population around, and so obviously we need to bring on a new source of fuel.

So when you think, well, what is a nice climate-positive version of aviation? Is it like the European model where we force airlines to make customers pay for carbon sequestration or carbon credits or something like that, which is either extremely expensive or extremely fraudulent or both, but in any case makes aviation more expensive and less accessible to people, just makes it more exclusive? Or do we say, “Why don’t we solve both these problems at once, and just bring online enormous new supply of high quality, cheap gas and natural gas for the future liquefied natural gas powered supersonic aircraft?”

At the same time it just happens to be carbon-neutral, so you don’t have to worry about CO₂ emissions, it’s not polluting the atmosphere with new CO₂ from the crust, and at the same time, instead of Boeing producing 500 aircraft a year, Boeing and maybe a few more startups can be producing 10,000 aircraft per year to service this kind of massive explosion in demand driven by economic expansion. That is a sick vision, that is so cool, we should absolutely do this as quickly as we can.

I think whether or not Terraform plays a huge role in this process or not, and I’m certainly intending for it to be — currently we’re leading this process — the economics is inevitable that we’re going to switch over to synthetic fuel sooner or later, and when we do, it’s going to get really, really cheap because we’re running it off solar power and when it gets really, really cheap, we’re going to do amazing aviation and other energy applications, and increase manufacturing and maybe some little bit of geo-engineering on the side to keep things in check, increase water supply in dry areas and so on. Why wait until 2060? We could have this done in 2040 if we just apply ourselves the right way and find the right business model…

How does it work? Give the non-physicist overview of how Terraform works.

CH: Yeah, sure. So from a customer’s perspective on the outside, essentially what a Terraformer does is it allows you to build your own oil and gas well in your backyard, regardless of the fact that you don’t own a drill rig, and in fact you don’t live anywhere near where oil and gas occurs naturally, which is again pretty cool. But how does it work under the hood? Well, it consumes electricity and most of that electricity gets used locally.

Actually I should state the Terraformer itself sits in the solar array, and that’s to reduce the cost of transmission of electricity, which would be absolutely prohibitive in this case, and the electricity gets used to capture CO₂ from the air and to split water into hydrogen and oxygen. We throw the oxygen away like trees do, we take the hydrogen and we react that in a classical old school chemical reactor with the CO₂ to produce methane and water. Then we can separate the water out because it condenses at a much higher temperature from the methane and we’re just left over with methane plus a little bit of leftover CO₂ and hydrogen and a tiny bit of water vapor. That’s natural gas, right?

Actually, when you get natural gas out of the ground, if you did have a drill rig and you did live in a place where natural gas occurs and you drill a hole in the ground, gas comes out. Well now you’ve got to build a well top and a bunch of other stuff that’s actually really complicated, and you might have a blowout and then what comes out of the ground is like between 10 and 80% natural gas and a bunch of other contaminants on top of that which have to be removed before you can sell it.

We don’t have that problem. What we produce is the pure product. It’s really compellingly elegant the way we do this. There’s no geology risk, and it’s plug-and-play: once you plug it in, it just generates a predictable amount of gas every day for however long the system lasts, which is most likely measured in decades.
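
For reference, the two chemical steps described above are water electrolysis (2 H2O → 2 H2 + O2) followed by methanation, commonly known as the Sabatier reaction (CO2 + 4 H2 → CH4 + 2 H2O). A simple mass balance from that stoichiometry (our calculation, not Terraform’s published figures):

```python
# Mass balance per kilogram of methane, from the stoichiometry above.
M_CH4, M_CO2, M_H2O = 16.04, 44.01, 18.02  # molar masses, g/mol

# Per mole of CH4: 1 mole of CO2 is consumed and 4 moles of H2 are needed,
# which takes 4 moles of H2O to electrolyze; the Sabatier step returns 2 of them.
co2_per_kg_ch4 = M_CO2 / M_CH4                # ~2.74 kg of CO2 captured per kg of methane
net_h2o_per_kg_ch4 = (4 - 2) * M_H2O / M_CH4  # ~2.25 kg of water consumed net

print(f"CO2 captured per kg CH4: {co2_per_kg_ch4:.2f} kg")
print(f"Net water consumed per kg CH4: {net_h2o_per_kg_ch4:.2f} kg")
```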

In this case, you don’t have a battery capital cost, I presume it only runs when the sun’s out, right?

CH: Yeah, that’s absolutely correct. And I’ll say for anyone who’s considering doing a hardware tech startup, well, there is basically a recipe that we’ve stumbled upon for taking any existing industry and then applying it to solar power and getting the benefit of that extremely cheap power.

The first is you have to get the CapEx way, way down because your utilization is low, you’re only using your plant maybe 25% of the time, so you have to get the cost down by at least a factor of four. Then on top of that, you also have to make it compatible with the sun coming up and going down. So time variability, which is difficult, but not impossible. We have many processes that we can routinely throttle up and down in our everyday lives so you understand this intuitively, but if you can do that, and it sounds impossible, of course, “I just want a chemical reactor that’s 1/10 the size and 1/4 the cost and I can ramp it up and down”.

Well, the way you make this work is you just use more power. So you say, “Well, I don’t care about efficiency quite as much because my power is so cheap”, and that’s what makes it easy. But if you can do this, then you have —

You have to change that core assumption. Whereas almost every invention today is all about increasing the efficient use of power, and the whole point of solar is, “What if we assume power is basically infinite, but it’s bounded by time, then what would we do?”.

CH: It’s like cycles in your computer are basically free or on your cell phone or something…
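
A back-of-the-envelope version of the trade-off described above: at roughly 25% utilization, the capital has to be about four times cheaper to deliver the same cost per unit of output, and very cheap power lets you give back some efficiency to get there. All numbers below are ours, purely for illustration:

```python
# Cost per unit of output = capital charge spread over actual output
#                         + energy cost, inflated by any efficiency you give up.
# All inputs below are illustrative assumptions, not Terraform's figures.
def cost_per_unit(capex_per_unit_capacity, utilization, power_price, energy_per_unit, efficiency):
    capital_charge = capex_per_unit_capacity / utilization
    energy_charge = power_price * energy_per_unit / efficiency
    return capital_charge + energy_charge

# Conventional plant: runs ~90% of the time on grid-priced power.
always_on = cost_per_unit(4.0, utilization=0.90, power_price=0.06, energy_per_unit=10, efficiency=0.8)
# Solar-following plant: ~25% utilization, ~4x cheaper hardware, much cheaper power, lower efficiency.
solar_following = cost_per_unit(1.0, utilization=0.25, power_price=0.02, energy_per_unit=10, efficiency=0.6)

print(f"Always-on plant: {always_on:.2f} per unit")
print(f"Solar-following plant: {solar_following:.2f} per unit")
```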

Desalination seems like a potentially massive win here and very pertinent to the American West for example. But this idea that if you assume energy is infinite, we’re not short of water on earth, we’re short of water without salt.

CH: That’s right, yeah. I mean there are some places where it’d be relatively difficult to transport even fresh water from the ocean, but in California that’s not the case. California is at the end of the Colorado River, which is declining, and California of course has senior water rights, we take about 5 million acre feet of water per year.

So unlike Terraform, which is definitely developing new proprietary technology in-house, it’s quite exciting, but with solar desalination, you don’t need any new technology. You just go and build a plant essentially with stuff you can buy off the shelf. How much would it cost to build a plant that is able to substitute 100% of California’s water extraction from the Colorado River, essentially doubling Southern California’s water supply, and at the same time allowing you to fix the Salton Sea and also set up a massive light metals industry and a bunch of other things?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Tencent, and Tesla. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 June 2024)

Here are the articles for the week ending 16 June 2024:

1. Saying Goodbye: 30 Investing Lessons After 19% CAGR Over 7 Years – Eugene Ng

I had a near-death/paralysis accident over 10 years ago where I broke my neck. Thankfully, I survived it, but my neck still remains broken to this very day. Life is extremely precious, and I want to live my remaining life to the fullest, and positively impact as many people as I can…

…With a degree in economics and finance, and despite working in banking for over 11 years, I was ill-equipped from the onset to invest well. I decided to start from first principles, asking basic questions. What are stocks? They are part ownership stakes in businesses. Why do stock prices rise, and eventually by how much?

Eventually I came to realise that growth of revenues, profits and free cash flows matter the most over 5-10 years and beyond, not changes in valuation multiples. That’s why my favourite investing saying is where revenues, profits and free cash flows flow, the stock price eventually goes.
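
A hypothetical illustration of that point (our numbers, not the author’s): even if the valuation multiple is cut in half over a decade, a business compounding its free cash flow quickly enough can still deliver a solid return.

```python
# Hypothetical example: free cash flow compounds at 20% a year for 10 years
# while the valuation multiple falls from 30x to 15x. Numbers are illustrative.
growth, years = 0.20, 10
multiple_start, multiple_end = 30, 15

total_return = (1 + growth) ** years * (multiple_end / multiple_start)
cagr = total_return ** (1 / years) - 1
print(f"Total price return: {total_return:.1f}x, CAGR: {cagr:.1%}")  # ~3.1x, ~12% a year
```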

Could investing in this stock generate sufficient returns? Once you take the red pill, once the eyes see what truly matters, you can no longer un-see…

…Most investors are focused on not making errors of commission, or a Type I error, which is making a bad investment when you think it is a good one.

Instead, I am focused on making fewer errors of omission, or Type II errors, rejecting a good investment when I think it is a bad one. Because the maximum a loser can lose is theoretically limited to 100%, but the upside a missed winner can go on to deliver is theoretically infinite…

…Ultimately, your investing strategy and style is unique to you. It must be comfortable to you, it must suit your personality and your strengths. Everyone’s investment portfolio is going to look different.

Most importantly, you must be able to sleep well at night. After some time, you will come to realise if your strategy is truly repeatable and scalable over the long-term…

…Investing in stocks is investing in businesses, and having some of the best CEOs running some of the best companies in the world with their employees working for you 24/7. When you view it that way, it changes your perspective in life…

…Wanted to share a personal story where we recently had a pair of olive-backed sunbirds building their hanging nest on our olive tree, at our balcony in our home in Singapore. We were delighted to welcome them to our home. It was an untidy nest, and our balcony floor was littered with fallen nest materials, but we didn’t mind.

Eggs have been laid, and the female sunbird has been incubating on and off during the day and full time at night over the last week. We are looking forward to seeing the eggs hatch in the coming week, hearing the chicks chirp for the first time, watching them get older and fledge, and then watching them take flight and leave the nest.

It was amazing to see how timely and beautiful this was, as it reminded me deeply of the journey that I am going to embark on with a new beginning. 

2. A Revolution in Biology – Kasra

Our conventional picture of biology is that everything happens in a bottom-up manner: molecular mechanisms dictate the functions of cells, which dictate the functions of your organs and which ultimately control your body. What is the thing at the very bottom of this hierarchy—the foundation for everything else in life? The genome. Genes are considered the fundamental code of life, so when it comes to figuring out questions of how the body develops, or how to cure diseases or change specific biological traits, we tend to look there…

…That is, until Michael Levin (and many others) entered the scene. They came in and said: genes are great, and they do contain much of the necessary information for building our bodies. But they don’t contain all of it, and they are not always a useful level of abstraction for understanding how the body develops, and consequently they are not always the best way to intervene with biology (e.g. to regenerate damaged organs, or to cure diseases like cancer). If you’ve ever done any programming, you know that there are many levels of abstraction—higher-level and lower-level programming languages, higher-level and lower-level APIs—at which you can try to understand or manipulate the software that runs in your computer. Levin’s point is that genes are like machine code, and modern-day programmers never think about machine code—they think about higher-level software constructs like objects, modules, and applications. The bold claim embedded in his work—the real revolution here—is that higher levels of abstraction and control meaningfully exist in biology. And one of the ways in which this higher level of abstraction manifests is in something called the bioelectric network of the organism.

We usually think of neurons as the only cells in our body that produce intelligent behavior by communicating in large networks. Neurons are constantly communicating with each other in the form of electrical patterns on their membrane and neurotransmitters, which are chemicals that transfer messages between cells. But it turns out that cells throughout the body have the exact same building blocks for such communication. They do the same communication, but slower. Levin and company call this the bioelectric network, as distinguished from a neural network.

In the past few decades we’ve discovered all the ways in which bioelectric networks distributed through the body do the same kinds of things that brains do: store memories, solve problems, and guide development. To get a sense of the bioelectric network in action, we have to talk about a mind-blowing creature called the planarian. This little critter (about 2cm in length) is a developmental “genius” of sorts: it doesn’t age, it doesn’t get cancer, and it is extremely regenerative, capable of regenerating any part of its body that gets cut off, even if it’s cut up into more than 250 pieces…

…Imagine taking one of these worms and splitting it into two. You now have two half-worms, and each of those half-worms is tasked with rebuilding the rest of its body. There’s a crucial decision here that the cells have to make: what part of the body do we already have, and what part do we need to build? One of the half-worms needs to produce a tail, and the other half-worm needs to produce a head. But the cells are at the very middle of the body, extremely far (from a cell’s perspective) from both the head and the tail. How do the cells have any idea what they should generate?

The answer, at least in part, is that all along the body the cells of the worm have a gradient of “resting membrane potentials”, which is effectively a stable electrical state. The cells keep track of their “position” in the body in this way, and experiments have demonstrated that the cell’s electrical state relative to the rest of the body is what determines whether it will proliferate into a head or a tail…

…Levin’s team was able to induce the worm to generate two heads instead of one head, by putting it into a solution of drugs that blocked specific ion channels (which in turn altered the electrical state of the cells). They’ve also induced the worm to generate no heads at all, or to generate the head of a different worm species. All of these are living, functional worms, just with a very different body structure…

…Keep in mind a crucial point: in all these experiments, the genes of the worms are never edited. You get a wildly different functional worm with the same genes. And what’s even wilder is that some of these changes are enduring: without any further drugs or modifications, the two-headed worm produces offspring that are also two-headed, indefinitely…

…Levin’s lab and others have already demonstrated an astonishing level of control over development by modulating bioelectric networks. They’ve done things like getting frogs to develop extra limbs, and getting them to develop an eye in their gut, or an eye in their tail that they can actually see out of. The end goal that Levin dreams of is an “anatomical compiler” – a program which takes as input a specification for an arbitrary organ or body plan, and outputs the specific set of chemical and electrical signals needed to generate that organ. Imagine 3-d printing entire synthetic organs and organisms, except instead of having to specify all the micro-level details, you can just give a high-level description like “an extra eye at the tail.” This is Dall-E but for biology. And in the very long run, it could be the answer to virtually all of biomedicine, including traumatic injury, birth defects, degenerative disease, cancer, and aging.

3. The Investing Boom That’s Squeezing Some People Dry – Jason Zweig

The idea is that when you lock your money up for months or years, you’re less likely to panic in a downturn, enabling the managers to amass a portfolio that will pay off in the long run…

…That bumps up against a basic law of financial physics: Eliminating one risk creates another.

An investment that doesn’t trade may have some advantages, but once you buy it, how do you sell it? How deep a haircut, or discount from the reported price, will you take?

Many funds have so far been able to cash out investors at what seems like a fair price. Many haven’t…

…Highlands REIT, a private Chicago-based real-estate fund, is a more-extreme case. The company bought back about 19% of its stock in December at 14 cents a share. For the sellers, that was like getting a haircut with a lawn mower: Highlands’ annual report estimates net asset value at 32 cents per share as of Dec. 15, 2023.

Outsiders are offering an even harsher haircut. On May 20, MacKenzie Capital Management, an investment firm in Orinda, Calif., opened a mini-tender for Highlands’ stock at 4 cents a share, minus a $25 transfer fee. On Lodas, the latest sale was at 10 cents…
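
To make the size of these haircuts concrete, here is a minimal sketch of the discount-to-NAV arithmetic using the figures quoted above; the 10,000-share position in the mini-tender example is a hypothetical illustration, since the flat $25 transfer fee matters more for small positions:

```python
def discount_to_nav(price_per_share: float, nav_per_share: float) -> float:
    """Fraction of reported net asset value given up when selling at price_per_share."""
    return 1 - price_per_share / nav_per_share

# Figures quoted above for Highlands REIT (NAV estimated at $0.32 per share).
nav = 0.32
print(f"Buyback at $0.14:  {discount_to_nav(0.14, nav):.0%} below NAV")   # ~56%
print(f"Lodas sale at $0.10: {discount_to_nav(0.10, nav):.0%} below NAV")  # ~69%

# MacKenzie's mini-tender: $0.04 per share minus a flat $25 transfer fee.
# The 10,000-share position size below is a hypothetical illustration.
shares = 10_000
proceeds = 0.04 * shares - 25
print(f"Mini-tender proceeds on {shares:,} shares: ${proceeds:,.2f} "
      f"vs ${nav * shares:,.2f} of reported NAV")
```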

…Institutions can sell big blocks of their alternatives, like hedge funds or private equity, to what are called secondary funds at discounts that might run 10% to 30% below net asset value.

In many cases, you should be so lucky.

Often, if you can find a broker willing to buy your alternative investment, the commission can run up to or even exceed 5%. Your haircut could be as deep as 30% to 50%. Depending on the buyer, weeks may go by before you get paid.

Other electronic marketplaces besides Lodas, including Central Trade & Transfer and 1st Trade, also match buyers and sellers of alternatives—typically at gaping discounts to net asset value.

4. Book Summary Part 2: “Our Investing Strategy, who does the market smile upon” – Made In Japan

Right before he launched his fund, Hokkaido Takushoku Bank went bankrupt and was undergoing liquidation. He immediately decided to use that opportunity. He went to Sapporo to buy the shares of a specific company from the bank: Nitori, a company with almost zero liquidity at the time. Some readers may recognize the name today as the largest furniture retail chain in Japan, often compared to Ikea. The company is known for its value-for-money proposition, providing quality products at an affordable price point, and has been a huge success.

You might not believe this if you look at Nitori’s stock price today, but it was an unpopular company back then. According to Kiyohara-san, it was trading at 750 Yen per share at the time. One of the main reasons, it seems, was that the furniture market was in decline, making it an unattractive industry to invest in. His thesis was that the market was extremely fragmented. The largest furniture retailer, Ootsuka, only had a 5% market share. Nitori was the only vertically integrated manufacturer (the others were distributors), and he believed this could help it gain share as a cost-effective producer of home furnishings. Nitori was listed on the Sapporo Exchange, so no institutional investor would touch it (since it would be impossible to sell). However, when he spoke to IR, he picked up on a key insight. While the Hokkaido economy, which was Nitori’s main market, was not doing well and the company saw a decline in same-store sales in the region, the 3 stores open in Kanto were doing very well, providing a hint of Nitori’s true competitiveness.

And it’s funny because you can immediately tell he was built differently. After the research was done and the fund launched, he bought as much as he could from the failing bank, and at launch the position was 25% of his NAV. The stock tripled in a year, and in 5 years it was a six-bagger. A year later it was a ten-bagger, at which point he sold out. If he had held it until now, the stock would have been a hundred-bagger. But by 2003 Nitori was starting to get more institutional coverage and attention, and he believed that was the time to exit. He says, “When the party starts, that’s when we go home.”

So here was the first lesson, which is that investing in an unpopular, shrinking market can still make you a lot of money. In fact, during the time he owned Nitori the market size halved. He also understood the opportunity to buy shares from distressed sellers, especially for stocks that are listed on some regional exchange that no one looks at…
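
For a rough sense of what those multiples imply in annualized terms (the holding periods below are inferred loosely from the narrative, so treat the outputs as approximate), the compound annual growth rate behind an N-bagger over t years is simply N^(1/t) - 1:

```python
def cagr(multiple: float, years: float) -> float:
    """Compound annual growth rate implied by turning 1 unit into `multiple` over `years`."""
    return multiple ** (1 / years) - 1

# Holding periods inferred loosely from the story above (approximate, for illustration only).
print(f"3x in 1 year:    {cagr(3, 1):.0%}")
print(f"6x in 5 years:   {cagr(6, 5):.0%}")
print(f"10x in 6 years:  {cagr(10, 6):.0%}")
print(f"100x in ~25 years (had he held): {cagr(100, 25):.0%}")
```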

…2007 Dec – 2009 Feb: “A sick patient getting hit by a truck, 3 times”

Just as the fund narrowly escaped its “matasaki”, it was followed by the 2008 crisis.

Whilst the K1J Fund generated incredible returns from its bet on REITs and real estate and successfully exited from these, he still owned a lot of cheap real estate stocks in the fund. 3 holdings filed for bankruptcy and 1 went through an Alternative Dispute Resolution (ADR). The worst part? He owned 45%, 35%, 10% and 20% of the shares outstanding.

Needless to say, it was distressing and he lost weight.

The goal was no longer for him to generate returns in this period. It was simply to survive.

He never said this himself, but what follows is what you call an absolute shitshow. Or as he would put it, “like a sick patient getting hit by a truck 3 times”.

The fund’s top priority was to reduce its leveraged long and short positions to avoid a margin call.

But to add insult to injury, their prime broker Goldman decided to change its margin policy to save itself (from 50% to 30%), which could have been fatal for the fund. Fortunately, Goldman eventually agreed to implement this only in steps, which helped the fund buy some time.

The issue is that in a crisis like this it’s not just one kind of risk that materializes; there are second-order and third-order effects which, in isolation, might have a low probability. I believe, however, that the odds of secondary and tertiary events, no matter how unlikely, will increase once the first ‘highly improbable event’ occurs. (You can also apply this to the Livedoor example.)

Although not a surprise, the clients that had entered in excitement when the fund was killing it in 2005 (mainly pensions) started redeeming, and the fund lost half of its clients.

This created a new risk which forced him to reduce his longs, which were mainly in small, illiquid companies. Forced selling driven by client redemptions would, in effect, make you dig your own grave.

So how does he try to solve this problem? He asks these companies to buy back their shares.

From its peak in October 2005 to its trough in February 2009 the fund’s NAV was -72% and its AUM -89%.

This is when you realize most people won’t be able to replicate what he did. I wrote this in part 1. He decides to put almost all of his net worth in the fund to try and save it. He adds “Because that is the responsibility of the manager”. Like a captain being the last to leave a sinking ship, an honorable and brave decision.

I want to reflect here because this is not something most of us could do. It’s really easy to read this as a brave story and just say “wow, awesome”, but never really understand how hard it was. (This is called the empathy gap in psychology, where we underestimate how much our psychological state shapes the decisions we make in a given situation.) If your fund is already down heavily, you have clients threatening to leave or who have already left, your prime broker is changing the rules, and you’re being forced to exit your positions at ridiculous valuations, are you ready to risk going broke to save it? Remember, your morale at this point is probably at an all-time low. In a world where limited liability corporations are the norm (i.e. the damage to your personal wealth can be legally limited, an escape hatch most of us would use at the very worst moment), he decided to go all in.

Also don’t forget, he had to tell his wife he did just that (which might’ve been the scariest part!). Apparently, her response was “Didn’t you also say that last week?” lol.

But the question remains: why did he do that? His confidence was far from crushed, and he was convinced that if he closed his shorts and got as long as possible, he would make a lot of money. Why? Because he knew a sudden decline would almost always result in a V-shaped recovery. His game was to just survive until then. That is SOME confidence he had.

What’s amazing is that he went to clients telling them it would be foolish to leave now, “the fund can probably double from here”.

In the end, from its trough through Feb 2018, his fund returned 12x…
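
As a quick sanity check on those two figures (treating them as pure NAV multiples and ignoring client flows and fees, which the passage does not detail), a 72% drawdown followed by a 12x recovery leaves the fund well above its old peak on a per-unit basis:

```python
peak_nav = 100.0                      # index the October 2005 peak at 100
trough_nav = peak_nav * (1 - 0.72)    # -72% drawdown -> 28
recovered_nav = trough_nav * 12       # 12x from the trough -> 336

print(f"Trough: {trough_nav:.0f}, after 12x: {recovered_nav:.0f} "
      f"({recovered_nav / peak_nav:.1f}x the old peak)")
# Note: this is NAV per unit; AUM fell further (-89%) because of redemptions.
```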

…Shorts are the most dangerous in a bear market; in this scenario, his game was to maximize his long positions. Maintaining a short book means your prime broker will usually give you a hard time in these moments and possibly reduce your margin, which also limits your long exposure. The other issue is that when the market turns and your shorts also move up, this might force you to reduce your long positions (to cover). Understanding this helped him avoid a forced error of omission. Imagine having no choice but to sell longs that could have multiplied, because you were forced to sell them after a small move up to cover your shorts…

…Lasertec (Circa 2020)

  • This was not a fundamental idea, though it did fit the typical target for his shorts: expensive-looking large-cap.
  • He simply saw an opportunity through the lens of Japan’s inherent tax rules.
  • The fourth-largest shareholder was the founder’s widow, who owned 4.24%.
  • So he thought, what happens if she passes away too?
  • Japan’s inheritance tax is the highest in the world, and her children would have to pay it by selling shares.
  • In the end, this is really what happened.

This is an important theme for owner-operated businesses, in which inheritance can have an outsized impact on the stock price.

5. An Interview with AMD CEO Lisa Su About Solving Hard Problems – Ben Thompson and Lisa Su

What was your response in November 2022 when ChatGPT shows up?

LS: Well, it was really the crystallization of what AI is all about.

Obviously you’ve been in the graphics game for a long time, you’ve been thinking about high-performance computing, so the idea that GPUs would be important was not foreign to you. But were you surprised the extent to which it changed the perception of everyone else around you and what happened after that?

LS: We were very much on this path of GPUs for high-performance computing and AI. Actually, it was probably a very significant arc that we started, let’s call it back in the 2017 plus timeframe. We’ve always been in GPUs, but really focusing on-

What was it in 2017 that made you realize that, “Wait, we have these, we thought we bought ATI for gaming, suddenly, there’s this completely different application”?

LS: It was the next big opportunity, we knew it was the next big opportunity. It was something that Mark and I discussed, which was, by putting CPUs and GPUs together in systems and designing them together, we’re going to get a better answer and the first near-term applications were around super-computing. We were very focused on these large machines that would reside at national laboratories and deep research facilities and we knew that we could build these massively parallel GPU machines to do that. The AI portion, we always also thought about it as clearly a HPC plus AI play.

You said before that AI is the killer application for HPC.

LS: Yes.

But you will talk to people in HPC, they’re like, “Well, it’s a little bit different”, to what extent is that the same category versus adjacent categories?

LS: It’s adjacent but highly-related categories, and it all depends on the accuracy that you want in your calculations, whether you’re using the full accuracy or you want to use some of these other data formats. But I think the real key though, and the thing that really we had good foresight on is, because of our chiplet strategy, we could build a highly modular system that could be, let’s call it, an integrated CPU and GPU, or it could be just incredible GPU capability that people needed.

And so, the ChatGPT moment for me was the clarity around, now everybody knew what AI was for. Before, it was only the scientists and the engineers who thought about AI, now everybody could use AI. These models are not perfect, but they’re amazingly good, and with that, I think the clarity around how do we get more AI compute in people’s hands as soon as possible was clear. Because of the way we had built our design system, we could really have two flavors. We had HPC-only flavor, which is what we would call our MI300A and we had AI only flavor, which was the MI300X…

One of the things that does strike me about the contrast is, and one of Nvidia’s really brilliant moves was the acquisition of Mellanox and their portfolio in networking, and to the extent it matters to tie all these chips together, particularly for training.

In your Computex keynote, you talked about the new Ultra Accelerator Link and Ultra Ethernet Link standards, and this idea of bringing lots of companies together, kind of calling back to the Open Compute Project back in the day as far as data centers. Makes perfect sense, particularly given Nvidia’s proprietary solutions have the same high margins, we all know and love, as the rest of their products.

But I guess this is my question about your long-term run — do you think it’s fair to say that, from a theoretical Clayton Christensen perspective, because we’re early in AI, maybe it’s not a surprise, the more proprietary integrated solution is the belle of the ball in many respects? There’s a bit where, yes, being open and modular all makes sense, but maybe that’s not going to be good enough for a while.

LS: I would say it this way. When you look at what the market will look like five years from now, what I see is a world where you have multiple solutions. I’m not a believer in one-size-fits-all, and from that standpoint, the beauty of open and modular is that you are able to, I don’t want to use the word customize here because they may not all be custom, but you are able to tailor.

Customize in the broad sense.

LS: That’s right.

Tailor is a good word.

LS: Tailor is the right word — you are able to tailor the solutions for different workloads, and my belief is that there’s no one company who’s going to come up with every possible solution for every possible workload. So, I think we’re going to get there in different ways.

By the way, I am a big believer that these big GPUs that we’re going to build are going to continue to be the center of the universe for a while, and yes, you’re going to need the entire network system and reference system together. The point of what we’re doing is, all of those pieces are going to be in reference architectures going forward, so I think architecturally that’s going to be very important.

My only point is, there is no one size that’s going to fit all and so the modularity and the openness will allow the ecosystem to innovate in the places that they want to innovate. The solution that you want for hyperscaler 1 may not be the same as a solution you want for hyperscaler 2, or 3.

Where do you think the balance is going to be then, between there being a standard approach versus, “This is the Microsoft approach”, “This is the Meta approach”? There’s some commonality there, but it is actually fairly customized to their use cases and needs. Again, not next year, but in the long run.

LS: I think as you get out three, four or five years, I think you’re going to see more tailoring for different workloads, and what happens is, the algorithms are going to — right now, we’re going through a period of time where the algorithms are just changing so, so quickly. At some point, you’re going to get to the place where, “Hey, it’s a bit more stable, it’s a little bit more clear”, and at the types of volumes that we’re talking about, there is significant benefit you can get not just from a cost standpoint, but from a power standpoint. People talk about chip efficiency, system efficiency now being as important if not more important than performance, and for all of those reasons, I think you’re going to see multiple solutions…

How much inference do you see actually going back to the CPU?

LS: I think a good amount of inference will be done on the CPU, and even as you think about what we’re talking about is the very large models obviously need to be on GPUs, but how many companies can really afford to be on the largest of models? And so, you can see now already that for smaller models, they’re more fine-tuning for those kinds of things, the CPU is quite capable of it, and especially if you go to the edge.

Right. You noted on the last earnings call that the MI300, it’s been supply-constrained, your fastest ramp ever, but is maybe from the expectations of some investors, a little disappointing in the projections for the end of the year. How much do you feel that shift to being demand-constrained is about the 325 coming along, which you talked about this week, versus the fact that just generally Nvidia supply has gone up, as everyone’s trying to figure this stuff out? Yes, your long-term opportunity is being this sort of customized supplier — tailored supplier, sorry, is the word that we’re going for — versus, “Look, I don’t want to say picking up but just we need GPUs, we’ll buy them from anyone”. Where do you feel your demand curves are relative to the competition and the rapid progression of the space?

LS: Again, let me take a step back and make sure we frame the conversation. The demand for AI compute has been off the charts, I think nobody would have predicted this type of demand, and so when I say that there is tightness in the supply chain, that’s to be expected, because nobody expected that you would need this many GPUs in this timeframe. The fact is the semiconductor industry is really good at building capacity, and so that is really what we’ve seen. As we’ve started to forecast-

And so you feel it’s more a function of there’s just so much supply coming online?

LS: Absolutely, and that’s our job. Our job is to make it to a place where you’re not constrained by manufacturing capacity.

Really, for us, it is about ensuring that customers are really ramping their workloads and that is a lot of deep work, deep partnerships that we’re doing with our customers. So honestly, I feel really good about the opportunities here. We’ve been through this before where it’s very similar to what we saw when we did the initial data center server CPU ramps, which is our customers work very closely with us, they get their software optimized, and then they add new workloads, and add more volumes, and that’s what I would expect to happen here, too.

The difference in AI is that I think customers are willing to take more risk, because there’s a desire to get as much, as fast as possible.

Is there a challenge for you, because that desire to take more risks means they’re more accepting of say, high margins to get the leading GPUs or whatever it might be, or the GPU with the largest ecosystem, developer ecosystem?

LS: What I will say is I’m super happy with the progress we’ve made on software.

Fair enough.

LS: What we’re seeing is excellent out-of-box performance. The fact is things just run, the fact is that much of the developer ecosystem wants to move up the abstraction layer, because everybody wants choice.

And you feel you’re going to get to a stage where that move up the abstraction layer is a common layer across companies, as opposed to getting one company internally moves up the abstraction layer, and so they can buy any CPU, but that doesn’t necessarily benefit you going into another company, or do you feel that’s going to be-

LS: I absolutely believe that it’ll be across the industry. Things like PyTorch, I think PyTorch is extremely widely adopted, OpenAI Triton, similar. These are larger industry things where frankly, part of the desire is it takes a long time to program down to the hardware. Everyone wants to innovate quickly, and so the abstraction layer is good from the standpoint of just rapid innovation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Meta Platforms, Microsoft, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 09 June 2024:

1. Google CEO Sundar Pichai on AI-powered search and the future of the web – Nilay Patel and Sundar Pichai

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, which was announced in a rollout to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we are seeing throughout the years, I think human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more…

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid] who runs Search had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing…

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search.

Because in Search we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those is because of the inherent nature. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

2. Five Moat Myths (transcript here) – Robert Vinall

So we’re now on to Moat Myth number three, which is execution doesn’t matter. So there’s this idea that, like I mentioned the quote earlier, on “when a management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact.” So this is a bit of a callback to my presentation on management and it implies that as long as the moat is there, nothing can go wrong and vice versa – if the moat isn’t there, then nothing is basically going to go right. I really strongly disagree with that. Some of the best businesses, some of the best investments I’ve seen, are in companies which have really great execution, and that execution tends over time to lead to a moat. So I think people get it backwards a little bit. It’s not that the moat trumps execution, it’s that the moat is the output of execution…

…So this one won’t be a surprise to you. I kind of talked about it in the summary on the management presentation but there’s this idea that management doesn’t matter. And I have two examples. So one is a crook and this is the easiest argument to make. Anyone who says management doesn’t matter, all that counts is the business and the financials, well clearly a crook can destroy a business. There’s thousands of examples of that. One that springs to mind is an Indian brewer, Kingfisher, where the guy effectively sells the business and buys an airline with it, which goes bust. His family went from being very wealthy to zero. So clearly management can destroy a business. I don’t think that’s a hard argument to make.

But on the positive side, clearly management can also be the difference between a great business and a failing business. And of course the most famous example of that ever is Berkshire Hathaway, the company we’re all here to see tomorrow. As many of you will know, Berkshire Hathaway was a failing textile mill and would have almost certainly gone bankrupt and is today I think one of the top 10 largest companies in the US, if not in the world. And that’s thanks to the investment decisions and the investing acumen of Warren Buffett. So clearly management does matter.

3. Getting materials out of the lab – Benjamin Reinhardt

Inventing a new material is the beginning of a long process.

Take carbon fiber composites. You’re almost certainly familiar with these, particularly if you’ve ridden a surprisingly light bike or seen its distinctive crosshatched weave pattern on a car dashboard or phone case.

Looking at carbon fiber composites through an electron microscope, you observe strands of carbon atoms arranged in a hexagonal pattern, woven into mats and layered with a resin such as epoxy. Carbon fiber’s tensile strength (the amount of load it can bear under tension before it breaks) is similar to steel, but the material is much less dense. So if you care about both weight and strength – as you do when you’re designing vehicles from a supercar to a Boeing 787 – carbon fiber is the material for you.

Modern materials like these carbon fiber composites are born in laboratories. Researchers at universities or industrial research labs do test tube–scale experiments, which can produce mind-blowing results. Carbon fiber first showed great promise in 1960 when Richard Millington patented a process to create fibers made of 99 percent carbon.

However, at lab scale, materials don’t do anything. Most people wouldn’t want a rope that is a centimeter long, or a battery that lasts three minutes. Leaving the lab requires bridging many orders of magnitude: from producing less than 0.001 kilograms (one gram) per day in a lab to more than 1,000 kilograms (one tonne) per day in a factory.

You can think of lab-scale materials as the most artisanal products in the world, painstakingly handcrafted by people with advanced degrees. Like any artisanal product, lab-scale materials are expensive. Trying to mass-produce these materials by simply increasing the number of fume hoods, test tubes, and pipette wielders would make them cost billions of dollars per kilogram. After a material is invented, we need to discover cheaper ways to produce it, since price per quantity has a dramatic effect on how much it can be used.

We call this process ‘scaling’, but to me that word is frustratingly vague. It bundles together many different problems that need to be solved to decrease cost and increase yield. The three key ones are:

Consistency. A lab can declare success if a small fraction of their material has an impressive property, but a factory needs that fraction to be much higher. A more consistent yield means less waste, and a lower price.

Standardization. Figuring out how to produce a material using conventional, industry-standard equipment avoids the cost of custom tools and enables you to make more material in an easily replicable way.

Streamlining. Moving a product through a continuous manufacturing process, as opposed to applying each of the manufacturing steps to a small, static batch drastically reduces costs. Henry Ford did this with his moving assembly line, passing cars from worker to worker rather than moving workers from car to car…
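
To make the consistency point above concrete, here is a minimal sketch of how yield feeds into the cost of usable material; the batch cost and yield figures are hypothetical, not from the article:

```python
def cost_per_good_kg(batch_cost: float, batch_kg: float, yield_fraction: float) -> float:
    """Cost per kilogram of material that actually meets spec.

    Everything produced is paid for, but only the in-spec fraction can be sold,
    so the effective cost scales with 1 / yield_fraction.
    """
    return batch_cost / (batch_kg * yield_fraction)

# Hypothetical numbers purely for illustration: a $10,000 batch producing 100 kg.
for y in (0.05, 0.25, 0.60, 0.95):
    print(f"yield {y:>4.0%}: ${cost_per_good_kg(10_000, 100, y):>8,.0f} per usable kg")
```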

…Building an industrial-scale factory requires money – a lot of it. To justify the expense to investors, you need to answer the questions, ‘What is your material good for?’, and more importantly, ‘Who will buy it?’

The answer is far from obvious, even for great materials: carbon fiber went through a decades-long journey before it became the star it is today. At first, manufacturers sold it as low-margin home insulation material because of its low thermal conductivity. It was key to several failed products, from turbine blades to a replacement for fiberglass. It eventually found its first iconic use case when Gay Brewer won the first annual Taiheiyo Club Masters using a golf club with a carbon fiber shaft.

The search for a cost-effective use case leaves many new materials in a chicken-and-egg situation: entrepreneurs and companies can’t justify the expense of scaling because there isn’t an obviously valuable application – but that application can’t emerge without a cost-effective material that can be experimented with.

Even applications that do seem obvious can take a long time to realize. In 1968, Rolls-Royce attempted to use carbon fiber in airplane propellers, which failed spectacularly. The propellers were extremely vulnerable to impacts – the whole project became such a boondoggle that it was a significant factor in the company’s collapse into receivership in 1971. Another 40 years would pass before the first majority–carbon fiber airplane, the Boeing 787, took flight…

…Scientists, mostly working in universities, have strong incentives to focus on novelty and one-off demonstrations because these can lead to publications and positive media attention. That work can be valuable, but the search for novelty alone creates mismatches with efforts to produce useful materials at scale. Essentially, the system of discovery sets up scaling for failure by creating materials without any consideration of their ability to scale.

The drive to focus on new discoveries over improving old ones’ capacity to scale, combined with the difficulty of mimicking real-world conditions in a lab, creates initial experiments that bear little resemblance to how people use a material in the real world.

Take the development of lithium-ion battery anodes. Researchers can demonstrate exciting leaps in power density from a new anode material using a half-cell reaction that provides functionally infinite lithium. But in a real battery with finite lithium, these anodes would reduce battery lifetimes to the point of unusability.

Similarly, carbon nanotubes have incredible tensile strength for their weight, but it’s hard to make them longer than a few centimeters. This length limit comes from carbon nanotubes’ tendency to tangle and become more susceptible to impurities as they get longer. Cable makers in the real world don’t just care about strength-to-weight ratios, but also the length over which the material maintains that strength. Yet scientists can take their headline of ‘superstrong carbon nanotubes’ and move on to the next project…

…Materials start-ups often struggle to raise venture capital financing. Venture isn’t a good fit for the capital costs and timescales of the material industry: the size, scale, and expectations of venture capital funds are well-suited to invest in software and pharmaceuticals whose revenues can skyrocket once they hit the market. Venture capital also prefers high-margin businesses that can get to market quickly, but materials often face a trade-off between margins and speed: while it’s faster and cheaper to innovate on one component of a larger production line or one material in an existing product, most of the margins come from new products…

…The long road from the lab to the material world might make the future of new materials seem bleak.

One reason for optimism is that new materials might already be on the horizon. There is a shockingly consistent timescale for materials to become useful beyond their initial niches. It took roughly 50 years between Roger Bacon’s discovery in 1958 and the flight of the first majority–carbon fiber airplane in 2009. The first lithium-ion battery was created by NASA in 1965, but most people didn’t start interacting with them until the mid 2000s. The properties of pure carbon nanotubes weren’t isolated until 1991. If there is indeed a 40- to 50-year timescale for lab-based materials to be useful in high-impact applications, we don’t need to despair about a carbon nanotube space elevator being overdue until somewhere around 2040.

4. High-Yield Was Oxy. Private Credit Is Fentanyl – Greg Obenshain and Daniel Rasmussen

Private equity assets have increased sevenfold since 2002, with annual deal activity now averaging well over $500 billion per year. The average leveraged buyout is 65 percent debt-financed, creating a massive increase in demand for corporate debt financing.

Yet just as private equity fueled a massive increase in demand for corporate debt, banks sharply limited their exposure to the riskier parts of the corporate credit market. Not only had the banks found this type of lending to be unprofitable, but government regulators were warning that it posed a systemic risk to the economy.

The rise of private equity and limits to bank lending created a gaping hole in the market. Private credit funds have stepped in to fill the gap. This hot asset class grew from $37 billion in dry powder in 2004 to $109 billion in 2010, then to a whopping $261 billion in 2019, according to data from Preqin. There are currently 436 private credit funds raising money, up from 261 only five years ago. The majority of this capital is allocated to private credit funds specializing in direct lending and mezzanine debt, which focus almost exclusively on lending to private equity buyouts.

Institutional investors love this new asset class. In an era when investment-grade corporate bonds yield just over 3 percent — well below most institutions’ target rate of return — private credit funds are offering targeted high-single-digit to low-double-digit net returns. And not only are the current yields much higher, but the loans are going to fund private equity deals, which are the apple of investors’ eyes…

…Banks and government regulators have expressed concerns that this type of lending is a bad idea. Banks found the delinquency rates and deterioration in credit quality, especially of sub-investment-grade corporate debt, to have been unexpectedly high in both the 2000 and 2008 recessions and have reduced their share of corporate lending from about 40 percent in the 1990s to about 20 percent today. Regulators, too, learned from this experience, and have warned lenders that a leverage level in excess of 6x debt/EBITDA “raises concerns for most industries” and should be avoided. According to Pitchbook data, the majority of private equity deals exceed this dangerous threshold…
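
As a rough illustration of that 6x threshold (the deal figures below are hypothetical, not from the article), leverage is simply total debt divided by EBITDA, and a typical 65 percent debt-financed buyout can clear the regulators' line of concern quite easily:

```python
def leverage_multiple(total_debt: float, ebitda: float) -> float:
    """Debt/EBITDA multiple that regulators benchmark against roughly 6x."""
    return total_debt / ebitda

# Hypothetical buyout for illustration (figures in $ millions):
# a $1,000m purchase price, 65% debt-financed, on a business with $100m of EBITDA.
debt = 1_000 * 0.65   # $650m of debt
ebitda = 100
print(f"Leverage: {leverage_multiple(debt, ebitda):.1f}x debt/EBITDA")  # 6.5x, above 6x
```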

…Empirical research into lending markets has typically found that, beyond a certain point, higher-yielding loans tend not to lead to higher returns — in fact, the further lenders step out on the risk spectrum, the less they make as losses increase more than yields…

…The historical experience does not make a compelling case for private credit. Public business development companies are the original direct lenders, specializing in mezzanine and middle-market lending. BDCs are Securities and Exchange Commission–regulated and publicly traded companies that provide retail investors access to private market platforms. Many of the largest private credit firms have public BDCs that directly fund their lending. BDCs have offered 8 to 11 percent yield, or more, on their vehicles since 2004 — yet returned an average of 6.2 percent, according to the S&P BDC index. BDCs underperformed high-yield over the same 15 years, with significant drawdowns that came at the worst possible times…

…Central to every private credit marketing pitch is the idea that these high-yield loans have historically experienced about 30 percent fewer defaults than high-yield bonds, specifically highlighting the seemingly strong performance during the financial crisis…

…But Cambridge Associates has raised some pointed questions about whether default rates are really lower for private credit funds. The firm points out that comparing default rates on private credit to those on high-yield bonds isn’t an apples-to-apples comparison. A large percentage of private credit loans are renegotiated before maturity, meaning that private credit firms that advertise lower default rates are obfuscating the true risks of the asset class — material renegotiations that essentially “extend and pretend” loans that would otherwise default. Including these material renegotiations, private credit default rates look virtually identical to publicly rated single-B issuers…

… If this analysis is correct and private credit deals perform roughly in line with single-B-rated debt, then historical experience would suggest significant loss ratios in the next recession. According to Moody’s Investors Service, about 30 percent of B-rated issuers default in a typical recession (versus fewer than 5 percent of investment-grade issuers and only 12 percent of BB-rated issuers)…

…Private equity firms discovered that private credit funds represented an understanding, permissive set of lenders willing to offer debt packages so large and on such terrible terms that no bank would keep them on its balance sheet. If high-yield bonds were the OxyContin of private equity’s debt binge, private credit is its fentanyl. Rising deal prices, dividend recaps, and roll-up strategies are all bad behaviors fueled by private credit…

…Lender protections have been getting progressively weaker. After analyzing just how weak these covenants have become since the financial crisis, Moody’s recently adjusted its estimate of average recovery in the event of default from the historical average of 77 cents on the dollar to 61 cents…
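
Putting the figures above together (roughly 30 percent of single-B issuers defaulting in a typical recession, and Moody's recovery estimate falling from 77 to 61 cents on the dollar), a simple expected-loss sketch shows what a recession could mean for holders of these loans; this is a simplification that ignores default timing and the "extend and pretend" renegotiations discussed earlier:

```python
def expected_loss(default_rate: float, recovery_rate: float) -> float:
    """Expected credit loss = probability of default x loss given default."""
    return default_rate * (1 - recovery_rate)

# Figures quoted above: ~30% of single-B issuers default in a typical recession;
# Moody's average recovery estimate has fallen from 77 to 61 cents on the dollar.
for recovery in (0.77, 0.61):
    print(f"Recovery {recovery:.0%}: expected recession loss of "
          f"{expected_loss(0.30, recovery):.1%} of principal")
```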

…Today private equity deals represent the riskiest and worst-quality loans in the market. Banks and regulators are growing increasingly worried. Yet massive investor interest in private credit has sent yields on this type of loan lower, rather than higher, as the deteriorating quality might predict. As yields have fallen, direct lenders have cooked up leveraged structures to bring their funds back to the magical return targets that investors demand. Currently, we suspect that a significant number of private equity deals are so leveraged that they can’t pay interest out of cash flow without increasing borrowing. Yet defaults have been limited because private credit funds are so desperate to deploy capital (and not acknowledge defaults). Massive inflows of capital have enabled private lenders to paper over problems with more debt and easier terms.

But that game can’t go on forever.

5. How Does the Stock Market Perform in an Election Year? – Nick Maggiulli

With the U.S. Presidential election set for a rematch in November, many investors are wondering how the U.S. stock market might perform in the months that follow. While predicting the future is never easy, using history as a guide can be useful for understanding how markets might react to a Biden or Trump victory…

…In the seven or so weeks following an election there can be lots of uncertainty around how the future might unfold. But, if we look at how markets actually perform after an election, they are typically pretty average. To start, let’s consider how U.S. stocks (i.e. the S&P 500) have performed from “election day” until the end of the year for each year since 1950. Note that when I say “election day” I mean from the Tuesday after the first Monday in November to year end, regardless of whether there was an actual election…

…while stock performance has varied quite a bit since 1950, U.S. stocks tend to rise slightly following an election (or in the same time period during a non-election year). The biggest exceptions to this were in 2008, when markets declined by nearly 11% from election day to year end, and in 1998, when they increased by almost 10% as the DotCom bubble continued to inflate.

However, if we look at the average performance in election years versus non-election years, all these differences wash out. Plotting the average performance of the 18 election years and 56 non-election years in the data, we see basically no long-term difference in performance:

While the S&P 500 tends to perform worse (on average) in the first few days following the election, there seems to be no lasting impact on stocks through year end. In fact, the average return following election day through December 31 is 2.3% in an Election Year compared to 2.4% in a Non-election Year. In other words, their returns on average are basically the same. The median (50th percentile) return is similar as well with a 2.9% return in an Election Year compared to 2.4% during a Non-election year…
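
The averages and medians above are straightforward to reproduce if you have the return series; here is a minimal sketch, assuming you have already collected each year's election-day-to-year-end S&P 500 return since 1950 (the numbers below are placeholders, not the article's data):

```python
import statistics

# Placeholder returns (as fractions) for illustration only; the article's dataset
# covers 18 election years and 56 non-election years since 1950.
election_year_returns = [0.023, -0.11, 0.04, 0.07]
non_election_year_returns = [0.024, 0.10, 0.03]

for label, rets in [("Election", election_year_returns),
                    ("Non-election", non_election_year_returns)]:
    print(f"{label}: mean {statistics.mean(rets):.1%}, "
          f"median {statistics.median(rets):.1%}")
```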

…When Trump won the 2016 election to almost everyone’s surprise, many believed that U.S. stocks would crash as a result. Jane Street, a prominent quantitative trading firm, was one of them. After finding a way to get the 2016 election results minutes before the rest of the mainstream media, Jane Street still ended up losing money because they got the market’s reaction wrong. As Michael Lewis recalls in Going Infinite:

What had been a three-hundred-million-dollar profit for Jane Street was now a three-hundred-million-dollar loss. It went from single most profitable to single worst trade in Jane Street history.

This illustrates how difficult it can be to predict the reaction of markets, even for the smartest people in the room…

…Overall, U.S. stocks performed better than average after both Trump and Biden’s election victories. However, with the market increasing by 4% in 2016 and 7% in 2020, Biden is the clear post-election winner.

However, if we look at how U.S. stocks performed throughout the rest of their presidency, it seems like Trump will be the clear winner when all is said and done…

…One of the reasons I love this chart is because it illustrates that U.S. stocks tend to rise regardless of which political party is in office. This suggests that the factors that impact stock prices have less to do with who’s in office than we might initially believe.

Some of you will see the chart above and point out how the only two negative periods occurred when Republican presidents were in office. That is technically correct. However, it is also true that these negative periods occurred immediately after Democratic presidencies. So who’s to blame? The Republicans? The Democrats? Neither? No one knows…

…While the outcome of the 2024 U.S. Presidential election remains uncertain, history suggests that the stock market is likely to perform similarly regardless of who wins. In the short term, markets may react positively or negatively to the election results, but those effects tend to even out over time…

…Ultimately, the key to navigating the uncertainty of an election year is to stay informed and avoid making emotional decisions based on short-term political events. The U.S. economy and stock market have made it through countless political cycles before and will make it through this one as well. So no matter who wins in November, history suggests that staying the course is often the best course of action. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 02 June 2024:

1. Pierre Andurand on a Shortage of Cocoa, Surging Copper and the Outlook for Oil – Tracy Alloway, Joe Weisenthal, and Pierre Andurand

Tracy (03:29):

So maybe to begin with, talk to us about how you got interested in cocoa because my understanding is you put a big trade on a long position earlier this year, it paid off massively. But this isn’t a sort of normal type of trade for you. This is something that was a little bit different.

Pierre (03:48):

Yes. Well, generally my background is more in energy trading, but I’ve traded quite a bit of metals as well, a little bit of agricultural products.

But I have one analyst who was very good and told me in January, ‘Pierre, you should look at cocoa.’ So I’m like ‘Okay, I don’t know anything about it, tell me.’

And he gave me a really good presentation that was really interesting. So then we really dug in really deep together to really understand the fundamental market. And basically we have a massive supply shortage this year.

I mean, we see production down 17% relative to last year. Most analysts out there have it down 11%, but that’s because they tend to be very conservative. They have lots of clients and they don’t want to worry the world. So they come with relatively conservative estimates.

But really tracking the export from the main exporters, mainly Ivory Coast and Ghana, that represent together about 60% of [the] world’s production. We see basically Ivory Coast exports down 30% year to date, I mean season to date and Ghana down 41%.

So just those two countries together since the start of the season, which is the 1st of October, are down 800,000 tons. And now we have what we call the mid-crop that is starting, but that represents only 20% of the balance of the season for West Africa.

And that’s not going to be enough to really change the deficit that we have this year. So we have a deficit of 800,000 tons from those two countries. And then looking at all the other countries we have, I think there are some slightly positive, some slightly negative, but basically we get to a deficit of 800,000 tons this year. And so that’s the first time we have such, you know, a decline in supply and that’s very hard to make it fit.

So at first you eat into current inventories until you run out of inventories and then the price can go anywhere.

So when we look at, okay, what makes the price of cocoa, right? It’s always about supply versus demand. But what has been capping the price between $2,500 a ton and $3,000 a ton, it was not demand because demand is extremely inelastic. I mean you can study that historically when you have a recession or not, when prices go up a lot or not. I mean demand generally goes up.

And that’s because the amount, in dollar terms, that people consume in cocoa is very small. I mean, I did a back of the envelope calculation the other day. I mean at basically $10,000 a ton, even though it’s four times recent historical prices, out of a market of 5 million tons of demand per year, you have like 8 billion people on the planet, so on average it means that people consume 1.7 grams of cocoa per day, which at $10,000 a ton represents 1.70 cents per day. Okay, that’s the average person. Many people eat nothing and a few eat 10 times that amount…

…Pierre (06:56):

But let’s say you eat even one full tablet, so 125 grams a day, every single day for the whole year, which is quite a lot, of a high cocoa content chocolate, because with milk chocolate you have less than 10% cocoa in it.

So the price can go up 10 times, your tablet is only going to double in price. It’s not going to react very much to the cocoa price. But if you take a high content, high chocolate content bar, like a tablet, 125 grams, that means that you probably have [a] maximum of 50 grams of cocoa beans equivalent in it. I mean it’s probably a lot less.

Then you get to an expense of $14 per month at current prices, which is an increase of $10 per month relative to when we had a more normal price. So it means that demand, like for more reasonable chocolate lovers, that increase in [the] price in cocoa just corresponds to $2 to $5 per month.
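Another aside from us: a quick Python sketch of that chocolate-lover arithmetic, assuming the maximum of 50 grams of cocoa-bean equivalent per daily tablet that he mentions, and comparing $10,000 a ton with the old $2,500-a-ton end of the range:

```python
# Monthly cocoa-bean cost for a daily high-cocoa tablet, before and after the squeeze.
beans_per_day_g = 50        # upper bound quoted for a 125 g dark tablet
days_per_month = 30
price_now = 10_000          # USD per tonne, current
price_before = 2_500        # USD per tonne, bottom of the old range

kg_per_month = beans_per_day_g * days_per_month / 1_000
cost_now = kg_per_month * price_now / 1_000
cost_before = kg_per_month * price_before / 1_000

print(round(cost_now, 2), round(cost_before, 2), round(cost_now - cost_before, 2))
# ~15.0 vs ~3.75 USD a month, an increase of roughly 11 USD a month,
# in the same ballpark as the ~$14 and ~$10 figures quoted above.
```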

So people are not going to eat less chocolate because of that. So it means that prices really are capped by the amount of supply you get. So if you can’t get enough supply, the price can go up a lot until we get more supply.

And when do we get more supply? Well, that in part [is] due to the weather, if you have much better weather then you might get more supply of cocoa beans the next year. But we have some issues that are also structural.

So when we look at the reasons for this large decline in production this year, I mean a lot of the reasons are actually structural. I mean we can look at four reasons why cocoa bean production has gone down a lot this year.

First I should give a little bit of background of why cocoa is so concentrated in West Africa. I mean it’s mainly because it requires very specific temperature, rainfall and humidity conditions. And that’s why most of the production is concentrated around a certain latitude — so 70% in West Africa and then you have 21% in mainly Latin America and 5% in Asia and Oceania.

So the main reasons why we lost a lot of production this year is number one weather. So some of it [is] due to El Nino, we had basically a period of time when it was too hot and a period of time when we had way too much rain.

Second is climate change. So climate change is every year shifting the weather patterns generally unfavorably for cocoa productions. Then you have two diseases, you have one called the Black Spot disease that comes from the fungus and it occurs mainly during the rainy season. It’s spread by rain splash, so basically it can’t grow when it’s dry.

And then you have a virus called the Swollen Shoot disease. It’s not a new disease. It was discovered in 1936. It’s transmitted by mealybugs, but it decreases cocoa yields a lot. So basically a tree that has that Swollen Shoot disease loses 25% yield within the first year and 50% within two years, and the tree dies within three to four years. And we’ve had actually a spread of that disease over the last year.

And then also we had less usage of fertilizers, mainly in the Ivory Coast, due to high fertilizer prices and also shortages due to the Russian invasion of Ukraine. So everything is linked. So some of it might be solved if we get better weather. I mean for next year we should have La Nina and not El Nino, so that should help at the margin.

But we still have issues with climate change. We still have issues with Black Spot disease and Swollen Shoot disease and there’s no indication that we get more usage of fertilizers in [the] Ivory Coast because actually the farmers are still getting relatively low prices and they’re still struggling to make ends meet. So a lot of those supply issues are actually structural…

…Joe (12:09):

So what did your analyst see? Or how was your analyst able to see something in the supply and demand situation that he felt, and you felt, was not being identified by the analysts who cover this closely?

Pierre (12:23):

I think it’s mainly an understanding of how much prices have to move to balance the market. You know, sometimes people can trade that market for like 20 years. They’ve been used to a range of prices and they believe, okay, the top of the range is the high price for example.

But they don’t really ask themselves what makes that price, right? And sometimes taking a step back can help. I mean what makes the price is mainly the fact that in the past you would have the supply response if prices were going up. But if now you don’t get the supply response, or the supply response takes four or five years, then you need to have a demand response.

And a lot of people look at prices in nominal terms. So you hear people saying ‘Oh, we are at all-time high prices in cocoa,’ but that’s because they look at prices in nominal terms. [The] previous high in 1977 was something like $5,500 a ton in 1977 dollars, which is equivalent to $28,000 a ton in today’s dollars.

So we are still very far from previous highs. And so you have to look at a bit more history and understand how prices reacted to a shortage in the past, how long it took for the shortage to actually resolve itself, and what’s different today.
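A quick note from us on that nominal-versus-real point: the conversion is just the 1977 price scaled by the rise in the consumer price index since then. The CPI levels in this sketch are approximate figures we have plugged in ourselves, not numbers from the interview:

```python
# Deflating the 1977 cocoa peak into today's dollars.
nominal_1977 = 5_500     # USD per tonne, in 1977 dollars
cpi_1977 = 60.6          # US CPI-U annual average, 1977 (approximate)
cpi_2024 = 310.0         # US CPI-U, early 2024 (approximate)

real_today = nominal_1977 * cpi_2024 / cpi_1977
print(round(real_today, -2))   # ~28,100 -> roughly the $28,000 a ton quoted above
```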

So there’s a ratio that we look at that most people look at, it’s actually the inventory to grindings ratio. So it’s a measure of inventory to demand, what we call grinding is basically industrial companies that take the cocoa beans and they want to make chocolate with it. So it’s a process and some of them make the end product chocolate directly. Some of them sell back the product to other chocolate makers.

And so basically a typical grinder would take cocoa beans and make cocoa butter and powder with it. And the prices of both those elements also went up even more than cocoa beans, which means that actually we probably had some destocking everywhere in the chain.

So it looks like demand, when we look at the chocolate makers, the end demand for chocolate didn’t go down at all, it looks to be flat on the year. Grindings look to be down three, three and a half percent this year, despite the fact that the end demand is the same in volume, which means that they’ve been destocking cocoa beans actually.

And so we had destocking everywhere — at the end chocolate level, at the cocoa beans, at the cocoa butter and cocoa powder level. So we had this destocking everywhere on the chain and now we have the largest deficit ever on top of two previous years of deficit. And it looks like next year we will have a deficit.

So we’re in a situation where we might actually run out of inventories completely. I mean this year we think we will end up with an inventory to grinding ratio — so inventory at the end of the season — of 21%. For the last 10 years we’ve been between 35% and 40% roughly. At the previous peak in 1977 we were at 19% and that’s what drove us to $28,000 a ton, of today’s dollars.

If we have another deficit next year, then we might go down to 13%. So I don’t think it’s actually possible. That’s when you really have real shortage of cocoa beans, you can’t get it and that’s when the price can really explode. And so understanding that you have to slow down demand and we know that demand can’t really be slowed.
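To make the inventory-to-grindings arithmetic concrete, here is a small illustrative sketch from us. World grindings of roughly 5 million tons (in line with the demand figure quoted earlier) and a 0.4-million-ton deficit next season are assumptions we have chosen to reconcile the 21%-to-13% path he describes:

```python
# Inventory-to-grindings ratio: end-of-season bean inventory divided by annual grindings.
grindings_mt = 5.0                 # million tonnes per year (assumed, ~world demand)
ratio_end_this_season = 0.21       # quoted estimate for this season

inventory_mt = ratio_end_this_season * grindings_mt    # ~1.05 Mt of beans left
assumed_deficit_next_mt = 0.4                          # hypothetical further deficit
ratio_next = (inventory_mt - assumed_deficit_next_mt) / grindings_mt

print(round(ratio_next, 2))        # ~0.13, the 13% level he says is barely possible
```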

So that’s when you can have an explosion [in price]. And remember that these commodity futures, they’re actually physically settled. So if somebody wants to take delivery, [the futures] have to converge with the price of the physical. If you have no physical [and] somebody wants to take delivery, the price can go anywhere.

So it’s a dangerous commodity to short, right? If you have no physical against it. And actually sometimes we read news that the funds have been pushing cocoa prices. It’s actually completely untrue because the funds have been selling since February. They actually went from a length of 175,000 lots, so that’s 1.75 million tons of cocoa length, I think it was around September last year on average, or a bit earlier, to 28,000 lots, so 280,000 tons, at the moment.

So they sold more than 80% of their length actually. And the people who’ve been buying the futures from the funds, it’s producers because they’re producing a lot less than they expected.
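A small aside from us: those position figures imply the standard 10-ton cocoa futures contract, and they back up the “sold more than 80% of their length” claim:

```python
# Implied contract size and how much of the funds' length has been sold.
lots_peak, lots_now = 175_000, 28_000
tonnes_per_lot = 1_750_000 / lots_peak        # 10 tonnes per lot
reduction = 1 - lots_now / lots_peak

print(tonnes_per_lot, f"{reduction:.0%}")     # 10.0 tonnes per lot, ~84% of the length sold
```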

So what has been happening in the cocoa market is that you had a reduction of what we call the open interest, where both the longs would reduce their length and the shorts would reduce their shorts. And then we get into a market where you have less liquidity because you have less exposure, you have fewer longs and fewer shorts, and then the volatility increases.

So in the past when people were comfortable having, let’s say, a 100-lot position, now, because it moves more than 10 times more than in the past, they’re going to have like a 10-lot position, right? So because we had a massive move and we have a massive deficit, everybody’s reducing their positions, and because of the increased volatility, we have less activity. And that’s what makes the price more volatile as well.

2. The World’s Most Secret Book About Making a Fortune in Mining – Swen Lorenz

For years, I have been trying to find a copy of an ultra-rare book published in 2008.

It told the inside story of a few mining entrepreneurs who built a company from zero and sold it for a staggering USD 2,500,000,000 (2.5 BILLION) in cash just 674 days later. That’s a quick fortune earned, if ever there was one!

The company was listed on the stock market, and public investors were taken along for some of the ride. In 2006/07, this was the single best British company to own stock of.

Somehow, though, the company’s insiders seem to have regretted publishing their detailed account. The book strangely disappeared from the market shortly after it was published. Curiously, there is ample evidence that an active effort was made to erase the book from history…

…The book in question is “UraMin – A team enriched. How to build a junior uranium mining company“.

Junior miners are companies that are still looking for resources, rather than producing the resource. As most of my readers will know, they are among the most speculative ventures one can invest in. In about 99% of them, investors lose their entire investment. The remaining 1% regularly end up making investors 10, 20, or even 100 times their money.

UraMin was primarily the brainchild of Stephen Dattels, a Canadian mining entrepreneur with a decade-long track record. The book describes the genesis of UraMin from his own perspective and that of his two partners, Ian Stalker and Neil Herbert. It was, for all intents and purposes, a real insiders’ account of the incredible success story.

UraMin produced oodles of capital gains. It was a lucrative investment not just for its pre-IPO investors, but also for those who bought into it through shares acquired on the open market post-IPO…

…It’s no surprise that the book starts with describing just how “Down and Out” the market for uranium-related investments was at the time.

At the turn of 2004/05, you would have been hard-pressed to find any investors interested in uranium. The price for the metal had been in a 26-year (!) bear market. From its 1977 peak, it had been downhill ever since. There were barely any publicly listed companies operating in the uranium industry.

You would have struggled to find anyone who even understood what the metal was used for, and how it was used.

Or as Ian Stalker, the CEO of UraMin, is quoted in the book:

“A meeting with potential investors could literally take hours. … First, it required a full explanation of what uranium is used for (it isn’t used for ‘bombs’), a run-through of the fuel cycle (enrichment and so on), the safety record of nuclear reactors, long-term disposal issues and the balance of supply and demand. We were lucky if we managed to talk for 10 minutes about the company.”

It was not an opportunity that the mass of investors would have jumped at when it was first presented.

However, all the clues were there. At the end of 2004/05, three crucial developments had already taken place, all pointing towards an imminent reversal of fortunes:

  • The price of uranium had started to creep up. It went from USD 10/lb in early 2003 to USD 20/lb by the end of the following year (which was still far below the late-1970s high of USD 115/lb).
  • Existing stockpiles of the metal, which had soared during the 1990s because of a decommissioning of Soviet nuclear missiles, had dwindled to virtually zero. The oversupply that had depressed the price for so long was gone.
  • A soaring oil price, which at the time was up more than 10 times compared to its early-2000s low, provided increasing demand for cheap nuclear energy. It was only a matter of time before investment would flow towards the much cheaper source of energy.

Subsequently, the uranium price went through the roof…

…Put more bluntly, there are occasions when a management team has to concede that everyone is better off if it puts the company up for sale – which is difficult because it usually leads to the entire management team and board losing their jobs!

Also, who wants to leave a party when things are the most fun? Making the decision to call it quits and focus on maximising a buyout price for a company is an extraordinarily hard decision to take. However, it is quite regularly the one decision that a board really should have the guts and the sense of realism to take.

I wasn’t surprised to read that Dattels and his colleagues had that rare quality of knowing when to quit:

“The trend towards a smaller group of larger uranium companies had significant repercussions for UraMin, something that its management realised early on. “The sector was not a large one – it had already seen several significant mergers and more were rumoured,” notes Neil Herbert. “Despite the rapid progress we had made, we were in danger of becoming a relatively small operator.

On 19 February 2007, Reuters reported that UraMin was planning a strategic review of its assets in light of the recent consolidation of the sector.

In effect, analysts believed, the company had just put itself up for sale.”

Companies can put themselves up for sale by hiring an investment bank and making a public announcement, or they can de facto put themselves up for sale by feeding information into their industry’s rumour mill.

Steve Dattels decided that “we should take the initiative and evaluate the merger possibilities rather than wait for the telephone call.”

UraMin hired advisors and went through an official process of allowing prospective acquirers access to its internal information.

Following the process of inviting bids, the company came to an agreement with French nuclear power company, AREVA. In June 2007, UraMin’s management team agreed to a takeover offer that valued the company at USD 2.5bn. The entire purchase price was payable in cash.

Investors who had bought in at the bottom of GBp 50 per share made 8 times their money within just 12 months.

One of the earliest institutional backers of the venture reportedly made 22 times their money in just 24 months.
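As a quick aside from us, here is how those multiples translate into annualised returns (rough arithmetic only, using the holding periods mentioned above):

```python
# Annualising the UraMin return multiples quoted above.
multiple_12_months = 8            # open-market buyers near the GBp 50 low
multiple_24_months = 22           # an early institutional backer

annualised_24m = multiple_24_months ** (1 / 2)    # geometric average over two years
print(multiple_12_months - 1)                     # +700% over roughly one year
print(round(annualised_24m, 1))                   # ~4.7x per year, i.e. ~370% annualised
```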

3. Why Utilities Are Lighting Up the Stock Market – Jason Zweig

As Bespoke Investment Group, a research firm, pointed out this week, three of this year’s five best-performing stocks in the S&P 500 are utilities: Vistra, Constellation Energy and NRG Energy. Vistra, up 143%, has even outperformed the king of AI itself, Nvidia; Constellation, up 85%, is barely behind it…

…The business of providing electricity hasn’t grown in the past couple of decades as conservation and more-efficient technology have reduced consumption. The U.S. generated slightly less electricity in 2021 than it had in 2007, according to the federal Energy Information Administration—even though the economy grew more than 3% annually over that period.

Now, however, the need for energy is finally expanding. On their April 23 earnings-announcement call, executives at NextEra estimated that electricity demand from data centers alone would grow 15% a year through the end of the decade.
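For a sense of what that growth rate compounds to, here is a one-line check from us (treating “the end of the decade” as roughly six years out, which is our assumption):

```python
# 15% a year compounded over ~6 years (2024 to 2030, assumed).
print(round(1.15 ** 6, 2))   # ~2.31x today's data-centre electricity demand
```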

AI isn’t the only reason utilities have heated up so fast. The rapid increase in demand for electricity nationwide comes from three main sources, says Maria Pope, CEO of Portland General Electric, Oregon’s biggest utility.

One is the revival of domestic manufacturing after decades of moving offshore. Another is the boom in semiconductor production, boosted by government support. But the expansion of data centers, “driven by the insatiable appetite of AI,” is the fastest-growing source of industrial demand, says Pope.

Jay Rhame, chief executive of Reaves Asset Management, which manages about $3 billion in utility stocks, thinks the only historical parallel is the boom in electricity generation that followed the widespread adoption of air conditioning in the 1960s and 1970s.

4. Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI – Nilay Patel and Shantanu Narayen

If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?

I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same amount of profound implication for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI in terms of what we do at the interface layer, which is what people use to accomplish something; what we’re doing with foundation models; and what models are we creating for ourselves that are the underlying brain of the things that we are attempting to do, and what’s the data? I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.

The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.

For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.

Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story,” and so, on the creative side, whether you’re ideating, whether you’re trying to take some picture and fix it but you don’t quite know how to do it.

When people have looked at things like Generative Fill, their jaws drop. What’s amazing to us is when, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive…

I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.

You’re absolutely right in that, as the internet has evolved, there’s what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, I think there are a couple of key hypotheses that when they do, they become lasting platforms. The first is transparency of optics of what they are doing with that data and how they’re using that data. What’s the monetization model, and how are they sharing whatever content is being distributed through their sites with the people who are making those platforms incredibly successful?

I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.

How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?

Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe counsel] and others about what we are doing with content credentials and what we are doing with the Fair Act. If you look at Photoshop, we’re also taking a very thoughtful approach about saying when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of saying you have access to that IP and license to that IP in order to do that.

So I can interpret your questions in one of two ways. One is: how do we look at all of the different image generators that have happened? In that case, we are both creating our own image generator, but at the NAB Show, we showed how we can support other third parties. It was really critical for us to sequence this by first creating our own image model. Both because we had one that was designed to be commercially safe. It respected the rights of the creative community because we have to champion it. But if others have decided that they are going to use a different model but want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.

And so I interpret your questions in those two ways, which is we’re taking responsibility in terms of when we provide something ourselves, how are we making sure that we recognize IP because it is important, and it’s people’s IP. I think at some point, the courts will opine on this, but we’ve taken a very designed-to-be commercially safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of our products? And a lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s something that we are pushing as third-party models.

It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. And that seems like the thing that is burbling along underneath this entire revolution is, yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that. I don’t want to sit in the weeds of that. I’m just wondering for you as the CEO of Adobe, where is your level of risk? How risky do you think this is right now for your company?

I think the approach that we’ve taken has shown just tremendous leadership by saying … Look at our own content. We have a stock business where we have rights to train the models based on our stock business. We have Behance, and Behance is the creative professional social site for people sharing their images. While that’s owned by Adobe, we did not train our Firefly image models based on that because that was not the agreement that we had with people who do it.

I think we’ve taken a very responsible way, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models where we allow a person in the media business or the CPG business to say, “We will upload our content to you Adobe, and we will create a custom model for us that only we can use, what we have rights for.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.

It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?

To your point, I think it’s something like 50 percent of the world’s population over a 12-month period is going to the polls, including the US and other major democracies in the world. And so, we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put their digital signature on what the provenance of that content was? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the distribution of content space, in the PC space to all say we need to do it. We’ve also now, I think, made the switch associated with, how do you visually identify that there is this watermark or this digital signature about where the content came from?

I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—

The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle is not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?

I actually acknowledge that the last step in this process is getting the consumer to care and getting the consumer to care [about] pieces of information that are important. To your point again, you had a couple of examples where some of them are in fun and in jest and everybody knows they’re in fun and jest and it doesn’t matter. Whereas others are pieces of information. But there is precedent for this. When we all transacted business on the internet, we said we want to see that HTTPS. We want to know that my credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So, I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears…

Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI is hallucinating. The future of the PDF seems like training data for an AI. And the thing that makes that really happen is the AIs have to be rock-solid reliable. Do you think we’re there yet?

It’s getting better, but no. Even the fact that we use the word hallucinate. The incredible thing about technology right now is we use these really creative words that become part of the lexicon in terms of what happens. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different because when you’re doing a summary of it and you can point back to the links in that document from which that information was gleaned, I think there are ways in which you provide the right checks and balances. So, this is not about creation when you’re summarizing and you’re trying to provide insight and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the problem of all hallucinations that we have in images. And so I think in PDF, while we’re doing research fundamentally in all of that, I think the problems that we’re trying to solve immediately are summarization — being able to use that content and then create a presentation or use it in an email or use it in a campaign. And so I think for those use cases, the technology is fairly advanced.

There’s a thing I think about all the time. An AI researcher told you this a few years ago. If you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. That is the vast majority statistically of all the documents on the internet. When you think about how much machine-generated documentation any business makes, the AI problem amps it up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a part where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?

Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, I think the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit that exists. So you’re right in that there are a lot of documents that are transient that perhaps don’t have that fundamental value. But I did want to say that societies and cultures are also represented in PDF documents. And that part is important. I think — to your other question associated with “where do you eliminate people even being part of a process and let your computer talk to my computer to figure out this deal” — you are going to see that for things that don’t matter, and judgment will always be about which ones of those matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation I think in that particular respect. I think you’re right.

The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted. And maybe kids, they don’t even know what file systems are, but they still know what PDFs are. You make the next turn. And this is just to bring things back to where we started. You say AI is a paradigm shift, and now you’re just going to talk to a chatbot and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you the example: I think most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbox represent, “Okay, we need yet another new application model”?

I think you are going to see some fundamental innovation. And the way I would answer that question is first abstracting the entire world’s information. It doesn’t matter whether it was in a file on your machine, whether it was somewhere on the internet, and being able to have access to it and through search, find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role though for permanence in that. And I think the role of PDF in that permanence aspect of what you’re trying to share or store or do some action with or conduct business with, I think that role of permanence will also play an important role. And so I think we’re going to innovate in both those spaces, which is how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, and what does that permanence look like, and what’s the application of that permanence? Whether it’s for me alone or for a conversation that you and I had, which records that for posterity?

I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.

Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?

Both, and on mobile devices. Look at a product like Lightroom. You talked about Denoise in Lightroom earlier. When Lightroom works exactly the same across all these surfaces, that power in terms of people saying, oh my God, it’s exactly the same. So I think the boundaries of what’s on your personal computer and what’s on a mobile device and what’s in the cloud will certainly blur because you don’t want to be tethered to a device or a computer to get access to whatever you want. And we’ve already started to see that power, and I think it’ll increase because you can just describe it. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.

Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?

It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talked to Nvidia and AMD and Qualcomm. I think a lot of the training, that’s the focus that’s happening on the cloud. That’s the infrastructure. I think the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, I think even today with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or downloaded to your PC. Because otherwise, all that compute power that we have in our hands or on our desktop is really not being used. I think the models are more nascent in terms of how you can download it and offload that processing. But that’s definitely going to happen without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how that power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that because we’re still trying to learn, and the model’s getting on the server.
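As a back-of-envelope aside from us on why a roughly billion-parameter model can plausibly live on a phone or PC, here is a quick sketch. The precision levels and byte counts are general rules of thumb we have assumed, not figures from the interview:

```python
# Approximate on-device memory footprint of a ~1B-parameter model at common precisions.
params = 1_000_000_000
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for precision, nbytes in bytes_per_param.items():
    print(precision, round(params * nbytes / 1e9, 1), "GB")
# fp16 ~2.0 GB, int8 ~1.0 GB, int4 ~0.5 GB -- within reach of a modern phone or laptop
```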

5. The S&P 500 vs. the U.S. Economy – Ben Carlson

The S&P 500 is a big part of the U.S. economy but there are plenty of differences between the stock market and the economy.

For instance, the technology sector has an outsized impact on S&P 500 earnings growth over time:…

…Depending on the time frame, the tech sector can make up the majority of both earnings gains and losses. The same is true of sales:…

…The BEA estimates tech’s contribution to GDP to be 10%. That’s still close to $3 trillion but the economy is far more diversified and spread out than the stock market.

A decent chunk of sales for S&P 500 companies also comes from outside our borders:…

…The S&P 500 is a U.S. index but it is comprised of global corporations…

…S&P 500 companies are enormous but the majority of firms with $100 million or more in sales are private companies:…

…S&P 500 companies account for roughly 1 in 5 jobs in the United States:…

…But these corporations are insanely efficient and profitable, accounting for half of the profits in America:


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  We currently have a vested interest in Adobe, Apple, and Tencent. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 May 2024:

1. How I Think About Debt – Morgan Housel

Japan has 140 businesses that are at least 500 years old. A few claim to have been operating continuously for more than 1,000 years…

…These ultra-durable businesses are called “shinise,” and studies of them show they tend to share a common characteristic: they hold tons of cash, and no debt. That’s part of how they endure centuries of constant calamities…

…I think this is the most practical way to think about debt: As debt increases, you narrow the range of outcomes you can endure in life…

…I hope to be around for another 50 years. What are the odds that during those 50 years I will experience one or more of the following: Wars, recessions, terrorist attacks, pandemics, bad political decisions, family emergencies, unforeseen health crises, career transitions, wayward children, and other mishaps?

One-hundred percent. The odds are 100%.

When you think of it like that, you take debt’s narrowing of survivable outcomes seriously…

…I’m not an anti-debt zealot. There’s a time and place, and used responsibly it’s a wonderful tool.

But once you view debt as narrowing what you can endure in a volatile world, you start to see it as a constraint on the asset that matters most: having options and flexibility.

2. Economists Aren’t the Best at Predicting the Economy – Tyler Cowen

Out of curiosity, I recently cracked open The American Economy in Transition, published in 1980, edited by Martin Feldstein and including contributions from other Nobel-winning economists, successful business leaders and notable public servants. Though most of the essays get it wrong, I found the book oddly reassuring…

…For instance, many authors in the book are focused on capital outflow as a potential problem for the US economy. Today, of course, the more common concern is a possible excess inflow of foreign capital, combined with a trade deficit in goods and services. Another concern cited in the book is European economies catching up to the US. Again, that did not happen: The US has opened up its economic lead. Energy is also a major concern in the book, not surprisingly, given the price shocks of the 1970s. No one anticipated that the US would end up the major energy exporter that it is today.

Then there is the rise of China as a major economic rival, which is not foreseen — in fact, China is not even in the book’s index. Neither climate change nor global warming is mentioned. Financial crises are also given short shrift, as the US had not had a major one since the Great Depression. In 1980 the US financial sector simply was not that large, and the general consensus was that income inequality was holding constant. Nor do the economics of pandemics receive any attention.

So you may see why the book stoked my fears that today’s economists and analysts do not have a good handle on America’s imminent problems.

As for opportunities, as opposed to risks: The book contains no speculation about the pending collapse of the Soviet Union. Nor are the internet, crypto or artificial intelligence topics of discussion…

…Then there are the things that haven’t changed much over the decades. Peter G. Peterson, who helped to found the fiscally conservative Peterson Institute, has an essay in the book worrying about the federal deficit.

The piece that most resonated with me, contrary to expectation, is by Paul Samuelson. Samuelson is the one contributor who realizes he doesn’t understand what is going on in the world. He starts by mentioning how forecasts in 1929 and 1945 failed to see the future very clearly. He hopes that the 1980 contributions will be luckier. “The facts tell their own story,” he writes, “but it is not the simple story that so many want to hear.”

Perhaps true reassurance comes from knowing that, all things considered, the US economy has done quite well since 1980.

3. The Cazique of Poyais: a Real Estate illusion in the new world – Javier Pérez Álvarez

After fighting in the South American wars of independence, Gregor MacGregor returned home declaring himself Cazique (kind of a tribal prince) of an imaginary Central American country called “Poyais.” His utopian paradise promised unparalleled wealth and opportunities, attracting hundreds of investors who, unfortunately, not only ended up losing their fortunes but also their lives…

…Gregor MacGregor, known as the Prince of Poyais, Cazique, and His Serene Highness, was a Scottish soldier who became one of the most notorious conmen of his time. He was born on December 24, 1786, into the MacGregor Clan, a family with a strong military tradition…

…At sixteen, Gregor joined the British Army just as the Napoleonic Wars were breaking out. Serving in the 57th Foot Regiment, he quickly rose to the rank of lieutenant within a year.

In June 1805, at the age of nineteen, he married Maria Bowater, a wealthy and well-connected woman, the daughter of a Royal Navy admiral. This marriage secured his social position, and he bought the rank of captain, avoiding the traditional path of promotion that would have required seven years of hard work…

…After his wife’s death, he faced financial difficulties, and his social aspirations crumbled. It was then that his interests turned to Latin America, inspired by the Venezuelan revolutionary general Francisco de Miranda.

Selling his property in Scotland, MacGregor sailed to Venezuela in 1812, presenting himself as “Sir Gregor” and offering his services to Miranda, who appointed him colonel and commander of a cavalry battalion. Despite some initial successes, his ambition drove him to rapidly ascend the ranks, achieving the position of General of Division in the armies of Venezuela and New Granada by the age of thirty…

…Then in 1820, MacGregor came across the swampy, inhospitable coast of Nicaragua, known as the Mosquito Coast. Here he persuaded the leader of the indigenous people to give him land to create a colony. A dream of empire began to take shape.

The self-appointed Prince of Poyais reappeared in London in 1820. He was seeking investors and colonists looking for a new opportunity across the Atlantic in a new world full of possibilities…

…He commissioned a book, illustrated with engravings, describing the country with “total” accuracy…

…Taking advantage of his past as a British Army officer, he managed to gain the sympathy of high society. Nothing has ever been more important than good marketing and PR. The Crown recognized him as a foreign dignitary and, to foster relations between the two countries, honored him with the title of Sir (finally). At that time, just as it happens now, brokers didn’t care what kind of securities they sold as long as they made money from them. Thus, in 1822, Sir Gregor managed to place “Poyais State bonds for stabilization” worth £200,000. These bonds were traded alongside securities from other already recognized states, such as Colombia, which had gained its independence in 1810.

After this, MacGregor took it a step further. He opened offices throughout Great Britain that sold land to colonists who wanted to start a new life in Poyais…

…Many were convinced. Hundreds of enthusiastic colonists spent their savings buying land in Poyais and the corresponding passage overseas…

…In 1822, the first emigrants arrived on the country’s shores in two ships. At the location where the capital should have been, described in detail in the book by the “Black River,” there was nothing. The place the colonists had arrived at was known as the “Mosquito Coast.” The natives themselves avoided that place due to its terrible climate…

…Nevertheless, typical of human psychology, the colonists’ discontent turned against the ship’s captain who had brought them, for it was he who was there. Somehow, he had made a mistake, disembarking them in that godforsaken place and immediately setting sail. No one thought to doubt Sir Gregor. The few natives there could not care for the colonists. Many fell ill and died.

The survivors returned to Great Britain in the autumn of 1823. Surprisingly, no scandal occurred. The emigrants continued to believe in the word of the Prince of Poyais…

…Naturally, all those who invested their money in Poyais bonds lost it. However, it must be said that the returns on these bonds were in line with other investments made in Latin America during those years. On many occasions, the solvency of real states was no different from that of fictional countries like Poyais.

4. 4 Economic Charts That Might Surprise You – Ben Carlson

Large corporations aren’t feeling inflation’s impact. Consumers hate inflation. Small businesses aren’t a fan. Politicians don’t like it much either.

But large corporations?

They seem just fine when it comes to profit margins…

…And the explanation:

Corporations are paying higher wages and input costs but they simply raised prices to combat those higher costs.

Corporate America puts profit first, second, and third, which is one of the reasons the stock market is so resilient.

If it seems like corporations always win it’s basically true. They know how to adapt regardless of the macro environment…

…When Russia invaded Ukraine in the spring of 2022, the price of oil quickly shot up from around $90/barrel to $120/barrel.

Energy experts and macro tourists alike came out with $200/barrel predictions. It made sense at the time!

That war still rages on, along with an additional conflict in the Middle East. In the past, this would have sent oil prices skyrocketing. The oil crisis was a big reason we had stagflation in the 1970s.

Not this time around. Oil prices are back down to $80/barrel. On an inflation-adjusted basis, oil prices are essentially flat since 2019 just before the pandemic…

…The U.S. becoming the biggest oil producer in the world is one of the most important macro developments of the past 20-30 years, yet you rarely hear about it.

This is a huge deal!

5. What It’s Like to Be a Regional Fed President On the Road – Tracy Alloway, Joe Weisenthal, Tom Barkin, and many others

Tracy (11:02):

What’s the biggest constraint on your growth right now? Is it getting the materials? Is it availability of contractors? What’s stopping you from selling even more?

Albert (11:14):

I guess for us it’s going to be more financial institutions understanding our business more. I think the supply chain issue for us, it’s okay, as we have access to different supplies, but it’s more of having a backing of a financial institution, for us.

Tracy (11:37):

So credit?

Carport Central employee (11:38):

So credit. But our turnaround time in our industry, luckily it is pretty quick, but because of the fabrication time and their time schedule for commercial projects, they are not able to pay us, let’s say within maybe 90 days.

And our credit terms are, say, net 30, net 45. So basically we have to have a reserve of cash. You know, it’ll come in, but it’s just a delayed situation. So the growth that we’re seeing, we’re actually being restrained because of not having access to the capital that we need to actually move forward.

Tom (12:14):

And what are the banks telling you when you go talk to them and say ‘I got a business and I got a lot of demand and I just need a little more capital?’

Carport Central employee (12:19):

Well, I think right now it’s mostly because of the way the economy’s going. They’re really, they’re not as free telling you ‘Hey, come on in, let’s help you.’ It’s more like ‘Eh, let me see if I can, I don’t know if I can,’ that kind of situation, not like it was before.

Tom (12:34):

But it’s access rather than rate because you could say ‘Oh, they’ll give it to me. It’s just costing me too much.’

Tom Williams (12:39):

Yeah, I think it’s more access. I think people are more reserved with that…

…Joe (16:20)

So you mentioned when we talked about the sort of anecdotal learnings, the examples you gave were sort of either confirmatory or maybe inform something at the margins like, okay, maybe there’s still more juice on the public sector for [the] labor side. How often does it come up where people will start consistently saying something that, oh, this is really not showing up in the data yet, and it’s sort of an early signal of something that later on you say ‘Yep, there it is, playing out in the numbers.’

Tom (16:48):

I’d say every quarter there’s something like that. So in the fourth quarter last year, in October, you may remember the numbers were really, really frothy. And I wasn’t hearing any of that in the market, and I actually came out and said ‘It’s just not consistent with what I’m hearing.’

Joe (17:02):

The inflation numbers?

Tom (17:03):

No, the demand numbers, the consumer spending numbers, the retail sales numbers were very frothy. That’s not consistent. I’d say today we just got a retail sales report recently that was quite strong and I’m hearing decent consumer spending. I’m not hearing that strong. And maybe I’ll be proven wrong by the time this airs, but that’s what I’m hearing.

So I do hear things that are different and then I hear some number of things that are in advance. In May of 2020, Bristol, Tennessee was open [but] Virginia wasn’t open. It was right at the end of the first part of Covid and I talked to a developer who said ‘Oh my God, the malls are packed.’

And that was before any of us knew that the opening of the economy would lead to that kind of spending. You know, that’s a good example. I’ll also get a reasonable amount of, I’ll call it segment-specific information. You know, how are higher income consumers thinking versus lower income consumers? Or what’s the job market for professionals versus skilled trades? And so the overall number may be the same, but you’ll get some insight into what’s really driving it…

…Winston-Salem Rotary Club member (20:37):

To the extent you can, can you give us any flavor of what you all discussed in your interest rate meetings? And secondly, do you have favorite economic benchmarks you find very useful?

Tom (20:49):

You know, what I’m mostly interested in is real-time information. You’re trying to figure out what’s actually happening in the marketplace. So I get credit card spending every week, year-over-year, and during Covid, I got pretty calibrated on what that means in terms of retail sales.

But that’s something I look at closely to try to get a sense of demand. Consumer spending’s 70% of the economy. On the labor market, the jobs report that comes out every month is clearly the best, most secure thing. But I take some comfort from the weekly jobless claims, because it’s at least a real time measure of whether layoffs are accelerating, which is what you’d see if the economy turned south.

And I think you kind of get the point. I’m trying to figure out is there any risk of the economy turning? That’s really what I focus on.

In terms of the meeting, maybe I’ll give you a 10-day look at it rather than just the meeting itself. The weekend 10 days before the meeting, the staff sends us a 200-page vertical text, the greatest analysis of the economy you’ve ever seen. And it’ll include domestic and international and financial markets and lending markets and different scenarios for where the economy might go and different monetary policy operations. And so it’s a brilliantly done piece of work.

Tom (22:15):

At the same time, Jay Powell sends around his first draft of what the statement might be. And so we work all weekend and into the week, debating how we want to talk about the economy and whether we like that statement.

We’ll offer Jay — I’m giving you this background so you understand me — we’ll offer Jay our perspective on the statement. He always likes mine best. That’s not actually true. I’m making the point, the statement that we issue the Wednesday of the meeting has largely, not always, but largely, been pretty well-vetted by the time you get to the meeting.

So we don’t go to the meeting and try to line edit a statement. For the most part, every time that the chair has a bad press conference, that’s because we’ve line edited the statement in the meeting and we send them out there two hours after the meeting to go defend it, which is, I think in my judgment, a little bit of malpractice. But we do it sometimes in the meeting itself.

There’s often a special topic and so the staff will present some papers on the special topic and we’ll have a debate about it. Then we all go around and talk about economic conditions. So I’ll say ‘I’ve been in the district for the last seven weeks and here’s what I think I’ve learned, and here’s what I take solace from in the recent data and here’s what I think are some interesting conclusions you might not have otherwise thought about.’

Then we all talk about the statement, pretty productive meeting. It’s a reasonably formal meeting. It’s not really flippant. There’s not tons of humor in there. You know, it’s a pretty serious meeting, but it’s also, every word is transcribed. So if you’re having trouble sleeping, you can go get them from five years ago…

…Tracy (45:55):

This is another theme that comes up regularly in Tom’s meetings. Big and small companies seem to have experienced a lot of the economy of recent years in very different ways. We asked Tom about this.

Tracy (46:08):

Do you notice a big difference between what larger companies are saying versus smaller companies?

Tom (46:12):

I do. Smaller companies are still struggling to fill jobs. And that’s in part because there was more capacity to raise wages in the larger companies than there was in the smaller companies.

And we were with one earlier today, but when you go to a smaller company, you do hear that kind of constraint being much bigger. During the supply chain shortage era, you absolutely heard that the big companies had a lot more benefit than the smaller companies. And I think when it came to the margin recapture cycle, the big companies have led the way on that. And a lot of small companies are still saying that they’re working to recapture margins.

Joe (46:54):

Being able to compete on wages isn’t the only edge that larger companies have in the current environment. Many of them have also been able to refinance their debt. Contrast that with the smaller company, Carport Central, which told Tom that bank lending is becoming a constraint on its business.

Tracy (47:10):

That might be one reason, according to Tom, that economic growth has so far defied the gravity of higher interest rates. They just haven’t flowed through to some parts of the economy yet.

Tom (47:20):

Well, so the data that I keep coming back to is interest payments as a percent of either personal disposable income or corporate revenue. And those numbers have only now finally gotten back to 2019 levels. And that’s because a lot of individuals paid down their credit cards and refinanced their mortgages, and a lot of companies paid down their debt and refinanced their debt.

And so the aggregate impact of having the Fed funds rate at five and a third versus where it was basically at zero hasn’t really flowed through to the aggregate economy. Now it’s certainly flowed through to individual parts of the economy.

And the most surprising thing to me is, obviously, the residential market, where you’ve got the 3% mortgage holders who don’t want to trade into a 7% mortgage and are unwilling to sell their house. But behind that is that 92% of mortgages are fixed rate, okay? So that’s different than what the economy was 15 years ago.

In commercial real estate, multifamily, you hear about a set of people who really can’t develop anymore, want to turn in the keys, whatever version of it. And another set of people who are owners who are feeling actually just fine.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 May 2024:

1. Why Xi Jinping is afraid to unleash China’s consumers – Joe Leahy

Both inside and outside China, there is a strongly held view among many economists that the country could secure a further period of robust growth if it were able to boost consumption by its own citizens. Indeed, faced with a property crisis, President Xi Jinping has taken some one-off measures to stimulate consumption to offset a fall in domestic demand.

But Xi has eschewed more radical medicine, such as cash transfers to consumers or deeper economic reforms. His latest campaign is instead to unleash “new quality productive forces” — more investment in high-end manufacturing, such as EVs, green energy industries and AI.

According to analysts, the reasons for the lack of more radical action on consumption range from a need to generate growth quickly by pumping in state funds — this time into manufacturing — to the more deep-seated difficulties of reforming an economy that has become addicted to state-led investment.

Ideology and geopolitics also play roles. For Xi, China’s most powerful leader since Mao Zedong, the greater the control his country exerts over global supply chains, the more secure he feels, particularly as tensions rise with the US, analysts argue. This leads to an emphasis on investment, particularly in technology, rather than consumption.

Under Xi, security has also increasingly taken precedence over growth. Self-reliance in manufacturing under extreme circumstances, even armed conflict, is an important part of this, academics in Beijing say…

…The pressure on Beijing to find a new growth model is becoming acute, analysts say. China has become too big to rely on its trading partners to absorb its excess production.

“The exit strategy has to be, at the end of the day, consumption — there’s no point producing all this stuff if no one’s going to buy it,” says Michael Pettis, a senior fellow at the Carnegie Endowment in Beijing.

Few projects capture Xi’s vision for 21st-century Chinese development as well as Xiongan, a new city being built on marshlands about 100km from Beijing…

…Xiongan unites many of Xi’s favourite development themes. Through vast investment in mega-infrastructure projects such as a high-speed rail hub, Xiongan aims to bring state-owned enterprises, universities and entrepreneurs together to concentrate on high-technology innovation, from autonomous vehicles and life sciences to biomanufacturing and new materials. As of last year, about 1mn people were living there, $74bn had been invested and 140 companies had set up there, Beijing says.

Conspicuously absent from the city plans are strategies to encourage the thing China’s economy lacks most — domestic consumption. In guidelines released in 2019 for Xiongan by Xi’s cabinet, the State Council, there was no mention of the term “consumption”, except for “water consumption”…

…China’s investment to gross domestic product ratio, at more than 40 per cent last year, is one of the highest in the world, according to the IMF, while private consumption to GDP was about 39 per cent in 2023 compared to about 68 per cent in the US. With the property slowdown, more of this investment is pouring into manufacturing rather than household consumption, stimulating oversupply, western critics say…

…Economists suspect that behind the rhetoric, the investment in manufacturing is partly pragmatic. With the property market still falling three years after the crisis began, and many indebted provinces ordered to suspend large infrastructure projects, Xi needs to find growth somewhere to meet his 5 per cent target for this year.

“The bottom line is they want growth in output and they want the jobs associated with that growth,” says Stephen Roach, a faculty member at Yale and former chair of Morgan Stanley Asia. He says when “they’re clamping down on property, it doesn’t leave them with much choice but to go for a production-oriented growth stimulus”…

…In areas vital to China’s national security, the country needed supply chains that “are self-sufficient at critical moments”, he said. “This will ensure the economy functions normally in extreme circumstances.”

HKU’s Chen says China no longer measures its “national power” in purely economic terms “but more importantly, in terms of military . . . capacity. And this is why manufacturing is very important”.

He says in this vision of the world, consumption is a lower priority…

…The Rhodium Group argues that some of the loans that flowed into the industrial sector last year went to local government finance vehicles, the heavily indebted off-balance sheet investment holding companies of provinces and municipalities.

While large sums still went to manufacturers, they “do not have a strong appetite to expand capacity given falling prices”, Rhodium said in a report.

Economists say that for consumers to feel comfortable to spend more, particularly after the property slump, China needs to step up its development of social welfare programmes and healthcare. While China has made strides in building out its public pension and healthcare systems, they are still lacking.

But such solutions would take a long time to boost consumer confidence and would require massive new funding from government coffers that are running dry.

Greater consumption would also necessarily mean reducing the role of manufacturing or investment in the economy. This could be done by unwinding China’s intricate system of subsidies to producers, which includes government infrastructure investment, access to cheap labour, land and other credit, says Pettis.

But if that was done in a big bang fashion, the share of household consumption to GDP would increase while overall GDP would contract as manufacturers suffered. This was obviously not a politically preferable option for Xi.

2. Strategy Reviews – John H. Cochrane

After an extensive and collective deliberation, the Fed adopted a new strategy framework known as Flexible Average Inflation Targeting. This framework was explicitly designed around a worldview that “the federal funds rate is likely to be constrained by its effective lower bound more frequently than in the past,” and a consequent judgement that “downward risks to employment and inflation have increased.” A shift to “inclusive” employment, a return to the old idea that economic “shortfalls” can be filled, and a promise not to preempt future inflation but rather let inflation run hot above 2% to make up past shortfalls followed. These promises of future dovishness were hoped to stimulate demand in the short run.

In short, the Fed adopted an elaborately-constructed new-Keynesian forward-guidance defense against the perceived danger of deflation and stagnation at the zero bound.

No sooner was the ink dry on this grand effort, however, than inflation shot up to 8%, and the zero bound seemed like a quaint worry. Something clearly went drastically wrong. Naturally, the first question for a strategy review is, how can we avoid having that happen again?

Inflation eased without interest rates substantially higher than inflation or a large recession. I think I have a (and the only) clear and simple explanation for that, but I promised not to digress into a fiscal theory today. Still, inflation is persistently high, raising the obvious worry that it’s 1978 again. Obviously, central banks have a range of worries on which to focus a new strategy, not just a return to a long-lasting zero bound. (Though that could happen too.)…

…React or guide? It seems clear to me that policy will have to be described more in terms of how the Fed will react to events, rather than in standard forward guidance terms, unconditional promises of how the funds rate will evolve. It will involve more “data-dependent” rather than “time-dependent” policy.

In part, that must come, I think, as a result of the stunning failure of all inflation forecasts, including the Fed’s. Forecasts did not see inflation coming, did not see that it would surge up once it started, and basically always saw a swift AR(1) response from whatever it was at any moment back to 2%. Either the strategy review needs to dramatically improve forecasts, or the strategy needs to abandon dependence on forecasts to prescribe a future policy path, and thus just state how policy will react to events and very short-term forecasts. I state that as a question for debate, however…

…Fiscal limitations loom. Debt to GDP was 25% in 1980, and still constrained monetary policy. It’s 100% now, and only not 115% because we inflated away a bunch of it.  Each percentage point of real interest rate rise is now quickly (thanks to the Treasury’s decision to issue short, and the Fed’s QE which shortened even that maturity structure)  a percentage point extra interest cost on the debt, requiring a percent of GDP more primary surplus (taxes less spending). If that fiscal response is not forthcoming, higher interest rates just raise debt even more, and will have a hard time lowering inflation. In Europe, the problem is more acute, as higher interest costs could cause sovereign defaults. Many central banks have been told to hold down interest rates to make debt more sustainable. Those days can return…
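To make that arithmetic concrete, here is a minimal sketch (our own illustration, not from Cochrane’s piece), assuming debt at roughly 100% of GDP that reprices quickly because of its short maturity:

```python
# Illustrative only: fiscal arithmetic with debt at ~100% of GDP, assuming the
# stock of debt reprices quickly because so much of it is short maturity.
debt_to_gdp = 1.00   # debt is roughly 100% of GDP
rate_rise = 0.01     # a 1-percentage-point rise in real interest rates

extra_interest_share_of_gdp = debt_to_gdp * rate_rise
print(f"Extra interest cost: {extra_interest_share_of_gdp:.1%} of GDP per year")
# -> 1.0% of GDP, which must be offset by an equal primary surplus
#    (taxes less spending) if the debt is not to grow even faster
```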

…Ignorance. Finally, we should admit that neither we, nor central banks, really understand how the economy works and how monetary policy affects the economy. There is a complex verbal doctrine that bounces around central banks, policy institutions, and private analysts, asserting that interest rates have a relatively mechanical, reliable, and understood effect on “spending” through a “transmission mechanism” that though operating through “long and variable lags” gives the Fed essentially complete control over inflation in a few years. The one thing I know from 40 years of study, and all of you know as well, is that there is no respectable well-tested economic model that produces anything like that verbal doctrine. (More here.)  Knowing what you don’t know, and that nobody else does either, is knowledge. Our empirical knowledge is also skimpy, and the historical episodes underlying that experience come with quite different fiscal and financial-structure preconditions. 1980 was a different world in many ways, and also combined fiscal and microeconomic reform with high interest rates.

3. Big Tech Capex and Earnings Quality – John Huber

Capex is not only growing larger, but the rate of growth is set to accelerate this year as they invest in the AI boom. Combined capex at MSFT, GOOG and META is set to grow around 70% in 2024. As a percentage of sales, capex will grow from 13% of sales in 2023 to around 20% in 2024…

…Bottom line: the other Big Techs are getting far more capital intensive than they have in the past. Their FCF is currently lagging net income because of the large capex, and this will eventually flow through to much higher depreciation charges in the coming years.

This is not necessarily worrying — if the returns on these investments are good, then sales growth will be able to absorb these much higher expenses. But this is not a sure thing, so I like to use P/FCF metrics as I think a large majority of the assets they’re investing in will need to be replaced. This means the capex levels we see currently could be recurring. So, while the P/E ratios range from 25 to 35, the P/FCF ranges from 40-50.
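As a rough illustration of how those two multiples relate (our own round numbers, not the article’s), P/FCF is simply P/E scaled up by how much of net income fails to convert into free cash flow:

```python
# Illustrative only: P/FCF equals P/E divided by the FCF-to-net-income ratio.
pe = 30                 # mid-range of the 25-35 P/E quoted above
fcf_conversion = 0.70   # assumed free cash flow as a share of net income

p_fcf = pe / fcf_conversion
print(f"P/FCF ~ {p_fcf:.0f}")  # ~43, consistent with the 40-50 range quoted
```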

Again, if the investments are able to earn good returns, then profit margins will remain intact, but one thing to notice is FCF margins (while very strong) have not kept up with GAAP profit margins: e.g. at MSFT, FCF margins have declined slightly from 28% to 26% over the last decade while net margins have expanded from 25% to 36%, leaving GAAP profit margins far in excess of FCF margins. Eventually, as growth slows these margins will tend to converge as depreciation “catches up” to cash capex spend. Whether net margins come down or FCF margins move up simply depends on the returns on capital earned and the growth it produces.

I’m not predicting a poor result, but I’m mindful of how difficult it will be given how different the companies are today. They used to grow with very little capital invested, but now they have a mountain of capital to deploy, which is obviously much harder at 7 times the size:…

…I don’t think anyone (including management) yet knows what the returns will be on the $150 billion of investments that these three companies will make in 2024. They are optimistic, but it’s not clear cut to me.

Think about how much profit needs to be generated annually to earn acceptable returns on this capex: a 10% return would require $15 billion of additional after tax profits in year 1. As Buffett points out, if you require a 10% return on a $150 billion investment but get nothing in year 1, then you’d need $32 billion in year 2, and just one more year of deferred returns would require a massive $50 billion profit in year 3.
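Here is a minimal sketch of that compounding arithmetic (our own illustration of the figures quoted above, not code from the article):

```python
# Profit needed in year k to still earn a 10% compound return on a $150B
# outlay when nothing was earned in the earlier years (illustrative).
capex = 150  # $ billions
r = 0.10     # required annual return

def required_profit(year: int) -> float:
    """Cumulative catch-up profit needed if returns are deferred to `year`."""
    return capex * ((1 + r) ** year - 1)

for year in (1, 2, 3):
    print(f"Year {year}: ~${required_profit(year):.0f}B")
# Year 1: ~$15B, Year 2: ~$32B, Year 3: ~$50B
```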

What’s staggering is that the above is the return needed to earn 10% on just one year’s worth of capex. Even if we assume that capex growth slows from 70% this year down to 0% in 2025 and stays there, MSFT, GOOG and META will invest an additional $750 billion of capital over the next 5 years!


4. A Few Short Stories – Morgan Housel

Thirty-seven thousand Americans died in car accidents in 1955, six times today’s rate adjusted for miles driven.

Ford began offering seat belts in every model that year. It was a $27 upgrade, equivalent to about $190 today. Research showed they reduced traffic fatalities by nearly 70%.

But only 2% of customers opted for the upgrade. Ninety-eight percent of buyers preferred to remain at the mercy of inertia.

Things eventually changed, but it took decades. Seatbelt usage was still under 15% in the early 1980s. It didn’t exceed 80% until the early 2000s – almost half a century after Ford offered them in all cars.

It’s easy to underestimate how social norms stall change, even when the change is an obvious improvement. One of the strongest forces in the world is the urge to keep doing things as you’ve always done them, because people don’t like to be told they’ve been doing things wrong. Change eventually comes, but agonizingly slower than you might assume…

…When Barack Obama discussed running for president in 2005, his friend George Haywood – an accomplished investor – gave him a warning: the housing market was about to collapse, and would take the economy down with it.

George told Obama how mortgage-backed securities worked, how they were being rated all wrong, how much risk was piling up, and how inevitable its collapse was. And it wasn’t just talk: George was short the mortgage market.

Home prices kept rising for two years. By 2007, when cracks began showing, Obama checked in with George. Surely his bet was now paying off?

Obama wrote in his memoir:

George told me that he had been forced to abandon his short position after taking heavy losses.

“I just don’t have enough cash to stay with the bet,” he said calmly enough, adding, “Apparently I’ve underestimated how willing people are to maintain a charade.”

Irrational trends rarely follow rational timelines. Unsustainable things can last longer than you think…

…John Nash is one of the smartest mathematicians to ever live, winning the Nobel Prize. He was also schizophrenic, and spent most of his life convinced that aliens were sending him coded messages.

In her book A Beautiful Mind, Sylvia Nasar recounts a conversation between Nash and Harvard professor George Mackey:

“How could you, a mathematician, a man devoted to reason and logical proof, how could you believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world?” Mackey asked.

“Because,” Nash said slowly in his soft, reasonable southern drawl, “the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

This is a good example of a theory I have about very talented people: No one should be shocked when people who think about the world in unique ways you like also think about the world in unique ways you don’t like. Unique minds have to be accepted as a full package.

5. An Interview with Databricks CEO Ali Ghodsi About Building Enterprise AI – Ben Thompson and Ali Ghodsi

So you said you came over to the U.S. in 2009. Did you go straight to UC Berkeley? There’s some great videos of you giving lectures on YouTube. You’re still an adjunct professor there. Do you ever teach anymore or is this a, “Homeboy made good, we’ll give him the title forever”, sort of situation?

AG: No, I teach about a class a year and I still enjoy really doing that. I imagine if I had nothing to do, that’s a job I would actually enjoy doing.

So yeah, I came to the United States just to stay here one year and do research at UC Berkeley and just ended up staying another year, another year, another year. And the timing was — we didn’t know it at the time, but Dave Patterson, who was a professor at UC Berkeley, and now Turing Award winner, which is the Nobel Prize in computer science essentially, said at the time, “We’ve had Moore’s Law, but we no longer know how to make the computers faster by cramming in more transistors. That era is over, so computers are not going to get any faster”, and we know he was right, they’re all between two and four gigahertz since then.

So we need the new computer, and the new computer is the cloud, and it also needs new software, so we built all this software stack — the era of data and AI. So it was the perfect time. I always regretted, “Why was I not born in the ’50s or ’60s when computers happened?” — well, actually it kind of happened again in ’08, ’09, ’10, and Berkeley was at the forefront of that. So we were super lucky to see that kind of revolution and being part of that…

…The general idea is you mentioned you started out with Mesos where you needed to compute in parallel instead of serially so you have to have a cluster of computers, not just one. Spark lets you basically do the same thing with data, spread it out over a huge number of computers. You can end up with massive amounts of data, structured, unstructured, people will call it like a “data lake”. There’s a data lake, there’s a data warehouse, there’s a Data Lakehouse. Walk me through the distinction and where that applies to Databricks and its offering.

AG: At the time, the world was kind of split. Those that have structured data, structured data are things that you can represent in tables with rows and columns, those were in data warehouses and you could connect your BI tools, business intelligence tools, that lets you ask questions about the past from those rows and columns. “What was my revenue last week in different regions, by different products, by different SKUs?”, but you couldn’t ask questions about the future.

Then at the same time, we had these future looking workloads, which were, “Okay, we have all kinds of text, images, and unstructured data that’s coming into the enterprise,” and that you couldn’t store in these structured tables, they cannot be represented as tables of rows and columns, those you stored in what’s called data lakes. But then the good news was if you knew what you were doing, you could ask questions about the future, “What’s my revenue going to be next week? Which customer is going to churn next?”. But these worlds were living completely separately and securing them was very hard and there was a lot of redundant stacks that were being built up at the same time.

Our idea was how do we, 1) unify this and 2) how do we disrupt the existing ecosystem? How do we create the company that’s disruptive? And our idea was what if we have open source technology, everybody stores all their data, both the structured and unstructured data in the lake, which is just basically almost free storage by the cloud vendors, but we standardize the format in an open source format, so it almost becomes like USB — you can plug anything in there. Then we build an engine that can do both the BI stuff, backwards looking questions, and the futuristic AI stuff, and that’s what we call the Lakehouse, which is a portmanteau of data lakes and data warehouses. The marketing firms we talked to and anyone we’d ask said, “This is a terrible idea”…
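For a concrete feel of the pattern Ghodsi is describing, here is a minimal PySpark sketch (our own illustration, not Databricks code); it assumes the open-source delta-spark package is installed, and the table path is made up:

```python
from pyspark.sql import SparkSession

# Assumes the delta-spark package is installed so the session supports Delta Lake.
spark = (SparkSession.builder
         .appName("lakehouse-sketch")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Data lands once, in cheap object storage, but in an open table format.
orders = spark.createDataFrame(
    [("2024-05-01", "SKU-1", 120.0), ("2024-05-02", "SKU-2", 80.0)],
    ["order_date", "sku", "revenue"],
)
orders.write.format("delta").mode("overwrite").save("/tmp/lakehouse/orders")

# The same copy serves backwards-looking BI queries...
spark.read.format("delta").load("/tmp/lakehouse/orders") \
    .groupBy("sku").sum("revenue").show()
# ...and can feed ML or forecasting pipelines without a second warehouse copy.
```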

So you’ve been using the word AI a lot. Did you use the word AI a lot five years ago?

AG: I think we used the word ML quite a bit.

Yeah, machine learning. That’s right, there’s a big word branding moment. I mean, there was the ChatGPT moment, so I guess there’s two questions. Number one, did that shift how you were thinking about this space, or was this already super clear to you? But then number two, I have to imagine it fundamentally changed the way your customers were thinking about things and asking about things.

AG: Yeah, from day one we were doing machine learning. We actually built MLlib as part of Spark already before we actually started Databricks. Actually the first use case of Spark in 2009 was to participate in the Netflix competition of recommending the best movie, and we got the second prize actually, we didn’t get the first prize.

The whole point about being able to distribute broadly and do things in a highly parallel manner, I mean we’re basically in that world.

AG: Exactly. Well, a lot of people also use that parallelism to just do backwards processing, like a data warehouse that tells you, “Tell me everything about the past”, and it’s great to see trend lines about the past, but to use this kind of more advanced statistical approach, that’s when you venture into machine learning. We were doing it already in 2012, ’13, I tried to push the company to use the phrase AI instead of ML, the most hardcore academics in the company were against it. They said that AI was a buzzword but I said, “No, I think that’s actually what resonates with people”. But at the same time we were seeing more and more deep neural networks, and as these neural networks get stacked they do better and better.

Around 2018 is when we started seeing especially language processing, natural language processing, getting more and more applications on the platform. We saw insurance companies using them to analyze huge amounts of texts to assess risks, we saw translation starting to happen, we saw pharma companies analyzing big amounts of electronic medical records that were written, unstructured text. So it was pretty clear that something is going on with NLP [Natural Language Processing] and that just accelerated during the pandemic. So we saw it, we already had over a thousand customers using these kind of transformer models. So when ChatGPT came out, we kind of thought it’s a nothing burger, but of course we were wrong in that it was an absolute awareness revolution.

Yes, exactly.

AG: What we took for granted was not what the rest of the world was taking for granted. So we feel like the world woke up to AI in November 2022 with ChatGPT, though the truth is it had been going on for 20 years.

That’s what strikes me. That’s the biggest impact is number one, you had the total rebranding of everything to AI, my washing machine now has AI, what a miracle. But just the fact that you went through this when you started with Spark, you thought this is a great idea, no one knows what it is. Now suddenly people are asking you, knocking on your door, “We have data on your thing, can we run ChatGPT on it?” — is that how those conversations went?

AG: Yeah, I mean literally before ChatGPT, I would tell the marketing department to tone down the AI language because customers would say, “Hey, this AI stuff is futuristic, we have concrete problems right now with data that we need to solve”, so I actually shot down a marketing campaign and marketing was really upset about it, which said, “Customer X is a data and AI company, Customer Y is a data and AI company”. They had it ready to go and I shot it down and I said, “We don’t want to push so hard on AI because people don’t really want AI”, and then literally after ChatGPT happened, I told them, “Hey, that campaign from a couple of years ago, maybe we should run it now” — which we did actually and people loved it. So yeah, it’s just the market was just not ready…

All right, number three, Databricks solves Mosaic ML’s need to build a sales force and Mosaic ML solves Databricks need to build a sustainable differentiated business around an open source project.

AG: Yes, I think you are 99% right. I would modify that last sentence to say —

I didn’t give you enough credit for how much you had differentiated to date?

AG: No, I actually think that you kind of were spot on, but I would say with open source, I would say that it was Mosaic ML having a research team that really was deep in LLM research and AI, it was hard to come by at the time and it was very, very hard actually to hire those researchers that really gave us that. And then the know-how to customize LLMs on your data in a secure way.

How does that work? How do you do that?

AG: So this is what their specialty was. When everybody else was building one giant model or a few giant models that are supposed to be very smart, these guys, their business model was, “We build it again and again and again, custom either from scratch or from an existing checkpoint, you tell us or we can fine tune it, but we can help you build an LLM for you and we will give you the intellectual property of that LLM and its weights”. That way you as a customer can compete with your competitors and in the long run you become a data and AI leader just like our billboards that I had banned a few years earlier say. You’re going to be a data and AI company. It doesn’t matter if you’re a pharma company or a finance company or a retail company, you’re actually going to be a data and AI company, but for that you need intellectual property. Elon Musk is not just going to call OpenAI for his self-driving capabilities, he needs to have his own. Same thing is going to be true for you in finance, retail, media. So that was their specialty, but we had the data.

Is that actually true though? Do they actually need to have their own intellectual property or is there a sense — my perception, and I picked up on this, I was at some sort of conference with a bunch of CEOs, it struck me how they had this perception of, “We’ve had this data for years, we were right to hold onto it, this is so valuable!”, and I’m almost wondering, are you now so excited about your own data that you’re going to be overprotective of it? You’re not going to want to do anything, you’re actually going to be sort of paralyzed by, “We have so much value here, we have to do it ourselves”, and miss out on leveraging it sooner rather than later because you’re like, “It has to be just us”.

AG: No, I do think that people have now realized how valuable their data is, there’s no doubt about that and it is also true, I believe in it. The way I think of it is that you can think of the world as two kind of parallel universes that coexist these days with LLMs. We’re super focused on one, which is the kind of open Internet and the whole crawl of everything that’s in it and all of the history of mankind that has been stored there. Then you’re building LLMs that’s trained on that and they become intelligent and they can reason and understand language, that’s what we’re focused on.

But we’re ignoring this other parallel universe, which is every company on the planet that you join has you sign an NDA, an employee agreement, and then that gives you access to all this proprietary data that they have on the customers and everything else, and they have always been protective of that. The LLMs today that we are training and we’re talking about, they don’t understand that data, they do not understand the three letter acronyms in any organization on the planet.

So we do the boring LLMs and the boring AI for those enterprises. We didn’t have quite the muscle to do it without Mosaic, they really understood how to build those LLMs, we had the data already. So we had the data and we had the sales force, Mosaic did not have the data, they did not have the sales force, they did have the know-how of how to build those custom models.

I don’t think that the companies are hamstrung and they’re not going to do anything with it, they want to do things with it. I mean, people are ready to spend money to do this. It’s just that I feel like it’s a little bit of a 2007 iPhone moment. iPhone comes out, every company on the planet says, “We have to build lots of iPhone apps, we have to”. Then later it turns out, “Well, okay, every company building a flashlight app is maybe not the best use of resources, in fact, maybe your iPhone will just have a flashlight in it”. So then it comes back to what special data do you have that no one else has, and how can we actually monetize that?

How does it actually work to help companies leverage that? So you released a state-of-the-art open LLM, DBRX, pretty well regarded. Do you do a core set of training on open data on whatever might exist and then you’d retrain it with a few extra layers of the company’s proprietary data and you have to do that every time? How modular is that? How does that actually work in practice?

AG: Yeah, there’s a whole slew of different techniques, ranging from very, very lightweight fine tuning techniques, the most popular one is called LoRA, low rank adaptation, to actually training a chunk of the model, so you take an existing model that’s already trained and it already works and you customize a bunch of the layers, to what’s called CPT, continuous pre-training, in which case you actually train all of the layers of the model, an existing model that’s already baked and ready. It costs more to do that, all the way up to if you’re doing something really different. So if the domain that you’re using for the data set is significantly different, then you actually want what’s called pre-train, which is training the model from scratch. If you’re a SaaS application and LLMs are the core of the offering, you probably want to have a pre-trained model from scratch, so we can do all of those.

I would say the industry is not actually a hundred percent, the research is not a hundred percent clear today on when you should use which, where. We have a loose idea that if you don’t have huge amounts of data and it’s kind of similar in domain to what the LLM already can do, then you can probably use the more lightweight ones, and if your data is very different and it’s significant, then probably the lightweight mechanisms are not good for you, and so on. So we have a research team that really can do this really, really well for enterprises. But I think a lot of progress is going to happen in the next few years to determine how we can do this automatically. How do we know when to use them? And there might be new techniques also that are developed.
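As a rough illustration of the lightweight end of that spectrum, here is a minimal LoRA fine-tuning sketch using the open-source Hugging Face transformers and peft libraries (our own example, not Databricks’ or MosaicML’s tooling; the model name is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder base model for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA (low-rank adaptation): freeze the base weights and train only small
# low-rank adapter matrices injected into the attention projections.
lora = LoraConfig(
    r=8,                 # rank of the adapter matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights

# From here you would train on the enterprise's proprietary text as usual.
# Continued pre-training or pre-training from scratch would instead update
# all of the model's weights, at correspondingly higher cost.
```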

What’s the trade-off? I imagine you talk to a company, we absolutely want the most accurate model for sure, we want it totally customized to us. And then you’re like, “Okay, that’s going to cost XYZ, but then also to serve it is going to cost ABC”. The larger a model is the more expensive it is to serve and so your inference costs are just going to overwhelm even the upfront costs. What’s that discussion like and trade-off like that you’re having with your customers?

AG: Well, the vast majority have lots of specific tasks that they want to do. So again, a lot of people are thinking of things like ChatGPT, which are sort of completely general purpose open-ended, ask me anything. But enterprises typically have, “Okay, I want to extract labels from all this core piece of data, I want to do it every day like ten times a day”, or, “I want to tag all of these articles with the right tags and I want to do that very accurately”. So then actually for those specific tasks, it turns out you can have small models. The size of the model helps you actually be much cheaper and that matters at scale and then they are really, really concerned about quality and accuracy of that, but for that specific task, it doesn’t need to nail a super balanced answer to the question of whether there was election fraud or not in 2020.

(laughing) Right.

AG: It just needs to really extract those tags really, really well, so then there are techniques you can use to that. There is a way where you can actually have your cake and eat it too, assuming that the task you want to do is somewhat narrow.

But we also have customers that are, “No, I’m building a complete interactive general-purpose application in say, many of the Indian dialects in India, and I want to do that, and existing models are not very good at that, help me do that”. Then you have to go for a bigger model but bigger is usually more expensive. Of course, we are using the mixture of experts architecture, which we think is where the world is headed and which is what people also think what GPT-4 was based on, but we’ve also seen with Llama 3 from Meta that dense models, that are not mixture of experts, are also excellent and they’re doing really, really well…

Is there a difference between domestic and international in terms of the aggressiveness with which they’re approaching AI?

AG: Yeah, I would say that China is moving very, very fast on AI. Some Asian countries, there’s less regulation. Europe, I feel is lagging always, has been lagging a few years behind the United States, and they’re concerned about — there’s also competitive concerns with so many American companies, cloud companies and so on from Europe. So Europe is a little bit more regulated and a few years usually lagging United States.

That’s what we’re seeing, but there’s regional differences. Like India is very interesting because it’s moving so fast, there’s no signs of anything that’s recession-like over there. There are markets like Brazil and so on that are doing really well. So really, you have to go case-by-case, country-by-country. We have a significant portion of our business now in Europe as well, and also a growing business in Asia and also Latin America.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 12 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 12 May 2024:

1. From Blueprint to Therapy: The Evolution and Challenges of Nucleic Acid Interventions – Biocompounding

Nucleic Acid Therapies (NATs) offer a targeted approach to rectify these underlying genetic issues. By employing strategies like antisense oligonucleotides, mRNA therapy, RNA interference, or CRISPR-based gene editing, NATs can directly modify or regulate the expression of genes responsible for the disease. These therapies can repair or silence defective genes, replace missing genes, or modulate gene expression, thereby addressing the root cause of the disease at the molecular level. This precision in targeting genetic defects makes NATs a promising and revolutionary approach in modern medicine, potentially offering cures or significant treatment improvements for numerous genetic and acquired diseases.

The modalities of NATs vary based on their mechanism of action, type of nucleic acid used, and therapeutic goals. Here’s an introduction to the different modalities of NATs:

  1. Antisense Oligonucleotides (ASOs): These are short, synthetic strands of DNA or RNA that are designed to bind to specific RNA molecules within a cell. By binding to their target RNA, ASOs can interfere with the process of protein production. They can inhibit the expression of a gene, modify RNA splicing, or promote the degradation of the RNA molecule. ASOs are used in conditions like Duchenne Muscular Dystrophy and Spinal Muscular Atrophy. Example: Sarepta Therapeutics
  2. RNA Interference (RNAi): This modality uses small interfering RNA (siRNA) or microRNA (miRNA) to silence specific genes. RNAi works by degrading the mRNA of a target gene, preventing it from being translated into a protein. This approach is particularly useful in diseases where inhibiting the expression of a certain gene can be therapeutic. RNAi has been explored for various applications including cancer therapy and viral infections. Currently FDA-approved siRNAs are used in conditions such as Hereditary transthyretin-mediated amyloidosis. Example: Alnylam Pharmaceuticals
  3. AAV Gene Therapy: Adeno-associated virus (AAV) vectors are commonly used in gene therapy. AAVs are small viruses that can deliver genetic material into cells without causing disease. In AAV gene therapy, the therapeutic gene is packaged into an AAV vector, which then delivers the gene into patient’s cells. This modality is useful for treating genetic disorders, such as Hemophilia A, by providing a functional copy of a defective or missing gene. Example: Spark Therapeutics
  4. mRNA Therapy: mRNA therapies involve the use of messenger RNA to produce therapeutic proteins inside the body. Unlike traditional gene therapy that alters the DNA within cells, mRNA therapy delivers mRNA that is translated into the desired protein, offering a temporary but effective treatment. This approach has gained significant attention, especially in the development of COVID-19 vaccines. Currently there are several attempts to develop cancer vaccines by the key players in this space. Example: Moderna, BioNTech, Pfizer
  5. CRISPR/Cas9 and Genome Editing: This revolutionary technology enables precise editing of the genome. CRISPR/Cas9 can be used to add, delete, or alter specific DNA sequences in the genome, offering the potential to correct genetic defects at their source. While still in the experimental stages for many applications, it holds promise for treating a range of genetic disorders. In Dec 2023, the first ever FDA-approved CRISPR-based gene therapy was used to treat sickle cell disease. Example: CRISPR Therapeutics, Vertex

2. China Is Still Rising – Nicholas Lardy

Those who doubt that China’s rise will continue point to the country’s weak household spending, its declining private investment, and its entrenched deflation. Sooner than overtake the United States, they argue, China would likely enter a long recession, perhaps even a lost decade.

But this dismissive view of the country underestimates the resilience of its economy. Yes, China faces several well documented headwinds, including a housing market slump, restrictions imposed by the United States on access to some advanced technologies, and a shrinking working-age population. But China overcame even greater challenges when it started on the path of economic reform in the late 1970s. While its growth has slowed in recent years, China is likely to expand at twice the rate of the United States in the years ahead.

Several misconceptions undergird the pessimism about China’s economic potential…

…A second misconception is that household income, spending, and consumer confidence in China are weak. The data do not support this view. Last year, real per capita income rose by six percent, more than double the growth rate in 2022, when the country was in lockdown, and per capita consumption climbed by nine percent. If consumer confidence were weak, households would curtail consumption, building up their savings instead. But Chinese households did just the opposite last year: consumption grew more than income, which is possible only if households reduced the share of their income going to savings…

…Another misconception concerns the potential for a collapse in property investment. These fears are not entirely misplaced; they are supported by data on housing starts, the number of new buildings on which construction has begun, which in 2023 was half what it was in 2021. But one has to look at the context. In that same two-year period, real estate investment fell by only 20 percent, as developers allocated a greater share of such outlays to completing housing projects they had started in earlier years. Completions expanded to 7.8 billion square feet in 2023, eclipsing housing starts for the first time. It helped that government policy encouraged banks to lend specifically to housing projects that were almost finished; a general easing of such constraints on bank loans to property developers would have compounded the property glut…

…By 2014, private investment composed almost 60 percent of all investment—up from virtually zero percent in 1978. As private investment is generally more productive than that of state companies, its expanding share of total investment was critical to China’s rapid growth over this period. This trend went into reverse after 2014 when Xi Jinping, having just assumed the top leadership position, aggressively redirected resources to the state sector. The slowdown was modest at first, but by 2023, private investment accounted for only 50 percent of total investment…

…But here again, the pessimism is not supported by the data. First, almost all the decline in the private share of total investment after 2014 resulted from a correction in the property market, which is dominated by private companies. When real estate is excluded, private investment rose by almost ten percent in 2023. Although some prominent Chinese entrepreneurs have left the country, more than 30 million private companies remain and continue to invest.

3. The Cloud Under The Sea – Josh Dzieza

In the family tree of professions, submarine cable work occupies a lonely branch somewhere between heavy construction and neurosurgery. It’s precision engineering on a shifting sea using heavy metal hooks and high-tension lines that, if they snap, can cut a person in half. In Hirai’s three decades with Kokusai Cable Ship Company (KCS), he had learned that every step must be followed, no matter how chaotic the situation. Above all else, he often said, “you must always be cool.”…

…The world’s emails, TikToks, classified memos, bank transfers, satellite surveillance, and FaceTime calls travel on cables that are about as thin as a garden hose. There are about 800,000 miles of these skinny tubes crisscrossing the Earth’s oceans, representing nearly 600 different systems, according to the industry tracking organization TeleGeography. The cables are buried near shore, but for the vast majority of their length, they just sit amid the gray ooze and alien creatures of the ocean floor, the hair-thin strands of glass at their center glowing with lasers encoding the world’s data.

If, hypothetically, all these cables were to simultaneously break, modern civilization would cease to function. The financial system would immediately freeze. Currency trading would stop; stock exchanges would close. Banks and governments would be unable to move funds between countries because the Swift and US interbank systems both rely on submarine cables to settle over $10 trillion in transactions each day. In large swaths of the world, people would discover their credit cards no longer worked and ATMs would dispense no cash. As US Federal Reserve staff director Steve Malphrus said at a 2009 cable security conference, “When communications networks go down, the financial services sector does not grind to a halt. It snaps to a halt.”…

…Fortunately, there is enough redundancy in the world’s cables to make it nearly impossible for a well-connected country to be cut off, but cable breaks do happen. On average, they happen every other day, about 200 times a year. The reason websites continue to load, bank transfers go through, and civilization persists is because of the thousand or so people living aboard 20-some ships stationed around the world, who race to fix each cable as soon as it breaks.

The industry responsible for this crucial work traces its origins back far beyond the internet, past even the telephone, to the early days of telegraphy. It’s invisible, underappreciated, analog. Few people set out to join the profession, mostly because few people know it exists…

…Once people are in, they tend to stay. For some, it’s the adventure — repairing cables in the churning currents of the Congo Canyon, enduring hull-denting North Atlantic storms. Others find a sense of purpose in maintaining the infrastructure on which society depends, even if most people’s response when they hear about their job is, But isn’t the internet all satellites by now? The sheer scale of the work can be thrilling, too. People will sometimes note that these are the largest construction projects humanity has ever built or sum up a decades-long resume by saying they’ve laid enough cable to circle the planet six times…

…The world is in the midst of a cable boom, with multiple new transoceanic lines announced every year. But there is growing concern that the industry responsible for maintaining these cables is running perilously lean. There are 77 cable ships in the world, according to data supplied by SubTel Forum, but most are focused on the more profitable work of laying new systems. Only 22 are designated for repair, and it’s an aging and eclectic fleet. Often, maintenance is their second act. Some, like Alcatel’s Ile de Molene, are converted tugs. Others, like Global Marine’s Wave Sentinel, were once ferries. Global Marine recently told Data Centre Dynamics that it’s trying to extend the life of its ships to 40 years, citing a lack of money. One out of four repair ships has already passed that milestone. The design life for bulk carriers and oil tankers, by contrast, is 20 years.

“We’re all happy to spend billions to build new cables, but we’re not really thinking about how we’re going to look after them,” said Mike Constable, the former CEO of Huawei Marine Networks, who gave a presentation on the state of the maintenance fleet at an industry event in Singapore last year. “If you talk to the ship operators, they say it’s not sustainable anymore.”

He pointed to a case last year when four of Vietnam’s five subsea cables went down, slowing the internet to a crawl. The cables hadn’t fallen victim to some catastrophic event. It was just the usual entropy of fishing, shipping, and technical failure. But with nearby ships already busy on other repairs, the cables didn’t get fixed for six months. (One promptly broke again.)

But perhaps a greater threat to the industry’s long-term survival is that the people, like the ships, are getting old. In a profession learned almost entirely on the job, people take longer to train than ships to build.

“One of the biggest problems we have in this industry is attracting new people to it,” said Constable. He recalled another panel he was on in Singapore meant to introduce university students to the industry. “The audience was probably about 10 university kids and 60 old gray people from the industry just filling out their day,” he said. When he speaks with students looking to get into tech, he tries to convince them that subsea cables are also part — a foundational part — of the tech industry…

…To the extent he is remembered, Cyrus Field is known to history as the person responsible for running a telegraph cable across the Atlantic Ocean, but he also conducted what at the time was considered an equally great technical feat: the first deep-sea cable repair.

Field, a 35-year-old self-made paper tycoon, had no experience in telegraphy — which helps explain why, in 1854, he embarked on such a quixotic mission…

…“When it was first proposed to drag the bottom of the Atlantic for a cable lost in waters two and a half miles deep, the project was so daring that it seemed to be almost a war of the Titans upon the gods,” wrote Cyrus’ brother Henry. “Yet never was anything undertaken less in the spirit of reckless desperation. The cable was recovered as a city is taken by siege — by slow approaches, and the sure and inevitable result of mathematical calculation.”

Field’s crew caught the cable on the first try and nearly had it aboard when the rope snapped and slipped back into the sea. After 28 more failed attempts, they caught it again. When they brought it aboard and found it still worked, the crew fired rockets in celebration. Field withdrew to his cabin, locked the door, and wept.

Cable repair today works more or less the same as in Field’s day. There have been some refinements: ships now hold steady using automated dynamic positioning systems rather than churning paddle wheels in opposite directions, and Field’s pronged anchor has spawned a medieval-looking arsenal of grapnels — long chains called “rennies,” diamond-shaped “flat fish,” spring-loaded six-blade “son of sammys,” three-ton detrenchers with seven-foot blades for digging through marine muck — but at its core, cable repair is still a matter of a ship dragging a big hook along the ocean floor. Newfangled technologies like remotely operated submersibles can be useful in shallow water, but beyond 8,000 feet or so, conditions are so punishing that simple is best…

…Debates about the future of cable repair have become a staple of industry events. They typically begin with a few key facts: the ships are aging; the people are aging; and it’s unclear where the money will come from to turn things around.

For much of the 20th century, cable maintenance wasn’t a distinct business; it was just something giant, vertically integrated telecom monopolies had to do in order to function. As they started laying coaxial cables in the 1950s, they decided to pool resources. Rather than each company having its own repair vessel mostly sitting idle, they divided the oceans into zones, each with a few designated repair ships.

When the telcos were split up at the turn of the century, their marine divisions were sold off. Cable & Wireless Marine became Global Marine. AT&T’s division is now the New Jersey-based SubCom. (Both are now owned by private equity companies; KCS remains a subsidiary of KDDI.) The zone system continued, now governed by contracts between cable owners and ship operators. Cable owners can sign up with a nonprofit cooperative, like the Atlantic Cable Maintenance & Repair Agreement, and pay an annual fee plus a day rate for repairs. In exchange, the zone’s three ships — a Global Marine vessel in Portland, UK, another in Curaçao, and an Orange Marine vessel in Brest, France — will stand ready to sail out within 24 hours of being notified of a fault.

This system has been able to cope with the day-to-day cadence of cable breaks, but margins are thin and contracts are short-term, making it difficult to convince investors to spend $100 million on a new vessel.

“The main issue for me in the industry has to do with hyperscalers coming in and saying we need to reduce costs every year,” said Wilkie, the chair of the ACMA, using the industry term for tech giants like Google and Meta. “We’d all like to have maintenance cheaper, but the cost of running a ship doesn’t actually change much from year to year. It goes up, actually. So there has been a severe lack of investment in new ships.”

At the same time, there are more cables to repair than ever, also partly a result of the tech giants entering the industry. Starting around 2016, tech companies that previously purchased bandwidth from telcos began pouring billions of dollars into cable systems of their own, seeking to ensure their cloud services were always available and content libraries synced. The result has been not just a boom in new cables but a change in the topology of the internet. “In the old days we connected population centers,” said Constable, the former Huawei Marine executive. “Now we connect data centers. Eighty percent of traffic crossing the Atlantic is probably machines talking to machines.”…

…In 2022, the industry organization SubOptic gathered six cable employees in their 20s and 30s for a panel on the future of the industry. Most of them had stumbled into their jobs inadvertently after college, and the consensus was that the industry needed to be much better about raising public awareness, especially among the young.

“I don’t know if anyone saw, but during the pandemic, submarine cables actually went viral on TikTok,” said one panelist, a young cable engineer from Vodafone. “People didn’t know they existed, and then suddenly, out of nowhere, they were viral. I think it’s engaging with youth and children through their own avenues — yes, you can have science museums and things like that, but they are online, they are on their iPads, they’re on their phones.”

“We’ve got some pretty senior decision-makers and influencers in the subsea cable industry here,” said one audience member. “Did any of us know that we went viral on TikTok?” he asked, to laughter.

“As this panel rightfully said upfront, it’s not that we have a brand problem,” said another audience member, “we just don’t have a brand at all.”

4. Looking for AI use-cases – Benedict Evans

I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.

The one really big use-case that took off in 2023 was writing code, but I don’t write code. People use it for brainstorming, and making lists and sorting ideas, but again, I don’t do that. I don’t have homework anymore. I see people using it to get a generic first draft, and designers making concept roughs with MidJourney, but, again, these are not my use-cases. I have not, yet, found anything that matches with a use-case that I have. I don’t think I’m the only one, either, as is suggested by some of the survey data – a lot of people have tried this, especially since you don’t need to spend $12,000 on a new Apple II, and it’s very cool, but how much do we use it, and what for?…

…Suppose you want to analyse this month’s customer cancellations, or dispute a parking ticket, or file your taxes – you can ask an LLM, and it will work out what data you need, find the right websites, ask you the right questions, parse a photo of your mortgage statement, fill in the forms and give you the answers. We could move orders of magnitude more manual tasks into software, because you don’t need to write software to do each of those tasks one at a time. This, I think, is why Bill Gates said that this is the biggest thing since the GUI. That’s a lot more than a writing assistant.

It seems to me, though, that there are two kinds of problem with this thesis.

The narrow problem, and perhaps the ‘weak’ problem, is that these models aren’t quite good enough, yet. They will get stuck, quite a lot, in the scenarios I suggested above. Meanwhile, these are probabilistic rather than deterministic systems, so they’re much better for some kinds of task than others. They’re now very good at making things that look right, and for some use-cases this is what you want, but for others, ‘looks right’ is different to ‘right’…

…The deeper problem, I think, is that no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this…

…The cognitive dissonance of generative AI is that OpenAI or Anthropic say that we are very close to general-purpose autonomous agents that could handle many different complex multi-stage tasks, while at the same time there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL. Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.

I often compared the last wave of machine learning to automated interns. You want to listen to every call coming into the call centre and recognise which customers sound angry or suspicious: doing that didn’t need an expert, just a human (or indeed maybe even a dog), and now you could automate that entire class of problem. Spotting those problems and building that software takes time: machine learning’s breakthrough was over a decade ago now, and yet we are still inventing new use-cases for it – people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling the solution.

You could propose the current wave of generative AI as giving us another set of interns, that can make things as well as recognise them, and, again, we need to work out what. Meanwhile, the AGI argument comes down to whether this could be far, far more than interns, and if we had that, then it wouldn’t be a tool anymore.

5. TIP622: Finding Certainty In An Uncertain World w/ Joseph Shaposhnik – Clay Finck and Joseph Shaposhnik

[00:29:29] Joseph Shaposhnik: I think of the credit bureaus and I think of a partner of theirs, which we’ll spend a minute talking about in a second. As you may know, there are three credit bureaus in the United States, and they run an incredible oligopoly. If you want to secure a mortgage, get a car loan, or rent a home, they’re involved in all of those decision-making situations by the owners of those assets.

[00:29:55] Joseph Shaposhnik: As an example, if you go for a mortgage, all three credit bureaus will be pinged to get a score on you. All of them will be paid a couple of dollars for that score and all of that information that they’re pulling is contributory data. So there’s a relatively insignificant amount of incremental cost to generate that score and deliver it to the customer.

[00:30:21] Joseph Shaposhnik: You know, it’s a 95% incremental margin business. I mean, this is an incredible business. It’s basically an override on all economic activity in the United States and outside the United States where they play. And they’re just incredible businesses. But surprisingly not incredible stocks. You know, how could that be?

[00:30:40] Joseph Shaposhnik: It’s shocking. To give you a sense, organic growth, if you look back over the last 5 years, has been approximately 7% a year for these businesses. So, 3 or 4 times global GDP or U.S. GDP. They’ve outgrown the average S&P business over that period of time. They started with 30% EBITDA margins at the beginning of the 5 year period, so very profitable businesses.

[00:31:09] Joseph Shaposhnik: Yet over the last five years, two out of the three credit bureaus have underperformed the S&P, and over a 10 year period, they’ve been just in line performers with the S&P and so, I mean, they run an oligopoly. How could that possibly be? I used to be the credit bureau analyst at TCW, so I’m very familiar with these businesses, and they’re just incredible companies.

[00:31:33] Joseph Shaposhnik: And what happened is all three of these businesses spent more money on M&A than they generated in free cash flow over that five year period of time. They spent more money on M&A than all of the free cash flow they generated over the last five years. And they generate a lot of free cash flow. And let me say, let me just tell you, this is not on synergistic M&A.

[00:31:57] Joseph Shaposhnik: This was, I mean, they would call it synergistic, but it’s very difficult to synergize a near-utility that they operate. And instead of just sticking to their knitting, they decided to acquire a lot of different data assets that were incredibly expensive, generally from private equity, which doesn’t give assets away.

[00:32:18] Joseph Shaposhnik: And the returns on those businesses are always going to be lower than the returns on this incredible oligopoly that they run. And so, of course, margins have been under pressure and returns have gone way down for these businesses because of all the acquisitions, these poor acquisitions at high multiples.

[00:32:42] Joseph Shaposhnik: And one of the most surprising things is, when we looked at the data on this, two out of the three businesses engaged in near-zero share repurchases over that five year period of time. So you have this incredible business, you know, these three businesses that run an oligopoly, basically just an override on all economic activity.

[00:33:03] Joseph Shaposhnik: And they find all of these other businesses more attractive to allocate capital to than their own business, which is a 95% incremental margin business. Incredible. No wonder the stocks have not performed well, even though those businesses and those stocks should be like shooting fish in a barrel.

[00:33:20] Joseph Shaposhnik: So it’s incredible: two out of the three businesses bought back no meaningful amount of stock. And not surprisingly, those businesses underperformed. In contrast to that, they have a partner, which is Fair Isaac. And so Fair Isaac, whose ticker is FICO, provides the formula to the credit bureaus, which generates the score.

[00:33:44] Joseph Shaposhnik: The credit bureaus contribute the data, and the data with the formula creates a score that they can then sell to their end customers. So the bureaus pay FICO a fee for the formula, and they take the formula, and they generate a score, and they sell it to their customer. So you would think that FICO, sitting in this same ecosystem, has similar growth dynamics, similar returns going into that 5 year period of time, similar EBITDA margins, and is tied to the same end markets, a relatively similar company.

[00:34:17] Joseph Shaposhnik: Yet, over that five year period of time, FICO took all of its free cash flow, all of it, and used it to repurchase its shares. And so over the last five years, FICO has reduced share count by 20%, has engaged in no meaningful acquisitions to dilute its incredible franchise, and has generated a five bagger over the last five years.

[00:34:44] Joseph Shaposhnik: Compared to the bureaus, which have generated 15 to 100% total return over that 5 year period of time. So, a 5 bagger, which has outperformed the market by a ton, compared to an underperforming or an in-line performance for the bureaus, I think just tells the tale of how important great capital allocation decision making is, how important it is to be aligned with a management team that understands how to generate value for shareholders.

[00:35:13] Joseph Shaposhnik: And I think for us and for everybody, it serves as a warning when we think about investing with teams that are acquiring businesses in general, and certainly acquiring businesses that are not as attractive as the core business. So capital allocation makes or breaks stories all the time, and incentives generally drive these decisions, but oftentimes it just takes an investor-oriented CEO to see the big opportunity, which is usually in its core and not far afield.
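
The capital allocation arithmetic Shaposhnik describes is easy to check with a toy calculation. The snippet below is a minimal sketch using hypothetical figures (not FICO’s actual financials): holding total free cash flow flat, retiring 20% of the share count lifts free cash flow per share by 25%.

```python
# Hypothetical illustration of the buyback arithmetic described above.
# All numbers are made up for clarity; they are not FICO's actual financials.

fcf = 100.0                          # total annual free cash flow ($ millions), assumed flat
shares_before = 100.0                # shares outstanding before buybacks (millions)
shares_after = shares_before * 0.8   # 20% of the share count retired

fcf_per_share_before = fcf / shares_before   # 1.00
fcf_per_share_after = fcf / shares_after     # 1.25

uplift = fcf_per_share_after / fcf_per_share_before - 1
print(f"Per-share free cash flow rises {uplift:.0%} with zero growth in total FCF")  # -> 25%
```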


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 May 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 05 May 2024:

1. Karen Karniol-Tambour on investment anomalies at Sohn 2024 (transcript here) – Karen Karniol-Tambour and Jawad Mian

Jawad Mian (00:30): So 6 months ago equities were rallying in anticipation of lower interest rates, but now we’ve seen year-to-date equities rallying despite higher bond yields. So with a strong economy and inflation less of an issue, are you reverting to the typical inverse relationship between equities and bonds?

Karen Karniol-Tambour (00:49): The relationship between equities and bonds – it’s not an immutable fact of life. It’s not just a thing that occurs. It’s a function of the fundamental building blocks in stocks and bonds. When you look at stocks and bonds, they have a lot of things in common. They’re all future cash flows you’re discounting to today. So if you raise that discount rate, it’s bad for both, and they both don’t do great when inflation is strong. The real inverse comes from their reaction to growth, for the reason you’re saying. If growth is strong, then you can get equities rising and at the same time you can actually get the central bank tightening in response to that growth, which is bad for the bonds. And actually, the anomaly has been the years leading up to 2022, where inflation was just a non-factor and the only dominant macro issue was growth. And so we’ve gotten really used to the idea that stocks and bonds have this inverse relationship. But that’s actually the anomaly. It’s not that normal to have a world where inflation just doesn’t matter. And finally, we lived through this period where it’s like, “Wait a minute, inflation, its gravitational pull was at such a low level it was irrelevant – it’s becoming relevant again.” And we got this positive correlation where they both did badly, because you need to tighten in response to that inflation rearing its head.

Today – knock on wood – we look like we’re back to a world where inflation is not a non-issue, but it’s not a dominant issue, where we can have the kind of market action we’ve enjoyed so far in 2024, where we find out growth’s pretty damn resilient, growth’s doing great, companies can do well, earnings can do well, and at the same time the Fed can ease less than expected or tighten relative to expectations. If they were tightening to stop very bad inflation, that would be a very different outcome. So the fundamental question as an investor is sort of where is the gravitational pull of inflation going to be? Is this going to be a major topic that then leads stocks and bonds sometimes to act the same way? Or is it going to go back to being kind of a non-issue?…
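
Her point about what drives the stock-bond correlation can be illustrated with a toy discounted-cash-flow model. The sketch below is our own simplification with made-up numbers (it is not Bridgewater’s framework): a pure discount-rate shock, as when inflation forces tightening, hurts both assets, while a growth shock that lifts equity cash flows even as rates rise a little pushes the two in opposite directions.

```python
# Toy model of the stock-bond correlation point above. All numbers are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

bond_cfs = [4] * 9 + [104]                       # 10-year bond paying a 4% coupon
stock_cfs = [5 * 1.04 ** t for t in range(10)]   # equity cash flows growing 4% a year

base_rate = 0.04
bond0 = present_value(bond_cfs, base_rate)
stock0 = present_value(stock_cfs, base_rate)

# Inflation shock: the discount rate jumps, cash flows unchanged -> both fall together.
bond_inf = present_value(bond_cfs, 0.06)
stock_inf = present_value(stock_cfs, 0.06)

# Growth shock: rates rise a little, but equity cash flows are revised up
# -> bonds fall while stocks can still rise (the familiar inverse correlation).
bond_gro = present_value(bond_cfs, 0.05)
stock_gro = present_value([cf * 1.10 for cf in stock_cfs], 0.05)

print(f"Inflation shock: bonds {bond_inf / bond0 - 1:+.1%}, stocks {stock_inf / stock0 - 1:+.1%}")
print(f"Growth shock:    bonds {bond_gro / bond0 - 1:+.1%}, stocks {stock_gro / stock0 - 1:+.1%}")
```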

…Mian (02:53): A second anomaly. For the last 50 years, we’ve seen the US budget deficit average around 3% and it’s projected to be 6% over the next decade. So far we have seen markets being willing to finance these record deficits, in contrast to the UK for example. How come?

Karniol-Tambour (03:11): I think the best answer to this starts, actually, with the current account deficits, because obviously that’s part of who’s buying all the bonds we’re issuing. And it is a really weird anomaly, because the United States is buying way more foreign goods than they’re buying ours. And typically if countries do that, their currency is weak, because they have to convince someone to hold all the currency on the other side of that, so they have to attract all this financing. The United States is running a massive current account deficit and yet the dollar is strong, because what’s happening on the other end is people are just so enthusiastic about buying dollar financial assets. It’s so extreme that I think the United States has kind of a version of a Dutch disease.

So the classic Dutch disease is, you’re Saudi Arabia, you have oil. No one’s buying oil because you’re Saudi Arabia. No one’s thinking, “I really want Saudi oil.” They just need to fill up their car. So whatever the gas price is, the gas price is. But as Saudi Arabia, you get uncompetitive outside of it because money’s flooding in just for your oil, for nothing else. The United States has kind of become that on financial assets, which is people aren’t really thinking “I just want US financial assets.” It’s just that United States financial assets have done so well, they’re the dominant part of the index in stocks and in bonds. So anyone that needs to save any money around the world just ends up in US assets. As long as you care at all about market cap – which anyone reasonable would – and you’re going to the big market around the world, if you’re saving, you’re giving the United States money. And so we’re ending up with this flood of money that is a huge anomaly where we actually have a rising currency making everything else kind of uncompetitive, because people just want to buy stocks and bonds and no one else enjoys that. So we can run these huge deficits and sort of not worry about it.

2. Remembering Daniel Kahneman: A Mosaic of Memories and Lessons – Evan Nesterak and many others

To be continued …

By Richard Thaler, Professor of Behavioral Science and Economics, University of Chicago

My fondest memories of working with Danny come from 1984 to ’85 when I spent a year visiting him in Vancouver at The University of British Columbia. Danny had just begun a new project with Jack Knetsch on what people think is fair in market transactions and they invited me to join them. We had the then-rare ability to ask survey questions to a few hundred randomly selected Canadians each week. We would draft three versions of five questions, fax them to Ottawa Monday morning, get the results faxed back to us Thursday afternoon. Who needs Mturk! We then spent the weekend digesting the results and writing new questions.

We learned that raising the price of snow shovels the morning after a blizzard might make sense to an economist, but would make customers angry. Danny displayed two of his most prominent traits. He was always a skeptic, even (especially?) about his own ideas, so we stress-tested everything. And he was infinitely patient in that pursuit. Was our finding just true for snow shovels? What about water after a hurricane? Flu medicine? How about late-season discounts (which of course are fine). It was total immersion; meeting in person several times a week and talking constantly. We were in the zone.

Although we spent another year together in New York seven years later, we were unable to recreate that intensity. We had too many other balls in the air. But we continued our conversations and friendship until the end. Every conversation ended the same way: “To be continued.”…

...I’m more like a spiral than a circle

By Dan Lovallo, Professor of Strategy, Innovation and Decision Sciences, University of Sydney

Many people have heard that Danny changes his mind—a lot. This is certainly true. I have never written even a 5,000-word essay with him that didn’t take a year. Let me add another dimension to the discussion. During our last working dinner at a bistro in New York, and possibly out of mild frustration, I said, “Danny, you know you change your mind a lot.” It wasn’t a question. He continued chewing. I continued my line of non-question questioning: “And often you change it back to what it was at the beginning.”

Danny, having finished his bite and without missing a beat, looked up and in his characteristic lilt said, “Dan, that’s when I learn the most.” Then using his finger he drew a circle in space. “I don’t go around and around a problem. It might seem like it, but I am getting deeper and deeper.” The circle morphed into a three-dimensional spiral. “So, you’re missing all the learning,” he explained, as he displayed the invisible sculpture. “I’m more like a spiral than a circle.” Happy with this new idea, Danny grinned as only Danny could…

A case in character

By Angela Duckworth, Professor of Psychology, University of Pennsylvania

One evening, more than twenty years ago, I was the last one in the lab when the phone rang. “Hello?” I said, I hope not brusquely. I was a Ph.D. student at the time and eager to get back to my work. “Hello?” came the reply of an uncommonly polite older gentleman, whose accent I couldn’t quite place. “I’m so sorry to trouble you,” he continued. “I believe I’ve just now left my suitcase there.” Ah, this made sense. We’d hosted an academic conference that day. “It’s a terrible inconvenience, I know, but might you keep it somewhere until I can return to pick it up?” “Sure,” I said, cradling the receiver and grabbing a notepad. “How do you spell your name?” “Thank you so very much. It’s K-A-H-N-E-M-A-N.” I just about fainted. “Yes, Dr. Kahneman,” I said, coming to my senses, likely more deferentially than when I’d first picked up.

When I hung up, I thought to myself, Oh, it’s possible to be a world-famous genius—the most recently anointed Nobel laureate in economics, among other honors—and interact with anybody and everybody with utmost respect and dignity, no matter who they are. In the years that followed, I got to know Danny Kahneman much better, and when I did, that view was only confirmed. Confirmation bias? Halo effect? No and no. What then? Character. The world is mourning the loss of Danny Kahneman the genius, as we should, but I am missing Danny Kahneman the person…

Anxious and unsure

By Eric Johnson, Professor of Business, Columbia University

A few months before the publication of Thinking, Fast and Slow in 2011, the Center for Decision Sciences had scheduled Danny to present in our seminar series. We were excited because he had decided to present his first “book talk” with us. Expecting a healthy crowd, we scheduled the talk in Uris 301, the biggest classroom in Columbia Business School.

I arrived in the room a half hour early to find Danny, sitting alone in the large room, obsessing over his laptop. He confided that he had just changed two-thirds of the slides for the talk and was quite anxious and unsure about how to present the material. Of course, after the introduction, Danny presented in his usual charming, erudite style, communicating the distinction between System 1 and System 2 with clarity to an engaged audience. Afterwards, I asked him how he thought it went, and he said, “It was awful, but at least now I know how to make it better.” Needless to say, the book went on to become an international bestseller.

This was not false modesty. Having studied overconfidence throughout his career, Danny seemed immune to its effects. While surely maddening to some coauthors, this resulted in work that was more insightful and, most importantly to Danny and to us, correct. He was not always right, but always responsive to evidence, supportive or contradictory. For example, when some of the evidence cited in the book was questioned as a result of the replication crisis in psychology, Danny revised his opinion, writing in the comments of a critical blog: “I placed too much faith in underpowered studies.”

The best tribute to Danny, I believe, is adopting this idea, that science and particularly the social sciences, is not about seeming right, but instead, being truthful…

Practical problem solving

By Todd Rogers, Professor of Public Policy, Harvard University

I was part of a group helping some political candidates think about how to respond to untrue attacks by their political rivals. We focused on what cognitive and social psychology said about persuasive messaging. Danny suggested a different emphasis I hadn’t considered.

He directed us to a literature in cognitive psychology on cognitive associations. Once established, associations cannot simply be severed; attempting to directly refute them often reinforces them, and logical arguments alone can’t undo them. But these associations can be weakened when other competing associations are created.

For instance, if falsely accused of enjoying watching baseball, I’d be better off highlighting genuine interests—like my enjoyment of watching American football or reality TV—to dilute the false association with baseball. This anecdote is one small example of the many ways Danny’s profound intellect has influenced practical problem-solving. He’ll be missed and remembered.

Premortems

By Michael Mauboussin, Head of Consilient Research, Morgan Stanley

The opportunity to spend time with Danny and the chance to interview him were professional delights. One of my favorite lessons was about premortems, a technique developed by Gary Klein that Danny called one of his favorite debiasing techniques. In a premortem, a group assumes that they have made a decision (which they have yet to do), places themselves in the future (generally a year from now), and pretends that it worked out poorly. Each member independently writes down the reasons for the failure.

Klein suggested that one of the keys to premortems was the idea of prospective hindsight, that putting yourself into the future and thinking about the present opens up the mind to unconsidered yet relevant potential outcomes. I then learned that the findings of the research on prospective hindsight had failed to replicate—which made me question the value of the technique.

Danny explained that my concern was misplaced and that prospective hindsight was not central to the premortem. Rather, it was that the technique legitimizes dissent and allows organizations the opportunities to consider and close potential loopholes in their plans. That I had missed the real power of the premortem was a revelation and a relief, providing me with a cherished lesson…

Eradicating unhappiness

By George Loewenstein, Professor of Economics and Psychology, Carnegie Mellon University

For Danny, research was intensely personal. He got into intellectual disputes with a wide range of people, and these would hurt him viscerally, in part because it pained him that people he respected could come to different conclusions from those he held so strongly. He came up with, or at least embraced, the concept of “adversarial collaboration” in which researchers who disagreed on key issues would, however, agree upon a definitive test to determine where reality lay. A few of these were successful, but others (I would say most) ended with both parties unmoved, perhaps reflecting Robert Abelson’s insight that “beliefs are like possessions,” and, hence subject to the endowment effect.

I was spending time with Danny when he first got interested in hedonics—happiness—and that was a personal matter as well. His mother was declining mentally in France, and he agonized about whether to visit her; the issue was that she had anterograde amnesia, so he knew that she would forget his visit as soon as it ended. The criterion for quality of life, he had decided, should be the integral of happiness over time; so that—although she would miss out on the pleasure of remembering it—his visit would have value if she enjoyed it while it was happening.

Showing the flexibility of his thinking, and his all-too-rare willingness to learn from the data, his perspective changed as he studied happiness. He became more concerned about the story a life tells, including, notably, its peak and end; he concluded that eradicating unhappiness was a more important goal than fostering happiness, and began to draw a sharp distinction between happiness and life satisfaction, perhaps drawing, again, on his own experience. He always seemed to me to be extremely high in life satisfaction, but considerably less so in happiness.

3. Paradox of China’s stock market and economic growth – Glenn Luk

Joe Weisenthal of Bloomberg and the Odd Lots posed this question on Twitter/X:

“Given that the stock market hasn’t been especially rewarding to the volume-over-profits strategy undertaken by big Chinese manufacturers, what policy levers does Beijing have to sustain and encourage the existing approach?”

Many people may have noticed that despite the impressive growth of Chinese manufacturers in sectors like electric vehicles, the market capitalizations of these companies are dwarfed by Tesla. This seeming paradox lies at the heart of the question posed by Joe.

In 2020, I shared an observation that China cares a lot more about GDP than market capitalization. I was making this observation in the context of Alibaba but would soon broaden the observation to encapsulate many more situations. In sharp contrast to Americans, Beijing just does not seem to care that much about equity market valuations but does seem to very much care about domestic growth and economic development…

…With respect to private sector market forces, Chinese policymakers tend to see its role as coordinators of an elaborate “game” that is meant to create an industry dynamic that drives desired market behaviors. The metaphor I sometimes use is as the Dungeon Master role in Dungeons & Dragons.

These “desired market behaviors” tend to overwhelmingly revolve around this multi-decade effort to maximize economic development and growth. Beijing has been very consistent about the goal to become “fully developed” by the middle of the 21st century.

To date, I would say that Chinese policymakers have been relatively successful using the approaches and principles described above to drive economic growth:

  • Priority on labor over capital / wage growth over capital income growth. Prioritizing labor is a key pillar of China’s demand-side support strategy. Growth in household income drives growth in domestic demand (whether in the form of household gross capital formation or expenditures).
  • Setting up rules to foster competitive industry dynamics and motivate economic actors to reinvest earnings back into growth.
  • Periodic crackdowns to disrupt what is perceived to be rent-seeking behavior, particularly from private sector players that have accumulated large amounts of equity capital (vs. small family businesses):
    • Anti-competitive behavior (e.g. Alibaba e-commerce dominance in the late 2010s)
    • Regulatory arbitrage (moral hazards inherent in Ant Financial’s risk-sharing arrangement with SOE banks)
    • Societal effects (for-profit education driving “standing on tiptoes” approach to childhood education)
  • Supply-side support to encourage dynamic, entrepreneurial participation from private sector players, as in the clean energy transition, to drive rapid industry growth through scale and scale-related production efficiencies. China has relied on supply-side strategies to support economic growth for decades despite repeated exhortations by outsiders to implement OECD-style income transfers.
  • Encouraging industry consolidation (vs. long drawn-out bankruptcies) once sectors have reached maturity although there are often conflicting motivations between Beijing and local governments.

A consistent theme is Beijing’s paranoia about rent-seeking behavior by capitalists (especially those who have accumulated large amounts of capital). It is sensitive to the potential stakeholder misalignment that arises when rents accrue to capitalists, who are primarily aligned with one stakeholder class (their fiduciary duty is to equity owners).

It would prefer that rent-seeking behavior be handled by the party instead, whose objective (at least in theory) is to distribute these rents back to “The People” — although naturally in practice it never turns out this way; Yuen Yuen Ang has written multiple volumes about the prevalence of Chinese-style corruption and its corrosive economic effects.

So to bring it back to Joe’s question, the answer on whether Chinese policymakers can continue these policies going forward very much revolves around this question of rent-seeking: is it better to be done by the government or by private sector capitalists? What should be abundantly clear is that Beijing is definitive on this question: the party will maintain a monopoly on rent-seeking.

4. What Surging AI Demand Means for Electricity Markets – Tracy Alloway, Joe Weisenthal, and Brian Janous

Brian (09:58):

Yeah, and you’re right, I mean it’s not like we didn’t know that Microsoft had a partnership with OpenAI and that AI was going to consume energy. I think everyone though was a bit surprised at just how quickly what ChatGPT could do just captured the collective consciousness.

You probably remember when that was released. I mean it really sort of surprised everyone and it became this thing where suddenly, even though we sort of knew what we were working on, it wasn’t until you put it out into the world that you realize maybe what you’ve created. That’s where we realized we are running up this curve of capability a lot faster than we thought. There’s the number of applications that are getting built on this, the number of different ways that it’s being used, and how it’s just become sort of common parlance. I mean, everyone knows what ChatGPT is, and no one knew what it was the month before that.

So there was a bit, I think, of a surprise in terms of just how quickly it was going to capture the collective consciousness and then obviously lead to everything that’s being created as a result. And so we just moved up that curve so quickly, and I think that’s where the industry maybe got behind, certainly the utilities were behind, because as you may have seen, a lot of them are starting to restate their load-growth expectations.

And that was something that was not happening right before that. And so we’ve had massive changes just in the last two years in how utilities are forecasting load growth. So if you take a look at a utility like Dominion in Virginia, that’s the largest concentration of data centers in the United States, so they’re a pretty good representative of what’s happening. If you go back to 2021, they were forecasting load growth over a period of 15 years of just a few percent.

I mean it was single-digit growth over that entire period. So not yearly growth, but over 15 years, single-digit growth. By 2023, they were forecasting to grow 2X over 15 years. Now keep in mind this is an electric utility. They do 10-year planning cycles. So because they have very long lead times for equipment and for getting rights of way for transmission lines, they aren’t companies that can easily respond to a 2X growth change over a period of 15 years.

I mean, that is a massive change for an electric utility, particularly given the fact that the growth rate over the last 15 to 20 years has been close to zero. So there’s been relatively no load growth in 15 to 20 years. Now suddenly you have utilities having to pivot to doubling the size of their system in that same horizon.
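
To put those forecasts in annual terms, a quick back-of-the-envelope conversion (our own arithmetic, not from the transcript; the cumulative multiples are rough readings of the figures above) turns them into implied compound annual growth rates:

```python
# Convert cumulative load-growth forecasts into implied compound annual growth rates.
# Illustrative arithmetic only; the cumulative multiples below are rough assumptions.

def implied_cagr(multiple: float, years: int) -> float:
    """Annual growth rate that compounds to `multiple` over `years` years."""
    return multiple ** (1 / years) - 1

# "Single-digit growth over 15 years": assume roughly 1.05x cumulative
print(f"{implied_cagr(1.05, 15):.2%} per year")   # ~0.33% per year

# "Grow 2X over 15 years": 2.0x cumulative
print(f"{implied_cagr(2.00, 15):.2%} per year")   # ~4.73% per year
```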

Tracy (13:10):

I want to ask a very basic question, but I think it will probably inform the rest of this conversation: when we say that AI consumes a lot of energy, where is that consumption actually coming from? And Joe touched on this in the intro, but is it the sheer scale of users on these platforms? Is it, I imagine, the training that you need in order to develop these models? And then does that energy usage differ in any way from more traditional technologies?

Brian (13:43):

Yeah, so whenever I think about the consumption of electricity for AI or really any other application, I think you have to start at sort of the core of what we’re talking about, which is really the human capacity for data, like whether it’s AI or cloud, humans have a massive capacity to consume data.

And if you think about where we are in this curve, I mean we’re on some form of S-curve of human data consumption, which then directly ties to data centers, devices, energy consumption ultimately, because what we’re doing is we’re turning energy into data. We take electrons, we convert them to light, we move them around to your TV screens and your phones and your laptops, etc. So that’s the uber trend that we’re riding up right now. And so we’re climbing this S-curve. I don’t know that anyone has a good sense of how steep or how long this curve will go.

If you go back to look at something like electricity, it was roughly a hundred-year S-curve that started in the beginning of last century. And it really started to flatline, as I mentioned before, towards the beginning of this century. Now we have this new trajectory that we’re entering, this new S-curve that we’re entering that’s going to change that narrative. But that S-curve for electricity took about a hundred years.

No one knows where we are on that data curve today. So when you inject something like AI, you create a whole new opportunity for humans to consume data, to do new things with data that we couldn’t do before. And so you accelerate us up this curve. So we were sitting somewhere along this curve, AI comes along and now we’re just moving up even further. And of course that means more energy consumption because the energy intensity of running an AI query versus a traditional search is much higher.

Now, what you can do with AI obviously is also much greater than what you can do with a traditional search. So there is a positive return on that invested energy. Oftentimes when this conversation comes up, there’s a lot of consternation and panic over ‘Well, what are we going to do? We’re going to run out of energy.’

The nice thing about electricity is we can always make more. We’re never going to run out of electricity. Not to say that there’s not times where the grid is under constraint and you have risks of brownouts and blackouts. That’s the reality. But we can invest more in transmission lines, we can invest more in power plants and we can create enough electricity to match that demand.

Joe (16:26):

Just to sort of clarify a point and adding on to Tracy’s question, you mentioned that doing an AI query is more energy intensive than, say, if I had just done a Google search or if I had done a Bing search or something like that. What is it about the process of delivering these capabilities that makes it more computationally intensive or energy intensive than the previous generation of data usage or data querying online?

Brian (16:57):

There’s two aspects to it, and I think we sort of alluded to it earlier, but the first is the training. So the first is the building of the large language model. That itself is very energy intensive. These are extraordinarily large machines, collections of machines that use very dense chips to create these language models that ultimately then get queried when you do an inference.

So then you go to ChatGPT and you ask it to give you a menu for a dinner party you want to have this weekend. It’s then referencing that large language model and creating this response. And of course that process is more computationally intensive because it’s doing a lot more things than a traditional search does. A traditional search just matched the words you put into a database of knowledge that it had put together, but these large language models are much more complex, and therefore the things you’re asking them to do are more complex.

So it will almost by definition be a more energy intensive process. Now, that’s not to say that it can’t get more efficient and it will, and Nvidia just last week was releasing some data on some of its next generation chips that are going to be significantly more efficient than the prior generation.

But one of the things that we need to be careful of is to think that because something becomes more efficient, then therefore we’re going to use less of the input resource. In this case, electricity. That’s not how it works, because going back to the concept of human capacity for consuming data, all we do is we find more things to compute. And this is, you’ve probably heard of Jevons paradox, and this is the idea that, well, if we make more efficient steam engines, he was an economist in the 1800s and he said ‘Well, if we make more efficient steam engines, then we’ll use less coal.’

And he is like ‘No, that’s not what’s going to happen. We’re going to use more coal because we’re going to mechanize more things.’ And that’s exactly what we do with data. We’ve had Moore’s Law for years, and so chips have become incredibly more efficient than they were decades ago, but we didn’t use less energy. We used much more energy because we could put chips in everything.

So that’s the trend line that we’re on. It’s still climbing that curve of consumption. And so no amount of efficiency, at this point at least (because I don’t believe we’re anywhere close to the bend in that S-curve), is going to take us off of continuing to consume more electricity, at least in the near term…

…Brian (22:35):

Well, this is where it gets a little concerning: you have these tech companies that have these really ambitious commitments to being carbon neutral, carbon negative, having a hundred percent zero-carbon energy a hundred percent of the time, and you have to give them credit for the work they’ve done.

I mean, that industry has done amazing work over the last decade to build absolutely just gigawatts upon gigawatts of new renewable energy projects in the United States and all over the world. They’ve been some of the biggest drivers in the corporate focus on decarbonization. And so you really have to give that industry credit for all it’s done, and all the big tech companies have done some amazing work there.

The challenge, though, is that the environment they did that in was that no-growth environment we were talking about. They were all growing, but they were starting from a relatively small denominator 10 or 15 years ago. And so there was a lot of overhang in the utility system at that time because the utilities had overbuilt ahead of that sort of flatlining. So there was excess capacity on the system.

They were growing inside of a system that wasn’t itself growing on a net basis. So everything they did, every new wind project you brought on, every new solar project you brought on, those were all incrementally reducing the amount of carbon in the system. It was all net positive.

Now we get into this new world where their growth rates are exceeding what the utilities had ever imagined in terms of the absolute impact on the system. The utilities’ response is ‘The only thing we can do in the time horizon that we have is basically build more gas plants or keep online gas plants or coal plants that we were planning on shuttering.’

And so now the commitments that they have to zero-carbon energy, to being carbon negative, etc., are coming into conflict with the response that the utilities are laying out in what’s called their integrated resource plans, or IRPs.

And we’ve seen this recently just last week in Georgia. We’ve seen it with Duke in North Carolina, Dominion in Virginia. Every single one of those utilities is saying ‘With all the demand that we’re seeing coming into our system, we have to put more fossil fuel resources on the grid. It’s the only way that we can manage it in the time horizon we have.’ Now, there’s a lot of debate about whether that is true, but it is what’s happening…

…Brian (30:29):

That’s right. And that’s the big challenge that good planners have today is what loads do you say yes to and what are the long-term implications of that? And we’ve seen this play out over the rest of the globe where you’ve had these concentrations of data centers. This is a story that we saw in Dublin, we’ve seen it in Singapore, we’ve seen it in Amsterdam.

And these governments start to get really worried of ‘Wait a minute, we have too many data centers as a percentage of overall energy consumption.’ And what inevitably happens is a move towards putting either moratoriums on data center build out or putting very tight restrictions on what they can do and the scale at which they can do it. And so we haven’t yet seen that to any material degree in the United States, but I do think that’s a real risk and it’s a risk that the data center industry faces.

I think somewhat uniquely in that if you’re the governor of a state and you have a choice to give power to, say, a new EV car factory that’s going to produce 1,500 to 2,000 jobs versus a data center that’s going to produce significantly fewer than that, you’re going to give it to the factory. The data centers are actually the ones that are going to face likely the most constraints as governments, utilities, regulators start wrestling with this trade-off of ‘Ooh, we’re going to have to say no to somebody.’…

…Tracy (36:36):

What are the levers specifically on the tech company or the data center side? Because again, so much of the focus of this conversation is on what can the utilities do, what can we do in terms of enhancing the grid managing supply more efficiently? But are there novel or interesting things that the data centers themselves can do here in terms of managing their own energy usage?

Brian (37:02):

Yes. There’s a few things. I mean, one is data centers have substantial ability to be more flexible in terms of the power that they’re taking from the grid at any given time. As I mentioned before, every data center or nearly every data center has some form of backup generation. They have some form of energy storage built into this.

So the way a data center is designed, it’s designed like a power plant with an energy storage plant that just happens to be sitting next to a room full of servers. And so when you break it down into those components, you say, okay, well how can we better optimize this power plant to be more of a grid resource? How can we optimize the storage plant to be more of a grid resource? And then in terms of even the servers themselves, how can we optimize the way the software actually operates and is architected to be more of a grid resource?

And that sort of thinking is what is being forced on the industry. Frankly, we’ve always had this capability. I mean, we did a project around 2016 with a utility where we put in flexible gas generators behind our meter, because the utility was going to have to build a new power plant if we didn’t have a way to be more flexible.

So we’ve always known that we can do this, but the industry has never been pressurized to really think innovatively about how can we utilize all these assets that we have inside of the data center plant itself to be more part of the grid. So I think the most important thing is really thinking about how data centers become more flexible. There’s a whole ‘nother line of thinking, which is this idea of, well, utilities aren’t going to move fast enough, so data centers just need to build all their own power plants.

And this is where you start hearing about nuclear and SMRs and fusion, which is interesting, except it doesn’t solve the problem this decade. It doesn’t solve the problem that we’re facing right now because none of that stuff is actually ready for prime time. We don’t have an SMR that we can build today predictably on time, on budget.

So we are dependent on the tools that we have today, which are things like batteries, grid enhancing technologies, flexible load, reconductoring transmission lines to get more power over existing rights of way. So there’s a number of things we can do with technologies we have today that are going to be very meaningful this decade, and we should keep investing in things that are going to be really meaningful next decade. I’m very bullish on what we can do with new forms of nuclear technology. They’re just not relevant in the time horizon of the problem we’re talking about [now].

Joe (39:52):

At some point, we’re going to do an Odd Lots episode specifically on the promise of small modular reactors and why we still don’t have them despite the seeming benefits. But do you have a sort of succinct answer for why this sort of seeming solution of manufacturing them faster, etc., has not translated into anything in production?

Brian (40:14)

Well, quite simply, we just forgot how to do it. We used to be able to build nuclear in this country. We did it in the seventies, we did it in the eighties, but every person that was involved in any one of those projects is either not alive or certainly not still a project manager at a company that would be building nuclear plants, right?

I think we underestimate human capacity to forget things. Just because we’ve done something in the past doesn’t mean that we necessarily can do it. Again, we have to relearn these things, and as a country, we do not have a supply chain. We don’t have a labor force. We don’t have people that manage construction projects that know how to do any of these things.

And so when you look at what South Korea is doing, you look at what China’s doing, they’re building nuclear plants with regularity. They’re doing it at a very attractive cost. They’re doing it on a predictable time horizon, but they have actually built all of those resources that we just simply don’t have in this country that we need and we need to rebuild that capability. It just doesn’t exist today…

…Brian (41:50):

Absolutely. And so if you go back to the era that we’ve been in of relative no load growth, if you’re a utility regulator and utility comes and asks you for a billion dollars for new investment and you’re used to saying ‘no,’ you’re used to saying ‘Well, wait a minute. Why do you need this? What is this for? How is this going to help manage again, reliability, cost, predictability, etc.?’

Now you’re in this whole new world and going back to this concept of we easily forget things — no one who’s a regulator today or the head of utility today has ever lived through an environment where we’ve had this massive expansion of the demand for electricity. So everyone now, including the regulators are having to relearn, okay, how do we enable utility investment in a growth environment? It’s not something they’ve ever done before. And so they’re having to figure out, okay, how do we create the bandwidth for utilities to make these investments?

Because one of the fundamental challenges that utilities have is that they struggle to invest if there’s no customer sitting there making the request, so they can’t invest speculatively. I mean, if I’m Nvidia and I’m thinking about the world five years from now and think ‘Wow, how many chips do I want to sell in 2030?’, I can go out and build a new factory. I can go out and invest capital and do all of that. I mean, I don’t need to have an order from a Microsoft or an Amazon or a Meta to go do that. I can build speculatively.

Utilities can’t really do that. They’re basically waiting for the customer to come ask for it. But when you have all this demand show up at the same time, well, what happens? The lead times start to extend. And so instead of saying ‘Yeah, I’ll give you that power in a year or two years,’ it’s now like, ‘Well, I’ll give it to you in five to seven years.’ And so that’s an unsustainable way to run the electric utility grid. So we do need regulators to adapt and evolve to this new era of growth.

5. Reflections from the heart of Japan’s ancient cedar forest – Thomas Chua

Yakushima was particularly memorable, an island near Kagoshima famous for its wildlife and ancient cedar forests. These majestic cedars, some of the oldest trees in the world, grow steadily through centuries, unaffected by the transient storms and seasonal fluctuations.

This is Sennensugi, whose name means “thousand-year-old cedar,” even though it’s still young. Yakushima’s oldest tree (and the oldest tree in Japan) is Jōmon Sugi, which is estimated to be between 2,170 and 7,200 years old.

This resonates deeply with my investment strategy. Just as these enduring cedars are not swayed by the fleeting changes in their environment, I focus on “Steady Compounders”—companies with significant economic moats and consistent intrinsic value growth.

When friends learn about my extensive travels, they often ask, “What about your investments? Don’t you need to monitor them constantly?” What they usually mean by “monitoring” isn’t analyzing quarterly business results, but rather obsessively tracking stock prices and consuming every tidbit of news to stay perpetually informed.

However, I liken such constant vigilance to setting up a camera in a forest to watch the trees grow. This approach isn’t just tedious—it’s unnecessary and potentially harmful, often prompting rash decisions.

Everyone invests to grow wealth, but understanding why you invest is crucial. For me, it serves to enrich my curiosity and intellect, rewards my eagerness to learn, and more importantly, grants me the freedom to live life on my terms and cherish moments with my loved ones.

Therefore, I don’t pursue obscure, unproven companies which require intensive monitoring. Instead, I look for Steady Compounders — firms with a significant economic moat that are growing their intrinsic value steadily.

Like the steady growth of Yakushima’s cedars, these firms don’t need constant oversight; they thrive over long periods through economic cycles, much as the cedars endure through seasonal changes. Investing in such companies gives me the freedom to explore the world, knowing my investments are growing steadily, mirroring the quiet, powerful ascent of those ancient trees.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.