What We’re Reading (Week Ending 10 November 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 10 November 2024:

1. Why I’m Leaving OpenAI and What I’m Doing Next – Miles Brundage

So how are OpenAI and the world doing on AGI readiness?

In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.

To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).

Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.

As a sidenote, I think that AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense. One of the things my team has been working on lately is fleshing out the “levels of AI” framework referenced here. I hope that OpenAI and I will be able to publish a related paper before long. But for now I’d just note that when I say “ready for AGI,” I am using this as shorthand for something like “readiness to safely, securely, and beneficially develop, deploy, and govern increasingly capable AI systems.”…

…I think the upsides of AI are already big and could be dramatically bigger, as are the downsides. As someone who has worked in this field for longer than most, it has been very sad to see increasing polarization along the lines of whether people focus on one side of the cost/benefit ledger or the other, or have different risk priorities, etc. My view is that there is a lot to worry about and a lot to be excited about, we don’t have to choose one thing to care about, and we should find common ground where it exists.

I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion. Notably, this is true not just for risk mitigation but also for ensuring equitable distribution of the benefits, as is the case with, e.g., electricity and modern medicine as well. This is true for a few reasons, including, non-exhaustively, collective action problems, various unpriced negative externalities, and unequal starting positions of digital infrastructure access, wealth, etc. that affect who benefits and is harmed by default and to what degrees. As with railroads, electricity, etc., corporate and government policies will be critical to ensuring safe and fair outcomes.

I think AI capabilities are improving very quickly and policymakers need to act more urgently…

…I think quantitative evaluations of AI capabilities and extrapolations thereof, in combination with analysis of the impacts of certain policies, will be critical in truthfully and persuasively demonstrating that urgency. There’s great work happening on measuring frontier models from a safety perspective, measuring trends over time in AI, and a growing body of work assessing the labor market implications of AI, but more is definitely needed.

I think we don’t have all the AI policy ideas we need, and many of the ideas floating around are bad or too vague to be confidently judged. This is particularly true of international competition over AI, where I find the existing proposals to be especially bad (e.g. “race against [competing country] as quickly as possible”) and vague (e.g. “CERN for AI”), although it’s encouraging to see a growing trend towards more nuanced discussion of some of these ideas. There are also many aspects of frontier AI safety and security that will require creative solutions…

…I think that improving frontier AI safety and security is quite urgent, given the number of companies (dozens) that will soon (next few years at most) have systems capable of posing catastrophic risks. Given that that is not much time to set up entirely new institutions, I’m particularly interested in opportunities for action under existing legal authorities, as well as shaping the implementation of already-approved legislation such as the EU AI Act.

As noted above, and explained in more detail in this paper and similar work, companies and governments will not necessarily give AI safety and security the attention it deserves by default (this is not a comment specifically about OpenAI, as discussed above). There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting: people assume others are going to cut corners to gain an advantage and can’t tell what the ground truth is, or think they will change their minds later. Corner-cutting occurs across a range of areas, including prevention of harmfully biased and hallucinated outputs as well as investment in preventing the catastrophic risks on the horizon. There are, to be clear, some ways in which commercial incentives encourage safety, though I think it would be irresponsible to assume that those incentives will be sufficient, particularly for ambiguous, novel, diffuse, and/or low-probability/high-magnitude safety risks.

I’m excited about understanding how companies can credibly demonstrate safety while protecting valuable and potentially misusable IP. The difficulty of demonstrating compliance without compromising sensitive information is a major barrier to arms control agreements, which requires innovation to address. This issue is also at the core of effective domestic regulation. I’m excited to collaborate with people working on this and other related technical AI governance questions.

While some think that the right approach to the global AI situation is for democratic countries to race against autocratic countries, I think that having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security, an attack on Taiwan (given its central role in the AI chip supply chain), and other very bad outcomes. I would like to see academics, companies, civil society, and policymakers work collaboratively to find a way to ensure that Western AI development is not seen as a threat to other countries’ safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead.

Even if, as I think is very likely, Western countries continue to substantially outcompete China on AI, there is more than enough “gas in the tank” of computing hardware and algorithmic progress in autocratic countries for them to build very sophisticated capabilities, so cooperation will be essential. I realize many people think this sounds naive but I think those people haven’t thought through the situation fully or considered how frequently international cooperation (enabled by foresight, dialogue, and innovation) has been essential to managing catastrophic risks…

…I think it’s likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable (assuming appropriate policies to ensure fair distribution of that bounty). Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely. In the near-term, I worry a lot about AI disrupting opportunities for people who desperately want work, but I think it’s simultaneously true that humanity should eventually remove the obligation to work for a living and that doing so is one of the strongest arguments for building AI and AGI in the first place. Likely some will continue to work in the long-term but the incentive to do so might be weaker than before (whether this is true depends on a variety of cultural and policy factors). That is not something we’re prepared for politically, culturally, or otherwise, and needs to be part of the policy conversation. A naive shift towards a post-work world risks civilizational stagnation (see: WALL-E), and much more thought and debate about this is needed…

…Compared to software, data, and talent, computing hardware has unique properties that make it an important focal point for AI policy: “it is detectable, excludable, and quantifiable, and is produced via an extremely concentrated supply chain” (quoted from this paper I worked on). This makes it worrying that the part of the US government responsible for overseeing what happens when that compute is shipped overseas is severely understaffed and underfunded, and that more generally there is little serious policy discussion of what the endgame is here (besides occasionally tightening export controls and requiring companies to report their big datacenters and training runs).

To the extent that there is serious analysis of compute governance happening in the academic literature, it generally lags behind developments in industry by a fair amount – e.g., to those within frontier AI companies, it has become increasingly clear in recent years that scaling up inference, not just training, can enable higher performance, but public analysis of the policy implications of this has only begun in earnest relatively recently. Ideas for distributing computing power (and the associated benefits of AI) more widely, such as via the government providing greater compute for academics, are generally too little too late and neglect issues specific to the developing world, which is in a quite different situation.

2. Industry Is Not Destiny – Greg Obenshain

We’d go as far as to argue that industry analysis generally is much less valuable than fundamental investors or strategy consultants might hope.

Mauboussin’s new study, Measuring the Moat: Assessing the Magnitude and Sustainability of Value Creation, grapples with this issue. It includes a chart that is difficult to unsee once you’ve seen it (h/t Edward Conard’s Macro Roundup for highlighting this)…

…This chart shows that profitability varies more within industry (the vertical bars) than across industries (the dots). Over the long run, the fate of a company is not primarily determined by its industry—a finding consistent with Chicago school research from the 1980s that dealt a death blow to structure-conduct-performance theory in antitrust law.
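To make the chart’s point concrete, here is a minimal sketch in Python of the comparison it draws: the dispersion of average profitability across industries versus the dispersion among companies within each industry. The companies, industries, and ROIC figures below are invented for illustration; this is not Mauboussin’s actual methodology.

```python
# Compare across-industry vs within-industry dispersion of profitability.
# All companies, industries, and ROIC figures here are hypothetical.
from statistics import mean, pstdev

# Hypothetical (company, industry, ROIC %) observations.
data = [
    ("A", "software", 28.0), ("B", "software", 9.0), ("C", "software", -4.0),
    ("D", "airlines", 6.0), ("E", "airlines", 1.0), ("F", "airlines", 14.0),
    ("G", "beverages", 18.0), ("H", "beverages", 11.0), ("I", "beverages", 2.0),
]

by_industry = {}
for _, industry, roic in data:
    by_industry.setdefault(industry, []).append(roic)

industry_means = {ind: mean(roics) for ind, roics in by_industry.items()}

# Across-industry spread: how much industry averages (the "dots") differ.
across = pstdev(list(industry_means.values()))

# Within-industry spread: average dispersion inside each industry (the "bars").
within = mean(pstdev(roics) for roics in by_industry.values())

print(f"across-industry spread: {across:.1f} pts, within-industry: {within:.1f} pts")
# With numbers like these, within-industry dispersion (~8.3 pts) dwarfs
# across-industry dispersion (~1.7 pts) -- the pattern the chart describes.
```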

Mauboussin notes that while industry analysis matters when it comes to deciding where to compete, ultimately the right unit of analysis is not the industry level but the company level…

Industries with higher overall profitability have more companies that are profitable, but even within industries with low profitability, there are still companies that earn returns well above the cost of capital, and some whose profitability sits substantially above their industry’s norm.

Industry is not destiny. Great companies can emerge from mediocre industries.

3. Watch Out: Wall Street Is Finding New Ways to Slice and Dice Loans – Matt Wirz

Goldman Sachs this month sold $475 million of public asset-backed securitization, or ABS, bonds backed by loans the bank makes to fund managers that tide them over until cash from investors comes in. The first-of-its-kind deal is a lucrative byproduct of the New York bank’s push into loans to investment firms, such as these so-called capital-call lines.

Goldman’s new deal reflects two trends transforming financial markets. Increasingly large managers of private-debt and private-equity funds are moving up in the Wall Street pecking order, but they often need money fast. Banks, once again, are reinventing themselves to adapt…

…The transactions are relatively small for now. Still, they are intertwining banks (in Wall Street parlance, the sell side) with investors (the buy side) in ways that are new and difficult to parse for analysts, regulators and others…

…Capital-call loans function like credit cards for private-fund managers. The funds borrow money to invest quickly in private debt, private equity, real estate and infrastructure. They then “call up” cash commitments from clients in the funds, mostly institutions such as pensions and insurers, and repay the loans when the clients deliver.

Defaults on capital-call commitments from large institutions “have been historically close to 0%,” according to a marketing document for Goldman’s bond viewed by The Wall Street Journal. That makes the bonds extremely safe, said debt fund managers to whom Goldman offered the deal.
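As a rough illustration of the mechanics described above, here is a toy sketch of what bridging a deal with a capital-call line might cost a fund. All figures are hypothetical; the article does not disclose actual pricing terms.

```python
# Toy sketch of capital-call line mechanics: the fund borrows to invest
# immediately, then calls committed capital from its LPs and repays the bank.
# The rate and deal size below are hypothetical.

def bridge_interest(deal_size: float, annual_rate: float, days_outstanding: int) -> float:
    """Interest cost of bridging a deal with a capital-call loan."""
    return deal_size * annual_rate * days_outstanding / 365

# A fund sees a $50m deal and draws on its line rather than waiting for LP cash.
cost = bridge_interest(deal_size=50_000_000, annual_rate=0.07, days_outstanding=45)
print(f"bridge cost: ${cost:,.0f}")  # ~$431,507

# Once LPs deliver on their commitments (historically ~0% default for large
# institutions, per the marketing document quoted above), the loan is repaid.
```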

Even so, the shiny new products that banks are inventing have yet to be tested through market cycles…

…As Goldman and other banks make more capital-call loans to private-fund managers, they are also buying insurance from many of the same investment firms to protect against potential losses from corporate, consumer and real-estate loans. The so-called synthetic risk transfers, or SRTs, help banks reduce risk to meet new regulatory requirements and give fund managers investments to put into their wildly popular private-credit funds.

Some private-credit funds are developing another product, similar to capital-call lines, called net-asset-value, or NAV, loans, made to private-equity fund managers. Rising interest rates have made it harder for private-equity funds to sell companies they own to repay their limited partners. NAV loans help them start returning cash to clients until they can dispose of the companies. Many of the firms that manage private-equity funds also manage private-credit funds…

…The International Monetary Fund published a report in April warning that “interconnections and potential contagion risks many large financial institutions face from exposures to the asset class are poorly understood and highly opaque.”

4. Big Banks Cook Up New Way to Unload Risk – Matt Wirz

U.S. banks have found a new way to unload risk as they scramble to adapt to tighter regulations and rising interest rates…

…These so-called synthetic risk transfers are expensive for banks but less costly than taking the full capital charges on the underlying assets. They are lucrative for the investors, who can typically get returns of around 15% or more, according to the people familiar with the transactions.

U.S. banks mostly stayed out of the market until this autumn, when they issued a record quantity as a way to ease their mounting regulatory burden…

…In most of these risk transfers, investors pay cash for credit-linked notes or credit derivatives issued by the banks. The notes and derivatives amount to roughly 10% of the loan portfolios being de-risked. Investors collect interest in exchange for shouldering losses on up to about 10% of the pooled loans if borrowers default…

…The deals function somewhat like an insurance policy, with the banks paying interest instead of premiums. By lowering potential loss exposure, the transfers reduce the amount of capital banks are required to hold against their loans.
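Putting the article’s rough figures together, here is a stylized sketch of the economics from the investor’s side: a first-loss note sized at about 10% of a loan pool, paying the roughly 15% return the article cites, with the investor absorbing defaults up to the note’s size. The default and recovery scenarios are invented for illustration.

```python
# Stylized synthetic risk transfer (SRT) economics from the investor's side.
# The ~10% note size and ~15% coupon come from the article; the default and
# recovery assumptions are hypothetical.

def srt_investor_pnl(pool: float, note_pct: float, coupon: float,
                     default_rate: float, recovery: float) -> float:
    """One-year investor P&L on a first-loss credit-linked note."""
    note = pool * note_pct
    losses = min(pool * default_rate * (1 - recovery), note)  # capped at the note
    return note * coupon - losses

pool = 1_000_000_000   # $1bn reference loan portfolio
note_pct = 0.10        # note sized at ~10% of the pool
coupon = 0.15          # ~15% return cited in the article

# Benign year: 1% defaults, 40% recovery -> the coupon outruns the losses.
print(srt_investor_pnl(pool, note_pct, coupon, 0.01, 0.40))  # 9,000,000.0

# Stressed year: 8% defaults, 40% recovery -> losses swamp the coupon.
print(srt_investor_pnl(pool, note_pct, coupon, 0.08, 0.40))  # -33,000,000.0
```

The insurance analogy in the excerpt maps directly onto this sketch: the bank’s interest payments are the premiums, and the investor’s capped first-loss exposure is the policy payout.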

Banks globally will likely transfer risk tied to about $200 billion of loans this year, up from about $160 billion in 2022, according to a Wall Street Journal analysis of estimates by ArrowMark Partners, a Denver-based firm that invests in risk transfers…

…Banks started using synthetic risk transfers about 20 years ago, but they were rarely used in the U.S. after the 2008-09 financial crisis. Complex credit transactions became harder to get past U.S. bank regulators, in part because similar instruments called credit-default swaps amplified contagion when Lehman Brothers failed.

Regulators in Europe and Canada set clear guidelines for the use of synthetic risk transfers after the crisis. They also set higher capital charges in rules known as Basel III, prompting European and Canadian banks to start using synthetic risk transfers regularly.

U.S. regulations have been more conservative. Around 2020, the Federal Reserve declined requests for capital relief from U.S. banks that wanted to use a type of synthetic risk transfer commonly used in Europe. The Fed determined they didn’t meet the letter of its rules…

…The pressure began to ease this year when the Fed signaled a new stance. The regulator said it would review requests to approve the type of risk transfer on a case-by-case basis but stopped short of adopting the European approach.

5. Xi Stimulus Clues Found in Protest Data Showing Economic Stress – Rebecca Choong Wilkins

From a basement in Calgary, often accompanied by his pet cat, Lu Yuyu spends 10 hours a day scouring the internet to compile stats on social instability before they are scrubbed by China’s censors. The 47-year-old exile won’t reveal his exact method because it risks jeopardizing the overall goal of the project called “Yesterday,” which documents cases of group protests.

“These records provide an important basis for people to understand the truth of this period of history,” said Lu, who started the effort in January 2023 but didn’t make it public until he arrived in Canada a year ago. “I didn’t want to go to jail again,” he explained.

While Lu’s interests are political, his database — available for free — is among a growing number of metrics tracking dissent in China that investors are watching to figure out when Xi will open up the spigots to bolster growth. And some banks are now starting to develop similar products.

Morgan Stanley in September debuted a new gauge of distress that could be used to predict policy swings in China. Robin Xing, the bank’s chief China economist, says it’s nearing the low levels reached two other times in the past decade: in 2015, when Beijing took drastic steps to arrest a $7 trillion stock market rout, and in 2022 — the point at which the Communist Party abruptly dropped its strict Covid controls after simultaneous street protests in major cities…

…While China’s opaque political system makes it difficult to attribute policy moves to any single factor, investors and analysts who track instances of unrest say authorities may be especially sensitive to them when deciding on whether to roll out stimulus and how much to deploy. Economic protests have become more frequent in recent years as China’s youth unemployment rate soared and its housing crisis worsened…

…Getting a read on what’s happening on the ground is a challenge for academic researchers and finance professionals alike. Widespread censorship, heavy surveillance and suppression of dissent have made it hard to assess the depth of economic malaise in the country of 1.4 billion people…

…The rising prominence of dissent metrics is part of a blossoming industry of so-called alternative data aimed at decoding the state of the world’s second-biggest economy…

…Life has become tougher for many in recent years as pandemic lockdowns, a real estate crisis and trade tensions have slowed growth in China.

Incomes are still rising, but gains under Xi have been the weakest since the late 1980s. Faith in the country’s meritocracy also appears to be waning, leaving white-collar workers feeling increasingly disillusioned. Companies mired in fierce price wars are laying off employees, while college graduates are struggling to find work.

China Dissent Monitor’s data shows that cases of dissent rose 18% in the second quarter compared to the same period last year, with the majority of events linked to financial issues.

“If you look at everything regarding social well-being — be it wage growth, urban unemployment rate, consumer confidence and even tracking labor incidents — I think it’s deteriorating,” Morgan Stanley’s Xing said.

Although protests aren’t particularly rare in China, they’re typically small scale, uncoordinated with other places and lacking in overt criticism of Beijing. Still, political criticism can bubble up, usually in cases linked to rural land actions where the local governments find themselves the target of discontent, according to China Dissent Monitor research…

…Even so, there are few signs that the unrest is coalescing around a particular instance of perceived injustice or a single issue. Unlike the Tiananmen Square protests and unrest in the late 1980s, current dissent doesn’t present an existential threat to the regime. A more likely response is therefore a dose of economic medicine that will keep the market guessing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.
