All articles

What We’re Reading (Week Ending 16 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 16 February 2025:

1. The real threat to American prosperity – Daron Acemoglu

American economic success in the era after the second world war depended on innovation, which in turn relied on strong institutions that encouraged people to invest in new technologies, trusting that their inventiveness would be rewarded. This meant a court system that functioned, so that the fruits of their investments could not be taken away from them by expropriation, corruption or chicanery; a financial system that would enable them to scale up their new technologies; and a competitive environment to ensure that incumbents or rivals couldn’t block their superior offerings. These kinds of institutions matter under all circumstances, but they are especially critical for economies that rely heavily on innovation.

Stability requires that people trust institutions, and institutions become more likely to fail when people think they are failing. This is what explained the sudden meltdown of US economic dynamism…

…Economic growth in the US was rapid for most of the post-1980 era, but about half of the country didn’t benefit much from this. In a pattern unparalleled in the industrialised world, Americans with less than a college degree experienced a real (inflation-adjusted) decline in their wages between 1980 and 2013, while those with postgraduate degrees experienced robust growth…

…Many Americans felt that they no longer had much of a political voice. In surveys, more than 80 per cent started saying that politicians did not care about what people like them thought…

…But perhaps the most important determinant of this dwindling trust in institutions was that the US had become much more polarised, making it increasingly difficult to satisfy the majority of the voters. The flames of grievance were powerfully fanned by social media, which deepened polarisation. This then further reduced trust in democracy and in public institutions. Worse, with intensifying distrust, something essential to democracy — compromise — became more and more challenging.

By the 2010s something unprecedented was happening. Ever since data on this had been collected, an overwhelming majority of Americans saw democracy as the “only game in town” and gave it strong support relative to alternatives such as monarchy, military dictatorship or rule by unelected experts. That began changing, especially among young people, who reported growing scepticism about democracy and much more lukewarm support for these institutions.

The cracks were visible long before Trump was first elected in November 2016. He was in many ways a symptom of those troubled times…

…Turning points are useful to locate because they are symbolic of deeper causes of social change. In hindsight, an obvious turning point came just before Trump’s second inauguration. Biden, who had four years ago made defence of democracy a main agenda item, pre-emptively pardoned his family and a number of politicians and public servants, including former Republican Congresswoman Liz Cheney and the former medical adviser to the president, Anthony Fauci. The optics were clear and ugly: Biden and his camp by this point had so little trust in US institutions that they thought only such pre-emptive pardons could stop Trump’s retribution (and making the reality worse than the optics, it was only the enemies of Trump who were close to Biden that counted)…

…While Trump’s domestic agenda intensified the loss of trust in US institutions and expertise in government, his relations with foreign allies did the same for the so-called rules-based order. Of course, there was some truth to critics’ contention that these rules were designed for America’s benefit and that when they didn’t serve it well, they were bent or broken by US politicians, diplomats and companies. But the world was not ready for Trump’s tariffs, threats and military expansionist rhetoric towards Panama, Greenland and even Canada.

This set the scene for a series of catastrophic governmental failures. With morale gone and key personnel fired, the US state was ill-equipped to deal with emergencies. When new pandemics arrived, the response was haphazard, and unpreparedness cost tens of thousands of lives. The few remaining independent media sources uncovered a glaring and dangerous lack of oversight of critical infrastructure, including nuclear reactors and cyber security.

But the real extent of the damage became clear only with the tech meltdown of 2030. Economists and historians have now shown that a lot of this was the outcome of institutional failures and growing concentration in the industry. After Trump lifted all roadblocks ahead of AI acceleration and cryptocurrency speculation, there was initially a boom in the tech sector. But within a few years the industry had become even more consolidated than before, and both insiders and outsiders came to realise that only companies favoured by the administration could survive…

…By late 2029, many commentators were questioning what was going on in the tech industry, which had invested heavily in AI but had little to show for this in terms of innovation or productivity growth. There was huge enthusiasm and investment in cryptoassets, which were one by one revealed to be scams costing regular Americans billions of dollars. The AI empire had no clothes by this point, because the competitive energy had been sucked out of it. It took a while longer for the market to realise that, but when it did, a massive stock market crash followed.

This is the kind of shock that a dynamic economy can recover from, with new innovators coming in, government experts using fiscal policy and other interventions to prevent the crash from translating into a deep recession, and all sorts of people still believing in their ability to make a difference. But once malaise about US institutions had sunk in and experts were no longer around in the government, the crash became a recession and then a depression.

The depression continued and intensified. Many now understood that institutions needed to be fixed, but after the damage that Biden and Trump had done and the polarisation that had reached even higher peaks, rebuilding them proved difficult. American innovators and scientists started emigrating to Canada and the European Union. Some even went to China.

America’s collapse thus followed Hemingway’s famous line on bankruptcy. It happened gradually, as shared prosperity, high-quality public services and the operation of democratic institutions weakened, and then suddenly, as Americans stopped believing in those institutions.

2. The Drug Industry Is Having Its Own DeepSeek Moment – David Wainer

In 2020, less than 5% of large pharmaceutical transactions worth $50 million or more upfront involved China. By 2024, that number had surged to nearly 30%, according to DealForma. A decade from now, many drugs hitting the U.S. market will have originated in Chinese labs…

…China’s biotech boom mirrors its rise in tech. In both cases, China has moved up the value chain, from manufacturing goods to becoming a more sophisticated hub for innovation, competing in industries once dominated by the U.S. There are several reasons for the industry’s growth. For one, many top scientists trained in the U.S. have returned to China over the past decade, fueling the emergence of biotech hubs around Shanghai. And just as DeepSeek built a formidable chatbot—allegedly on a lean budget with limited access to semiconductors—Chinese biotech companies are also scrappier, capitalizing on a highly skilled, lower-cost workforce that can move faster.

Additionally, companies can conduct clinical trials at a fraction of what they would cost in the U.S., while recent changes in the Chinese regulatory system have streamlined and accelerated the approval process to get a study started. 

For now, much of China’s biotech innovation is incremental rather than groundbreaking. Many companies focus on improving existing drugs—tweaking the chemistry, enhancing efficacy or differentiating them in key ways.

But Chinese innovation is steadily improving and is already starting to disrupt the U.S. drug-development ecosystem…

…Chief executives of large pharmaceutical companies are broadening their horizons. Why spend $10 billion acquiring a U.S. biotech with a mid-stage drug when a similar molecule can be licensed from China for a fraction of the price?…

…In late 2024, after scouring the market for obesity assets—presumably eyeing U.S. companies like Viking Therapeutics, which trades at a market value of around $3.7 billion—Merck chose to license an oral GLP-1 drug from China’s Hansoh Pharma. The deal: $112 million upfront, with potential milestone payments of up to $1.9 billion…

…These “bargain” deals are great for Big Pharma. But for U.S. biotech companies—and their venture-capital backers—they are creating real challenges. Investors increasingly struggle to value early-stage biotechs because it is difficult to predict what competition might emerge from China.

3. All of us could be wrong about DeepSeek and OpenAI – Chin Hui Leong

China’s DeepSeek has unleashed a new wave of AI hype.

But amid the noise, one thing is clear: everyone has an opinion, and no one has the answers…

…When Apple (NASDAQ: AAPL) unveiled its iPhone in 2007, many analysts dismissed its hardware-focused strategy.

Their argument hinged on a familiar pattern: over time, consumer hardware tends to become commoditised. If the iPhone becomes popular, they reasoned, its unique appeal would fade as competitors come in with cheaper imitations.

This wasn’t a baseless concern.

The personal computer (PC) era, the previous dominant computing platform, was marked by fierce price competition among hardware manufacturers. Even Apple’s Macintosh PC had fallen victim to the cutthroat competition in the 1980s and 1990s.

In short, the precedent was clear: hardware eventually becomes a commodity.

However, this time, things would be different.

Today, nearly 18 years later, Apple boasts over 2.35 billion devices in circulation, generating upwards of US$200 billion in annual iPhone revenue. Clearly, the popular smartphone has defied the conventional wisdom of hardware commoditisation.

Therein lies a lesson.

When considering the future of AI, the iPhone’s success serves as a crucial reminder: be wary of preconceived notions…

…Too often, we fall prey to the “Highlander” fallacy, assuming that one side can only win if the other loses.

This zero-sum mindset blinds us to a range of possible future scenarios.

Think about the mobile operating system (OS) market.

On one side, you’ve got Apple’s closed iOS, with 2.35 billion devices, and on the other, Google’s open-source Android, with a massive three billion devices.

Crucially, they’ve each found their own area to thrive in.

Apple continues to dominate in the premium smartphone market, while Android is all about getting Google services out there.

Going back to AI models: can OpenAI replicate this coexistence, thriving alongside open-source models?

Could we see large, proprietary models handling general use cases while smaller, specialised models address niche needs? Could there be a main AI model, featuring a supporting cast of smaller models?

Your guess is as good as mine…

…Do you know who were among the biggest “losers” in the shift from desktop to mobile?

In my book, it may be Microsoft and Nvidia.

Nvidia tried to break into the smartphone market but threw in the towel when it failed to gain a foothold. Microsoft, on the other hand, had long held a monopoly in the desktop OS market but failed to extend its dominance to mobile devices.

But are we really going to brand Microsoft and Nvidia as losers, even though they got the short end of the stick in the smartphone arena?

Today, both are at the forefront of the AI revolution, proving that setbacks don’t preclude future triumphs…

…Amid the noise, it’s important to remember that ChatGPT is barely two years old, a stark reminder of the industry’s infancy.

If history teaches us anything, we may want to put our egos aside and accept that there are developments that cannot be known ahead of time.

The AI landscape is still being written.

4. Deep Research and Knowledge Value – Ben Thompson

I found a much more beneficial use case the next day. Before I conduct a Stratechery Interview I do several hours of research on the person I am interviewing, their professional background, the company they work for, etc.; in this case I was talking to Bill McDermott, the Chairman and CEO of ServiceNow, a company I am somewhat familiar with but not intimately so. So, I asked Deep Research for help…

…I found the results eminently useful, although the questions were pretty mid; I did spend some time doing some additional reading of things like earnings reports before conducting the Interview with my own questions. In short, it saved me a fair bit of time and gave me a place to start from, and that alone more than paid for my monthly subscription.

Another compelling example came in researching a friend’s complicated medical issue; I’m not going to share my prompt and results for obvious reasons. What I will note is that this friend has been struggling with this issue for over a year, and has seen multiple doctors and tried several different remedies. Deep Research identified a possible issue in ten minutes that my friend has only just learned about from a specialist last week; while it is still to be determined if this is the answer he is looking for, it is notable that Deep Research may have accomplished in ten minutes what has taken my friend many hours over many months with many medical professionals.

It is the final example, however, that is the most interesting, precisely because it is the question on which Deep Research most egregiously failed. I generated a report about another friend’s industry, asking for the major players, supply chain analysis, customer segments, etc. It was by far my most comprehensive and detailed prompt. And, sure enough, Deep Research came back with a fully fleshed out report answering all of my questions.

It was also completely wrong, but in a really surprising way. The best way to characterize the issue is to go back to that famous Donald Rumsfeld quote:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.

The issue with the report I generated — and once again, I’m not going to share the results, but this time for reasons that are non-obvious — is that it completely missed a major entity in the industry in question. This particular entity is not a well-known brand, but is a major player in the supply chain. It is a significant enough entity that any report about the industry that did not include them is, if you want to be generous, incomplete.

It is, in fact, the fourth categorization that Rumsfeld didn’t mention: “the unknown known.” Anyone who read the report that Deep Research generated would be given the illusion of knowledge, but would not know what they think they know…

…What Deep Research reveals is how much more could be known. I read a lot of things on the Internet, but it’s not as if I will ever come close to reading everything. Moreover, as the amount of slop increases — whether human or AI generated — the difficulty in finding the right stuff to read is only increasing. This is also one problem with Deep Research that is worth pointing out: the worst results are often, paradoxically, for the most popular topics, precisely because those are the topics that are the most likely to be contaminated by slop. The more precise and obscure the topic, the more likely it is that Deep Research will have to find papers and articles that actually cover the topic well…

…There is a good chance that Deep Research, particularly as it evolves, will become the most effective search engine there has ever been; it will find whatever information there is to find about a particular topic and present it in a relevant way. It is the death, in other words, of security through obscurity. Previously we shifted from a world where you had to pay for the news to the news being fed to you; now we will shift from a world where you had to spend hours researching a topic to having a topic reported to you on command.

Unless, of course, the information that matters is not on the Internet. This is why I am not sharing the Deep Research report that provoked this insight: I happen to know some things about the industry in question — which is not related to tech, to be clear — because I have a friend who works in it, and it is suddenly clear to me how much future economic value is wrapped up in information not being public. In this case the entity in question is privately held, so there aren’t stock market filings, public reports, barely even a webpage! And so AI is blind…

…That, by extension, is why AI’s like Deep Research are one of the most powerful arguments yet for prediction markets. Prediction markets had their moment in the sun last fall during the U.S. presidential election, when they were far more optimistic about a Trump victory than polls. However, the potential — in fact, the necessity — of prediction markets is only going to increase with AI. AI’s capability of knowing everything that is public is going to increase the incentive to keep things secret; prediction markets in everything will provide a profit incentive for knowledge to be disseminated, by price if nothing else.

It is also interesting that prediction markets have become associated with crypto, another technology that is poised to come into its own in an AI-dominated world; infinite content generation increases the value of digital scarcity and verification, just as infinite transparency increases the value of secrecy. AI is likely to be the key to tying all of this together: a combination of verifiable information and understandable price movements may be the only way to derive any meaning from the slop that is slowly drowning the Internet.

This is the other reality of AI, and why it is inescapable. Just as the Internet’s transparency and freedom to publish has devolved into torrents of information of questionable veracity, requiring ever more heroic efforts to parse, and undeniable opportunities to thrive by building independent brands — like this site — AI will both be the cause of further pollution of the information ecosystem and, simultaneously, the only way out…

…Secrecy is its own form of friction, the purposeful imposition of scarcity on valuable knowledge. It speaks to what will be valuable in an AI-denominated future: yes, the real world and human-denominated industries will rise in economic value, but so will the tools and infrastructure that both drive original research and discoveries, and the mechanisms to price it. The power of AI, at least on our current trajectory, comes from knowing everything; the (perhaps doomed) response of many will be to build walls, toll gates, and marketplaces to protect and harvest the fruits of their human expeditions.

5. AI and the Mag 7 – Daniel Rasmussen

Last summer, Goldman Sachs was estimating a $1T spend on AI capex in the coming years, and the numbers have only gone up since then, with most of it concentrated in the Mag 7 that dominate the public markets…

…It’s necessary as an investor to at least consider how these bets might go awry…

…The skeptic’s case starts with the possibility that the Mag 7 is suffering from a classic case of “competition neglect,” where “subjects in competitive settings overestimate their own skill and speed in responding to common observable shocks and underestimate the skill and responsiveness of their competitors,” as Robin Greenwood and Samuel Hanson put it in their paper, “Waves in Ship Prices and Investment.” When shipping prices increase, shipping companies all decide to invest in ships—after all, their models are all saying these investments will be profitable at current rates. That investment not only drives up the price of building new ships, it causes a glut of supply once they are built, resulting in poor returns on these pro-cyclical investments, as low as -36%, according to Greenwood and Hanson. Meanwhile, those who invest at the bottom of that cycle—when current shipping prices are low and there’s no one else building at the shipyards—earn returns as high as 24%.

Rather than ships, today’s AI capex “is a euphemism for building physical data centers with land, power, steel and industrial capacity,” as Sequoia Capital’s David Cahn puts it…

…Project Stargate, the $500 billion venture between OpenAI, SoftBank, and the federal government, is the culmination of this race to convert tech companies into industrial manufacturers. But even winning this race could be a Pyrrhic victory. Capex at these levels is an asset-heavy business model. Asset-heavy business models historically have lower returns on capital, especially when sunk costs meet increased competition.

In this scenario, perhaps Stargate is the AI equivalent of overinvesting in new ships at the same moment that everyone else is overinvesting in ships, leading to a supply glut, price drops, and poor investment returns…

…We still don’t have many economical use cases for AI. Even in low-compute mode, a single prompt on ChatGPT’s o3 model costs $20 to perform. High-compute mode can cost much more…

…While Anthropic CEO Dario Amodei is confident AI can beat humans at most things in 2-3 years, that doesn’t mean we will all be using AI that way. There’s a difference between what can be automated and what is cost-effective to automate. Daron Acemoglu, Institute Professor at MIT, estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years. An MIT research paper looked at jobs in non-farm businesses and found 36% of tasks in jobs they studied could be automated by AI vision models, but only 8% were economically worth automating.

Scaling laws are an assumption that brute force will get us more and more powerful AI. For AI investors, it’s a playbook to outspend the competition, win the market, and trust that, eventually, more infrastructure and better chips will bring costs down and make more tasks economical to automate. But shooting for scale and achieving high ROI are not usually achieved at the same time.

Shortly after Stargate was announced, it was soon overshadowed by bigger news about China’s DeepSeek model. While the exact specs are a subject of debate, DeepSeek shattered the cost-to-performance expectations that investors and the Mag 7 have been working from…

…We’ve only just entered the true product-building era for AI. How many people today think of the internet as a product? The internet is not a single thing but a collection of services and products on common digital infrastructure (e.g., TCP/IP protocol, which was built by DARPA with US taxpayer money and isn’t a business anyone is making money on). Similarly, AI models could, like other commodities, utilities, and infrastructure projects, become a part of everything we use rather than a distinct product. Usage patterns are starting to reflect this: we are using these models less directly and more through other services built on top of them.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.

Company Notes Series (#6): Azeus Systems Holdings

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first five editions in the series can be found here, here, here, here, and here. Please give us your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes

Data as of 24 July 2023

Notes on Azeus

Place of listing and timing of IPO; Location of HQ

  • A leading provider of IT products and services, Azeus was listed on the Main Board of the SGX-ST in October 2004.
  • Principal office: 22/F Olympia Plaza, 255 King’s Road, North Point, Hong Kong

FY2018 annual report

  • Azeus was the first company in Hong Kong to be appraised at the highest level (Level 5) of the CMMI-SW model in November 2003.
  • Azeus Products segment more than doubled revenue in FY2018 (financial year ended 31 March 2018), from HK$11.9 million in FY2017 to HK$24.4 million. Growth was due to the Azeus Convene and AzeusCare SaaS (software-as-a-service) products, as well as professional services.
  • At the start of July 2017, Azeus was awarded the Standing Offer Agreement for Quality Professional Services 4 (SOA-QPS4) for the fifth consecutive term, enabling the company to tender for various Hong Kong government IT professional services contracts of up to HK$15 million each. Following this, Azeus clinched a series of IT projects from the Hong Kong Government amounting to over HK$133.4 million, which will be progressively recognised over the next two to ten years following their implementation in FY2019 and FY2020.
  • In the course of FY2018, Azeus saw its investment in the expansion of its global product sales team pay off. Azeus made good headway in acquiring new customers for the Azeus Products segment, which resulted in higher sales for Azeus Convene and AzeusCare. Azeus Products accounted for 23.8% of Azeus’s total revenue, compared to 12.1% in FY2017.
  • The Maintenance and Support Services segment was Azeus’ largest revenue contributor in FY2018, accounting for HK$46.0 million, or approximately 45.0% of total revenue. The segment registered a 13.7% decline in revenue from HK$53.3 million in FY2017 due to the expiry of a major maintenance and support outsourcing contract in the beginning of the year.
  • The IT Services segment, which recorded a lower revenue of HK$31.9 million in FY2018 compared to HK$32.7 million in FY2017, was 31.2% of Azeus’s total revenue. This was due to a decrease in sales of third-party hardware and software by HK$0.8 million in FY2018. Excluding the third-party hardware and software sales, Azeus was able to achieve the same amount of IT Services revenue as compared to FY2017.
  • Entering FY2019, management believed that Azeus’ core business fundamentals remained sound and that the company was in a good position to grow its business by building on the progress made in the prior year, particularly for the products business, which is an integral growth engine for Azeus in the years ahead.
  • Lee Wan Lik (managing director and founder) and his wife, Lam Pui Wan (executive director), controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 30 May 2018.

FY2019 annual report

  • In FY2019, Azeus delivered total revenue of HK$147.8 million, a 44.4% increase from HK$102.4 million in FY2018. The growth was mainly supported by increased sales of Azeus’s two proprietary SaaS products – Azeus Convene and AzeusCare under the Azeus Products segment – as well as professional IT services arising from the completion of higher value implementation service projects.
  • Revenue for the Azeus Products segment more than doubled to HK$49.9 million in FY2019 from HK$24.4 million in FY2018. As a result, the segment was 33.8% of Azeus’s revenue in FY2019, up from 23.8% in FY2018. 
  • In September 2018, Azeus signed a contract valued at up to £1.42 million with a local council in the United Kingdom for the supply, support and maintenance of a Social Care Case Management System with AzeusCare. The amount was progressively recognised over the seven years of the contract. The contract win added to Azeus’s track record of public sector projects in the UK, signifying that AzeusCare had been chosen as a preferred IT solution for social care in the country.
  • Professional IT Services revenue expanded 25.6% from HK$78.0 million in FY2018 to HK$97.9 million in FY2019. This segment is made up of two core business areas, IT services and Maintenance and Support Services, of which both performed well. Revenue from IT services increased 48.7% from HK$31.6 million in FY2018 to HK$47.0 million in FY2019 from the completion of higher value implementation service projects – its contribution to Azeus’s total revenue for FY2019 increased to 31.8% from 30.9% in FY2018.
  • Revenue from Maintenance and Support Services increased by 7.5% from HK$46.0 million in FY2018 to HK$49.5 million in FY2019, due to an increase in the number of projects in production and under maintenance period. The segment represented 33.4% of Azeus’s total revenue in FY2019.
  • IT Services is project-based and revenue can be lumpy; Maintenance and Support Services is a stable earner.
  • Entering FY2020, management was focused on growing stable recurrent revenue from the Azeus Products business segment. Management wanted to aggressively build and strengthen sales and marketing capacity to secure greater market share, as they saw the Azeus Products business increasingly serving as the growth engine of Azeus.
  • Lee Wan Lik (managing director and founder) and his wife, Lam Pui Wan (executive director), controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 31 May 2019.
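The revenue arithmetic in these FY2019 notes is easy to sanity-check. A minimal sketch (figures in HK$ millions, taken directly from the notes above; small rounding gaps versus the reported percentages are expected, since the annual report works from unrounded figures):

```python
# FY2018 and FY2019 totals quoted in the notes (HK$ millions).
total_fy2018 = 102.4
total_fy2019 = 147.8

# FY2019 segment revenues quoted in the notes (HK$ millions).
segments_fy2019 = {
    "Azeus Products": 49.9,
    "IT Services": 47.0,
    "Maintenance and Support Services": 49.5,
}

# Year-over-year growth: (147.8 - 102.4) / 102.4 ≈ 44.3%,
# in line with the ~44.4% reported.
growth = (total_fy2019 - total_fy2018) / total_fy2018

# Each segment's share of FY2019 total revenue,
# e.g. Azeus Products: 49.9 / 147.8 ≈ 33.8%, matching the notes.
shares = {name: rev / total_fy2019 for name, rev in segments_fy2019.items()}

print(f"Total revenue growth: {growth:.1%}")
for name, share in shares.items():
    print(f"{name}: {share:.1%} of FY2019 revenue")
```

The segment shares sum to slightly under 100% because the three quoted segment figures (HK$146.4 million combined) do not fully account for the HK$147.8 million total.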

FY2020 annual report

  • Azeus’s flagship product, Azeus Convene, is a leading paperless meeting solution used by directors and executives in various industries, across more than 100 countries. Through its user-friendly and intuitive functionality, Azeus Convene has enabled organisations to conduct meetings in a convenient and efficient manner, by eliminating the time and cost required for printing large amounts of hardcopies. To ensure data security, Azeus Convene is equipped with advanced security features and end-to-end encryption. In addition, Azeus Convene offers 24/7 support to all its customers worldwide. The Group has also introduced a virtual AGM solution, AGM@Convene, in response to the shifting trend towards eAGMs as a result of the COVID-19 restrictions.
  • Azeus’s proprietary social care system, AzeusCare, has also been adopted by various local councils in the United Kingdom. AzeusCare is an integrated case management system that provides a wide range of solutions for managing and delivering social care for both children and adults. In particular, AzeusCare supports the delivery of the requirements of the UK Care Act 2014 with a comprehensive set of tools to manage both the case management and finance requirements under a fully integrated system.
  • Towards the end of FY2020, the COVID-19 pandemic impacted countries across the world. Amidst the pandemic, management identified opportunities to boost the adoption of Azeus Convene and launched the electronic annual general meeting (“e-AGM”) product, which is designed to enable listed companies to hold annual general meetings from multiple sites, while ensuring that shareholders’ rights remain protected. Azeus experienced a very encouraging response from listed companies, enterprises, business associations and nonprofit organisations with the launch of e-AGM. In June 2020, approximately 60 customers conducted their AGMs using Azeus’s e-AGM solution.
  • Azeus achieved another year of record-high revenue in FY2020, mainly driven by the Azeus Products segment, which gained strong momentum during the year. Azeus Convene and AzeusCare continued to contribute a steady growing stream of recurring income as these products and their associated professional services were increasingly adopted and implemented by our customers. Azeus’s total revenue was HK$181.2 million, up 22.6% from FY2019. Notably, revenue for the Azeus Products segment surged 68.1% to HK$83.9 million in FY2020 from HK$49.9 million in FY2019, accounting for 46.3% of Azeus’s total revenue, up from 33.8% in FY2019.
  • As part of its expansion strategy, Azeus bolstered its sales force in the year to ramp up customer acquisition and increase penetration among existing customers. As a result, Azeus incurred higher selling and marketing costs of HK$23.4 million, an increase of 30.0% from HK$18.0 million in FY2019.
  • Professional IT Services revenue was largely unchanged at HK$97.3 million in FY2020. The segment comprises three business areas: system implementation and enhancement; sale of third-party hardware and software; and maintenance and support services. For FY2020, System implementation and enhancement decreased by 22.9% to HK$36.2 million mainly due to fewer projects and enhancements secured during the year, while Maintenance and Support Services, which contributes a stream of recurring income, decreased by 8.5% to HK$45.3 million due to a decrease in the number of ongoing maintenance projects. The decreases were partially offset by a higher sale of third-party hardware and software of HK$15.8 million in FY2020 as compared to HK$1.5 million in FY2019, mainly attributable to the delivery and acceptance of an implementation project completed during the year.
  • In FY2020, approximately 70% of Azeus’s revenue was recurring in nature. Management wanted to build and expand sales and marketing capacity to secure greater market share and address the growing demand for IT solutions amid the accelerating rate of digitalisation globally.
  • In Azeus’s FY2020 AGM in August 2020, it showcased several key functions of the e-AGM solution, including live voting and an interactive video question and answer session.
  • Lee Wan Lik (managing director, chairman, and founder) and his wife, Lam Pui Wan (executive director), controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 31 May 2020.

FY2021 annual report

  • Azeus’s flagship product, Azeus Convene, is a leading paperless meeting solution used by directors and executives in various industries, across more than 100 countries. Through its user-friendly and intuitive functionality, Azeus Convene has enabled organisations to conduct meetings in a convenient and efficient manner, by eliminating the time and cost required for printing large amounts of hardcopies. To ensure data security, Azeus Convene is equipped with advanced security features and end-to-end encryption. In addition, Azeus Convene offers 24/7 support to all its customers worldwide. The Group has also introduced a virtual AGM solution, AGM@Convene, in response to the shifting trend towards eAGMs as a result of the COVID-19 restrictions.
  • Azeus’s proprietary social care system, AzeusCare, has also been adopted by various local councils in the United Kingdom. AzeusCare is an integrated case management system that provides a wide range of solutions for supporting the delivery of services for managing and delivering social care for both children and adults. In particular, AzeusCare supports the delivery of the requirements of the UK Care Act 2014 with a comprehensive set of tools to manage both the case management and finance requirements under a fully integrated system.
  • Azeus recorded a 1.7% decrease in revenue to HK$178.1 million in FY2021, from HK$181.2 million in FY2020.
  • Azeus started to market AGM@Convene internationally and achieved success in Singapore, the Philippines and Hong Kong.
  • Revenue from Azeus Products increased by HK$29.3 million, or 34.9%, from HK$83.9 million in FY2020 to HK$113.2 million in FY2021, as Azeus made good progress in expanding its customer and revenue base.  Azeus Products accounted for 63.6% of Azeus’s total revenue, compared to 46.3% in FY2020. Revenue from Azeus Products came from three proprietary SaaS products – Azeus Convene, AzeusCare, and AGM@Convene – and associated professional services.
  • IT Services, which comprises three core business areas (system implementation and enhancement; sale of third-party hardware and software; and maintenance and support services), recorded a 33.3% decrease to HK$64.9 million as a result of fewer projects and enhancements secured in FY2021. Revenue from Systems implementation and enhancement decreased by 47.7% to HK$19.0 million in FY2021 while revenue from Sale of third-party hardware and software decreased by 96.2% from HK$15.8 million to HK$0.6 million, as the majority of the projects completed in FY2021 required Azeus’s customisation services. Revenue from Maintenance and support services remained flat in FY2021 at HK$45.3 million.
  • As management continued to invest in the Azeus Products business segment, Azeus’s total research and development costs increased to HK$36.8 million in FY2021, 49.0% higher than in FY2020. Likewise, as Azeus pursued subscriber growth by expanding the sales teams, selling and marketing expenses increased by 36.3% to HK$31.9 million in FY2021 as compared to HK$23.4 million in FY2020.
  • Azeus’s management team respects shareholders’ rights. During Azeus’ AGM in August 2020, the company was probably the first Singapore-listed company to hold a virtual meeting in 2020 with a live Q&A and live voting. Exiting FY2021, management expected more listed companies to progressively follow its lead and improve their engagement with shareholders. 
  • Management was cautiously optimistic about the outlook for FY2022.
  • Lee Wan Lik (managing director, chairman, and founder) and his wife, Lam Pui Wan (executive director), controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 31 May 2021.

FY2022 annual report

  • Azeus’s flagship product, Convene, is a leading paperless meeting solution used by directors and executives in various industries, across more than 100 countries. Through its user-friendly and intuitive functionality, Convene has enabled organisations to promote and uphold governance through a single secure technology platform to manage and conduct formal or structured meetings – physical, remote, or hybrid – and streamline the workflows around them. This boosts productivity, accountability, and collaboration within and beyond the boardroom. To ensure data security, Convene is equipped with advanced security features and end-to-end encryption. In addition, Convene offers 24/7 support to all its customers worldwide. The Group has also introduced a virtual AGM solution, Convene AGM, in response to the shifting trend towards eAGMs as a result of the COVID-19 restrictions.
  • Azeus’s proprietary social care system, AzeusCare, has also been adopted by various local councils in the United Kingdom. AzeusCare is an integrated case management system that provides a wide range of solutions for supporting the delivery of services for managing and delivering social care for both children and adults. In particular, AzeusCare supports the delivery of the requirements of the UK Care Act 2014 with a comprehensive set of tools to manage both the case management and finance requirements under a fully integrated system. 
  • In FY2022, Azeus secured its single largest contract of over HK$1.0 billion for the implementation and maintenance of the Hong Kong government’s Central Electronic Recordkeeping System with its product, Convene Records, which was expected to further enhance Azeus’s recurring income stream. This was a show of confidence from the Hong Kong Government in the capability of Azeus in delivering “All-of-Government” large scale projects, and in the software products designed and developed by Azeus. An expected 75% of the total estimated contract value would be for the license and maintenance fees of the Convene Records software. The design and implementation work commenced in May 2022 – management expected a majority of the revenue to be contributed from FY2023 until FY2037.
    • More details from other sources: The contract has a total implementation price of HK$633.9 million and the revenue from development, deployment and licensing would last from FY2023 till FY2027; the contract also has maintenance and support value for the system of HK$381.4 million and this maintenance and support revenue is expected to start in FY2027 and last 10 years. 
  • Azeus recorded a 22.2% increase in revenue to HK$217.7 million, up from HK$178.1 million in FY2021, driven by strong growth from both its Azeus Products and IT Services segments.
  • Azeus Products, the company’s growth engine, continued to make good strides globally, as it expanded into more territories and added new product features and modules. Revenue from Azeus Products increased by 23.1%, from HK$113.2 million in FY2021 to HK$139.4 million in FY2022, and accounted for 64.1% of Azeus’s total revenue.
  • The IT Services segment grew revenue by 20.5% from HK$64.9 million in FY2021 to HK$78.2 million in FY2022, as Azeus secured more projects and undertook project implementation and maintenance work. More than 60% (HK$47.9 million) of this IT Services revenue was from maintenance and support services of existing systems which are long-term contracts. The recurring revenue from maintenance and support, which accounted for 22.0% of Azeus’s revenue in FY2022, increased by 5.7% to HK$47.9 million from HK$45.3 million in FY2021. Revenue from systems implementation and enhancement increased by HK$11.3 million or 59.5% to HK$30.2 million in FY2022. 
  • Exiting FY2022, management thought Azeus was well-placed to capitalise on the opportunities ahead because of its strong product offerings and expertise in delivering sophisticated IT systems. Management also wanted to continue investing in and grow the Azeus Products segment. Management was excited about Azeus Products’ growth potential, with the growth of the flagship product, Convene, and new product offerings such as Convene Records.
  • Lee Wan Lik (executive chairman and founder) controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 1 June 2022 (the shares include those of Lam Pui Wan).
  • Lee Wan Lik’s wife, Lam Pui Wan, passed away on 6 May 2022.
  • Lee Wan Lik stepped down as managing director and CEO on 15 March 2022 but remained as executive chairman.
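
The CERKS contract figures in the notes above can be sanity-checked with a quick sketch (figures in HK$ millions, taken from the bullet points; Python is used purely for illustration):

```python
# Sanity check of the CERKS contract split reported above (HK$ millions).
# The two phases sum to ~HK$1,015.3m, which rounds to the HK$1.02bn
# headline figure ("over HK$1.0 billion").
implementation = 633.9  # development, deployment and licensing, FY2023-FY2027
maintenance = 381.4     # maintenance and support, ~10 years from FY2027

total = implementation + maintenance
print(f"Total contract value: HK${total:,.1f}m")  # HK$1,015.3m
```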

FY2023 annual report

  • Azeus has developed:
    • Convene – the board portal software that enables directors and executives with best-practice meetings to achieve better corporate governance
    • ConveneAGM – a virtual/hybrid AGM platform with live voting, live Q&A, and zero-delay broadcast that transforms the landscape for shareholders’ and members’ meetings through physical, remote or hybrid AGMs
    • Convene in Teams (CiT) – a Teams-based meeting solution that seamlessly integrates with Microsoft 365 for a better leadership meeting experience in Teams.
    • Convene ESG – an end-to-end reporting software that digitises the Economic, Social and Governance (“ESG”) reporting journey of regulated companies to comply with the mandated local standards and global frameworks. 
    • Convene Records – a document management solution that automates the management of electronic records and documents, and facilitates information sharing in the organization; the product includes a configurable workflow management feature for approval processes, and supports the filing, retrieval, distribution, archiving and version control of records and documents.
    • AzeusCare – an integrated case management system that provides a wide range of solutions for supporting the delivery of services for managing and delivering social care for both children and adults. In particular, AzeusCare supports the delivery of the requirements of the UK Care Act 2014 with a comprehensive set of tools to manage both the case management and finance requirements under a fully integrated system. It has been adopted by various local councils in the United Kingdom.
  • Azeus recorded a 16.2% increase in revenue to HK$252.9 million in FY2023, from HK$217.7 million in FY2022, driven mainly by growth from the Azeus Products segment. The Azeus Products segment benefited from Azeus’s marketing efforts, increased its presence in more countries, and expanded its product offering.
  • The HK$1.02 billion Central Electronic Recordkeeping System (CERKS) project – lasting over 53 months – moved into the deployment phase in FY2023 and management expected it to contribute to the product business in the coming years.
  • Azeus Products accounted for 69.3% of Azeus’s total revenue in FY2023. Revenue from Azeus Products increased by 25.8% from HK$139.4 million in FY2022 to HK$175.3 million in FY2023, mainly attributable to the revenue contribution from Convene and Convene Records under the CERKS contract.
  • IT Services, which comprises two core business areas (systems implementation and enhancement, and maintenance and support services), saw a marginal decline of 0.8%, from HK$78.2 million to HK$77.6 million. Within the IT Services segment, revenue from systems implementation and enhancement declined by HK$0.7 million, or around 2.3%, to HK$29.5 million in FY2023 from HK$30.2 million in FY2022, while the recurring revenue from maintenance and support increased by HK$0.2 million, or 0.4%, to HK$48.1 million in FY2023 from HK$47.9 million in FY2022.
  • Exiting FY2023, management thought Azeus was in a favourable position to capture potential opportunities, given the company’s strong product offerings as well as its competency in delivering sophisticated IT systems. Management also wanted to continue investing in and growing the Azeus Products segment. Led by the flagship product, Convene, and the rollout of new product offerings such as Convene Records, management expected growth within the Azeus Products business. Coupled with the expected rollout of the secured service-segment projects, and barring unforeseen circumstances, management was optimistic about Azeus’s overall growth and outlook for FY2024.
  • Lee Wan Lik (executive chairman and founder) controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 20 June 2023 (the shares include those of Lam Pui Wan).

Segmental data (Azeus Products and IT Services)

  • Recurring revenue comes from Azeus Products and Maintenance and Support (Maintenance and Support is grouped under IT services)

Historical financials

  • No dilution as share count has remained unchanged
  • Has always paid a dividend out of earnings (100% payout ratio in the past two financial years)
  • Balance sheet has always remained robust
  • Net profit appears to have hit an inflection point in the past 3-4 years

Geographical revenue

  • Can see that all regions have grown a lot over time. Is this due to Azeus Products?

Product quality for Convene

  • In all the rankings seen below, for board management software, Convene scores pretty highly (either a leader, or nearly a leader)

2021 ranking by Software Reviews

2022 ranking by Software Reviews

2023 ranking by Software Reviews

Board management software score by Software Reviews as of 2023-07-25

Competitive landscape by G2.com (Convene is in red circle)

User score by G2.com for Convene on 2023-07-25

Management

  • Lee Wan Lik, 61, is the executive chairman and founder of Azeus. His late wife, Lam Pui Wan, was an executive director until her passing on 6 May 2022. 
  • Lee Wan Lik controlled 24.73 million Azeus shares, or 82.44% of total shares, as of 20 June 2023 (the shares include those previously held by the deceased Lam Pui Wan)
  • Michael Yap Kiam Siew, 62, is the CEO and deputy chairman of Azeus. He has served on Azeus’s board since September 2004, became executive director and deputy chairman on 20 April 2020, and was appointed CEO on 15 March 2022. Michael Yap does not have any meaningful stake in Azeus shares.
  • As shown in the table below, management’s compensation is not egregious

Quick thought on valuation

  • At the 24 July 2023 stock price of S$8.20, Azeus has a market cap of S$246 million.
  • Azeus Products alone has a trailing operating profit of HK$76 million, which is around S$12.9 million. The market cap of the entire company is thus 19 times the operating profit of the Azeus Products business alone.
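
As a quick sketch of the arithmetic behind this note (the HKD-to-SGD conversion rate is implied from the figures quoted in the text, not an official rate):

```python
# Back-of-envelope valuation multiple. The conversion rate is implied
# from the S$12.9m / HK$76m figures quoted in the note above.
market_cap_sgd_m = 246.0             # S$ millions, at S$8.20 on 24 Jul 2023
products_op_profit_hkd_m = 76.0      # trailing operating profit, HK$ millions
hkd_to_sgd = 12.9 / 76.0             # implied rate, roughly 0.17

products_op_profit_sgd_m = products_op_profit_hkd_m * hkd_to_sgd
multiple = market_cap_sgd_m / products_op_profit_sgd_m
print(f"Market cap / Azeus Products operating profit: {multiple:.1f}x")  # ~19.1x
```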

Questions on Azeus

  • What was Azeus Products’ annual client retention rate from FY2017 to FY2023? Convene’s website mentions that “99% clients renew every year”, but no timeframe was mentioned.
  • What was Azeus Products’ annual net-dollar expansion rate (NDER) from FY2017 to FY2023?
  • How has Azeus Products’ customer count, or the customer count for Convene specifically, changed over time?
  • How has the product-subscribed-per-customer ratio for Azeus Products changed over time?
  • What does a typical subscription for Azeus Products look like? Specifically, (a) what is the average contract size, (b) for how long does a typical subscription term last, and (c) is the software charged based on usage, or the number of seats, or a mixture of both?
  • What is the market opportunity for Convene and Convene Records?
  • The CERKS contract value can be split into HK$633.9 million in the 5-year deployment phase, and HK$381.4 million in the subsequent 10-year maintenance and support phase. Is Convene Records a subscription SaaS (software-as-a-service) product such as Convene and ConveneAGM?
  • What kind of margins (operating and net) will the CERKS contract have?
  • Azeus does not have any significant concentration of credit risk through exposure to individual customers – but is there significant concentration of revenue risk through exposure to individual customers?
  • The growth of Azeus’s revenue in the United Kingdom has been very impressive, rising from HK$11.9 million in FY2017 to HK$42.0 million in FY2023. Has this been mostly the result of growth in usage of AzeusCare, or has Convene or other software products played important roles too?
  • For both FY2022 and FY2023, Azeus paid out all of its earnings as dividends. What are management’s thought processes when it comes to capital allocation?
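
Azeus does not disclose the retention metrics asked about above, but for clarity, here is a minimal, purely illustrative sketch of how annual client retention and net-dollar expansion rate (NDER) would be computed from cohort data (all numbers are hypothetical):

```python
# Hypothetical illustration of the two retention metrics asked about above.
# None of these numbers come from Azeus's disclosures.

def client_retention(clients_start, clients_retained):
    """Share of last year's clients still subscribed this year."""
    return clients_retained / clients_start

def nder(cohort_rev_last_year, cohort_rev_this_year):
    """Revenue this year from last year's clients / their revenue last year.
    Above 100% means expansion outweighs churn and downgrades."""
    return cohort_rev_this_year / cohort_rev_last_year

print(f"Retention: {client_retention(200, 198):.0%}")  # 99%
print(f"NDER: {nder(100.0, 112.0):.0%}")               # 112%
```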

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 February 2025)

Here are the articles for the week ending 09 February 2025:

1. Robert Litan: An Economist Walks Into a Bar at TEDxKC (Transcript) – Robert Litan

First guy, he approaches the first woman that he sees, offers her a drink. She turns him down. He, then, decides to walk his way down the bar. And, of course, all the women watching this, they see what he’s up to. And they all turn him down…

…He hasn’t learned from this experience, in the real world. So he decides to go to the virtual world. He goes to the Internet and joins Cupid.com and he tries the same technique, and sure enough, with the same result. They all turn him down…

…Cupid.com is in trouble too. And the reason they are, is that the women who have joined Cupid.com are being inundated with offers from men for dates. They get turned off, they quit. And if they quit, men quit. Cupid is in trouble. Who are you going to call to solve this problem? The answer is more obvious than Ghostbusters. You call an economist. Don’t laugh, you call economists. In fact, you call two of them.

This is Muriel Niederle of Stanford, and Dan Ariely of Duke. And they had spent a lot of time studying the problem of artificial scarcity and abundance in the online dating context, which is the reason Cupid called them up. Cupid wanted to know how to fix its problem, and the two economists said they had an idea that was as simple as it was profound: just put a sharp limit on the number of date offers that men could make to women each month. This is the notion of artificial scarcity. Taking what looks like an abundant resource, which is date offers, and artificially constraining them.

And the economists said to Cupid that if you do this, the men will take their offer seriously. They’ll look at more than just the women’s pictures and they’ll actually look at their profiles. And the women will know this, and they’ll be more likely to accept date-proposals. Artificial scarcity helped save Cupid.com, and other dating sites that copied the technique…

…Google collects about $50 billion a year, from advertisers, large and small, seeking placement on that right hand side. They auction off the site. But that’s not how the system started, because when Google was launched, online advertising was in its infancy, and Google, believe it or not, went door to door, advertiser to advertiser, trying to get them to pay for an ad next to a search term. Highly laborious, and you can quickly see that this is not going to scale as the number of searches explodes on Google.

And so the founders of Google asked two young engineers, Eric Veach and Salar Kamangar, to come up with an automatic system that would solve this problem. Well, they were instinctively attracted to auctions. But they were thinking about another problem. That is, if they auctioned off the sites, they feared that the advertisers would bid a very low price, and then incrementally raise their prices just a little bit, and keep the auctions going on forever. And if this happened, and a lot of searches were also going on at the same time, the whole site would crash.

So, as an engineering solution, they came up with this idea: the winning bidder for a placement would pay the second highest price that was bid, plus one penny. This would cut off the auctions, greatly simplify the process, and in the process also solve another problem called “the winner’s curse”. I’m sure that many of you that have participated in auctions may have regretted winning because you felt like you paid too much. Pretty obvious point…

…“You know, those two engineers, they have reinvented what this guy came up with.” This is William Vickrey, an economist at Columbia, who proved mathematically that the second-price auction was the ideal solution to the winner’s curse. And you know what, that won him the Nobel Prize in Economics in 1996.
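
The second-price mechanism described above can be sketched in a few lines (a simplified illustration using integer cents; Google's real ad auction also weighs factors such as ad quality):

```python
# Minimal sketch of a second-price ("Vickrey") auction: the highest
# bidder wins, but pays the second-highest bid plus one cent.
def second_price_auction(bids_cents):
    """bids_cents: dict of bidder -> bid in cents. Returns (winner, price in cents)."""
    ranked = sorted(bids_cents.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] + 1  # second-highest bid, plus one penny
    return winner, price

winner, price = second_price_auction({"A": 500, "B": 350, "C": 425})
print(winner, price)  # A 426: A wins, paying C's bid plus a cent
```

Because the winner's payment depends on the other bids rather than their own, bidders have no incentive to shade their bids downward, which is what makes the mechanism a remedy for the winner's curse.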

2. Emergent Layers, Chapter 2: Overserved and Underserved Customers – Alex Danco

Returning to disruption theory, the critical element we’re going to use from that framework is the idea of the overserved customer: the customer who is being served too much by incumbents. In mature industries, where everybody agrees what the scarce resource is and the core constraints are well understood and organized around, we see this happen a lot. As incumbent companies compete with each other for business, and customers are all being served adequately (for the understood job at hand), competition becomes a feature race where products improve or expand at a greater rate than customers’ capacity to use them. There’s a misalignment between what the customer needs and is getting, with that misalignment falling onto the side of “I’m spending way too much of my money or time for this.” Crucially, when customers are overserved for a particular job, it introduces the critical space and oxygen required for a new competitor with some sort of scalable, technological advantage to enter the market at the low end. The nature of over-service creates powerful incentives for incumbents to not engage with disruptive entrants, but rather to retreat upmarket towards higher profit margins…

…For a more recent but still “classic” example, let’s look at Airbnb. Airbnb was able to get off the ground because there was a critical subset of customers in the hospitality industry — initially young people, although not exclusively so — who were overserved by many aspects of the hotel industry. Hotels were serving customers along many axes of performance — comfort, privacy, loyalty reward programs, and so forth — that just weren’t very important to a specific subset of customers who didn’t care too much about all that stuff; they just want a place to stay. This gave Airbnb the critical oxygen necessary to get a foot in the door, and then expand upwards from a dramatically cheaper cost structure than Marriott can possibly compete with. The overserved customer is a very potent and dangerous one: they know what they’re looking for, and they don’t need to be educated when a new entrant comes along with the right proposition. If that new entrant gets a few critical things right, they’re looking at a large group of early adopters that need little prodding, little education and little advance notice. That’s a great basis to start a company.

Let’s now consider another kind of pain: underserved customers. Their pain appears to be more straightforward: they have some fundamental need that isn’t being met. But this situation is trickier than it seems: if a group of customers have a genuine need, then why aren’t companies stepping in to offer solutions? What’s the catch? It could be because the solutions are genuinely too hard, or face technical or feasibility obstacles. It could also be that customers aren’t aware they have the problem. Either way, that’s tough…

…Now let’s put these two types of customer pain together. What would happen if a customer were both overserved and underserved at the same time? Is this possible?

As it turns out, this situation is not only possible, but occurs regularly. And it’s highly volatile. The trick to figuring out how this works requires venturing one step beyond disruption theory, and recasting the job-to-be-done as a stack itself with a hierarchy of low-level to high-level needs…

…We can characterize the initial job where customers are being served as being at level j, where incumbents vie for customer dollars and products will inevitably trend towards over-service. Meanwhile, we can characterize the higher-order job as being at level j+1, which encompasses the customer’s higher-level objectives, and where companies are not, for whatever reason, currently serving anyone…

…Consider Uber: you have a large group of customers (myself included) who are overserved by owning their own vehicle. If your car sits idle & parked more than 95% of the time (which is about average in North America), you are clearly overserved by owning this car! Yet at the same time, that same set of customers is underserved at level j+1, or the reason why they own a car in the first place — “I need to get to specific places at specific times”. You have a schedule to keep, and it’s hard.

Notice that both of these conditions must hold true in order for Uber to work. If customers were not overserved, it would be difficult for them to abandon their current solution. (Consider someone who drives their vehicle for a living, many hours per day. They are significantly less overserved by their vehicle, and quite unlikely to switch to using Uber for the equivalent job.) At the same time, if they weren’t underserved for a higher-level job (get me places at a certain time), then the only way for a new solution to be truly compelling would be dramatically lower price — which makes for a tough business model. This is another thing outside observers get wrong about Uber when they exclaim, “I don’t see how this is cheaper than owning a car!” Well, here’s the thing — Uber doesn’t have to be cheaper than driving, because it’s superior to driving your own vehicle in many ways! You don’t have to worry about parking, insurance, drinking, maintenance, gas, or anything else. The simultaneous condition of being overserved and underserved by existing solutions is what made Uber so compelling, in a way that other ride-sharing services or carpooling didn’t quite get right. Uber works because it’s cheap, but its appeal is because it’s better…

…If customers only check off the “underserved” box, then it seems likely you’re dealing with a problem that’s (a) very hard, or (b) one the customer isn’t aware they have. This isn’t a great position to be in — it’ll be very hard to build an initial solution and attract early adopters.

If they only check off the “overserved” box, then customers know what they want — but it may be that they’re only motivated by price. And that’s also not a great position to be in: you may get lots of adopters really quickly, but find it very difficult to extract any profit from them…

…The particular combination of customers overserved at level j while being underserved at level j+1, when it happens, explains how from time to time we see markets where the demand is zero and then all of a sudden a vertical line straight up.

3. Why Housing May Be In for Another Cost Shock Next Year – Tracy Alloway, Joe Weisenthal, and Lee Everett

Lee (04:44):

It’s interesting. I think stress is hitting sort of all sides of the market. You have your bigger, more well established shops that have been managing through this, able to handle the higher rate environment, but have obviously taken a very real valuation hit on their existing portfolios. Like 20% to 30% depending upon the portfolio composition. At the same time you’ve had record demand hitting the sector because cost to buy housing is exceptionally unattainable today. And then on the other side you’re having a very material impact on the supply side and I think that’s what’s really unique. If you think back to September, the 10-year was around a 3.6%, I think, the day Chair Powell cut us by 50 basis points. Well, we’re at almost a 4.6% today and I remember that night you heard reports about developers out at local dinners and they were calling it Fed Day and getting ready to put shovels in the ground.

Joe (05:37):

Drinking champagne and stuff like that.

Lee (05:38):

Exactly. And what you’ve seen instead is increased stress on both the short end and the long end of the curve. That’s given you trouble on the short end, to start new housing, and trouble on the long end to afford longer term for ownership housing…

…Lee (11:29):

Yes, I think frankly we’re about to transition from what has been a very renter friendly market to again a landlord friendly market over the course of the next two to three years. And that’s going to be particularly driven by what we’re seeing on the supply side. We’re going to have over a million units come to market over a two-year period here in ’24 and ’25, but peak supply is hitting in the next six months and if you look at relative time from a) peak supply and then b) to getting to a level of lower supply than you saw last cycle, every major market in the country will be there by the end of 2026.

Joe (12:13):

Be where?

Lee (12:15):

Delivering less housing units than they did on average from ’17 to ’19 in apartment buildings. So you’re going to go below prior cycle supply very quickly. At the same time, we do have exceptionally strong labor markets here and the demand story has been outstanding. So 2024 is going to end the year, depending upon the data provider you use, as the first or third highest year for rental demand ever. 2021 was the prior record. So we’re seeing people form rental households at an unprecedented rate in the US and as that supply comes down, you’re going to see that demand struggle to frankly find high quality, well-located assets to move in, and you’re likely to see that relationship flip at that point.

Tracy (13:08):

So the other thing that affects multifamily housing construction other than interest rates has to be just general confidence, I guess, in the direction of the economy, the direction of the world and certainly there’s a lot going on right now. We’re recording this on January 28th and there’s news that the Trump administration is freezing a whole bunch of federal spending. I think it’s something like 20% of federal spending. That includes presumably stuff like Section 8 and other affordable housing measures. Would that be expected to hit multifamily as well?

Lee (13:46):

Yeah, and I think it’s probably easiest to sort of start at the top, right? When you’re building multifamily, you’re generally trying to build to an acceptable return on cost, but frankly what we’re doing is putting an investor’s money together and generating returns for them. Multifamily isn’t built for free and it can’t be in this sort of economic world, and a general rule of thumb is a 6+% return on cost. So cost to build, you want to yield over 6% on that to get a building to pencil. That tracks up closer to 7% depending upon the institution. Because you need to build to that yield on cost, you have to have rents that are high enough to generate enough rental revenue to drive that return. So in order to build today, you have to build at exceptionally high rent levels, because of the cost to build, because of the cost of interest rates.

The only way to drop that is to drop the cost and that cost drop typically comes for affordable housing from the federal government, be it HUD grants that are then deployed through the local housing agency, be it LIHTC, be it any sort of an ensemble of ways to cut costs. That’s how you can get to affordable rents on the supply side. And then on the demand side, you can cut rents by literally giving people a rent check, which is what Section 8 is. And that again comes from the federal government via grants given to the local housing agencies to deploy. And if that money dries up, you have immense problems in terms of a) fueling the demand for these people, because you’re cutting rent on the Section 8 side and b) encouraging future construction of affordable apartment buildings…

…Joe (17:47):

Let’s talk about deportation impacts on labor. What are the estimates for what percentage of the multifamily workforce, whether it’s construction or maintenance, whatever else, is undocumented labor?

Lee (18:01):

It’s estimated 20% of construction workers in this country are undocumented labor. I’d venture to guess it’s similar for the whole multifamily industry when you look at staffing and things along those lines, and I think when you look at a combination of deportation of construction workers as well as the sheer amount of labor it’s going to require to rebuild huge swaths of California, I think you could be looking at a massive deficit in labor within the construction space. And when you think about that, that’s going to be your strongest lever hitting your cost to build, and that’s what’s going to drive up those rents that are necessary: all of this immense pressure you’re going to see in labor costs.

4. Test-Time Search: A Path To AGI – Akash Bajwa

The GPT family of models performed poorly relative to o3 on the ARC benchmark because large models memorise knowledge rather than reasoning processes…

…As an example, Meta intentionally overtrained Llama 3 on 15 trillion tokens to lower inference costs (as they served their billions of users). The model weights become more optimised for common patterns and in-distribution tasks, trading off generalisability to novel tasks.

This architecture combined with ‘internet scale’ data has produced incredible recent advances, but the next leap will come from a new paradigm – instead of outputs, models will be trained on reasoning steps…

…This new vector of scaling will rely on a combination of synthetic and human generated reasoning data. As we’ll see, both will be expensive forms of reinforcement learning (o3’s performance of 87.5% on ARC AGI in high-compute mode cost thousands of $ per task)…

…Synthetic data will be most useful for domains where functional verification is possible, e.g. code, maths and engineering…

…Scaling inference time compute is in line with the Bitter Lesson – there are only 2 techniques that scale indefinitely with compute: learning & search.

DeepMind’s AlphaGo used Monte Carlo Tree Search during test time to attain superhuman status – if stripped of this capability, it drops in Elo from ~5,200 to ~3,000 (top humans are around ~3,800)…

…The exorbitant costs stem from the many, many Chains Of Thought generated as the model searches for the chains that lead to the right answer – all of the other tokens are useless, but cost a lot to generate…

…Functionally verifiable domains are the most amenable to synthetic CoTs because engineering the reward is much easier than in domains where subjectivity is involved…

…Code execution provides an unambiguous, binary reward signal – either the code executes successfully or it fails, creating clearly defined success criteria for training.

In functionally verifiable domains, the correct CoT tokens become training data…

…Over time, this should have a deflationary effect on the inference cost of reasoning models, as we’ve seen with frontier models in the pre-training paradigm…

…As pre-training gains plateau (or become too expensive), we’ve found a new vector of scaling (test time search) that is demonstrating a path to truly general intelligence.

Data acquisition/generation remains the bottleneck on progress, not compute. Microsoft’s announcement of $80bn in capex for 2025 underscores the Street’s underestimation of hyperscaler capex and compute buildout.

The implications of inference scaling run up and down the stack. Instead of the densely interconnected supercomputers of the pre-training paradigm, we will see more distribution of workloads, perhaps some even running locally. How will market share evolve as companies look to optimise test time search workloads – will AI ASICs eat into Nvidia market share?

Instead of prohibitively expensive pre-training runs, enterprises developing their own models may opt to train smaller models with reasoning cores and decide when to scale up test time search for certain economically valuable tasks. The result is the alchemy of capex to opex and fixed costs to variable costs. CIOs will decide which tasks merit more investment and test time search – inevitably, this will still be cheaper than human labour.

5. Don’t Freak Out – Ben Carlson

The common theme across the Apollo missions was the sheer amount of planning involved. There were months and months of simulations and training exercises to review every possible scenario. They wanted every process to be automatic.

But there was always the risk of an unplanned error, considering they were propelling these giant hunks of metal through space using rocket fuel that would allow them to reach speeds of more than 24,000 miles per hour…

…When Apollo 13 had an explosion mid-flight, it wasn’t something anyone thought could have been even a remote possibility. Astronaut Jack Swigert explained it after the fact like this:

Nobody thought the spacecraft would lose two fuel cells and two oxygen tanks. It couldn’t happen. If somebody had thrown that at us in the simulator, we’d have said, ‘Come on, you’re not being realistic.’

This is why NASA trained the astronauts in one skill more than any other leading up to their space flights — the art of not panicking. The only reason they could turn the Apollo 13 spacecraft around 200,000 miles from earth following an explosion onboard is because the astronauts and everyone on the ground remained levelheaded. No one freaked out.

Or if they were freaking out internally, they didn’t act on those emotions.

In a nutshell, that is successful investing.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.

Potential Bargains In A Niche Corner Of The US Stock Market

Small community banks in the USA undergoing a change in ownership structure could be interesting to look at

I first came across a niche corner of the US stock market known as thrift conversions in January 2024. Upon further research over the subsequent months, I realised it could be an interesting hunting ground for potential bargains. 

For the purpose of this article, thrifts, which have roots in the USA tracing back to the early 19th century, are small community banks in the country that are mutually owned by their depositors. The mutual ownership structure means that these thrifts have no shareholders. As a result, a thrift’s depositors – despite being owners – have no legal way to access its economics. In the 1970s, regulations were introduced to allow thrifts to convert their ownership structure (hence the term “thrift conversions”) and become public-listed companies with shareholders. Today, there are two main ways for thrifts to convert:

  • The first is a standard conversion, where a thrift converts fully into a public-listed entity at one go.
  • The second is a two-step conversion. In the first step, a thrift converts only a minority interest in itself into a public-listed entity and thus still has a partial mutual ownership structure. In the second step, a thrift that has undergone the first-step conversion goes on to convert fully into a public-listed entity. As far as we know, there’s no time limit for a thrift that has undergone the first-step conversion to undertake the second step of the process.

Subsequently in this article, I will be using the word “conversion”, or other forms of the same word, to refer only to the standard conversion, unless otherwise stated.

A thrift conversion can be thought of as a thrift undergoing an initial public offering (IPO). During a conversion, the incentives of a thrift’s management and those of its would-be shareholders are highly aligned. In the process, a thrift offers shares to management and depositors first; if there’s insufficient demand, the thrift will then offer shares to outsiders. Importantly, management would be buying the thrift’s shares during the conversion at the same price as other would-be shareholders (the other would-be shareholders are the depositors and outsiders; as a reminder, prior to a conversion, a thrift has no shareholders1). This means it’s very likely that management wants a thrift’s shares to have as cheap a valuation as possible during the conversion. Moreover, new capital that’s raised from management and would-be shareholders in the conversion goes directly to the thrift’s coffers. This new capital adds to the thrift’s equity (calculated by deducting the thrift’s liabilities from its assets) that it has built from the profits it has accumulated over time from providing banking services. These features mean that a thrift often becomes a full public-listed entity at a low valuation while having a high equity-to-assets ratio. It’s worth noting that a thrift can conduct share buybacks and sell itself to other financial institutions after the one-year and three-year marks, respectively, from its conversion.2

Investor Jim Royal’s comprehensive book on thrift conversions (referring to both standard and two-step conversions), aptly titled The Zen of Thrift Conversions, referenced a 2016 study by investment bank Piper Jaffray. The study showed that since 1982, thrifts that became full public-listed entities did so at an average price-to-tangible book (P/TB) ratio of just 0.75. After becoming public-listed entities, thrifts tend to continue trading at low P/TB ratios. This is because they also tend to have very low returns on equity – a consequence of them having a high equity-to-assets ratio after their conversion – and a bank with a low return on equity deserves to trade at a low P/TB ratio. But the chronically low P/TB ratio is why thrift conversions could be a fertile space for bargains.
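The link between a low return on equity and a low warranted P/TB ratio can be made concrete with the textbook justified price-to-book identity, P/B = (ROE − g) / (r − g), where r is the cost of equity and g is long-run growth. This is a standard formula, not something from the Piper Jaffray study, and the inputs below are purely hypothetical:

```python
# Justified price-to-book under a constant-growth model (textbook identity,
# hypothetical inputs): P/B = (ROE - g) / (r - g).
def justified_pb(roe: float, r: float, g: float) -> float:
    """Warranted price-to-book ratio given ROE, cost of equity r, and growth g."""
    return (roe - g) / (r - g)

# An overcapitalised converted thrift earning 4% ROE against a 10% cost of equity:
print(round(justified_pb(roe=0.04, r=0.10, g=0.02), 2))  # 0.25 -> well below book
# A normally capitalised bank earning 12% ROE:
print(round(justified_pb(roe=0.12, r=0.10, g=0.02), 2))  # 1.25 -> above book
```

The same cost of equity and growth assumptions produce a sub-book multiple for the low-ROE thrift and a premium for the high-ROE bank, which is why chronically low P/TB ratios are the norm for freshly converted, overcapitalised thrifts.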

Assuming that converted thrifts have low P/TB ratios of less than 1, those that conduct share buybacks increase their tangible book value per share over time even when they have low returns on equity. Moreover, as mentioned earlier, converted thrifts tend to have high equity-to-asset ratios, which means they have overcapitalised balance sheets and thus have plenty of excess capital to buy back shares without harming their financial health. To top it off, the 2016 study from Piper Jaffray also showed that since 1982, 70% of thrifts were acquired after the third anniversary of them becoming full public-listed entities and these thrifts were acquired at an average P/TB ratio of 1.43 (the median time between them becoming fully public and them being acquired was five years).
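To see why buybacks below tangible book are value-enhancing per share, here is a minimal sketch with hypothetical numbers (not a real thrift):

```python
def tbv_per_share_after_buyback(equity_m: float, shares_m: float,
                                p_tb: float, fraction: float) -> float:
    """Tangible book value per share after repurchasing `fraction` of the
    shares at a price of `p_tb` times tangible book value per share."""
    price = p_tb * equity_m / shares_m
    bought = fraction * shares_m
    return (equity_m - bought * price) / (shares_m - bought)

# Hypothetical thrift: $100m tangible equity, 10m shares (TBV/share = $10).
# Repurchasing 10% of shares at 0.7x book lifts TBV/share to about $10.33,
# because equity falls by only $7m while the share count falls by 10%.
print(tbv_per_share_after_buyback(100, 10, 0.7, 0.10))
# The same buyback at 1.3x book would dilute TBV/share to about $9.67.
print(tbv_per_share_after_buyback(100, 10, 1.3, 0.10))
```

The mechanics cut both ways: the further below book the repurchase price, the larger the accretion to remaining shareholders.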

The growth in a converted thrift’s tangible book value per share from buybacks, and the potential increase in its P/TB ratio when acquired, could result in a strong annualised return for an investor. For example, consider a thrift conversion with the following traits:

  1. It has a return on equity of 3% in each year;
  2. It has a P/TB ratio that consistently hovers at 0.7;
  3. It buys back 5% of its outstanding shares annually for four years after the first anniversary of its conversion, and;
  4. It gets acquired at a P/TB ratio of 1.4 five years after its conversion
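The compounding implied by these assumptions can be sketched as follows. This is a hypothetical illustration with tangible book value normalised to 1.0; the third-anniversary figure is sensitive to buyback-timing conventions, so a simple model like this lands near, rather than exactly on, the figures quoted below:

```python
# Sketch of the thrift-conversion return arithmetic described in the text.
# Assumptions (hypothetical): TBV/share starts at 1.0, ROE of 3% per year,
# shares trade at 0.7x tangible book, 5% of shares bought back annually in
# years 2-5, acquisition at 1.4x tangible book at the end of year 5.
def annualised_return(entry_year: int) -> float:
    tbv = 1.0            # tangible book value per share (normalised)
    entry_price = None
    for year in range(1, 6):
        tbv *= 1.03      # 3% return on equity, fully retained
        if year >= 2:
            # Buying back 5% of shares at 0.7x book is accretive to TBV/share:
            # equity falls by 5% * 0.7 = 3.5%, the share count falls by 5%.
            tbv *= (1 - 0.05 * 0.7) / (1 - 0.05)
        if year == entry_year:
            entry_price = 0.7 * tbv  # buy at the prevailing 0.7x P/TB
    if entry_price is None:          # entry at the conversion itself (year 0)
        entry_price = 0.7
    exit_price = 1.4 * tbv           # acquired at 1.4x tangible book
    return (exit_price / entry_price) ** (1 / (5 - entry_year)) - 1

print(f"Buy at conversion:        {annualised_return(0):.1%}")  # roughly 20%/yr
print(f"Buy at third anniversary: {annualised_return(3):.1%}")
```

The later entry earns a higher annualised return because the investor pays 0.7x book for fewer years of waiting while still capturing the full 1.4x exit multiple.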

Such a thrift will generate a handsome annualised return of 20% over five years. Investing in the thrift on the third anniversary of its conversion – when the thrift can legally sell itself to other financial institutions – will result in an even more impressive annualised return of 52% when the thrift is acquired4. There are also past examples of converted thrifts that go on to produce impressive gains even without being acquired. In his book Beating The Street, Peter Lynch, the famed ex-manager of the Fidelity Magellan Fund, shared many examples. Here’s a sample (emphasis is mine):

“In 1991, 16 mutual thrifts and savings banks came public. Two were taken over at more than four times the offering price, and of the remaining 14, the worst is up 87 percent in value. All the rest have doubled or better, and there are four triples, one 7-bagger, and one 10-bagger. Imagine making 10 times your money in 32 months by investing in Magna Bancorp, Inc., of Hattiesburg, Mississippi.”

But not every thrift conversion leads to a happy ending. Table 1 below shows some pertinent figures of Mid-Southern Bancorp, a thrift which produced a pedestrian return from its second-step conversion in July 2018 to its acquisition by Beacon Credit Union in January 2024.

Table 1

There are a few important things I look out for in thrift conversions5:

  • The equity-to-assets ratio: The higher the better, as it signifies an over-capitalised and strong balance sheet, and would make a thrift look attractive to a would-be acquirer
  • The P/TB ratio: The lower the better, as a P/TB ratio that is materially below 1 will (a) make share buybacks a value-enhancing activity for a thrift’s shareholders, and (b) enhance the potential return for us as investors
  • Share buybacks: The more buybacks that happen at a P/TB ratio below 1, the better, as it is not only value-enhancing, but also indicates that management has a good understanding of capital allocation
  • Non-performing assets as a percentage of total assets: The lower the better, as it signifies a thrift that is conducting its banking business conservatively
  • Net income: If the play is for a potential acquisition of a thrift, we want to avoid a chronically loss-making thrift as consistent losses indicate risky lending practices, but the amount of net income earned by the thrift is not important because an acquirer would be improving the thrift’s operations; if the play is for a thrift to generate strong returns for investors from its underlying business growth, then we would want to see a history of growth in net income and at least a decent return on equity (say, 8% or higher)
  • Change in control provisions: This relates to payouts that a thrift’s management can receive upon being acquired and such information can typically be found in a thrift’s DEF 14-A filing; if management can receive a nice payout when a thrift is acquired, management is incentivised to sanction a sale
  • Management’s compensation: The annual compensation of a thrift’s management should not be high relative to the monetary value of management’s ownership stakes in the thrift

Expanding on the last point above, I’ve seen cases of fully-public thrifts with poor long-term business results that have management teams with high compensation and relatively low dollar-amounts in ownership stakes. In such cases, I think there’s a low possibility of these thrifts being acquired within a reasonable amount of time to maximise shareholder value, because it’s lucrative for the management teams to entrench their positions.

If any of you reading this article is interested in having deeper conversations about investing in thrifts, please reach out – I would love to engage.

1. Thrifts that undertake the two-step conversion process would have no shareholders prior to the first-step conversion. After the first-step conversion is completed and before the second-step conversion commences, these thrifts would have shareholders who own only a minority economic interest in them.  

2. Thrifts that decide to participate in the second-step of the two-step conversion process after completing the first step can begin share buybacks after the first anniversary of the second-step; they can also be acquired on the third anniversary. 

3. Why would a converted thrift (referring to both standard conversions and two-step conversions) be an attractive acquisition target and be acquired at a premium to its tangible book value? This is because the acquirer of a converted thrift can easily cut significant costs and make more efficient use of the thrift’s overcapitalised balance sheet; this means an acquirer can pay a premium to book value (i.e. a P/TB ratio of more than 1) for a converted thrift and still end up with a good deal. 

4. The potential return of a thrift that has completed the second-step of the two-step conversion process is identical to a thrift that has completed the standard conversion, ceteris paribus. This is because the former has the same important features as the latter, such as the low valuation, the over-capitalised balance sheet, and the possibility of being acquired by other financial institutions at a premium to tangible book value. 

5. What I look out for in a thrift that has completed the standard conversion is the same as what I look out for in a thrift that has completed the second-step of the two-step conversion.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 February 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.


Here are the articles for the week ending 02 February 2025:

1. DeepSeek: The View from China – Jordan Schneider, Irene Zhang, Angela Shen, and Yiwen

In this newsletter, we share a translation of insights from a January 26 closed-door session hosted by Shixiang 拾象, a VC spun out from Sequoia China. Attended by dozens of AI researchers, investors, and industry insiders, the event captures how the Chinese AI community is processing the DeepSeek shock…

…The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant…

…In the short-term, everyone will be driven to think about how to make AI more efficient. In the long-run, questions about computing power will remain. Demand for compute remains strong and no company has enough…

…Why did DeepSeek catch up so fast?

Reasoning models require high-quality data and training. For LLMs or multimodal AI, it’s difficult to catch up with a closed source model from scratch. The architecture of pure reasoning models hasn’t changed much, so it’s easier to catch up in reasoning.

One reason R1 caught up quickly was that the task was not particularly difficult. Reinforcement learning only made the model choices more accurate. R1 did not break through the efficiency of Consensus 32, spending 32 times the efficiency, which is equivalent to moving from deep processing to parallelization, which is not pushing the boundaries of intelligence, just making it easier….

…AI is similar to a step function, where the compute requirements for followers have decreased by a factor of 10. Followers have historically had lower compute costs, but explorers still need to train many models. The exploration of new algorithms and architectures will not stop. Behind the step function, there are significant investments by many people, meaning compute investments will continue to advance. Many resources will also be allocated to products. Apart from reasoning, there are other directions that are compute-intensive. While the vast amount of compute resources spent by explorers may not be visible, without such investment, the next “step” might not occur. Additionally, many are dissatisfied with current architectures and RL methods, and progress will continue.

When exploring directions, performance achieved with 10,000 GPUs may not always be significantly better than that of 1,000 GPUs, but there is a threshold somewhere. It’s unlikely that meaningful results can be achieved with only 100 GPUs because the iteration time for each solution would be too long…

…The question of why OpenAI and Anthropic did not do work in DeepSeek’s direction is a question of company-specific focus. OpenAI and Anthropic might have felt that investing their compute towards other areas was more valuable.

One hypothesis for why DeepSeek was successful is that unlike Big Tech firms, DeepSeek did not work on multi-modality and focused exclusively on language. Big Tech firms’ model capabilities aren’t weak, but they have to maintain a low profile and cannot release too often. Currently, multimodality is not very critical, as intelligence primarily comes from language, and multimodality does not contribute significantly to improving intelligence…

…2025 will, first and foremost, see interest in new architectures beyond Transformers. Some initial exploration is already underway, aiming to reduce costs while pushing the boundaries of intelligence. Secondly, the potential of reinforcement learning (RL) has yet to be tapped into completely. On the product side, there is significant interest in agents, though they have yet to see widespread application…

…It is reported that Meta is still in the process of reproducing DeepSeek, but so far, this has not significantly impacted their infrastructure or long-term roadmap. In the long run, beyond exploring the boundaries of the technology, cost efficiency must also be considered. Lowering costs will let us have more fun…

…From the developer’s perspective, models like Claude-3.5-Sonnet have been specifically trained for tool use, making them highly suitable for agent development. In contrast, models like DeepSeek have not yet focused on this area, but the potential for growth with DeepSeek is immense…

…Currently, reinforcement learning (RL) solves problems with standard answers but has not achieved breakthroughs beyond what AlphaZero accomplished. In fact, it is often simpler. Distillation addresses problems with standard answers, and RL methods work effectively when training with such answers. This explains why distillation and RL have made rapid progress in recent years.

Humanity’s demand for intelligence is vastly underestimated. Many critical problems, such as cancer and SpaceX’s heat shield materials, remain unsolved. Existing AI primarily automates tasks, but there are numerous unsolved challenges ahead. Looking forward, the potential for explosive growth is immense, and the advancement of intelligence cannot stop…

…Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.

The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization…

…Open source controls the margins of the whole market. If open source can do 95% of what closed source can do and closed source is too expensive, then open source can be used completely. If the capabilities of open source and closed source do not differ greatly, then this presents a big challenge for closed source…

…AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.

2. Explaining International Valuations –  Daniel Rasmussen

Perhaps the single greatest divergence in equity markets has been the continued outperformance of US versus international equities—and thus the widening of the valuation gap between the US and the rest of the world…

…By far the most significant difference, explaining about half the valuation gap, is the domicile of listing. US-listed stocks are substantially more expensive than internationally listed stocks for no reason other than the place of listing.

It’s particularly interesting that the regression shows having a higher percentage of sales in the US results in cheaper valuations. A key driver of this is that several of the US tech giants most responsible for high US equity valuations have a relatively low percentage of sales in the US (Alphabet, Microsoft, and Tesla at around 50%; Apple, Netflix, Meta, and NVIDIA at around 40%). The big question, then, is why half the valuation gap is explained simply by being listed on US exchanges. Even large internationally listed companies with >40% of their revenue coming from the US, like Toyota, Mitsubishi, Roche or Deutsche Telekom (which owns T-Mobile), trade at steeply discounted multiples relative to US peers.

Were a larger percentage of the valuation gap explained by fundamentals, we’d expect such a gap to persist. But given that the valuation gap is primarily explained simply by the location of listing, we think there’s a strong reason to expect a convergence—and therefore to favor international over US-listed stocks, despite their terrible relative performance over the past decade.

3. The Most Impressive Prediction of All Time – Jeffrey Emanuel

My candidate for the most impressive prediction of all time came from a person who is practically unknown in the West except for a relatively small group of historians and people interested in niche subjects. The person I’m thinking of is named Pyotr Durnovo, and he was an Imperial Russian government official who lived from 1842 to 1915.

We will discuss more about him later and how his life experience may have prepared him to be able to make such an impressive prediction, but the short version of it is that he initially studied to be in the Navy and served there for around a decade, and then became the Director of Police for the Ministry of Internal Affairs for the entire Russian Empire under Tsar Alexander III. Later, he served as the Minister of the Interior under Tsar Nicholas II (the one who was ultimately executed with his family by the Bolsheviks in 1917 during the Russian Revolution).

So what is this prediction he made, anyway, and why is it so impressive? Well, in 1914, six months prior to the outbreak of World War 1, Durnovo wrote a truly remarkable ~7,600-word memorandum for Tsar Nicholas II and his top 2 or 3 ministers, which we know was given to them, since it was found in Nicholas’ papers and later published in 1922 by communist historians after the revolution. If they had only read it carefully and taken its warnings more seriously, the world we live in today might look very different!…

…For one, it predicted an imminent war on the horizon, a war he ultimately blamed on the collision course between England and Germany, the two greatest industrial powers at the time. This was certainly not some earth-shattering or special prediction; a lot of people predicted some kind of big conflict, and it was often said that “war was in the air” at the time…

…It’s how he analyzed the situation, and then used that reasoning to predict the exact groupings of countries that would participate in the conflict and on which side, and how the situation would evolve from there, that is so impressive…

…His predictions about alliances and national behaviors were almost unbelievably specific and ran counter to the conventional wisdom of the time:

  • He predicted that Italy would not side with Germany despite being part of the Triple Alliance, and would instead join the opposing side if victory seemed likely, seeking territory from both Austria and Turkey. This is exactly what happened; Italy joined the Allies in 1915 after negotiating for territorial concessions.
  • He predicted that Romania would remain neutral until it was clear which side would win, then join the victorious side to claim territory. This also came true— Romania entered the war in 1916 on the Allied side after significant Russian successes.
  • Most surprisingly, he predicted that Bulgaria would side against Serbia and by extension against Russia, despite Russia being Bulgaria’s historic liberator from Ottoman rule— a prediction that seemed almost unthinkable to most observers at the time. This came true exactly as he foresaw, with Bulgaria joining the Central Powers in 1915.
  • He correctly predicted that Serbia and Montenegro would side against Austria, while Greece would likely remain neutral until the outcome was more or less predetermined.
  • He predicted unrest among Muslims in the Caucasus and Turkestan (which occurred).
  • He predicted the possibility of Afghanistan moving against Russia (which happened in 1919).
  • He predicted serious complications in Poland (the Polish-Soviet War of 1919-1921).
  • He predicted an uprising in Finland if Sweden joined Germany (Finland did declare independence in 1917).

…If all of that weren’t already so ridiculous to get right, he went way beyond all that to realize that, regardless of who won, the war would lead to “social revolution” in both the defeated AND victorious countries, starting with the losing side and then spreading to the winners. This was perhaps his most extraordinary prediction, as it came true in spectacular fashion:

  • Russia, despite being on the winning side, experienced the Bolshevik Revolution in 1917; we will go into much more detail about these predictions below.
  • Germany, after losing the war, experienced the German Revolution of 1918-1919; Durnovo predicted that unrest and revolution would be specifically tied to economic factors and class interests rather than purely political ones: he outlined how German workers would turn against the agricultural interests that had dominated pre-war German policy once defeat cut off their export markets and industrial employment, and this exact dynamic played out in the German Revolution of 1918-1919.

Now, you might object here that “Well, it’s not that crazy to believe there might be a revolution in a country which suffered massive losses in a catastrophic war; lots of people might have predicted that.” But the thing is, Durnovo went so far beyond merely predicting that there would be a Russian Revolution. He basically predicted every contour of the Revolution, the driving forces behind it, how it impacted different segments of Russian society, and how it would all unfold, step by step!…

…So how was Durnovo able to accomplish this incredible feat of prediction? Obviously, he was a genius of the first order, which is perhaps not so surprising given that he was a close relative of the famous Tolstoy family. But raw IQ is certainly not enough, nor is being well informed and knowledgeable. What kind of man could see so clearly what virtually everyone else missed? He was a complex character whose very contradictions likely enabled his extraordinary insights; he was, at the same time:

  • A conservative police chief who often expressed liberal thoughts in private
  • A supposed reactionary who opposed anti-Semitic measures and defended Jews
  • A cynical operator who nevertheless would help others when he could
  • A man capable of both strict officialdom and surprising gentleness
  • A high official who preferred informal interactions (his subordinates would warn visitors not to address him as “Your Excellency”)

These contradictions suggest someone who wasn’t bound by conventional ideological frameworks or social expectations— a crucial trait for seeing beyond accepted wisdom. He also had a wide range of professional experience that prepared him to see things in a multi-faceted, sophisticated way, as by 1915, he had done the following:

  • Naval officer (9 years of far-sea cruises)
  • Military legal training
  • Assistant Prosecutor in various parts of Russia
  • Director of Police Department for 10 years
  • Assistant Minister of Interior under multiple ministers
  • Minister of Interior
  • Member of State Council

This combination of experiences was extraordinary and atypical to say the least:

  • His naval and legal background gave him insight into the military, maritime trade, and the Russian legal system.
  • His prosecutorial work exposed him to conditions across Russia, not just in the big cities.
  • His police work gave him unparalleled insight into social discontent and the strategies and thinking of professional revolutionaries like Lenin, Stalin, and Trotsky.
  • His ministerial positions showed him the workings (and limitations) of state power.

He also occupied a unique position as both an insider and an outsider: 

  • He was from old nobility but not wealthy or particularly influential
  • He reached high office but was temporarily dismissed in disgrace (a sordid story in which Durnovo had his secret police officers search the private letters of a foreign ambassador— inside an embassy building no less— so they could steal love letters sent by Durnovo’s mistress to the ambassador; when the ambassador complained to Tsar Alexander III, the Tsar was furious, ordering his minister to “remove this swine within twenty-four hours.”)
  • He was a conservative who often disagreed with other conservatives
  • He understood both state power and its limitations

This dual perspective may have freed him from the groupthink that afflicted both conservative and liberal circles.

4. USA, Inc – Michael Batnick

Consider this face blower of a stat from Goldman: “Since 1992, earnings growth in the US has outpaced earnings in non-US developed economies by an annual average of 2.4 percentage points.”

Most of the world is barely earning more than they were prior to the pandemic. The U.S. looks like an unstoppable freight train…

…The one-sided performance has driven the valuation gap between the U.S. and the rest of the world to record levels. We’ve all seen a version of these charts before…

…BUT! These charts aren’t comparing apples with apples. Goldman notes that only 1% of the U.K. market is in technology companies. Another example they cite is that energy is 5% of S&P 500 earnings, 19% of UK earnings, and just 1% of Japan’s.

They did a great job adjusting for differences in sector weights…

…The U.S. still trades at a premium to the rest of the world ex-India, but not as much as the prior chart would have you believe. Before any adjustments, the Eurozone trades at a 39% discount to the U.S. And after the adjustments, that falls to 23%.

5. DeepSeek FAQ – Ben Thompson

Let’s work backwards: what was the V2 model, and why was it important?

The DeepSeek-V2 model introduced two important breakthroughs: DeepSeekMoE and DeepSeekMLA. The “MoE” in DeepSeekMoE refers to “mixture of experts”. Some models, like GPT-3.5, activate the entire model during both training and inference; it turns out, however, that not every part of the model is necessary for the topic at hand. MoE splits the model into multiple “experts” and only activates the ones that are necessary; GPT-4 was believed to be an MoE model with 16 experts of approximately 110 billion parameters each.

DeepSeekMoE, as implemented in V2, introduced important innovations on this concept, including differentiating between more finely-grained specialized experts, and shared experts with more generalized capabilities. Critically, DeepSeekMoE also introduced new approaches to load-balancing and routing during training; traditionally MoE increased communications overhead in training in exchange for efficient inference, but DeepSeek’s approach made training more efficient as well.
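
To make the routing idea concrete, here is a minimal, illustrative sketch of top-k expert routing in plain Python. This is not DeepSeek's implementation; the gating scheme shown (dot-product scores, softmax, top-k renormalization) is a generic MoE pattern, and all names and dimensions here are invented for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    """Route one token through only the top_k scoring experts.

    `experts` is a list of callables (stand-ins for expert FFNs) and
    `gate_weights` holds one gating vector per expert. Because only
    top_k experts run, compute per token scales with the active
    experts, not with the model's total parameter count.
    """
    # Score each expert for this token (dot product with its gate vector).
    scores = softmax([sum(w * x for w, x in zip(gw, token)) for gw in gate_weights])
    # Keep only the top_k experts and renormalize their weights.
    top = sorted(range(len(experts)), key=lambda i: -scores[i])[:top_k]
    norm = sum(scores[i] for i in top)
    # Output is the weighted sum of the activated experts only.
    out = [0.0] * len(token)
    for i in top:
        expert_out = experts[i](token)
        out = [o + (scores[i] / norm) * y for o, y in zip(out, expert_out)]
    return out, top
```

With, say, four experts and top_k=2, only two experts ever run for a given token, which is why an MoE model's compute per token can be a small fraction of what its total parameter count suggests.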

DeepSeekMLA was an even bigger breakthrough. One of the biggest limitations on inference is the sheer amount of memory required: you both need to load the model into memory and also load the entire context window. Context windows are particularly expensive in terms of memory, as every token requires both a key and corresponding value; DeepSeekMLA, or multi-head latent attention, makes it possible to compress the key-value store, dramatically decreasing memory usage during inference.
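
To see why the key-value store dominates inference memory, here is a back-of-the-envelope sketch. The model shape and the 16x compression ratio below are hypothetical numbers chosen purely for illustration, not DeepSeek's actual figures.

```python
def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, bytes_per_val=2):
    # Each token stores one key and one value vector per head, per layer.
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_val

# Hypothetical dense-attention model: 60 layers, 64 heads of dimension 128,
# a 128K-token context window, FP16 (2-byte) cache entries.
full = kv_cache_bytes(60, 64, 128, 128_000)
print(f"uncompressed KV cache: {full / 2**30:.0f} GiB")

# MLA stores a compressed latent vector per token instead of full per-head
# keys and values; if the latent were, say, 1/16 the size, memory would
# drop proportionally.
print(f"compressed to 1/16:    {full / 16 / 2**30:.1f} GiB")
```

The point of the sketch is the scaling: cache size grows linearly with context length, so at long contexts the cache, not the weights, becomes the binding constraint, and compressing it is a big deal.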

I’m not sure I understood any of that.

The key implications of these breakthroughs — and the part you need to understand — only became apparent with V3, which added a new approach to load balancing (further reducing communications overhead) and multi-token prediction in training (further densifying each training step, again reducing overhead): V3 was shockingly cheap to train. DeepSeek claimed the model training took 2,788 thousand H800 GPU hours, which, at a cost of $2/GPU hour, comes out to a mere $5.576 million.

That seems impossibly low.

DeepSeek is clear that these costs are only for the final training run, and exclude all other expenses; from the V3 paper:

Lastly, we emphasize again the economical training costs of DeepSeek-V3, summarized in Table 1, achieved through our optimized co-design of algorithms, frameworks, and hardware. During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs. Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours. Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M. Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.

So no, you can’t replicate DeepSeek the company for $5.576 million.

I still don’t believe that number.

Actually, the burden of proof is on the doubters, at least once you understand the V3 architecture. Remember that bit about DeepSeekMoE: V3 has 671 billion parameters, but only 37 billion parameters in the active expert are computed per token; this equates to 333.3 billion FLOPs of compute per token. Here I should mention another DeepSeek innovation: while parameters were stored with BF16 or FP32 precision, they were reduced to FP8 precision for calculations; 2048 H800 GPUs have a capacity of 3.97 exaFLOPS, i.e. 3.97 billion billion FLOPs. The training set, meanwhile, consisted of 14.8 trillion tokens; once you do all of the math it becomes apparent that 2.8 million H800 hours is sufficient for training V3. Again, this was just the final run, not the total cost, but it’s a plausible number.
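
If you want to check that math yourself, here is the arithmetic as a short sketch, using only the figures quoted in this passage (333.3 billion FLOPs per token, 14.8 trillion tokens, 3.97 exaFLOPS across 2,048 GPUs, and the claimed 2.788M GPU hours):

```python
# All figures below are the ones quoted in the passage above.
flops_per_token   = 333.3e9    # compute per token (37B active parameters)
tokens            = 14.8e12    # size of the training set
cluster_flops     = 3.97e18    # FP8 throughput of 2,048 H800s
gpus              = 2048
claimed_gpu_hours = 2.788e6    # DeepSeek's reported final-run cost

total_flops     = flops_per_token * tokens
ideal_gpu_hours = total_flops / cluster_flops / 3600 * gpus
utilization     = ideal_gpu_hours / claimed_gpu_hours

print(f"GPU-hours needed at 100% utilization: {ideal_gpu_hours:,.0f}")
print(f"implied hardware utilization: {utilization:.0%}")
```

The claimed budget implies roughly 25% hardware utilization, which is in the normal range for large training runs, so the 2.788M GPU-hour figure is internally consistent.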

Scale AI CEO Alexandr Wang said they have 50,000 H100s.

I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs; they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.

Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you did the math on the previous question, you would realize that DeepSeek actually had an excess of compute; that’s because DeepSeek programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is impossible to do in CUDA; DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.

Meanwhile, DeepSeek also makes their models available for inference: that requires a whole bunch of GPUs above-and-beyond whatever was used for training…

Is this why all of the Big Tech stock prices are down?

In the long run, model commoditization and cheaper inference — which DeepSeek has also demonstrated — is great for Big Tech. A world where Microsoft gets to provide inference to its customers for a fraction of the cost means that Microsoft has to spend less on data centers and GPUs, or, just as likely, sees dramatically higher usage given that inference is so much cheaper. Another big winner is Amazon: AWS has by-and-large failed to make their own quality model, but that doesn’t matter if there are very high quality open source models that they can serve at far lower costs than expected.

Apple is also a big winner. Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that. Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU (neural processing unit) have access to a shared pool of memory; this means that Apple’s high-end hardware actually has the best consumer chip for inference (Nvidia gaming GPUs max out at 32GB of VRAM, while Apple’s chips go up to 192 GB of RAM).

Meta, meanwhile, is the biggest winner of all. I already laid out last fall how every aspect of Meta’s business benefits from AI; a big barrier to realizing that vision is the cost of inference, which means that dramatically cheaper inference — and dramatically cheaper training, given the need for Meta to stay on the cutting edge — makes that vision much more achievable.

Google, meanwhile, is probably in worse shape: a world of decreased hardware requirements lessens the relative advantage they have from TPUs. More importantly, a world of zero-cost inference increases the viability and likelihood of products that displace search; granted, Google gets lower costs as well, but any change from the status quo is probably a net negative…

...How did DeepSeek make R1?

DeepSeek actually made two models: R1 and R1-Zero. I actually think that R1-Zero is the bigger deal…

…R1-Zero, however, drops the HF part — it’s just reinforcement learning. DeepSeek gave the model a set of math, code, and logic questions, and set two reward functions: one for the right answer, and one for the right format that utilized a thinking process. Moreover, the technique was a simple one: instead of trying to evaluate step-by-step (process supervision), or doing a search of all possible answers (a la AlphaGo), DeepSeek encouraged the model to try several different answers at a time and then graded them according to the two reward functions.
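
As a rough illustration of this recipe, here is a toy sketch of the two reward functions and the group-grading step. It is a simplification, not DeepSeek's code: the tag format, the matching rules, and the reward values are all invented for the example.

```python
import re

def format_reward(completion):
    # Reward completions that wrap their reasoning in <think> tags
    # before stating a final answer (the "right format" reward).
    pattern = r"(?s)<think>.+</think>\s*\S.*"
    return 1.0 if re.fullmatch(pattern, completion.strip()) else 0.0

def accuracy_reward(completion, answer):
    # Reward completions whose final answer matches the reference
    # (the "right answer" reward).
    final = completion.strip().rsplit("</think>", 1)[-1].strip()
    return 1.0 if final == answer else 0.0

def grade_group(completions, answer):
    """Grade several sampled answers to the same question. In a
    group-relative scheme, each sample's advantage is its reward
    minus the group mean, so better-than-average samples are the
    ones that get reinforced."""
    rewards = [format_reward(c) + accuracy_reward(c, answer) for c in completions]
    mean = sum(rewards) / len(rewards)
    return [(reward, reward - mean) for reward in rewards]
```

For instance, grading three sampled completions of “2 + 2” against the answer “4” gives rewards of 2.0 (right answer, right format), 1.0 (right answer, no thinking tags), and 1.0 (right format, wrong answer); only the sample scoring above the group mean gets a positive advantage.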

What emerged is a model that developed reasoning and chains-of-thought on its own…

…Here again it seems plausible that DeepSeek benefited from distillation, particularly in terms of training R1. That, though, is itself an important takeaway: we have a situation where AI models are teaching AI models, and where AI models are teaching themselves.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms, Microsoft, Netflix, and Tesla. Holdings are subject to change at any time.

Company Notes Series (#5): Edilizi Acrobatica

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first four editions in the series can be found here, here, here, and here. Please give us your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes

Data as of 17 July 2023

Background

  • HQ: Milan, Italy
  • Founding: 2004 (idea for the company came in 1994)
  • Main listing: In Italy on the Milan stock exchange
  • IPO date: 19 November 2018
  • Employees: Average number for 2022 was 1,055

Business

  • Edilizi Acrobatica is the leading company in Italy and Europe in the field of operational construction using the double safety rope technique. The company’s main services include:
    • Securing and Prompt Intervention: Services that are provided urgently, such as removal of rickety objects on the outside of a building
    • Renovation and maintenance: Restructuring and maintenance of facades, balconies, and ledges; ordinary maintenance as well as rebuilding
    • Building cleaning: Cleaning of walls and facades (glazing and/or cladding panels), roofs, solar panels and windmills, gutters and downpipes
    • Proofing intervention: Removal of localized infiltrations or the complete rebuilding of the waterproofing system that may concern balconies, roofs, and ledges
  • Founder Riccardo Iovino was previously a skipper (a boat captain) who was accustomed to moving at high altitudes to carry out maintenance on the masts of boats. In the 1990s, he had a friend who had a gutter to be repaired in a poorly accessible spot. Iovino decided to climb up the roof with the ropework technique and repaired the gutter in a few hours. The experience gave Iovino a great idea: rope works allow a person to intervene effectively outside buildings with enormous advantages in terms of time and money that traditional construction cannot offer. Figure 1 shows Edilizi Acrobatica’s employees in action. Edilizi Acrobatica’s management believes that the double safety rope technique has the following advantages over scaffolding:
    • Better safety for workers: In 2017, Edilizi Acrobatica conducted 222,577 hours of work, with only 2,872 hours of injury (16 injuries), corresponding to an injury frequency index of 1.14%.
    • No risk of theft
    • Less invasiveness for any works conducted: For example, Edilizi Acrobatica employees can work at heights on monuments and historical buildings without disturbing tourists (the company’s rope access technicians worked on Ponte Vecchio in Florence, on the Roman Forum and the Rocca Salimbeni in Siena)
    • Greater cost- and time-effectiveness
    • Better accessibility to areas on buildings that are not reachable with traditional techniques
    • Better for the environment: The Life Cycle Assessment conducted in 2021 showed that of the four main types of techniques used for building interventions, the double rope technique allows a reduction of between 45% and 76% in global warming potential by means of a reduced number of journeys; the double rope technique also uses an estimated 51% to 68% of the energy and between 7% and 40% of the water consumed by other techniques.
Figure 1
  • Edilizi Acrobatica has more than 130 branches in Italy, France, Spain, Monaco, the United Arab Emirates, Saudi Arabia, and Nepal. In Europe, it has more than 120 branches, which include 30 franchises; the majority of the branches are in Italy (83 company-branches and 30 franchise-branches at end-2022). The branches in Dubai come from Edilizi Acrobatica’s March 2023 acquisition of 51% of Enigma Capital Investments, which is active in the Middle East in the construction sector, rope access, cleaning services for residential and commercial buildings, and some facility management services; Enigma performs cleaning work for the exterior of the Burj Khalifa, Dubai’s iconic skyscraper. Edilizi Acrobatica offers its services through its wide network of operating offices – both directly-owned and franchised – which allows for a strong commercial presence at a national level. Edilizi Acrobatica’s branches look attractive and inviting (see Figure 2):
Figure 2
  • Edilizi Acrobatica customers come from the residential sector (the company receives orders from private individuals, condominium administrators, or technicians), public administration sector (where the company works on buildings owned by public administration, such as schools, universities, public offices, and hospitals), corporate sector (where the company works on industrial sites, company headquarters, hotels, wind farms, and photovoltaic plants), and religious sector (where the company works on religious structures including churches, monasteries, and convents). In 2017, residential was 80.9% of Edilizi Acrobatica’s revenue from direct operating offices; public administration was 5.3%; corporate was 8.6%; religious structures was 5.1%. Unclear what the split is like in 2022.
  • In 2022, Edilizi Acrobatica earned €134.5 million in revenue, of which 89.9% was from Italy, 3.6% from France, 5.9% from a new business called Energy Acrobatica 110 (involved with energy efficiency, anti-seismic interventions, and installation of photovoltaic systems), and 0.6% from Spain. In 2022, 6.1% of Edilizi Acrobatica’s revenue came from franchises. The average order size in 2022 was €7,000.

Market opportunity

  • Edilizi Acrobatica is active in the field of external restructuring of buildings. This market represents over half of the entire construction sector. There’s been a trend toward professionalization in external restructuring in recent years with the growing presence of professionals in the management of buildings, including condominiums both in Italy and abroad, as has already been the case in France for several years. Management believes this market evolution is a tailwind for Edilizi Acrobatica, since it is increasingly a point of reference for large customers who demand fast execution and high-quality standards. Moreover, external restructuring using rope access is gaining popularity with condominium owners and administrators since there are no installation costs for scaffolding or aerial platforms, and rope access makes it possible to conduct external restructuring through medium-small interventions planned in phases, with the works completed over a longer period.
  • Figure 3 shows the size of the renovation market in Italy for 2007-2016 where renovation interventions include demolition operations, removal and construction of partitions, plastering and smoothing, floors and coverings, painter works, plumbing works, heating system, electrical system, masonry assistance, air conditioning, fixtures and supply of materials. In 2016 renovation works in Italy amounted to €69.4 billion, up by 3.6% compared to 2015 (€67 billion), and giving rise to a 2011-2016 CAGR of 1.7%. Around 71.5% of the total renovation works (€49.6 billion) were for residential buildings. Worth noting that the renovation market has been very stable, even during the Great Financial Crisis period. Steady growth in the market continued in 2017 and 2018; total renovation works spending was €71.0 billion in 2017 (€50.4 billion for residential buildings) and €72.6 billion in 2018 (€51.4 billion for residential buildings).

 

Figure 3 (“Totale edifici” refers to “total buildings” and “Edifici residenziali” refers to residential buildings)
  • In 2011, ISTAT (Italian National Institute of Statistics) compiled a study of buildings and complexes in Italy and found a total of 14.516 million, 13.3% more than in 2001. More specifically, there were 14.453 million buildings and 63,115 complexes, with an inter-census increase of 13.1% and 64.4% respectively. 84.3% of the total buildings surveyed were residential buildings, equal to 12.188 million, up by 8.6% in the decade between the censuses.
  • In France, Edilizi Acrobatica’s market opportunity is about €60 billion, which consists of the following activities: Support the completion of new buildings with external and covering finishes, installation of panels in facade, installation of photovoltaic panels, installation of lifelines, and works aimed at improving and maintaining the exterior of buildings.
  • Worth pointing out that Edilizi Acrobatica’s competitors (companies that offer similar services as Edilizi Acrobatica using the double rope technique) in Italy and Europe are tiny. Figure 4 shows competitors in Italy and their revenues in 2016, and Figures 5, 6, 7, and 8 show competitors in France, Switzerland, Spain, and Portugal, and their revenues in 2016. Their revenues are all tiny compared to Edilizi Acrobatica – in 2016, Edilizi Acrobatica’s revenue was €13.3 million. Even in 2022, there were no major new competitors, and the trend of small competitors on a local scale remains unchanged.
Figure 4 (“ricavi medi dichiarati” refers to “average revenue reported”)
Figure 5 (“ricavi medi dichiarati” refers to “average revenue reported”)
Figure 6 (“ricavi medi dichiarati” refers to “average revenue reported”)
Figure 7 (“ricavi medi dichiarati” refers to “average revenue reported”)
Figure 8 (“ricavi medi dichiarati” refers to “average revenue reported”)

Growth strategy

For growth, Edilizi Acrobatica’s management communicated the following in its 2018 IPO prospectus:

  • Consolidate Edilizi Acrobatica’s presence in the Italian market 
  • Strengthen the company’s commercial activity in the residential sector, through the opening of new operating offices, directly-owned and through franchising 
  • Develop dedicated divisions to target Corporate, Public Administration and Religious sectors
  • Acquire leading foreign companies operating in the construction market with rope access technique (Edilizi Acrobatica acquired a French company in 2018 and the aforementioned Dubai company in March 2023)
  • Strengthen Edilizi Acrobatica’s brand image through the creation of promotional campaigns and promotional activities, through traditional channels and social media (the company now has a very fun social media presence – its FB page has 215,000 followers!)

Figure 9 below, from Edilizi Acrobatica’s 2021 earnings presentation, offers great insight into how it wants to expand into Europe (note the reminder again of the small size of peers):

Figure 9

Financials

  • Very strong historical revenue growth. 2016-2022 CAGR of 47.0%; 2019-2022 CAGR of 47.7%; 2022 growth of 53.4%
  • Profitable since at least 2016, but net income margin has fluctuated between 13.6% (2016) and 2.6% (2019). Net income margin was 11.3% in 2022. Edilizi Acrobatica’s net income compounded at 42.6% annually for 2016-2022 and 140.6% for 2019-2022, and grew 37.5% in 2022
  • Operating cash flow data is only available from 2017, and since then operating cash flow has been mostly positive. But the operating cash flow margin was weak from 2017 to 2020, ranging between -6.6% (2020) and 1.0% (2017). Operating cash flow only inflected upwards in 2021, with a margin of 16.9%. 
  • Free cash flow follows a similar dynamic as operating cash flow, with the difference that it was negative from 2017 to 2020.
  • Balance sheet has fluctuated between low net-debt or low net-cash position.
  • Not much dilution since IPO in November 2018, based on end-of-year share count.
  • As far as I could tell, started paying a dividend in 2020. Dividend has increased substantially, but payout ratio is low at 27% for 2022.
  • Worth noting that the Italian government introduced a “bonus facade” in 2020, which allowed Italian building owners to recover 90% of the costs incurred for the maintenance of their building facades with no maximum spending limit. The 90% rate also applied in 2021. In 2022, the Bonus Facade was reduced to 60% of the costs incurred, and it was not renewed for 2023. Edilizi Acrobatica’s strong financial performance in 2021 and 2022 may have been due to the Bonus Facade.
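
As a sanity check on the growth figures above, the revenue CAGR can be reproduced from the two revenue numbers in these notes (€13.3 million in 2016, from the competitor comparison, and €134.5 million in 2022):

```python
def cagr(begin, end, years):
    """Compound annual growth rate between two values."""
    return (end / begin) ** (1 / years) - 1

# Revenue figures quoted in these notes.
print(f"2016-2022 revenue CAGR: {cagr(13.3, 134.5, 6):.1%}")
```

This prints roughly 47.1%; the small difference from the 47.0% quoted above comes from rounding in the input figures.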

Management

  • Edilizi Acrobatica’s founder, Riccardo Iovino, 54, is CEO. His mother (Simonetta Simoni) and partner (Anna Marras) are also on the board of directors; Simoni is the President of Edilizi Acrobatica. 
  • Iovino and Marras control Arim Holdings (80-20 split), an investment vehicle which owns 74% of Edilizi Acrobatica’s shares as of 31 December 2022. This equates to 6.09 million Edilizi Acrobatica shares. At the 17 July 2023 stock price of €17.15, that’s a stake worth over €104 million, which is significant skin in the game.
  • During Edilizi Acrobatica’s IPO, Simoni also had a stake in shares of the company held by Arim Holdings that equated to 8.5% of Edilizi Acrobatica’s shares; unsure if this still holds true. 
  • In 2007, when Marras joined Edilizi Acrobatica, it was a turning point in the company as she helped create a sales network, and an internal HR department focused on people and the continuous recruitment of talents. 

Compensation of Management

  • Very little detail on compensation of management. The only data is the overall compensation of the directors of Edilizi Acrobatica. Besides Iovino, Simoni, and Marras, the other directors are Marco Caneva and Simone Muzio. Caneva is an independent director and has worked in the financial services and strategic consulting sector for over 20 years, including 10 in the investment banking division of Goldman Sachs (London, Paris, Milan). Muzio is the Technical Director of Italy for Edilizi Acrobatica and joined the company in 2007.
  • Overall compensation of directors vs Edilizi Acrobatica’s net income is shown in the table below. Overall compensation used to be very high as a percentage of net income and is now lower, but 2022’s level of 9.8% is still fairly high.

Valuation (as of 17 July 2023)

  • 17 July 2023 share price of €17.15
  • Trailing diluted EPS is €1.85, hence PE is 9.3
  • Trailing FCF per share is €1.48, hence PFCF is 11.6
  • Low valuations based on trailing earnings and the current stock price. Looks likely that Edilizi Acrobatica can continue to win market share from a very fragmented space of direct competitors, and from facade maintenance companies that use scaffolding or other forms of machinery. But unsure what the company’s growth profile will look like in 2023 given the removal of the Bonus Facade.
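
The multiples above follow directly from the per-share figures; as a quick check:

```python
# Per-share figures from the notes above (17 July 2023).
price  = 17.15   # share price, EUR
eps    = 1.85    # trailing diluted EPS, EUR
fcf_ps = 1.48    # trailing free cash flow per share, EUR

pe   = price / eps
pfcf = price / fcf_ps
print(f"P/E:   {pe:.1f}")    # ~9.3
print(f"P/FCF: {pfcf:.1f}")  # ~11.6
```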

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 26 January 2025:

1. Thoughts On A Month With Devin – Hamel Husain, Isaac Flath, and Johno Whitaker

Unlike typical AI assistants, Devin operates through Slack and spins up its own computing environment. When you chat with Devin, you’re talking to an AI that has access to a full computing environment – complete with a web browser, code editor, and shell. It can install dependencies, read documentation, and even preview web applications it creates…

…The experience is designed to feel like chatting with a colleague. You describe what you want, and Devin starts working. Through Slack, you can watch it think through problems, ask for credentials when needed, and share links to completed work. Behind the scenes, it’s running in a Docker container, which gives it the isolation it needs to safely experiment while protecting your systems. Devin also provides a web interface, which allows you to access its environment and watch it work with IDEs, web browsers, and more in real time…

…Our first task was straightforward but real: pull data from a Notion database into Google Sheets. Devin tackled this with surprising competence. It navigated to the Notion API documentation, understood what it needed, and guided me through setting up the necessary credentials in Google Cloud Console. Rather than just dumping API instructions, it walked me through each menu and button click needed – saving what would typically be tedious documentation sleuthing. The whole process took about an hour (but only a few minutes of human interaction). At the end, Devin shared a link to a perfectly formatted Google Sheet containing our data.

The code it produced was a bit verbose, but it worked. This felt like a glimpse into the future – an AI that could handle the “glue code” tasks that consume so much developer time. Johno had similar success using Devin to create a planet tracker for debunking claims about historical positions of Jupiter and Saturn. What made this particularly impressive was that he managed this entirely through his phone, with Devin handling all the heavy lifting of setting up the environment and writing the code…

…Over the course of a month, we systematically documented our attempts across these categories:

  1. Creating new projects from scratch
  2. Performing research tasks
  3. Analyzing & Modifying existing projects

The results were sobering. Out of 20 tasks, we had 14 failures, 3 successes (including our 2 initial ones), and 3 inconclusive results. Even more telling was that we couldn’t discern any pattern to predict which tasks would work. Tasks that seemed similar to our early successes would fail in unexpected ways…

…Working with Devin showed what autonomous AI development aspires to be. The UX is polished – chatting through Slack, watching it work asynchronously, seeing it set up environments and handle dependencies. When it worked, it was impressive.

But that’s the problem – it rarely worked. Out of 20 tasks we attempted, we saw 14 failures, 3 inconclusive results, and just 3 successes. More concerning was our inability to predict which tasks would succeed. Even tasks similar to our early wins would fail in complex, time-consuming ways…

…This reflects a pattern we’ve observed repeatedly in AI tooling. Social media excitement and company valuations have minimal relationship to real-world utility. We’ve found the most reliable signal comes from detailed stories of users shipping products and services. For now, we’re sticking with tools that let us drive the development process while providing AI assistance along the way.

2. Transcript: The Hidden History of Eurodollars, Part 1: Cold War Origins – Joe Weisenthal, Tracy Alloway, Lev Menand, and Josh Younger

Tracy (01:30):
It can be admittedly confusing. So why don’t we just define it right away. So eurodollars are dollar-denominated bank deposits held at foreign banks or overseas branches of US banks. And you can think of them as basically offshore dollars that sit outside the US banking system and kind of away from the Federal Reserve. They’re basically a very special form of money. You could call them shadow money.

Joe (01:57):
And it’s totally gigantic. So it’s almost $10 trillion. And I just find it so interesting, right? Because when I think of dollars, they’re either coming from, you know, the government spends dollars into existence or US bank credit. US banks [have a] license to de facto create dollars or deposits at will. And yet, eurodollars are kind of this weird thing, I guess because they’re not that.

Tracy (02:21):
Yeah, they’re not either of those. And eurodollars didn’t just spring up fully formed out of thin air. They were the result of a series of decisions all aimed at solving particular problems…

…Josh (04:27):
So eurodollars are among the most important financial instruments in the world and they are really the backbone of the global dollar system. But they come from very humble beginnings, very idiosyncratic start. And really it all started in Yugoslavia…

…So in November 1945, there’s a communist revolution and the US is miffed in a bunch of ways, but one of them is that the old government owes them money. And so the question is, how are they going to get it? And a few months later, Tito asked for his gold back because the Yugoslavian government had $70 million worth of gold in New York. And the Secretary of State, who was George Marshall of the Marshall Plan, realizes he’s got a bargaining chip, which is the gold. It’s in New York and they don’t get it back until they settle their claims.

Now, even people within the State Department were kind of skeptical of this, and the Yugoslavian government is obviously furious. And so are the Russians, who at this point are quite closely aligned with Tito; the Tito-Stalin falling out only happens a few years later…

…The Russians get the sense that the US is willing to use gold as a bargaining chip. They’d previously actually been building up dollar balances in New York. There’s a kind of misnomer about the post-war period: this sense that the Russians are extracting all their resources from the US, but they’re actually building up reserves of dollars because the thought is, ‘We’re probably going to need to trade with these people. We have a trading company based in the US and they need resources.’ And so they’re building up foreign currency deposits and gold, but in 1947, they realize it’s potentially not going to go well. And they pull all the gold out. They actually just call banks in New York and say, ‘We want our gold back.’ A massive reversal of the policy.

And the question is, where’s it going to go? And so they need dollars because the US dollar is the currency of foreign exchange. If they want to trade with the West, they have to trade in dollars. They need gold because gold is the basis for the monetary system. And so the question is, where can they put gold and dollars in a safe place that’s still on the right side of what was then already known as the iron curtain?

And so it turns out Paris is the ticket. They’ve actually been secretly stockpiling cash and gold in Paris. They put it in briefcases. They would fly people to Paris and put it in the consulate offices. They would just build up piles of cash and gold. And in particular, there’s a bank — BCEN — I won’t try to do it in French. And BCEN is owned by, or run by, a notorious communist sympathizer, who has a very good relationship with the Politburo. And so this is a friendly bank. And so they take on deposit the Soviet money and BCEN’s moniker in the Telex system they used to communicate was “Eurobank.”

And so, eurodollars were initially, in the late forties, just deposits issued by Eurobank, BCEN, generally for the Soviets, although also for the Chinese. And slowly this starts to percolate. There’s another communist-owned bank in London. There’s one in Brussels, which the CIA just describes as run by ‘someone with few scruples,’ I think, is the way they put it. And so there’s some friendlies across Europe who are willing to take their money, and the eurodollar market begins this way, which is preemptive sanctions evasion, basically…

…And so the first use case of eurodollars is sanctions evasion. The second use is to facilitate cross-Iron Curtain trade, although that’s a pretty small business. And so the third, and much larger business, is cross-border interest rate arbitrage. And that sounds really technical, but what it’s really doing is using foreign exchange markets and derivative markets to source dollars that the UK in particular needs in this post-war environment.

So imagine a eurodollar bank, a euro bank, takes in a eurodollar deposit, which means it gets a dollar in cash — let’s think of a physical bill, that’s an asset. It issues a eurodollar liability. And then, what is it going to do next? Because it needs to do some sort of investing. And what it does is it exchanges that dollar asset for a sterling cash, and it invests that sterling cash in some short term sterling investment — short bills or something like that. And after it does that, it says ‘I want to hedge my foreign exchange risk, because now I have a dollar liability and a sterling asset. So I’m going to use the foreign exchange forward market to agree to sell that sterling back for dollars at some point in the future at a fixed price that we agree on today.’

So that’s the bank’s position. Who’s on the other side of that trade? Let’s say a corporation, a manufacturing entity, they make radios, and that radio production process requires inputs. Those inputs are imported. And so that radio production company needs dollars with which to buy the raw materials that it uses to make the radio that it then sells for dollars in foreign markets. And so, they get those dollars from the eurobank, in exchange for the sterling they have on hand, they go buy all the parts, but they want to make sure that they know how much they’re going to receive in local currency at the end of the production process. When they sell that radio abroad, they don’t want the value of the dollar to go down. So they sell those dollars forward in exchange for sterling. And so they’ve entered into a derivative agreement, which is the opposite of the one that the euro bank has or the euro banking system.

And so then they put together the radio, they sell it abroad, they receive dollar proceeds, they turn those into sterling, which is what they pay their employees in, that’s what they pay for their land and equipment in. And that exchange rate was the one they agreed upon in advance through the foreign exchange forward contract. And so, basically what’s happening is the euro banks are pulling in dollars from abroad, distributing them through the foreign exchange market that’s trading onshore to those that need dollars today, and then providing hedges to those that will receive dollars in the future. And in the case of the euro bank, the dollars they’ll owe in the future, potentially, to their eurodollar deposit holder.
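The round trip Josh describes (dollar deposit in, sterling investment, FX forward hedge back to dollars) can be sketched with made-up numbers; the rates below are illustrative assumptions, not figures from the episode:

```python
# Illustrative numbers for the eurobank's hedged position described above.
deposit_usd = 100.0      # eurodollar deposit taken in (the bank's liability)
spot = 2.80              # spot rate: dollars per pound (the post-war peg)
uk_bill_rate = 0.04      # assumed yield on short sterling bills
forward = 2.76           # assumed forward rate agreed today: dollars per pound

# 1. Swap the dollar cash into sterling at spot.
sterling = deposit_usd / spot                      # ~35.71 pounds
# 2. Invest the sterling in short bills.
sterling_at_maturity = sterling * (1 + uk_bill_rate)
# 3. Sell that sterling forward for dollars at the pre-agreed rate,
#    eliminating the mismatch between the dollar liability and sterling asset.
usd_at_maturity = sterling_at_maturity * forward

hedged_return = usd_at_maturity / deposit_usd - 1
print(f"Locked-in dollar return: {hedged_return:.2%}")  # ~2.5% with these inputs
```

Competition among banks doing this trade should push the locked-in dollar return toward the rate the eurobank pays on its eurodollar deposits, which is the covered interest arbitrage at work.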

Lev (18:32):
Think about this from the perspective of the City of London coming out of the war and those bankers and the world that they grew up in, which is a world that we’ve completely forgotten, but was the world of sterling dominance before the First World War and the role that the empire played in financing global trade.

What we’re looking at in the 1950s is a group of London-based financial institutions trying to figure out a way to continue their dominance in a global economy that runs on dollars now and not on sterling. And so, the eurodollars are sort of worth the risk to the City of London, and to some extent to UK financial regulators like the Bank of England, because they need to fix their business model for a dollar world, and they want to get in on the dollar world…

…Josh (20:43):
And so this cross-border interest rate arbitrage is really just the way markets distribute the currency according to who needs it and provide the hedges that facilitate the functioning of British corporations as well. It’s what we’d call now like a use case, right? This is like a real underlying use case that doesn’t involve the Soviet Union for dollar deposits issued by non-US banks, which is, you can’t emphasize enough how fundamentally strange that is because if I tried to make dollars by writing it on a piece of paper, I don’t think I’d get very far. But at the time, that’s essentially what these banks are doing.

And in particular London is a more, let’s say, reputable locale, particularly banks that are not known to be communist sympathizers. There’s a little bit of a funny thing about being a communist bank, but we won’t get into that specifically, but these are blue chip banks in London issuing dollar deposits. And that means you can use them for things and you can feel more comfortable…

…Lev (26:54):
Although, just let’s size this a little bit, right? It was a billion dollars in, say, 1960, which is maybe the equivalent of $50 billion today…

…So we have way more to go in terms of the growth of this market subsequent to 1960. It’s still pretty nascent in 1960…

…Josh (31:08):
So the question at this point is: it’s a nascent market, it’s half a Tether, and it’s unclear whether it will become a big major global actor. We know it eventually becomes that, but at the time, that’s super unclear. It does become, eventually and soon, the solution to a big problem, because in the background of all of this buildup, there’s massive trouble brewing and the whole global edifice of the dollar system is starting to crack.

And the question is, you know, how are we going to save it? Or should we?

3. Emergent Layers, Chapter 1: Scarcity, Abstraction & Abundance – Alex Danco

One foundational principle of the tech world is that as it builds upwards and outwards into the rest of the world, it’s doing so by building on top of these abundant resources and progressively leveraging them. We can think about the world that we know and understand today — with its constraints, and business models and maturing industries that are generally understood by all — as forming a layer, which we’ll call layer i. In time, as certain elements become abstracted and subsequently abundant, others emerge as newly scarce, or in play for new reasons and in new business models. The critical skill for understanding how this works (which is worth practicing!) is being able to work one’s way up and down between stack layers so as to understand when an abundant and scalable element has blossomed at layer i of a stack, and its scarce, non-scalable counterpart has emerged at a new layer — which we’ll call layer i+1…

…Microsoft

The original scarce resource at layer i = PC hardware. In the early days of PCs, manufacturers could compete along many axes of performance — memory, speed, functionality, and so forth — while being sufficiently differentiated from one another. But it was very hard to standardize common functions and applications that people could run across any computer, making it difficult for these use cases to grow rapidly — until Bill Gates and Paul Allen realized, Hey, there isn’t a software industry yet but there’s gonna be, so we should start it. Microsoft abstracted away the capabilities of a computer into software, so now anyone else could write their own software on top of Microsoft’s software without having to worry about the underlying machinery. PCs became an abundantly available commodity, and Microsoft became dominant and mega-profitable. A new scarce resource emerged at layer i+1: the ability to connect these PCs and get them to talk to one another…

…Facebook

Scarce resource at layer i = connections between humans using the internet. The internet was awash in people and content, but authentic human interaction was still relatively scarce and difficult. As such, all of the attempts at connecting people to content and advertising and services were feature-stuffed, spammy, bloated and bad. The critical step forward that Facebook accomplished was abstracting away the “reciprocal friendship” into a functioning social graph. And we’ve seen what’s happened since: Facebook, and social connectivity in general, has exploded and become a newly abundant resource. Facebook became dominant and mega-profitable…

…One critical aspect of this layering is that at each higher level of abstraction, the lever with which one can create value and extract profit becomes successively longer. You can see this by looking at market cap per employee of these dominant companies:

Intel: 106k employees, 55B revenue, 149B mkt cap

Microsoft: 120k employees, 93B revenue, 429B mkt cap

Google / Alphabet: 60k employees 75B revenue, 510B mkt cap

Facebook: 13k employees, 6B revenue, 320B mkt cap…
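A quick calculation on the figures quoted above makes the lengthening lever visible:

```python
# Market cap per employee, from the figures quoted above (circa 2016, approximate).
companies = {
    "Intel":     (106_000, 149e9),
    "Microsoft": (120_000, 429e9),
    "Alphabet":  (60_000, 510e9),
    "Facebook":  (13_000, 320e9),
}
for name, (employees, mkt_cap) in companies.items():
    print(f"{name}: ${mkt_cap / employees / 1e6:.1f}M of market cap per employee")
# Intel ~$1.4M, Microsoft ~$3.6M, Alphabet ~$8.5M, Facebook ~$24.6M:
# each higher layer of abstraction supports far more value per person.
```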

…A non-obvious but critical point to appreciate here is that for the first n movers mobilizing around a scarce element, the arrival and eventual dominance of the last mover will be seen as a Black Swan event of sorts. By abstracting away the scarce resource instead of organizing around its scarcity, these companies become the first to be fully playing in the sandbox at level i+1, as opposed to the non-scalable, scarcity-governed sandbox at level i…

…The last decade saw plenty of startups go after the transportation market, and I’m sure all of them described themselves as “scalable” in their investor decks. Meanwhile, the whole valley was busy passing on Uber because it was initially just a better way to do a black car service, and few people understood the true scalable potential in abstracting away the driver-rider trust required for UberX. The lesson here should be taken to heart: when the first n companies go after an issue, no matter what language they use in their pitch, their business models typically don’t truly venture beyond the constraints at layer i that anybody can see and understand. They’re easier to work through, make more sense to “rational investors”, and require fewer non-linear leaps of thinking to understand. As such, when the last mover emerges at level i+1, they’re a Black Swan event: few people foresaw their opportunity, their impact is enormous, and everybody rationalizes what happened after the fact…

…At level i+1 of the stack, the newly valuable resource is that which emerges as scarce out of the transition from scarcity to abstraction to abundance at layer i.

4. The Default Position: LevFin’s Latest Game Just Got Shut Down…Sort Of – JunkBondInvestor

Serta was no small player. We’re talking about the company behind Serta and Beautyrest—the beds you see in every department store in America. But by 2020, they were in serious trouble. They were drowning in debt, and sales were tanking.

That’s when a group of savvy lenders saw their opportunity. Already holding a chunk of Serta’s debt, they approached with what would become lawyers’ new favorite playbook.

The deal? A group holding 51% of their term loans would provide new money, but only if they got to exchange their old loans for new “super-senior” debt that jumps to the front of the line. The other 49%? They didn’t even get a phone call.

Here’s a sobering fact: non-participating lenders saw their position so deeply subordinated that their recovery prospects plummeted. The new super-senior debt was worth nearly full value, while the excluded lenders saw their position crater.

But here’s where they screwed up.

Their loan agreement only allowed “open market purchases.” Serta’s lawyers tried arguing that their private backroom deal counted as “open market” because… well, just because.

The Fifth Circuit wasn’t having any of it. They said what everyone was thinking: A private deal with hand-picked lenders isn’t an “open market” any more than a private club is a public park…

…On the exact same day—I’m not making this up—a New York court looked at pretty much the identical deal from Mitel Networks and said “Sure, go right ahead.”…

…Mitel pulled the exact same move as Serta. They were drowning in debt, so they cut a deal with friendly lenders to jump them to the front of the line. New super-priority debt paper. Everyone else got pushed to the back.

So what made this different from Serta?

Three words. That’s it. Instead of requiring “open market purchases,” Mitel’s agreement just said they could “purchase by way of assignment.” No mention of open markets anywhere.

The New York court basically said: “Look, if you didn’t want the company doing private deals, you should have said so in the contract.” Those excluded lenders who were screaming about their “sacred rights”? The court told them their rights weren’t so sacred after all.

Here’s the brutal truth—the same transaction either flies or dies based entirely on a few words in your documents. If that doesn’t scare the hell out of every lender out there, it should.

5. Tyler Cowen – The #1 Bottleneck to AI progress Is Humans – Dwarkesh Patel and Tyler Cowen

Dwarkesh Patel 00:00:11
Why won’t we have explosive economic growth, 20% plus, because of AI?

Tyler Cowen 00:00:17
It’s very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can’t use AI very well.

Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I’m guessing education is 6 to 7%. The nonprofit sector, I’m not sure the number, but you add it all up, that’s half of the economy right there.

How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you’ll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.
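Cowen’s arithmetic can be made concrete with a toy two-sector example (the shares and growth rates below are assumptions for illustration, not his figures):

```python
# A toy version of the cost-disease arithmetic: if roughly half the economy
# (government, healthcare, education, nonprofits) barely adopts AI, aggregate
# growth is pulled heavily toward that slow sector's rate.
fast_share, fast_growth = 0.5, 0.10   # less-regulated sectors adopting AI quickly
slow_share, slow_growth = 0.5, 0.01   # slow-adopting sectors

aggregate = fast_share * fast_growth + slow_share * slow_growth
print(f"Aggregate growth: {aggregate:.1%}")  # 5.5% -- a boost, but not 40% a year
```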

Dwarkesh Patel 00:01:04
The mechanism behind cost disease is that there’s a limited amount of laborers, and if there’s one high productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?

Tyler Cowen 00:01:25
Cost disease is more general than that. Let’s say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.

Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.

Dwarkesh Patel 00:01:57
If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you’d have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?

Tyler Cowen 00:02:11
He and I would agree, I hope. I think I would tell him, “Hey, it’s going to take a long time.” And he’d say, “Hmm, I don’t see it happening yet. I think it’s going to take a long time.” And we’d shake hands and walk off into the sunset. And then I’d eat some of his rice or wheat or whatever, and that would be awesome.

Dwarkesh Patel 00:02:29
But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don’t just eat it away, you could agree with that, right?

Tyler Cowen 00:02:38
I don’t know what the word “could” means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you’d call prediction markets are not forecasting super rapid growth anytime soon…

…Dwarkesh Patel 00:03:13
In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order of magnitude increase in the population, you plug that number into his model, you get explosive economic growth.

Tyler Cowen 00:03:26
I don’t agree.

Dwarkesh Patel 00:03:27
Why not buy the model?

Tyler Cowen 00:03:28
His model is far too much a one-factor model, right? Population. I don’t think it’s very predictive. We’ve had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.

So it’s really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it’s unclear what’s happened, but it’s also fragile. There’s the perspective of the economist, but also that of the anthropologist, the sociologist.

They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there’s any simple lever you can push on, intelligence or not, that’s going to give you breakaway economic growth.

Dwarkesh Patel 00:04:11
What you just said, where you’re bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you’re going to be bottlenecked by the restaurants…

…Here’s a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.

We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.

Dwarkesh Patel 00:04:53
So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you have 164,000 geniuses. That’s why you have to do semiconductors in Taiwan, because that’s where they’re putting their nominal amount of geniuses. We’re putting ours in finance and tech.

If you look at that framework, we have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn’t going to grow any faster?

Tyler Cowen 00:05:32
I didn’t agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they’re amazingly low. They’re pretty insignificant.

People who are very successful are very smart, but they’re people who have, say, eight or nine areas where, on a scale of 1 to 10, they’re a nine. They have one area where they’re just an 11 and a half on a scale of 1 to 10. And on everything else, they’re an eight or a nine, and they have a lot of determination.

And that’s what leads to incredible success. And IQ is one of those things, but it’s not actually that important. It’s the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

Great Stocks Can Come From The Worst Industries

A gleaming diamond can be found amongst a lump of coal for those with the ability to spot a true bargain.

I’ve long been sector-agnostic when it comes to the companies I’m interested in because I believe that great companies – and thus, great stocks – can come from anywhere. 

My belief was formed because of something I learnt more than a dozen years ago about the US-based airline, Southwest Airlines. Ned Davis Research was tasked by CNN’s MONEY Magazine in 2002 to find the five US-listed stocks with the highest returns over the past 30 years. The winner was Southwest Airlines with its annualised return of 26% from 1972 to 2002; a $1,000 investment at the start of the period would have become $1 million by the end. What is noteworthy here is that airlines were widely regarded back then as businesses with horrendous economics. In Berkshire Hathaway’s 2007 annual shareholders’ letter, Warren Buffett wrote (emphasis is mine): 

The worst sort of business is one that grows rapidly, requires significant capital to engender the growth, and then earns little or no money. Think airlines. Here a durable competitive advantage has proven elusive ever since the days of the Wright Brothers. Indeed, if a farsighted capitalist had been present at Kitty Hawk, he would have done his successors a huge favor by shooting Orville down.

The airline industry’s demand for capital ever since that first flight has been insatiable. Investors have poured money into a bottomless pit, attracted by growth when they should have been repelled by it. And I, to my shame, participated in this foolishness when I had Berkshire buy U.S. Air preferred stock in 1989. As the ink was drying on our check, the company went into a tailspin, and before long our preferred dividend was no longer being paid. But we then got very lucky. In one of the recurrent, but always misguided, bursts of optimism for airlines, we were actually able to sell our shares in 1998 for a hefty gain. In the decade following our sale, the company went bankrupt. Twice.” 

And yet, it was an airline that topped the charts in 2002 as the best-performing US stock of the past 30 years. The timeframe of 30 years is also sufficiently long that Southwest Airlines’ gains had to be the result of its business’s excellent long-term performance, and not some fortunate short-term swing in its business or its stock price.
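As a sanity check, the cited 26% annualised return does compound to roughly the $1,000-to-$1 million outcome:

```python
# Compounding the Southwest Airlines figure: 26% annualised over 30 years.
annual_return = 0.26
years = 30

multiple = (1 + annual_return) ** years   # roughly a 1,000x multiple
print(f"$1,000 grows to about ${1000 * multiple:,.0f}")  # roughly $1 million
```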

A recent study from the highly-regarded investment researcher Michael Mauboussin, titled Measuring the Moat: Assessing the Magnitude and Sustainability of Value Creation, bolsters my belief. He found that differences in the return on invested capital (ROIC) between industries are smaller than the differences in ROICs of companies within industries. In Mauboussin’s data-set, the industry with the highest median ROIC from 1963 to 2023 is Personal Care Products at around 18%. But within Personal Care Products, the companies have ROICs ranging from a low of around 5% to a high of around 40%. Meanwhile, the Wireless Telecom Services industry has one of the lowest median ROICs at around 1%. Yet, the companies within have ROICs ranging from just below 40% to deeply negative figures. Said another way, the best company in a poor industry (Wireless Telecom Services) still has an excellent business that performs significantly better than the median company in a great industry (Personal Care Products).

I continue to believe that excellent investing opportunities can be found everywhere, so I will, for the foreseeable future, remain sector-agnostic. Sometimes, a gleaming diamond can be found amongst lumps of coal by those with the ability to spot a true bargain.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 January 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 19 January 2025:

1. OpenAI o3 Breakthrough High Score on ARC-AGI-Pub – François Chollet

OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.

This is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, ARC-AGI-1 took 4 years to go from 0% with GPT-3 in 2020 to 5% in 2024 with GPT-4o. All intuition about AI capabilities will need to get updated for o3…

…The high-efficiency score of 75.7% is within the budget rules of ARC-AGI-Pub (costs <$10k) and therefore qualifies as 1st place on the public leaderboard!

The low-efficiency score of 87.5% is quite expensive, but still shows that performance on novel tasks does improve with increased compute (at least up to this level.)

Despite the significant cost per task, these numbers aren’t just the result of applying brute force compute to the benchmark. OpenAI’s new o3 model represents a significant leap forward in AI’s ability to adapt to novel tasks. This is not merely incremental improvement, but a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before, arguably approaching human-level performance in the ARC-AGI domain.

Of course, such generality comes at a steep cost, and wouldn’t quite be economical yet: you could pay a human to solve ARC-AGI tasks for roughly $5 per task (we know, we did that), while consuming mere cents in energy. Meanwhile o3 requires $17-20 per task in the low-compute mode. But cost-performance will likely improve quite dramatically over the next few months and years, so you should plan for these capabilities to become competitive with human work within a fairly short timeline.

o3’s improvement over the GPT series proves that architecture is everything. You couldn’t throw more compute at GPT-4 and get these results. Simply scaling up the things we were doing from 2019 to 2023 – take the same architecture, train a bigger version on more data – is not enough. Further progress is about new ideas…

…Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible…

…To adapt to novelty, you need two things. First, you need knowledge – a set of reusable functions or programs to draw upon. LLMs have more than enough of that. Second, you need the ability to recombine these functions into a brand new program when facing a new task – a program that models the task at hand. Program synthesis. LLMs have long lacked this feature. The o series of models fixes that.

For now, we can only speculate about the exact specifics of how o3 works. But o3’s core mechanism appears to be natural language program search and execution within token space – at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search. In the case of o3, the search is presumably guided by some kind of evaluator model. To note, Demis Hassabis hinted back in a June 2023 interview that DeepMind had been researching this very idea – this line of work has been a long time coming.

So while single-generation LLMs struggle with novelty, o3 overcomes this by generating and executing its own programs, where the program itself (the CoT) becomes the artifact of knowledge recombination. Although this is not the only viable approach to test-time knowledge recombination (you could also do test-time training, or search in latent space), it represents the current state-of-the-art as per these new ARC-AGI numbers.

Effectively, o3 represents a form of deep learning-guided program search. The model does test-time search over a space of “programs” (in this case, natural language programs – the space of CoTs that describe the steps to solve the task at hand), guided by a deep learning prior (the base LLM). The reason why solving a single ARC-AGI task can end up taking up tens of millions of tokens and cost thousands of dollars is because this search process has to explore an enormous number of paths through program space – including backtracking.
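Since Chollet stresses that the exact mechanism is speculation, here is a deliberately toy sketch in Python of the general shape he describes: candidate "programs" scored by an evaluator against a task's training pairs, with the best-scoring candidate applied to the test input. Every name and detail here is invented for illustration and has no connection to o3's actual internals.

```python
# Toy sketch of evaluator-guided program search. o3's real mechanism is
# not public, so everything here is invented for illustration: candidate
# "programs" stand in for sampled chains of thought, and the scoring
# function stands in for the evaluator model.

def search_programs(candidates, train_pairs, test_input):
    """Apply the candidate that best reproduces the training pairs."""
    def score(program):
        # Fraction of training examples the candidate gets exactly right.
        return sum(program(x) == y for x, y in train_pairs) / len(train_pairs)

    best = max(candidates, key=score)
    return best(test_input)

# A tiny ARC-like task: the hidden rule doubles every element.
train_pairs = [([1, 2], [2, 4]), ([3], [6])]
candidates = [
    lambda xs: [x + 1 for x in xs],   # wrong rule
    lambda xs: [x * 2 for x in xs],   # correct rule
    lambda xs: list(reversed(xs)),    # wrong rule
]

print(search_programs(candidates, train_pairs, [5, 7]))  # [10, 14]
```

The real system presumably searches over an enormous space of sampled chains of thought rather than three hand-written lambdas, which is why a single task can consume tens of millions of tokens.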

2. Energy Cheat Sheet – Brian Potter

Most energy we consume gets wasted. Of the 93.6 quads (~27,400 TWh) the US consumed in 2023, only around a third went towards producing useful work. The rest was lost to various inefficiencies, such as heat engine and transmission losses…
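As a quick check on the units, the quad-to-terawatt-hour conversion quoted above can be reproduced from standard definitions (1 quad = 10^15 BTU, 1 kWh ≈ 3,412.14 BTU); this is just arithmetic, not anything from the article itself:

```python
# Sanity-check of the quad -> TWh conversion, from standard definitions:
# 1 quad = 1e15 BTU and 1 kWh ≈ 3,412.14 BTU.
QUAD_TO_TWH = 1e15 / 3412.14 / 1e9   # ≈ 293.07 TWh per quad

print(round(93.6 * QUAD_TO_TWH))     # 27431, i.e. the article's ~27,400 TWh
```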

…Another obvious fact is that despite the burgeoning construction of renewable energy infrastructure, the majority of our energy still comes from burning hydrocarbons. Petroleum, coal, and natural gas combined are responsible for roughly 82% of total energy consumption in the US.

Related to this fact is that electricity generation is a relatively small fraction of our energy system: roughly ⅓ of energy inputs go towards generating electricity. For residential and commercial consumption, only around half of energy use comes from electricity. For industrial and transportation energy (the two largest sources of consumption), electricity accounts for around 13% and less than 0.1% of energy use, respectively.

What this chart makes clear, but also sort of abstracts away, is the enormous amount of infrastructure we’ve built for moving around hydrocarbons. The US has close to 1 million oil and natural gas wells, 3 million miles of natural gas pipeline, 145,000 gas stations, and capacity to refine 18.4 million barrels of oil a day.

This is why environmental advocates often focus on electrifying everything: decarbonizing energy infrastructure requires much more than just building low-carbon sources of energy like solar panels and wind turbines — it requires fundamentally reworking how our society moves energy around. It’s also why eliminating roadblocks and bottlenecks to energy infrastructure construction is so important.

We can also dive deeper and look at a sector-by-sector breakdown of energy use. The residential sector uses around 11.5 quads (3370 TWh) of energy, a little over 12% of total US energy consumption…

…One major takeaway here is that most residential energy consumption goes into heating things up: Space heating (5.74 quads), water heating (1.69 quads), and clothes dryers (0.26 quads) together account for two-thirds of residential energy consumption. You sometimes see air conditioners decried as wasteful by energy-minded environmentalists, but air conditioning is a much smaller share of energy consumption than heating…

…Most transportation energy in the US is consumed in the form of gasoline and diesel fuel, with a relatively small amount of jet fuel. If we look at it by transportation mode, most energy (~78%) is consumed by cars, trucks, and motorcycles…

…The huge amount of energy used by transportation also means that households are using a lot of energy that isn’t captured by the residential energy consumption statistics above. In fact, in a year, the average US household consumes more energy from burning gasoline (~24,000 kilowatt-hours) than what’s used by the entire rest of the house (~22,500 kilowatt-hours).
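The ~24,000 kWh household gasoline figure is easy to reconstruct from the energy content of gasoline; the mileage and fuel-economy numbers below are my own illustrative assumptions, not from the article, while 33.7 kWh per gallon is the commonly used energy content of gasoline (the EPA's "eGallon" figure):

```python
# Rough reconstruction of the ~24,000 kWh household gasoline figure.
# Mileage and fuel economy are illustrative assumptions, not article data.
KWH_PER_GALLON = 33.7              # energy content of gasoline

miles_per_year = 2 * 11_500        # two vehicles per household (assumption)
mpg = 32                           # fleet-average fuel economy (assumption)
gallons = miles_per_year / mpg     # ~719 gallons a year
energy_kwh = gallons * KWH_PER_GALLON

print(round(energy_kwh))           # ~24,000 kWh, the article's ballpark
```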

The commercial sector is not that different from the residential sector, with heating air and water using the largest fraction, and cooling and ventilation (i.e., moving air around) also using large fractions. As with residential, its energy consumption is roughly split between electricity and natural gas…

…With industrial energy use, we see a lot of the same patterns that we see in other sectors. One is that utility electricity is a relatively small amount of industrial energy consumption (less than 20%). Most industrial energy comes from burning fuel (mostly natural gas) directly. Once again, we see that heating things up accounts for a huge fraction of energy consumption: roughly half of all manufacturing energy goes into process heating. If we add process heat to residential and commercial air and water heating, we find that roughly 20% of total US energy consumption goes towards heating things up…

…It’s clear that most energy used in the US is ultimately wasted, with only a small fraction being used to perform useful work (moving cars, heating homes, operating electronics, and so on). Moving energy around and changing its form can’t be done perfectly efficiently (thanks in part to the 2nd law of thermodynamics), and all those conversions we require to get energy where it needs to be and in the form we need it whittle away the energy available to get things done…

…The biggest source of losses is probably heat engine inefficiencies. In our hydrocarbon-based energy economy, we often need to transform energy by burning fuel and converting the heat into useful work. There are limits to how efficiently we can transform heat into mechanical work (for more about how heat engines work, see my essay about gas turbines).

The thermal efficiency of an engine is the fraction of heat energy it can transform into useful work. A coal power plant typically operates at around 30% to 40% thermal efficiency. A combined cycle gas turbine will hit closer to 60% thermal efficiency. A gas-powered car, on the other hand, operates at around 25% thermal efficiency. The large fraction of energy lost by heat engines is why some thermal electricity generation plants list their capacity in MWe, the power output in megawatts of electricity…

…The low thermal efficiency of ICE cars (and heat engines in general) and the high efficiency of electrical equipment (especially things like heat pumps) are the biggest counterweight to the high energy capacity of hydrocarbons. The gas tank on an ICE car technically stores much more energy than a Tesla battery pack, but only a small fraction of that gasoline energy can be converted into useful motion. Switching to EVs, even if that electricity is still provided by burning fossil fuels, could save large amounts of energy (and thus carbon emissions), as it could mean switching from a 25% efficient gasoline engine to a 60% efficient combined cycle gas turbine. And of course, with electric vehicles, there’s the possibility of powering them with non-carbon-emitting sources of electricity like solar or wind. 
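The switch described above, from a 25% efficient engine to a ~60% efficient turbine, can be made concrete with a back-of-envelope efficiency chain; the grid and drivetrain loss figures below are rough assumptions of mine, not measured data:

```python
# Back-of-envelope fuel-to-wheels efficiency chain. All figures are
# rough illustrative assumptions, not measured data.
ice_engine = 0.25        # gasoline engine thermal efficiency

ccgt = 0.60              # combined cycle gas turbine
grid = 0.95              # transmission and distribution (~5% lost)
ev = 0.85                # charging, battery, and motor losses combined

ev_chain = ccgt * grid * ev

print(f"ICE: {ice_engine:.0%}, EV fed by a gas plant: {ev_chain:.0%}")
# -> ICE: 25%, EV fed by a gas plant: 48%
```

Even with fossil-fuelled electricity, the EV chain delivers nearly twice the useful work per unit of fuel energy under these assumptions.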

3. Stocks Are More Expensive Than They Used to Be – Michael Batnick

In January 2018, they wrote an article, CAPE Fear: Why CAPE Naysayers Are Wrong. The article featured yours truly…

…It’s hard to believe seven years have passed since this article. It’s harder to believe that the S&P 500 is up almost 100% since their article came out, and delivered the highest 7-year performance for any CAPE starting at 33x. I did not see this coming. At all.

My whole thing was, yes, valuations are high. But companies are better today and deserve the premium multiple. I was not saying that a high CAPE is bullish. In fact, I ended most of my posts on this topic with the message of, “Expect lower returns.” I’ve never been happier to be wrong.

I want to return to some of the arguments I made, and what the CAPE zealots missed.

To use a long-term average that goes back to the late 1800s is foolish for three reasons. First, we didn’t have CAPE data back in 1929. It was first “discovered” in the late 90s. The discovery of data in financial markets changes the very essence of it. Markets are not governed by the laws of physics. They’re alive. They adapt and evolve and adjust, like a microorganism.

Second, the CAPE ratio has been rising over time since the 1980s. We’ve only visited the long-term average once in the last 25 years, and that was at the bottom of the GFC. If that’s what it takes to return to the long-term average, maybe you should reconsider what an appropriate comp level really is.

Third, and most important, the companies are far better today than they were in the past.
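For readers unfamiliar with the metric being debated, CAPE (the Shiller P/E) is simply price divided by the ten-year average of inflation-adjusted earnings; here is a minimal sketch with toy numbers, not real S&P 500 data:

```python
# Minimal sketch of the CAPE (Shiller P/E) calculation: price divided
# by the ten-year average of inflation-adjusted earnings. All numbers
# below are toy values, not actual S&P 500 data.

def cape(price, nominal_earnings, cpi, cpi_now):
    """nominal_earnings and cpi are aligned ten-year annual series."""
    real_earnings = [e * cpi_now / c for e, c in zip(nominal_earnings, cpi)]
    return price / (sum(real_earnings) / len(real_earnings))

earnings = [80, 85, 90, 95, 100, 105, 110, 115, 120, 125]     # toy values
cpi      = [200, 204, 208, 212, 216, 220, 224, 228, 232, 236]

print(round(cape(3_300, earnings, cpi, cpi_now=236), 1))      # 29.9
```

Because trailing earnings are averaged over a decade, CAPE smooths out one-off earnings collapses, which is also why it reverts to its long-term average so slowly.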

4. AI’s Uneven Arrival – Ben Thompson

What o3 and inference-time scaling point to is something different: AIs that can actually be given tasks and trusted to complete them. This, by extension, looks a lot more like an independent worker than an assistant — ammunition, rather than a rifle sight. That may seem an odd analogy, but it comes from a talk Keith Rabois gave at Stanford:

So I like this idea of barrels and ammunition. Most companies, once they get into hiring mode…just hire a lot of people, you expect that when you add more people your horsepower or your velocity of shipping things is going to increase. Turns out it doesn’t work that way. When you hire more engineers you don’t get that much more done. You actually sometimes get less done. You hire more designers, you definitely don’t get more done, you get less done in a day.

The reason why is because most great people actually are ammunition. But what you need in your company are barrels. And you can only shoot through the number of unique barrels that you have. That’s how the velocity of your company improves is adding barrels. Then you stock them with ammunition, then you can do a lot. You go from one barrel company, which is mostly how you start, to a two barrel company, suddenly you get twice as many things done in a day, per week, per quarter. If you go to three barrels, great. If you go to four barrels, awesome. Barrels are very difficult to find. But when you have them, give them lots of equity. Promote them, take them to dinner every week, because they are virtually irreplaceable. They are also very culturally specific. So a barrel at one company may not be a barrel at another company because one of the ways, the definition of a barrel is, they can take an idea from conception and take it all the way to shipping and bring people with them. And that’s a very cultural skill set.

The promise of AI generally, and inference-time scaling models in particular, is that they can be ammunition; in this context, the costs — even marginal ones — will in the long run be immaterial compared to the costs of people, particularly once you factor in non-salary costs like coordination and motivation…

…What will become clear once AI ammunition becomes available is just how unsuited most companies are for high precision agents, just as P&G was unsuited for highly-targeted advertising. No matter how well-documented a company’s processes might be, it will become clear that there are massive gaps that were filled through experience and tacit knowledge by the human ammunition.

SaaS companies, meanwhile, are the ad agencies. The ad agencies had value by providing a means for advertisers to scale to all sorts of media across geographies; SaaS companies have value by giving human ammunition software to do their job. Ad agencies, meanwhile, made money by charging a commission on the advertising they bought; SaaS companies make money by charging a per-seat licensing fee. Look again at that S-1 excerpt I opened with:

Our business model focuses on maximizing the lifetime value of a customer relationship. We make significant investments in acquiring new customers and believe that we will be able to achieve a positive return on these investments by retaining customers and expanding the size of our deployments within our customer base over time…

The positive return on investment comes from retaining and increasing seat licenses; those seats, however, are proxies for actually getting work done, just as advertising was just a proxy for actually selling something. Part of what made direct response digital advertising fundamentally different is that it was tied to actually making a sale, as opposed to lifting brand awareness, which is a proxy for the ultimate goal of increasing revenue. To that end, AI — particularly AIs like o3 that scale with compute — will be priced according to the value of the task they complete; the amount that companies will pay for inference-time compute will be a function of how much the task is worth. This is analogous to digital ads that are priced by conversion, not CPM.

The companies that actually leveraged that capability, however, were not, at least for a good long while, the companies that dominated the old advertising paradigm. Facebook became a juggernaut by creating its own customer base, not by being the advertising platform of choice for companies like P&G; meanwhile, TV and the economy built on it stayed relevant far longer than anyone expected. And, by the time TV truly collapsed, both the old guard and digital advertising had evolved to the point that they could work together.

If something similar plays out with AI agents, then the most important AI customers will primarily be new companies, and probably a lot of them will be long tail type entities that take the barrel and ammunition analogy to its logical extreme. Traditional companies, meanwhile, will struggle to incorporate AI (outside of whole-scale job replacement a la the mainframe); the true AI takeover of enterprises that retain real world differentiation will likely take years.

None of this is to diminish what is coming with AI; rather, as the saying goes, the future may arrive but be unevenly distributed, and, contrary to what you might think, the larger and more successful a company is the less they may benefit in the short term. Everything that makes a company work today is about harnessing people — and the entire SaaS ecosystem is predicated on monetizing this reality; the entities that will truly leverage AI, however, will not be the ones that replace them, but start without them.

5. Don’t let interest-rate predictions dictate your investment decisions – Chin Hui Leong

A little over a year ago, the US Federal Reserve signalled its intention to cut interest rates three times in 2024. This commentary sparked a flurry of predictions, with market watchers vying to outguess the Fed on the number, timing, and size of these cuts. Goldman Sachs, for instance, boldly predicted five cuts.

We ended up with just three interest-rate cuts in 2024 – a significant miss, to say the least…

…According to Visual Capitalist, four firms – Morgan Stanley, Bank of America, Citigroup and Nomura – pencilled in a one-percentage-point cut for 2024. Credit should be given where it’s due: their forecasts were right.

However, did getting these predictions right matter in the end? As it turns out, not so much.

Morgan Stanley, Bank of America and Citi set 2024’s S&P 500 price targets at 4,500, 5,000 and 5,100 respectively… 

…The S&P 500, of course, closed the year at 5,881…

…Forecasts and expectations may look similar, but they are different. My friend Eugene Ng puts it best: Forecasts rely on knowing when something will occur. Expectations, on the other hand, are the acknowledgement of what’s likely to occur without professing insight into when it will happen.

For example, it’s reasonable to expect the stock market to fall by 10 per cent or more sometime in the future. After all, history has shown that corrections are a common occurrence…

…In my eyes, calmness can be achieved by having the right expectations, and preparing well for any market turbulence even when we don’t know when the market will fall.

If you are prepared, you will have fewer worries. If you worry less, you will stand a better chance of doing better than average. And that’s more than any investor can hope for, whether the forecasts are right or wrong.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Deepmind), Meta Platforms (parent of Facebook), and Tesla. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q4 2024

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the fourth quarter of 2024.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the fourth quarter of 2024 – was held earlier this week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but two significant risks remain, namely, persistent inflation and dangerous geopolitical conditions.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy remains resilient, with low unemployment and healthy consumer spending; businesses are now more optimistic about the economy

The U.S. economy has been resilient. Unemployment remains relatively low, and consumer spending stayed healthy, including during the holiday season. Businesses are more optimistic about the economy, and they are encouraged by expectations for a more pro-growth agenda and improved collaboration between government and business.

2. Management sees two significant risks, namely, persistent inflation, and the most dangerous geopolitical conditions since World War II; management thinks a high level of optimism is embedded in asset prices; management is focused on being prepared for a wide range of scenarios

Two significant risks remain. Ongoing and future spending requirements will likely be inflationary, and therefore, inflation may persist for some time. Additionally, geopolitical conditions remain the most dangerous and complicated since World War II…

…We think it’s important to acknowledge the tension in the risks and uncertainties in the environment and the degree of optimism embedded in asset prices and expectations. In that context, we remain upbeat about the strength of the franchise, but we are focused on being prepared for a wide range of scenarios.

3. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) rose to US$2.4 billion, from US$2.2 billion a year ago; Consumer & Community Banking’s net charge-offs rose by US$0.4 billion from a year ago

Credit costs were $2.6 billion, reflecting net charge-offs of $2.4 billion and a net reserve of $267 million…

…In terms of credit performance this quarter, credit costs were $2.6 billion, reflecting net charge-offs of $2.1 billion, up $428 million year-on-year driven by card. The net reserve build was $557 million predominantly driven by higher card revolving balances.

4. JPMorgan’s credit card outstanding loans were up double-digits; management expects card loans to grow in 2025, but at a slower pace than in 2024

Card outstandings were up 11% due to strong account acquisition and revolvers…

… We expect healthy card loan growth again this year but below the 12% pace we saw in 2024 as tailwinds from revolver normalization are largely behind us. 

5. Auto originations were up

In auto, originations were $10.6 billion, up 7%, reflecting higher lease volume on robust new vehicle inventory. 

6. JPMorgan’s investment banking fees had strong growth in 2024 Q4, with strong growth in debt underwriting and equity underwriting fees, signalling higher appetite for capital-markets activity from companies; management is optimistic about companies’ enthusiasm towards capital markets activities

IB fees were up 49% year-on-year, and we ranked #1 with wallet share of 9.3% for 2024. Advisory fees were up 41%, benefiting from large deals and share growth in a number of key sectors. Underwriting fees were up meaningfully with debt up 56% and equity up 54% primarily driven by favorable market conditions. In terms of the outlook for the overall Investment Banking wallet, in light of the positive momentum, we remain optimistic about our pipeline. 

7. Management is seeing companies pay down bank loans and is not seeing loan growth, but the lack of loan growth is not necessarily a negative thing, as it partly reflects companies’ wide access to capital markets

Global Corporate and Investment Banking loans were down 2% quarter-on-quarter driven by paydowns and lower short-term financing, primarily offset by originations. In Commercial Banking, middle market loans were also down 2% driven by paydowns, predominantly offset by new originations. And commercial real estate loans were flat as new originations were offset by paydowns…

…I think given the significant improvement in business sentiment and the general optimism out there, you might have expected to see some big open loan growth. We are not really seeing that. I don’t particularly think that’s a negative. I think it’s probably explained by a combination of wide open capital markets and so many of the larger corporates accessing the capital markets and healthy balance sheets in small businesses and maybe some residual caution. And maybe there are some pockets in some industries where some aspects of the policy uncertainty that we might be facing are making them a little bit more cautious than they otherwise would be about what they’re executing in the near term. But we’ll see what the new year brings. The current optimism starts getting tested with reality one way or the other.

8. Management is incorporating interest rate cuts into its 2025 outlook

We expect 2025 NII ex Markets to be approximately $90 billion. Going through the drivers, as usual, the outlook assumes that rates follow the forward curve. It’s worth noting that the NII decrease is driven by both the cut expected in 2025 and the impact of the 100 basis points of cuts in the back half of 2024. 

9. Management expects credit card net charge-offs in 2025 of 3.6%, up from 3.34% in 2024

On credit, we expect the 2025 card net charge-off rate to be in line with our previous guidance of approximately 3.6%.
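To put the guidance in dollar terms: a net charge-off rate is simply annual net charge-offs divided by average loans outstanding. The loan balance below is a toy assumption for illustration, not JPMorgan's actual card book:

```python
# A net charge-off rate is annual net charge-offs divided by average
# loans outstanding. The balance below is a toy assumption, not
# JPMorgan's actual card book.
avg_card_loans = 200e9                   # illustrative $200B average balance
nco_2024, nco_2025 = 0.0334, 0.036       # rates quoted in the call

print(f"2024: ${avg_card_loans * nco_2024 / 1e9:.1f}B")   # 2024: $6.7B
print(f"2025: ${avg_card_loans * nco_2025 / 1e9:.1f}B")   # 2025: $7.2B
# The 26bp increase adds roughly $0.5B of annual charge-offs on a $200B book.
```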

10. Management is holding extra capital as they think there’s a good chance the bank can deploy it at better prices in the future, but they do not want the excess to grow from here

The way we’re thinking about it right now is that we feel very comfortable with the notion that it makes sense for us to have a nice store of extra capital in light of the current environment. We believe there is a good chance that there will be a moment where we get to deploy it at better levels essentially in whatever way than the current opportunities would suggest. And so that feels like a correct kind of strategic and financial decision for us. Having said that, having studied it quite extensively over the last 6 months and have all these debates you would expect, we’ve concluded that we do have enough. We have not [indiscernible]. And given that, we would like to not have the excess grow from here.

11. The housing mortgage market looks poor given high interest rates

You know well the state of the mortgage market given rates. 

12. Management thinks that the biggest sources of risk to the credit market are unemployment and stagflation

Just the biggest driver of credit has been and always will be unemployment, both on the consumer side and it feeds into the corporate side. It feeds into mortgages, subprime, credit card. So really it’s your forecast of unemployment. You have to make your own, which will determine that over time. And so the second thing you said vulnerabilities. It’s unemployment, but the worst case would be stagflation. High rates with higher unemployment will drive higher credit losses literally across the board. I’m not — we’re not predicting that, but you just ask for the vulnerabilities. That’s the vulnerabilities.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.